Nicolas Papernot

Top projects

How to Steal Modern NLP Systems with Gibberish?
It’s possible to steal BERT-based models without any real training data, even using gibberish word sequences.
bert adversarial-attacks computer-security adversarial-learning
How to Know When Machine Learning Does Not Know
It is becoming increasingly important to understand how a prediction made by a Machine Learning model is informed by its training data.
adversarial-learning interpretability uncertainty adversarial-examples
In Model Extraction, Don’t Just Ask ‘How?’: Ask ‘Why?’
Designing an effective extraction attack requires that one first settle on a few critical details—the adversary’s goal, capabilities, and knowledge.
model-extraction adversarial-attacks adversarial-learning tutorial

Top collections