Generalized Zero & Few-Shot Transfer for Facial Forgery Detection
Deep Distribution Transfer (DDT), a new transfer learning approach for zero- and few-shot transfer in the context of facial forgery detection.
zero-shot-learning few-shot-learning facial-forgery-detection fraud-detection deep-distribution-transfer wasserstein-distance faceforensics++ dessa anomaly-detection research article paper arxiv:2006.11863

We propose Deep Distribution Transfer (DDT), a new transfer learning approach to address the problem of zero- and few-shot transfer in the context of facial forgery detection. We examine how well a model (pre-)trained with one forgery creation method generalizes to a previously unseen manipulation technique or a different dataset. To facilitate this transfer, we introduce a new mixture-model-based loss formulation that learns a multi-modal distribution whose modes correspond to the class categories of the underlying data of the source forgery method. Our core idea is to first pre-train an encoder network that aligns each mode of this distribution with its respective class label, i.e., real or fake images in the source domain, by minimizing the Wasserstein distance between them. To transfer this model to a new domain, we associate a few target samples with one of the previously trained modes. In addition, we propose a spatial mixup augmentation strategy that further helps generalization across domains. We find this learning strategy to be surprisingly effective at domain transfer compared to a traditional classification approach or even state-of-the-art domain adaptation and few-shot learning methods. For instance, compared to the best baseline, our method improves classification accuracy by 4.88% in the zero-shot and by 8.38% in the few-shot case when transferring from the FaceForensics++ to the Dessa dataset.
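The paper's exact augmentation is not reproduced on this page, so the following is only a minimal sketch of one plausible reading of "spatial mixup": pasting a random spatial patch from an image of one domain into an image of another, with a soft label weight proportional to the area retained from the first image. The function name `spatial_mixup` and its signature are hypothetical, not the authors' API.

```python
import numpy as np

def spatial_mixup(img_a, img_b, rng=None):
    """Blend two images by copying a random rectangular patch of img_b
    into img_a. A hypothetical sketch of a spatial-mixup-style
    augmentation, not the paper's exact formulation."""
    rng = rng or np.random.default_rng(0)
    h, w = img_a.shape[:2]
    # Sample a patch size (at least 1x1, at most the full image) ...
    ph = rng.integers(1, h + 1)
    pw = rng.integers(1, w + 1)
    # ... and a location so the patch fits inside the image.
    top = rng.integers(0, h - ph + 1)
    left = rng.integers(0, w - pw + 1)
    mixed = img_a.copy()
    mixed[top:top + ph, left:left + pw] = img_b[top:top + ph, left:left + pw]
    # Soft label weight: fraction of the output still coming from img_a.
    lam = 1.0 - (ph * pw) / (h * w)
    return mixed, lam
```

In training, `lam` would weight the loss between the two source labels, in the spirit of classic mixup but with spatially coherent regions rather than pixel-wise interpolation.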

Author
@shivangi-aneja, Student at TU Munich
Similar projects
Torchmeta
A collection of extensions and data-loaders for few-shot learning & meta-learning in PyTorch
Exploring Knowledge Captured in Probability of Strings
An exploration of simple knowledge captured by language models with code examples
Zero Shot Topic Classification
BART with a classification head trained on MNLI.
Zero-shot Text Classification With Generative Language Models
An overview of a text generation approach to zero-shot text classification with GPT-2