Diverse Image Generation via Self-Conditioned GANs
A simple but effective unsupervised method for generating realistic & diverse images using a class-conditional GAN model without using manually annotated class labels.
image-generation generative-adversarial-networks self-conditioned-gans unsupervised-learning data-augmentation computer-vision article code paper cvpr-2020 research arxiv:2006.10728

Despite the remarkable progress in Generative Adversarial Networks (GANs), unsupervised models fail to generalize to diverse datasets, such as ImageNet or Places365. To tackle such datasets, we rely on class-conditional GANs, which require class labels to train. These labels are often not available or are expensive to obtain.

We propose to improve the quality of unconditional GANs by inferring class labels in a fully unsupervised manner. By periodically clustering the features the discriminator already computes, we improve generation quality on large-scale datasets such as ImageNet and Places365. Beyond improving generation quality, the method also discovers semantically meaningful clusters.
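To make the mechanism concrete, here is a minimal PyTorch sketch of the self-conditioning step: cluster the discriminator's intermediate features with k-means and use the cluster assignments as pseudo-labels for conditioning. The names here (`FeatureDiscriminator`, `recluster`, `num_clusters`) are illustrative assumptions, not the authors' released API.

```python
# Minimal sketch of self-conditioning: periodically cluster discriminator
# features to obtain pseudo-labels for class-conditional GAN training.
# All class/function names are hypothetical, not the paper's actual code.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans


class FeatureDiscriminator(nn.Module):
    """Toy discriminator that exposes an intermediate feature vector."""

    def __init__(self, in_dim=784, feat_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(in_dim, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, feat_dim), nn.LeakyReLU(0.2),
        )
        self.head = nn.Linear(feat_dim, 1)  # real/fake score

    def forward(self, x, return_features=False):
        f = self.backbone(x)
        out = self.head(f)
        return (out, f) if return_features else out


@torch.no_grad()
def recluster(discriminator, data, num_clusters=10):
    """Assign each real example a pseudo-label via k-means on D's features."""
    _, feats = discriminator(data, return_features=True)
    kmeans = KMeans(n_clusters=num_clusters, n_init=10)
    kmeans.fit(feats.cpu().numpy())
    return torch.as_tensor(kmeans.labels_, dtype=torch.long)


# Usage: recompute pseudo-labels every few thousand training steps and
# feed them to a class-conditional generator/discriminator pair.
D = FeatureDiscriminator()
data = torch.randn(512, 784)        # stand-in for a batch of real images
pseudo_labels = recluster(D, data)  # labels in [0, num_clusters)
print(pseudo_labels.shape)          # torch.Size([512])
```

In the paper's setup the clustering is rerun periodically during training, so the conditioning labels track the discriminator's evolving representation rather than staying fixed.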


Similar projects
Adversarial Latent Autoencoders
Introducing the Adversarial Latent Autoencoder (ALAE), a general architecture that can leverage recent improvements in GAN training procedures.
Synthesizing High-Resolution Images with StyleGAN2
Developed by NVIDIA Researchers, StyleGAN2 yields state-of-the-art results in data-driven unconditional generative image modeling.
MixNMatch
Multifactor Disentanglement and Encoding for Conditional Image Generation
GANSpace: Discovering Interpretable GAN Controls
This paper describes a simple technique to analyze Generative Adversarial Networks (GANs) and create interpretable controls for image synthesis.