We identify important latent directions by applying Principal Component Analysis (PCA) in activation space. We then show that interpretable edits can be defined by applying these edit directions layer-wise, and that BigGAN can be controlled with layer-wise inputs in a StyleGAN-like manner. Using these mechanisms, a user can identify a large number of interpretable controls. We demonstrate results on GANs trained on various datasets.
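The core idea above (PCA over sampled activations, then an edit along a principal direction) can be sketched in a few lines of NumPy. This is a minimal illustration, not the actual implementation: the linear map `W` is a hypothetical stand-in for a GAN's intermediate feature computation, and the least-squares transfer of the activation-space direction back to latent space mirrors the regression step described in the text only in spirit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a GAN's latent-to-activation mapping;
# a real setup would sample intermediate activations from the network.
d_latent, d_act = 32, 64
W = rng.standard_normal((d_latent, d_act))

def activations(z):
    # Placeholder for intermediate GAN activations.
    return z @ W

# 1) Sample many latents and collect their activations.
Z = rng.standard_normal((10_000, d_latent))
A = activations(Z)

# 2) PCA in activation space: principal directions are the top
#    right singular vectors of the mean-centered activation matrix.
A_centered = A - A.mean(axis=0)
_, _, Vt = np.linalg.svd(A_centered, full_matrices=False)
v = Vt[0]  # first principal direction in activation space

# 3) Transfer the direction to latent space by least squares:
#    find u such that the activation change of u best matches v.
u, *_ = np.linalg.lstsq(W.T, v, rcond=None)

# 4) Edit: move a latent along the recovered direction. In a
#    layer-wise (StyleGAN-like) setup, the shifted latent would be
#    fed only to a chosen range of layers to localize the edit.
z = rng.standard_normal(d_latent)
alpha = 3.0
z_edit = z + alpha * u / np.linalg.norm(u)
```

Moving `z` along `u` increases the activation's projection onto `v`, which is what makes the resulting image change correspond to that principal direction.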
