Adversarial Latent Autoencoders
Harnessing the latent power of autoencoders, one disentanglement at a time.
autoencoders generative-adversarial-networks adversarial-latent-autoencoders tutorial paper wandb article research arxiv:2004.04467

Although autoencoders have been extensively studied, two questions have not been fully resolved:

Can autoencoders have the same generative power as GANs? Can autoencoders learn disentangled representations? Points in the latent space hold relevant information about the input data distribution. If these points are less entangled amongst themselves, we gain more control over the generated data, since each latent dimension then contributes to one relevant feature in the data domain. The authors of Adversarial Latent Autoencoders designed an autoencoder that addresses both of these questions jointly. Next, let's take a closer look at the architecture.
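To make the idea of latent points concrete, here is a minimal vanilla autoencoder sketch in PyTorch. This is not the ALAE architecture, and the layer sizes and dimensions are illustrative assumptions; the point is simply that the encoder maps each input to a point in a low-dimensional latent space, and disentanglement would mean each coordinate of that point controls one factor of variation.

```python
import torch
import torch.nn as nn

# Illustrative sketch only (not ALAE): a plain autoencoder whose
# bottleneck exposes the latent points discussed above.
class TinyAutoencoder(nn.Module):
    def __init__(self, input_dim=784, latent_dim=8):
        super().__init__()
        # Encoder compresses the input into a latent point z.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        # Decoder reconstructs the input from z.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim),
        )

    def forward(self, x):
        z = self.encoder(x)           # latent point for each input
        return self.decoder(z), z     # reconstruction and latent code

# A batch of 4 flattened 28x28 inputs (random data for illustration).
x = torch.randn(4, 784)
model = TinyAutoencoder()
recon, z = model(x)
print(z.shape)      # torch.Size([4, 8]): one 8-dim latent point per input
print(recon.shape)  # torch.Size([4, 784]): reconstructions match the input shape
```

In a disentangled representation, varying a single coordinate of `z` while holding the others fixed would change one interpretable attribute of the decoded output; in an ordinary autoencoder like this one, nothing enforces that property.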
