Even though autoencoders have been extensively studied, two questions have not been fully addressed:
1. Can autoencoders have the same generative power as GANs?
2. Can autoencoders learn disentangled representations?

Points in the latent space hold relevant information about the input data distribution. If these points are less entangled among themselves, we gain more control over the generated data, since each point contributes to one relevant feature in the data domain. The authors of the Adversarial Latent Autoencoder (ALAE) designed an autoencoder that addresses both questions jointly. Next, let's take a closer look at the architecture.
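Before looking at ALAE itself, it may help to recall the baseline it builds on. Below is a minimal sketch of a plain autoencoder, a toy linear example with NumPy, not the ALAE architecture; all names (`W_enc`, `W_dec`, the synthetic data) are illustrative assumptions. It shows the core idea referenced above: the encoder compresses each input into a point in a low-dimensional latent space, and the decoder reconstructs the input from that point.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples in 8 dimensions that actually lie on a
# 2-dimensional subspace, so a 2-d latent space can capture them.
basis = rng.normal(size=(2, 8))
X = rng.normal(size=(200, 2)) @ basis

d, k = 8, 2                                  # data dim, latent dim
W_enc = rng.normal(scale=0.1, size=(d, k))   # encoder weights
W_dec = rng.normal(scale=0.1, size=(k, d))   # decoder weights

def reconstruction_loss(X, W_enc, W_dec):
    Z = X @ W_enc            # encode: latent codes, one point per sample
    X_hat = Z @ W_dec        # decode: reconstruction from latent points
    return np.mean((X - X_hat) ** 2)

loss_before = reconstruction_loss(X, W_enc, W_dec)

# Plain gradient descent on the mean-squared reconstruction error.
lr = 0.01
for _ in range(500):
    Z = X @ W_enc
    X_hat = Z @ W_dec
    G = 2 * (X_hat - X) / X.size   # gradient of MSE w.r.t. X_hat
    W_dec -= lr * (Z.T @ G)
    W_enc -= lr * (X.T @ (G @ W_dec.T))

loss_after = reconstruction_loss(X, W_enc, W_dec)
print(f"MSE before: {loss_before:.4f}, after: {loss_after:.4f}")
```

After training, each row of `Z` is a latent point summarizing one input. ALAE's contribution, discussed next, is in how the latent space is shaped so that such points become both generative and disentangled.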