A Tutorial on VAEs: From Bayes' Rule to Lossless Compression
An overview of the VAE and a tour through various derivations and interpretations of the VAE objective.
variational-autoencoders autoencoders bayes-rule lossless-compression tutorial research code paper notebook arxiv:2006.10273

The Variational Auto-Encoder (VAE) is a simple, efficient, and popular deep maximum likelihood model. Though VAEs are widely used, the derivation of the VAE objective is not as widely understood. In this tutorial, we will provide an overview of the VAE and a tour through various derivations and interpretations of its objective. From a probabilistic standpoint, we will examine the VAE through the lens of Bayes' Rule, importance sampling, and the change-of-variables formula. From an information-theoretic standpoint, we will examine the VAE through the lens of lossless compression and transmission through a noisy channel. We will then identify two common misconceptions about the VAE formulation and their practical consequences. Finally, we will visualize the capabilities and limitations of VAEs using a code example (with an accompanying Jupyter notebook) on toy 2D data.
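
For concreteness, the objective the tutorial derives is the evidence lower bound (ELBO): a reconstruction term minus a KL term, maximized over the encoder and decoder parameters. Below is a minimal sketch of ELBO training on toy 2D data; the PyTorch model, the Gaussian encoder/decoder parameterization, and the synthetic two-cluster dataset are illustrative assumptions, not the code from the accompanying notebook.

```python
# A minimal sketch, assuming PyTorch, Gaussian encoder/decoder, and a synthetic
# two-cluster 2D dataset; illustrative only, not the tutorial's notebook code.
import math
import torch
import torch.nn as nn

class ToyVAE(nn.Module):
    """Gaussian VAE for 2D data with a 2D latent space."""
    def __init__(self, data_dim=2, latent_dim=2, hidden=64):
        super().__init__()
        # Encoder outputs the mean and log-variance of q(z|x).
        self.encoder = nn.Sequential(nn.Linear(data_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, 2 * latent_dim))
        # Decoder outputs the mean and log-variance of p(x|z).
        self.decoder = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, 2 * data_dim))

    def elbo(self, x):
        mu_z, logvar_z = self.encoder(x).chunk(2, dim=-1)
        # Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I).
        z = mu_z + torch.exp(0.5 * logvar_z) * torch.randn_like(mu_z)
        mu_x, logvar_x = self.decoder(z).chunk(2, dim=-1)
        # Reconstruction term: log p(x|z) under a diagonal Gaussian.
        log_px_z = -0.5 * (logvar_x + (x - mu_x) ** 2 / logvar_x.exp()
                           + math.log(2 * math.pi)).sum(-1)
        # Closed-form KL(q(z|x) || p(z)) against a standard normal prior p(z).
        kl = 0.5 * (mu_z ** 2 + logvar_z.exp() - logvar_z - 1).sum(-1)
        return (log_px_z - kl).mean()  # lower bound on E[log p(x)]

# Toy 2D data: a hypothetical mixture of two Gaussian clusters.
centers = torch.where(torch.rand(1024, 1) < 0.5,
                      torch.tensor([[-1.0, 0.0]]), torch.tensor([[1.0, 0.0]]))
x = centers + 0.3 * torch.randn(1024, 2)

vae = ToyVAE()
opt = torch.optim.Adam(vae.parameters(), lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    loss = -vae.elbo(x)  # maximizing the ELBO = minimizing its negative
    loss.backward()
    opt.step()
```

After training, new points can be sampled by drawing z from the standard normal prior and decoding it, which is the generative use of the model that the tutorial's 2D visualizations illustrate.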

Similar projects
Variational Autoencoders
An introduction to VAEs with Pyro.
Lecture 13 | Generative Models - CS231n
A look at the motivation and concepts behind variational autoencoders.
Understanding Variational Autoencoders (VAEs)
Building, step by step, the reasoning that leads to VAEs.
Implementing an Autoencoder in PyTorch
Building an autoencoder model for reconstruction.