Variational Auto-Encoder (VAE)
A Variational Auto-Encoder is a generative model that learns a latent-variable representation of the input data and samples from it to generate new data. It is trained by maximizing the ELBO (Evidence Lower Bound), using Variational Inference to approximate the intractable posterior over the latent variables.
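The objective mentioned above can be written out explicitly. Under the standard formulation (encoder q_φ, decoder p_θ, prior p(z)), the ELBO lower-bounds the data log-likelihood:

$$\log p_\theta(x) \;\ge\; \underbrace{\mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x \mid z)\right]}_{\text{reconstruction}} \;-\; \underbrace{D_{\mathrm{KL}}\!\left(q_\phi(z \mid x)\,\|\,p(z)\right)}_{\text{regularization}}$$

The first term rewards faithful reconstruction; the second pulls the approximate posterior toward the prior, which is what shapes the latent space.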
Key Components
- Regularization: VAE regularizes the latent space toward a Standard Normal Distribution via the KL Divergence term; the Reparameterization trick makes the sampling step differentiable so this objective can be trained by backpropagation
- Loss Function: Combines Reconstruction Loss and KL Divergence Loss; in practice the reconstruction term is often weighted more heavily (e.g. on the order of 10:1 to 100:1)
- Latent Space: Unlike standard autoencoders, which map each input to a single deterministic point z, a VAE encodes density parameters (mean and variance), so new data can be generated by sampling from the resulting distribution
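The three components above can be sketched in a few lines. This is a minimal stdlib-only illustration, not a full model: the function names (`reparameterize`, `kl_to_standard_normal`, `vae_loss`) are hypothetical, the decoder is assumed Gaussian (so reconstruction loss is MSE), and `beta` stands in for the reconstruction/KL weighting mentioned above.

```python
import math
import random

random.seed(0)

def reparameterize(mu, logvar):
    # z = mu + sigma * eps with eps ~ N(0, 1): the randomness is pushed into
    # an external noise variable so gradients can flow through mu and logvar
    return [m + math.exp(0.5 * lv) * random.gauss(0.0, 1.0)
            for m, lv in zip(mu, logvar)]

def kl_to_standard_normal(mu, logvar):
    # Closed-form KL( N(mu, sigma^2) || N(0, I) ), summed over latent dims;
    # this is the term that regularizes the latent space toward N(0, I)
    return 0.5 * sum(math.exp(lv) + m * m - 1.0 - lv
                     for m, lv in zip(mu, logvar))

def vae_loss(x, x_recon, mu, logvar, beta=1.0):
    # Negative ELBO for one example: reconstruction error (MSE, assuming a
    # Gaussian decoder) plus the weighted KL regularizer
    recon = sum((a - b) ** 2 for a, b in zip(x, x_recon))
    return recon + beta * kl_to_standard_normal(mu, logvar)
```

Note that when the encoder outputs mu = 0 and logvar = 0 (i.e. exactly the standard normal), the KL term vanishes, which is what the regularization pulls toward.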
Advantages
- Enables generation of highly plausible results due to probabilistic latent space
- The encoder can be leveraged for Semi-supervised Learning tasks
Limitations
- Tends to generate blurrier, lower-quality outputs than state-of-the-art generative models such as GANs

Seonglae Cho