AutoEncoder

Created: 2021 Oct 6 10:4
Creator: Seonglae Cho
Edited: 2024 Nov 11 14:25

Autoencoding Model

A neural network structure that maps data into a latent space of
Latent Variable
s. The latent feature should capture most of the information in the data, so that the data can be almost perfectly reconstructed from its latent feature (it is reconstructable).
Linear AEs apply rotations and linear combinations of the dimensions of x. They assume there is a latent space z with enough structure to sufficiently represent the data. Note that an AE with zero MSE is not necessarily useful: the identity map reconstructs perfectly but learns nothing. A useful AE finds a transformation that preserves the most information in the fewest dimensions possible.
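As a sketch of this idea (the synthetic dataset and variable names are illustrative, not from the original note): the optimal linear autoencoder with a k-dimensional bottleneck spans the same subspace as the top-k principal components, so it can be computed in closed form with an SVD instead of gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 500 points in 5-D that mostly vary along 2 directions.
latent = rng.normal(size=(500, 2))
mixing = rng.normal(size=(2, 5))
X = latent @ mixing + 0.05 * rng.normal(size=(500, 5))
X = X - X.mean(axis=0)           # center the data

k = 2                            # bottleneck dimension
U, S, Vt = np.linalg.svd(X, full_matrices=False)
W = Vt[:k]                       # encoder: project onto the top-k directions

Z = X @ W.T                      # encode: latent codes, shape (500, 2)
X_hat = Z @ W                    # decode: reconstruct from the code

mse = np.mean((X - X_hat) ** 2)
print(f"reconstruction MSE with k={k}: {mse:.4f}")
```

Because the data is essentially 2-dimensional, the 2-D code retains almost all of the information and the reconstruction error is close to the noise floor.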

Reconstruction (
Loss Function
 design)

Guide the latent space toward the desired result: the latent dimensions should be as informative about the downstream task as possible.
Reconstruction error is the loss measuring the difference between the decoded data and the original data. Its significance is that it enabled breakthroughs in text embedding, since a model can be trained to restore the original data through this loss without any labels.
The posterior can be approximated using the encoder network, and the decoder of an AE can be used for data generation.
Kernel PCA and Isomap require explicitly describing the distances among data points for dimensionality reduction.
An autoencoder instead learns the mapping function from the data itself, so no labels are needed; this self-supervised quality is why we call it an AutoEncoder.
The encoder and decoder can be any parametric functions, such as neural networks, including convolutional ones.
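A minimal sketch of such a parametric encoder and decoder trained purely on reconstruction MSE, with no labels. The architecture (a tanh encoder and linear decoder), the toy dataset, and all hyperparameters here are illustrative assumptions; backpropagation is written out by hand to keep the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 256 points in 4-D lying near a 2-D nonlinear manifold.
N, d, k = 256, 4, 2
t = rng.normal(size=(N, k))
X = np.tanh(t @ rng.normal(size=(k, d)))
X = X - X.mean(axis=0)

# Parametric encoder (tanh layer) and decoder (linear layer).
W1 = rng.normal(scale=0.5, size=(d, k)); b1 = np.zeros(k)
W2 = rng.normal(scale=0.5, size=(k, d)); b2 = np.zeros(d)
lr = 2.0

def forward(X):
    Z = np.tanh(X @ W1 + b1)         # encode to the k-dim latent code
    return Z, Z @ W2 + b2            # decode back to input space

_, X_hat0 = forward(X)
loss0 = np.mean((X_hat0 - X) ** 2)   # reconstruction error before training

for _ in range(2000):
    Z, X_hat = forward(X)
    G = 2.0 * (X_hat - X) / X.size   # dLoss/dX_hat for the MSE loss
    # Backpropagate through the decoder, then the encoder.
    dW2 = Z.T @ G;    db2 = G.sum(axis=0)
    dpre = (G @ W2.T) * (1.0 - Z ** 2)   # tanh'(a) = 1 - tanh(a)^2
    dW1 = X.T @ dpre; db1 = dpre.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

Z, X_hat = forward(X)
loss = np.mean((X_hat - X) ** 2)
print(f"reconstruction MSE: {loss0:.4f} -> {loss:.4f}")
```

The same loop works unchanged if the encoder and decoder are swapped for deeper or convolutional networks; only the forward pass and its gradients change.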
Auto Encoders
