Autoencoding Model
A neural network structure that maps data into a latent space
The posterior can be approximated using an encoder network
Kernel PCA and Isomap require explicitly specifying the distances among data points for dimensionality reduction
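To make the contrast concrete, methods like Isomap (and kernel PCA, via a kernel matrix) start from an explicit pairwise structure over the data. A minimal sketch of building such a pairwise distance matrix with NumPy (the data values here are purely illustrative):

```python
import numpy as np

# Toy dataset: 4 points in 2-D (illustrative values).
X = np.array([[0.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

# Pairwise Euclidean distance matrix D[i, j] = ||x_i - x_j||.
# Isomap operates on such a precomputed distance structure
# rather than learning a mapping from the raw features alone.
diff = X[:, None, :] - X[None, :, :]
D = np.sqrt((diff ** 2).sum(axis=-1))

print(D.shape)  # (4, 4)
```

An autoencoder avoids this step: the distance structure is implicit in the learned mapping instead of being supplied up front.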
Unsupervised learning: we can learn a mapping function from the data itself, so no labels are needed. That is why we call them autoencoders
The latent feature should capture most of the information in the data, so that the data can be almost perfectly reconstructed from its latent feature (reconstructability)
The encoder and decoder can be parametric functions such as neural networks, including convolutional ones
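The simplest parametric case is a linear encoder and decoder trained by gradient descent on the reconstruction error. A minimal NumPy sketch (data, dimensions, and hyperparameters are illustrative; a real autoencoder would use nonlinear layers):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 100 samples in 4-D that actually lie on a 2-D subspace,
# so a 2-D latent code can capture almost all the information.
Z_true = rng.normal(size=(100, 2))
A = rng.normal(size=(2, 4))
X = Z_true @ A

# Linear autoencoder: encoder W_e (4 -> 2), decoder W_d (2 -> 4).
W_e = rng.normal(scale=0.1, size=(4, 2))
W_d = rng.normal(scale=0.1, size=(2, 4))

def reconstruction_error(X, W_e, W_d):
    X_hat = X @ W_e @ W_d          # encode, then decode
    return np.mean((X - X_hat) ** 2)

lr = 0.01
initial = reconstruction_error(X, W_e, W_d)
for _ in range(500):
    Z = X @ W_e                    # latent codes
    X_hat = Z @ W_d                # reconstructions
    G = 2 * (X_hat - X) / X.size   # d(MSE)/d(X_hat)
    W_d -= lr * Z.T @ G            # decoder gradient step
    W_e -= lr * X.T @ (G @ W_d.T)  # encoder gradient step
final = reconstruction_error(X, W_e, W_d)
print(final < initial)             # training reduces reconstruction error
```

Swapping the matrix multiplications for nonlinear (e.g. convolutional) layers gives the general form described above; the training objective stays the same.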
The decoder of an AE can be used for data generation
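Generation amounts to sampling latent codes and passing them through the decoder. A minimal sketch, using a fixed linear map as a stand-in decoder (a real decoder would be trained, and plain autoencoders do not guarantee a well-behaved latent distribution to sample from):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative decoder: a fixed linear map from a 2-D latent
# space to 4-D data space (stand-in for a trained decoder).
W_d = rng.normal(size=(2, 4))

def decode(z):
    return z @ W_d

# Generate new data by sampling latent codes and decoding them.
z_samples = rng.normal(size=(5, 2))
generated = decode(z_samples)
print(generated.shape)  # (5, 4)
```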
Reconstruction error is the loss measuring the difference between the decoded data and the original data. Its significance is that it enabled breakthroughs in text embedding: the model learns by restoring the original data through this loss, without any labeling
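A common concrete choice for the reconstruction error is the mean squared error between the input and its reconstruction. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def mse_reconstruction_loss(x, x_hat):
    """Mean squared error between original data x and reconstruction x_hat."""
    x, x_hat = np.asarray(x, dtype=float), np.asarray(x_hat, dtype=float)
    return np.mean((x - x_hat) ** 2)

# Perfect reconstruction gives zero loss.
print(mse_reconstruction_loss([1.0, 2.0], [1.0, 2.0]))  # 0.0
# The loss grows with the gap between data and reconstruction.
print(mse_reconstruction_loss([1.0, 2.0], [0.0, 0.0]))  # 2.5
```

Minimizing this quantity over the encoder and decoder parameters is what drives the latent feature to keep the information needed to restore the data.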