Model Fitting, Fitting Probability Distribution, Parametric Learning
Methods find the most likely parameter θ that explains the data and boil down to
θ̂ = argmax_θ p(D | θ)
Let a statistical experiment be a sample X₁, …, Xₙ of i.i.d. random variables in some measurable space Ω, usually Ω ⊆ ℝ, where θ is the (hyper)parameter and D = {X₁, …, Xₙ} is the data set.
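As a minimal sketch of this setup (sample data and parameter values are illustrative assumptions): because the samples are i.i.d., the log-likelihood is a sum of per-sample log densities, and for a Gaussian the maximizing parameters have a closed form, the sample mean and the (biased) sample variance.

```python
import math
import random

random.seed(0)
# Illustrative i.i.d. sample from N(5, 2^2); the true parameters are assumptions.
data = [random.gauss(5.0, 2.0) for _ in range(10_000)]

def gaussian_log_likelihood(data, mu, var):
    """log p(D | mu, var) = sum over samples of log N(x | mu, var)."""
    return sum(-0.5 * (math.log(2 * math.pi * var) + (x - mu) ** 2 / var)
               for x in data)

# Closed-form maximum-likelihood estimates for the Gaussian.
mu_hat = sum(data) / len(data)
var_hat = sum((x - mu_hat) ** 2 for x in data) / len(data)

# The MLE should score at least as well as any nearby candidate parameters.
assert gaussian_log_likelihood(data, mu_hat, var_hat) >= \
       gaussian_log_likelihood(data, mu_hat + 0.5, var_hat)
```

With enough samples the estimates land close to the generating parameters, which is the sense in which MLE "finds the parameter that explains the data".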
- While performing MLE, we update the weights through backpropagation to maximize the likelihood of the data, obtaining an optimal point estimate
- While performing MAP estimation, we update the weights through backpropagation to maximize the posterior probability, obtaining an optimal point estimate
- While performing Bayesian inference, we estimate the full posterior probability distribution over the weights, obtaining a density estimate rather than a single point
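The three bullets above can be contrasted on a toy Bernoulli coin with a conjugate Beta prior, where every quantity is closed form (the data counts and prior pseudo-counts here are illustrative assumptions):

```python
# Observed data and Beta(a, b) prior -- both illustrative assumptions.
heads, flips = 7, 10
a, b = 2.0, 2.0

# MLE: maximize the likelihood alone -> a single point estimate.
theta_mle = heads / flips

# MAP: maximize likelihood * prior -> the posterior mode, still a point.
theta_map = (heads + a - 1) / (flips + a + b - 2)

# Bayesian inference: keep the whole posterior Beta(heads+a, flips-heads+b),
# a density over theta, rather than collapsing to one point.
post_a, post_b = heads + a, flips - heads + b
posterior_mean = post_a / (post_a + post_b)
```

Note how the prior pulls both the MAP estimate and the posterior mean from the MLE's 0.7 toward the prior mean of 0.5, and only the Bayesian route retains uncertainty about θ.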
MLE is intuitive, MAP is a generalized MLE with a non-constant log-prior, and ERM is a further generalization allowing any loss function and regularization term.
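A hedged numeric sketch of the MAP-as-regularized-ERM connection: for 1-D least squares through the origin, minimizing the empirical risk Σ(y − wx)² + λw² (ridge) yields the same w as MAP under a zero-mean Gaussian prior on w. All data and the λ value are illustrative assumptions.

```python
import random

random.seed(1)
# Illustrative data: y = 3x + Gaussian noise.
xs = [random.uniform(-1, 1) for _ in range(200)]
ys = [3.0 * x + random.gauss(0, 0.5) for x in xs]
lam = 2.0  # regularization strength; equals noise_var / prior_var in the MAP view

sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))

w_mle = sxy / sxx          # ERM with squared loss, no regularizer (= MLE)
w_map = sxy / (sxx + lam)  # ridge minimizer == Gaussian-prior MAP estimate

def regularized_risk(w):
    """Empirical risk plus L2 penalty, i.e. the negative log-posterior up to constants."""
    return sum((y - w * x) ** 2 for x, y in zip(xs, ys)) + lam * w * w

# Numeric check: w_map really minimizes the regularized empirical risk.
eps = 1e-3
assert regularized_risk(w_map) <= min(regularized_risk(w_map - eps),
                                      regularized_risk(w_map + eps))
```

The penalty shrinks the estimate toward the prior mean of zero, which is exactly the effect of a non-constant log-prior in the MAP objective.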
Parameter Estimation Notion
Understanding Variational Inference – from MLE and MAP to ELBO
For those who want to understand Variational Inference, a technique for approximately estimating probability distributions, this post explains the fundamental reason for estimating probability distributions and covers frequently appearing terms such as MLE, MAP, KL divergence, and ELBO.
https://modulabs.co.kr/blog/variational-inference-intro/


Seonglae Cho