Monte Carlo Approximation / Monte Carlo Estimation: a generalized version of the Law of Large Numbers
Approximation by repeated random sampling (probabilistic simulation)
An integration technique that replaces a (typically continuous) integral with a discrete average over random samples. For example, suppose we want to compute the expectation of f(X), where X has density p.
Elementary Monte Carlo identity (Monte Carlo estimate of an integral)
$$\mathbb{E}_p[f(X)] = \int f(x)\, p(x)\, dx = I$$
Assuming we have a way of drawing samples from the density p (i.e., access to the function f and a sampler for p), we can approximate the integral above by
$$\hat{I}_N = \hat{f} = \frac{1}{N}\sum_{i=1}^{N} f(X_i), \qquad X_i \sim p$$
This is an unbiased estimator: $\mathbb{E}[\hat{f}] = \mathbb{E}[f(X)]$, and the approximation improves as $N$ grows.
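A minimal sketch of this plain Monte Carlo estimator. The choices $f(x) = x^2$ and $p$ = standard normal are illustrative assumptions (so the true value $\mathbb{E}[f(X)] = 1$ is known and convergence is easy to see):

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_estimate(f, sampler, n):
    """Plain Monte Carlo estimate of E_p[f(X)]: average f over n samples drawn from p."""
    xs = sampler(n)
    return np.mean(f(xs))

# Illustrative (assumed) choices: f(x) = x^2, p = N(0, 1), so E[f(X)] = Var(X) = 1.
f = lambda x: x**2
sampler = lambda n: rng.standard_normal(n)

for n in (100, 10_000, 1_000_000):
    print(n, mc_estimate(f, sampler, n))  # estimates approach 1 as n grows
```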
The Monte Carlo estimator for $\mathbb{E}[f(X)]$ can also be built from samples of a proposal distribution $g$ instead of the original distribution $p$ (importance sampling); this is preferable when it has lower variance. For comparison, the variance of the plain estimator $\hat{I} = \frac{1}{N}\sum_i f(x_i)$ with $x_i \sim p$ is $\frac{1}{N}\operatorname{Var}_p\!\big(f(X)\big)$, while the variance of the importance-sampling estimator $\frac{1}{N}\sum_i \frac{f(x_i)\, p(x_i)}{g(x_i)}$ with $x_i \sim g$ is $\frac{1}{N}\operatorname{Var}_g\!\Big(\frac{f(X)\, p(X)}{g(X)}\Big)$, with the lower variance being preferable.
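A hedged sketch of this comparison, under assumed choices: target $p = N(0, 1)$, integrand $f(x) = x^2$, and a wider proposal $g = N(0, 2^2)$. Both estimators target the same integral; with these particular choices the heavier-tailed proposal tends to give a smaller per-sample variance, illustrating the point:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 100_000

# Assumed setup: target density p = N(0, 1), integrand f(x) = x^2, proposal g = N(0, 2^2).
f = lambda x: x**2
p_pdf = lambda x: norm.pdf(x, loc=0.0, scale=1.0)
g_pdf = lambda x: norm.pdf(x, loc=0.0, scale=2.0)

# Plain estimator: x_i ~ p, average f(x_i); per-sample variance is Var_p(f(X)).
x_p = rng.normal(0.0, 1.0, n)
plain_terms = f(x_p)

# Importance-sampling estimator: x_i ~ g, average f(x_i) * p(x_i) / g(x_i);
# per-sample variance is Var_g(f(X) p(X) / g(X)).
x_g = rng.normal(0.0, 2.0, n)
is_terms = f(x_g) * p_pdf(x_g) / g_pdf(x_g)

print("plain estimate:", plain_terms.mean(), " per-sample variance:", plain_terms.var())
print("IS estimate:   ", is_terms.mean(), " per-sample variance:", is_terms.var())
```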
In short, Monte Carlo methods enable us to estimate any integral by random sampling. In Bayesian statistics, the evidence (marginal likelihood) is also an integral, so it becomes tractable in the same way.
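To illustrate the evidence remark, a hedged sketch on an assumed conjugate toy model (Gaussian prior, Gaussian likelihood, one observation): the evidence $p(D) = \int p(D \mid \theta)\, p(\theta)\, d\theta = \mathbb{E}_{p(\theta)}[p(D \mid \theta)]$ can be estimated by averaging the likelihood over prior samples and checked against the closed form:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

# Assumed toy model: prior theta ~ N(0, 1), likelihood D | theta ~ N(theta, 1), observed D = 0.5.
data = 0.5
n = 200_000

# Monte Carlo estimate of the evidence: p(D) = E_prior[p(D | theta)].
theta = rng.standard_normal(n)                    # samples from the prior
likelihood = norm.pdf(data, loc=theta, scale=1.0)  # p(D | theta_i) for each sample
evidence_mc = likelihood.mean()

# Closed form for this conjugate model: marginally D ~ N(0, sqrt(2)).
evidence_exact = norm.pdf(data, loc=0.0, scale=np.sqrt(2.0))

print(evidence_mc, evidence_exact)  # the two values should be close
```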