Monte Carlo Method

Creator
Seonglae Cho
Created
2022 Apr 3 15:45
Edited
2025 May 1 0:07

Monte Carlo Approximation, Monte Carlo Estimation; a generalized version of
the Law of Large Numbers

Approximation by repeated random sampling (probabilistic simulation)
An integration technique that approximates an integral over a continuous distribution by a discrete average of samples. For example, suppose we compute the expectation of f(X) where X has density p:

Elementary Monte Carlo identity (Monte Carlo estimate of an integral)

\mathbb{E}_p[f(X)] = \int f(x)\,p(x)\,dx = I
Assuming we have a way of drawing samples from the density p, we can approximate the integral above (given access to the function f and
Distribution Sampling
) by
\hat I_N = \hat{f} = \frac{1}{N}\sum_{i=1}^N f(X_i), \quad X_i \sim p
This is an unbiased estimator: \mathbb{E}[\hat{f}] = \mathbb{E}[f(X)]. The approximation improves as N grows.
\mathrm{Var}\bigl[\hat I_N\bigr] = \frac{1}{N}\,\mathrm{Var}\bigl[f(X)\bigr] = \frac{1}{N}\Bigl(\mathbb{E}\bigl[f(X)^2\bigr] - I^2\Bigr).
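A minimal sketch of this estimator (NumPy; the choices f(x) = x^2 and p = N(0, 1) are illustrative assumptions, so the exact answer I = \mathbb{E}[X^2] = 1 is known):

```python
# Plain Monte Carlo estimate of I = E_p[f(X)] with f(x) = x^2 and
# p = N(0, 1), so the exact value is E[X^2] = 1.
import numpy as np

rng = np.random.default_rng(0)

def mc_estimate(f, sampler, n):
    """Unbiased estimator: (1/N) * sum_i f(X_i), X_i ~ p."""
    return f(sampler(n)).mean()

f = lambda x: x ** 2
for n in (100, 10_000, 1_000_000):
    print(n, mc_estimate(f, rng.standard_normal, n))
# The estimates converge to I = 1 as N grows.
```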

The stochasticity trick of the Monte Carlo method (
Importance sampling
)

Suppose you have an integral that is difficult to compute directly:
\int f(x)\,dx = \int \frac{f(x)}{g(x)}\, g(x)\, dx = \mathbb{E}_g\left[\frac{f(X)}{g(X)}\right] \approx \frac{1}{m}\sum_{i=1}^m \frac{f(X_i)}{g(X_i)}, \quad X_i \sim g
\mathbb{E}_g[w(X)] \approx \frac{1}{m}\sum_{i=1}^m w(X_i), \quad w(X) = \frac{f(X)}{g(X)}
The importance sampling estimator for \mathbb{E}[f(X)] performs better than sampling from the original distribution when it has lower variance. For comparison, the variance of \hat I = \frac{1}{N}\sum f(x_i) with x_i \sim p is \frac{1}{N}\mathrm{Var}(f(X)), while the variance of \frac{1}{N}\sum f(x_i)\frac{p(x_i)}{g(x_i)} with x_i \sim g is \frac{1}{N}\mathrm{Var}\bigl(f(X)\frac{p(X)}{g(X)}\bigr); lower variance is preferable.
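As a rough numerical check (pure NumPy; p = N(0, 1), g = N(0, 2), and f(x) = x^2 are illustrative assumptions), sampling from the wider proposal g and reweighting by p/g gives the same mean with a smaller per-sample variance here, because g covers the tails of f(x)p(x) better than p does:

```python
# Importance sampling: estimate E_p[f(X)] with samples from a proposal g,
# weighting each sample by w(x) = p(x) / g(x).
import numpy as np

rng = np.random.default_rng(0)

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

f = lambda x: x ** 2
n = 100_000

x_p = rng.normal(0.0, 1.0, n)          # plain MC: X_i ~ p = N(0, 1)
plain = f(x_p)

x_g = rng.normal(0.0, 2.0, n)          # IS: X_i ~ g = N(0, 2)
w = normal_pdf(x_g, 0.0, 1.0) / normal_pdf(x_g, 0.0, 2.0)
weighted = w * f(x_g)

print("plain MC:", plain.mean(), "per-sample var:", plain.var())
print("IS      :", weighted.mean(), "per-sample var:", weighted.var())
```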
p(y) = \int p(x,y)\,dx = \int g(x)\, \frac{p(x,y)}{g(x)}\,dx = \mathbb{E}_g[w(X)]
where now w(X) = \frac{p(X, Y)}{g(X)}. The same weights give a self-normalized estimator of posterior expectations:
\mathbb{E}_{p(X|Y)}[f(X)] = \int p(x|Y)\,f(x)\,dx = \int \frac{p(x, Y)}{p(Y)}\, f(x)\,dx = \frac{1}{p(Y)} \int p(x, Y)\,f(x)\,dx = \frac{1}{p(Y)} \int \frac{g(x)}{g(x)}\, p(x, Y)\,f(x)\,dx = \frac{\mathbb{E}_{g}[w(X)f(X)]}{\mathbb{E}_{g}[w(X)]} \approx \frac{\frac{1}{m}\sum_{i=1}^m w(X_i)f(X_i)}{\frac{1}{m}\sum_{i=1}^m w(X_i)}
W_i = \frac{w(X_i)}{\sum_{j=1}^m w(X_j)}, \quad \mathbb{E}_{p(X|Y)}[f(X)] \approx \sum_{i=1}^m W_i f(X_i)
p(x|y) \approx \sum_{i=1}^m W_i\, \delta_{x_i}(x)
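A minimal sketch of the self-normalized estimator above, on a toy conjugate model that is my own assumption (prior X ~ N(0, 1), likelihood Y | X ~ N(X, 1), observed y = 1, proposal g = prior so that w(x) = p(y | x)); the analytic posterior mean is y/2 = 0.5:

```python
# Self-normalized importance sampling for E_{p(X|Y)}[f(X)].
# Toy model: X ~ N(0, 1), Y | X ~ N(X, 1), observed y = 1.
# With g = prior, w(x) = p(x, y) / g(x) = p(y | x), the likelihood.
import numpy as np

rng = np.random.default_rng(0)
y, m = 1.0, 100_000

x = rng.normal(0.0, 1.0, m)                            # X_i ~ g (prior)
w = np.exp(-0.5 * (y - x) ** 2) / np.sqrt(2 * np.pi)   # w(X_i) = p(y | X_i)
W = w / w.sum()                                        # normalized weights W_i

print("SNIS posterior mean:", (W * x).sum())           # ~ y / 2 = 0.5
```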
In short, Monte Carlo methods enable us to estimate otherwise intractable integrals by random sampling. In
Bayesian Statistics
,
Evidence
is also a form of integral, so it becomes tractable.
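For instance (same toy model as above, my own illustrative assumption), the evidence p(y) = \int p(y|x)\,p(x)\,dx is an expectation under the prior, so a plain Monte Carlo average of likelihood values estimates it:

```python
# Monte Carlo estimate of the evidence p(y) = E_prior[p(y | X)].
# Toy model: X ~ N(0, 1), Y | X ~ N(X, 1), so marginally Y ~ N(0, 2)
# and the exact evidence at y = 1 is N(1; 0, 2) ~ 0.2197.
import numpy as np

rng = np.random.default_rng(0)
y, m = 1.0, 100_000

x = rng.normal(0.0, 1.0, m)                             # X_i ~ prior
lik = np.exp(-0.5 * (y - x) ** 2) / np.sqrt(2 * np.pi)  # p(y | X_i)
print("MC evidence:", lik.mean())
```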
 

Monte Carlo variance

\mathbb{V}(\hat{f}) = \mathbb{V}\Bigl(\frac{1}{n}\sum_{i=1}^n f(X_i)\Bigr) = \frac{1}{n^2}\, \mathbb{V}\Bigl(\sum_{i=1}^n f(X_i)\Bigr)
Since the samples are
iid
, the variance of the sum is the sum of the n individual variances:
= \frac{1}{n^2}\, n\, \mathbb{V}(f(X)) = \frac{1}{n}\, \mathbb{V}(f(X))
In other words, as n \rightarrow \infty, the variance goes to zero.
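This 1/n decay is easy to see empirically (illustrative setup: f(x) = x^2 with X ~ N(0, 1), so \mathbb{V}(f(X)) = 2):

```python
# The variance of the n-sample Monte Carlo estimator shrinks like
# Var(f(X)) / n: growing n by 10x cuts the variance by ~10x.
import numpy as np

rng = np.random.default_rng(0)

for n in (100, 1_000, 10_000):
    # 1000 independent n-sample estimates of E[X^2], X ~ N(0, 1)
    est = (rng.standard_normal((1_000, n)) ** 2).mean(axis=1)
    print(n, est.var(), "vs Var(f(X))/n =", 2 / n)
```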
Monte Carlo Methods