## Law of Large Numbers (LLN)

When $X_1, \dots, X_n$ are iid with $\mathbb{E}[X_1] = \dots = \mathbb{E}[X_n] = \mu$,

$$\bar{X}_n := \frac{1}{n} \sum_{i=1}^n X_i \xrightarrow{n \to \infty} \mu, \qquad \mathrm{Var}(\bar{X}_n) = \frac{\sigma^2}{n}.$$

**Condition:** basically requires iid; however, an ergodicity assumption also works.

## Variance of a Monte Carlo estimator

$$\mathbb{V}(\hat{f}) = \mathbb{V}\!\left(\frac{1}{m}\sum_{i=1}^m f(X_i)\right) = \frac{1}{m^2}\,\mathbb{V}\!\left(\sum_{i=1}^m f(X_i)\right)$$

Since the samples are iid,

$$= \frac{1}{m^2}\, m\, \mathbb{V}(f(X)) = \frac{1}{m}\, \mathbb{V}(f(X)).$$

In other words, as $m \rightarrow \infty$, the variance goes to zero.

## De Moivre–Laplace theorem

In probability theory, the de Moivre–Laplace theorem, which is a special case of the central limit theorem, states that the normal distribution may be used as an approximation to the binomial distribution under certain conditions. In particular, the theorem shows that the probability mass function of the random number of "successes" observed in a series of $n$ independent Bernoulli trials, each having probability $p$ of success, converges to the probability density function of the normal distribution with mean $np$ and standard deviation $\sqrt{np(1-p)}$, as $n$ grows large, assuming $p$ is not $0$ or $1$.

https://en.wikipedia.org/wiki/De_Moivre–Laplace_theorem
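The $\mathbb{V}(\hat{f}) = \mathbb{V}(f(X))/m$ scaling above is easy to check empirically. A minimal sketch, using only the standard library (the names `mc_estimate`, `f`, and `sampler` are illustrative, not from the notes): we estimate $\mathbb{E}[X^2]$ for $X \sim \mathrm{Uniform}(0,1)$ (true value $1/3$) and watch the variance of the estimator shrink as $m$ grows.

```python
import random
import statistics

def mc_estimate(f, sampler, m, rng):
    """Monte Carlo estimator f_hat = (1/m) * sum_i f(X_i)."""
    return sum(f(sampler(rng)) for _ in range(m)) / m

rng = random.Random(0)
f = lambda x: x * x          # estimate E[X^2]; true value is 1/3
sampler = lambda r: r.random()  # X ~ Uniform(0, 1)

# Repeat the estimator many times for each m; its empirical variance
# across runs should shrink roughly like V(f(X)) / m.
for m in (10, 100, 1000):
    estimates = [mc_estimate(f, sampler, m, rng) for _ in range(200)]
    print(m, statistics.variance(estimates))
```

Each tenfold increase in `m` should cut the printed variance by roughly a factor of ten, matching the $1/m$ rate derived above.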
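The de Moivre–Laplace approximation can likewise be checked numerically: for moderately large $n$, the binomial pmf at $k$ should be close to the normal density with mean $np$ and standard deviation $\sqrt{np(1-p)}$. A small stdlib-only sketch (the helper names `binom_pmf` and `normal_pdf` are assumptions for illustration):

```python
import math

def binom_pmf(k, n, p):
    """Exact Binomial(n, p) probability mass at k."""
    return math.comb(n, k) * p**k * (1 - p) ** (n - k)

def normal_pdf(x, mu, sigma):
    """Normal(mu, sigma^2) density at x."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

n, p = 200, 0.3
mu, sigma = n * p, math.sqrt(n * p * (1 - p))  # mean np, sd sqrt(np(1-p))

# Compare exact pmf against the normal density near the mean.
for k in (50, 60, 70):
    print(k, binom_pmf(k, n, p), normal_pdf(k, mu, sigma))
```

The two columns agree to a few decimal places around $k = np$, and the agreement improves as $n$ grows, as the theorem predicts.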