Diffusion Probabilistic Model (DPM), Variational Diffusion Model
The high-level intuition is that the forward (noising) process destroys high-frequency detail first and coarse structure last, so the denoising model generates low-frequency (coarse) content in the early, high-noise steps of the reverse process and fills in high-frequency detail in the later, low-noise steps.
Masked-prediction models are trained to fill in masked portions of an image, relying on structural tricks to limit how many pixels must be predicted at once so that quality does not degrade. Diffusion models instead generate images by adding noise and then reversing it; the noise removes information evenly across the entire image, gradually destroying pixel correlations.
Gaussian noise is added gradually through a Markov chain to model an increasing-noise process. Images are generated by starting from pure Gaussian noise; the decoder then learns to reverse the noising process step by step by modeling the noise distribution. Because the likelihood is modeled explicitly (via a variational bound), diffusion avoids the GAN drawback of covering less of the generation space.
Marginalization
Forward process

$$q(x_t \mid x_{t-1}) = \mathcal{N}\!\left(x_t;\ \sqrt{1-\beta_t}\,x_{t-1},\ \beta_t I\right)$$

which means

$$x_t = \sqrt{1-\beta_t}\,x_{t-1} + \sqrt{\beta_t}\,\epsilon,\qquad \epsilon \sim \mathcal{N}(0, I)$$

Marginalizing out the intermediate steps gives a closed form, so $x_t$ can be interpreted as a linear combination of $x_0$ and Gaussian noise:

$$x_t = \sqrt{\bar{\alpha}_t}\,x_0 + \sqrt{1-\bar{\alpha}_t}\,\epsilon,\qquad \bar{\alpha}_t = \prod_{s=1}^{t}(1-\beta_s)$$
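A minimal NumPy sketch of this closed-form forward sampling (assuming a linear $\beta_t$ schedule; `T`, `betas`, `alpha_bar`, and `q_sample` are illustrative names, not from the note):

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)      # beta_t: noise added per step
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)          # \bar{alpha}_t = prod_s (1 - beta_s)

def q_sample(x0, t, rng=np.random.default_rng()):
    """Sample x_t ~ q(x_t | x_0) directly via the marginalized forward process."""
    eps = rng.standard_normal(x0.shape)                 # eps ~ N(0, I)
    xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return xt, eps
```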
Reverse process with

$$p_\theta(x_{t-1} \mid x_t) = \mathcal{N}\!\left(x_{t-1};\ \mu_\theta(x_t, t),\ \Sigma_\theta(x_t, t)\right)$$

variational posterior

$$q(x_{t-1} \mid x_t, x_0) = \mathcal{N}\!\left(x_{t-1};\ \tilde{\mu}_t(x_t, x_0),\ \tilde{\beta}_t I\right)$$
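For reference, the standard closed-form mean and variance of this Gaussian posterior (with $\alpha_t = 1 - \beta_t$):

$$\tilde{\mu}_t(x_t, x_0) = \frac{\sqrt{\bar{\alpha}_{t-1}}\,\beta_t}{1-\bar{\alpha}_t}\,x_0 + \frac{\sqrt{\alpha_t}\,(1-\bar{\alpha}_{t-1})}{1-\bar{\alpha}_t}\,x_t,\qquad \tilde{\beta}_t = \frac{1-\bar{\alpha}_{t-1}}{1-\bar{\alpha}_t}\,\beta_t$$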
Reverse KL divergence is mode-seeking, which the model exploits to generate more expressive samples. However, because the Gaussian noise is reduced to a set concentrated in a low-dimensional form and then reconstructed, the model's understanding of complex, high-dimensional structure can be limited.
$p_\theta(x_{t-1} \mid x_t)$, parameterized by its mean $\mu_\theta(x_t, t)$, is the neural network we are training.
When both distributions are normal, the KL divergence between them can be computed in closed form.
Reparameterization trick
We compute the KL divergence between the forward-process posterior and the reverse process to get the loss. To define the loss, the problem is reparameterized so the network predicts the noise added at step $t$ rather than the image content itself, which was empirically shown to improve performance. We also reuse the reparameterized sample from the forward process, $x_t = \sqrt{\bar{\alpha}_t}\,x_0 + \sqrt{1-\bar{\alpha}_t}\,\epsilon$, so that the variational objective stays differentiable (a minimal training-step sketch follows the list below).
- The per-step weighting coefficient of the noise-prediction term is often fixed at 1 regardless of the step (the simplified objective $L_{\text{simple}}$)
- $\epsilon_\theta(x_t, t)$, a neural network that predicts the noise, is used instead of predicting the mean $\mu_\theta(x_t, t)$ directly
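A minimal PyTorch sketch of this simplified noise-prediction objective (`model` stands for any $\epsilon_\theta(x_t, t)$ network and `alpha_bar` for a precomputed 1-D tensor of $\bar{\alpha}_t$ values; both names are illustrative):

```python
import torch
import torch.nn.functional as F

def loss_simple(model, x0, alpha_bar):
    """L_simple: predict the noise eps added at a random step t; weight fixed to 1."""
    b = x0.shape[0]
    t = torch.randint(0, len(alpha_bar), (b,), device=x0.device)   # random step per sample
    eps = torch.randn_like(x0)                                      # eps ~ N(0, I)
    ab = alpha_bar[t].view(b, *([1] * (x0.dim() - 1)))              # broadcast \bar{alpha}_t
    xt = ab.sqrt() * x0 + (1.0 - ab).sqrt() * eps                   # reparameterized forward sample
    eps_pred = model(xt, t)                                         # eps_theta(x_t, t)
    return F.mse_loss(eps_pred, eps)                                # || eps - eps_theta ||^2
```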
Network Architecture
The denoising network is a U-Net-like CNN image-to-image model that predicts the noise at each step; attention mechanisms, such as self-attention over image patches and cross-attention for conditioning, can be added at the lower-resolution compression/decompression stages. Diffusion uses a positional embedding for each time step, which, like Transformer positional encodings, does not extrapolate effectively beyond the steps seen in training.
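A minimal sketch of such a per-step positional embedding, i.e. Transformer-style sinusoidal features applied to the step index rather than a token position (function name and `dim` are illustrative; `dim` is assumed even):

```python
import math
import torch

def timestep_embedding(t, dim):
    """Map integer steps t (shape [B]) to [B, dim] sin/cos features."""
    half = dim // 2
    freqs = torch.exp(-math.log(10000.0) * torch.arange(half, dtype=torch.float32) / half)
    args = t.float()[:, None] * freqs[None, :]          # [B, dim/2] phase values
    return torch.cat([torch.sin(args), torch.cos(args)], dim=-1)
```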
Diffusion Model Notion
Diffusion Model Usages
Diffusion Models
Tutorial
smalldiffusion (yuanchenyang)
Through noise prediction, the denoiser can be viewed as an "approximate projection" onto the data manifold: its output is equivalent to the gradient of a smoothed distance function to the manifold (a Moreau envelope). As a metaphor, the trained denoiser produces force vectors that gradually bend sample trajectories toward the data manifold.
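One way to make this concrete (a sketch via Tweedie's formula for the Gaussian-smoothed data density $p_\sigma$, not necessarily the tutorial's exact derivation): for $x = x_0 + \sigma\epsilon$,

$$\hat{x}_0(x) = \mathbb{E}[x_0 \mid x] = x + \sigma^2\,\nabla_x \log p_\sigma(x)$$

so the denoising correction $x - \hat{x}_0(x) = -\sigma^2\,\nabla_x \log p_\sigma(x)$; when the data concentrates near a manifold, this behaves like the gradient of a smoothed squared-distance function to the manifold, i.e. $\hat{x}_0$ acts as an approximate projection.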
Under this view, DDIM can be interpreted as gradient descent on that smoothed distance; combining it with momentum and DDPM-style stochastic techniques can improve convergence speed and image quality.
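A minimal sketch of one deterministic DDIM update in this estimate-then-move form (assuming the same illustrative `model` and `alpha_bar` as above; $\eta = 0$, no added noise):

```python
import torch

@torch.no_grad()
def ddim_step(model, xt, t, t_prev, alpha_bar):
    """One DDIM step x_t -> x_{t_prev}: estimate x_0, then re-noise to level t_prev."""
    ab_t, ab_prev = alpha_bar[t], alpha_bar[t_prev]
    eps = model(xt, torch.full(xt.shape[:1], t, device=xt.device))   # eps_theta(x_t, t)
    x0_hat = (xt - (1.0 - ab_t).sqrt() * eps) / ab_t.sqrt()          # current projection-like estimate of x_0
    return ab_prev.sqrt() * x0_hat + (1.0 - ab_prev).sqrt() * eps    # deterministic move toward the estimate
```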