The high-level intuition is that the denoising model recovers low-frequency (coarse) content in the early, high-noise steps of the reverse process and fills in high-frequency (fine) detail in the later, low-noise steps, mirroring how the forward process destroys high-frequency information first.
For image generation, the model is trained to predict hidden parts of the image, and the structural trick is to minimize the number of pixels the model must predict at once, which prevents quality degradation. A diffusion model generates images by adding noise to an image and then reversing that process; the noise removes information evenly across the entire image, reducing the correlation among the pixels the model has to predict.
The forward process gradually adds Gaussian noise via a Markov chain, modeling a process of steadily increasing noise. The decoder then learns to reverse this through a denoising process that models the noise distribution at each step, so new images can be generated by sampling from pure Gaussian noise. Because the likelihood is modeled explicitly, this avoids the GAN drawback of covering less of the generation space.
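As a minimal sketch (the function name `q_sample` and the linear beta schedule are conventional DDPM choices, not from the original notes), the forward process admits the closed form $q(x_t \mid x_0) = \mathcal{N}(\sqrt{\bar\alpha_t}\,x_0,\,(1-\bar\alpha_t)I)$, so a noisy sample at any step $t$ can be drawn in one shot:

```python
import torch

# Minimal sketch of the closed-form forward (noising) process,
# assuming a standard linear beta schedule as in DDPM.
T = 1000
betas = torch.linspace(1e-4, 0.02, T)        # noise schedule beta_t
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)    # cumulative product ᾱ_t

def q_sample(x0, t, noise=None):
    """Sample x_t ~ q(x_t | x_0) in closed form:
    x_t = sqrt(ᾱ_t) * x_0 + sqrt(1 - ᾱ_t) * ε, with ε ~ N(0, I)."""
    if noise is None:
        noise = torch.randn_like(x0)
    ab = alpha_bars.to(x0.device)[t].view(-1, 1, 1, 1)  # broadcast over (B, C, H, W)
    return ab.sqrt() * x0 + (1 - ab).sqrt() * noise
```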
Reverse KL divergence is mode-seeking and yields more expressive generated data. However, because the data is reduced to Gaussian noise, a set concentrated in a low-dimensional form, and then reconstructed, the model's grasp of complex high-dimensional structure may be limited.
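For reference, a standard way to state this distinction (my framing, not from the original notes): reverse KL places the model distribution in the first argument,

$$
D_{\mathrm{KL}}(p_\theta \,\|\, p_{\text{data}}) = \int p_\theta(x) \log \frac{p_\theta(x)}{p_{\text{data}}(x)}\, dx,
$$

which heavily penalizes the model for putting probability mass where the data density is near zero, so the model concentrates on a few modes (mode-seeking). Forward KL, $D_{\mathrm{KL}}(p_{\text{data}} \,\|\, p_\theta)$, instead penalizes missing data modes (mass-covering).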
$p_\theta(x_{t-1} \mid x_t) = \mathcal{N}\big(x_{t-1};\, \mu_\theta(x_t, t),\, \Sigma_\theta(x_t, t)\big)$ is the neural network we are training.
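A sketch of one reverse step under the usual DDPM assumptions, reusing the schedule from the forward-process sketch above; `eps_model` is a placeholder for the trained noise-prediction network, and $\Sigma_\theta$ is fixed to $\beta_t I$ (the simpler DDPM choice) rather than learned:

```python
import torch

@torch.no_grad()
def p_sample(eps_model, x_t, t):
    """One reverse step x_t -> x_{t-1} sampled from p_theta(x_{t-1} | x_t).
    Uses betas / alphas / alpha_bars from the forward-process sketch above.
    Assumes eps_model(x, t) returns the predicted noise ε_θ(x_t, t)."""
    beta_t = betas[t]
    t_batch = torch.full((x_t.shape[0],), t, device=x_t.device, dtype=torch.long)
    eps = eps_model(x_t, t_batch)                 # predicted noise ε_θ(x_t, t)
    # μ_θ(x_t, t) = (x_t - β_t / sqrt(1 - ᾱ_t) * ε_θ) / sqrt(α_t)
    mean = (x_t - beta_t / (1 - alpha_bars[t]).sqrt() * eps) / alphas[t].sqrt()
    if t == 0:
        return mean                               # no noise added at the final step
    # Σ_θ fixed to β_t * I instead of being learned
    return mean + beta_t.sqrt() * torch.randn_like(x_t)
```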
The loss is obtained by computing the KL divergence between the forward-process posterior and the reverse process at each step. To define a tractable loss, the problem is reparameterized so the network predicts the noise added at step $t$ rather than the image structure itself, which was empirically shown to improve performance. The forward-process sample $x_t$ is also drawn via the reparameterization trick, which keeps the variational posterior differentiable.
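A sketch of the resulting simplified objective $L_{\text{simple}} = \mathbb{E}_{t, x_0, \epsilon}\big[\lVert \epsilon - \epsilon_\theta(x_t, t) \rVert^2\big]$, building on `q_sample` above (again, `eps_model` is a hypothetical stand-in for the denoising network):

```python
import torch
import torch.nn.functional as F

def diffusion_loss(eps_model, x0):
    """Simplified DDPM loss: make the network predict the noise ε that
    produced x_t, at a timestep t drawn uniformly per sample."""
    t = torch.randint(0, T, (x0.shape[0],), device=x0.device)  # random step per sample
    noise = torch.randn_like(x0)                 # ε ~ N(0, I)
    x_t = q_sample(x0, t, noise)                 # reparameterized forward sample
    return F.mse_loss(eps_model(x_t, t), noise)  # ||ε - ε_θ(x_t, t)||²
```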
The denoising network is a U-Net-like CNN image-to-image model that predicts the noise; attention mechanisms, such as self-attention over feature-map patches or cross-attention for conditioning, can be added at the lower-resolution compression/decompression stages. Diffusion models use a positional embedding for each timestep which, like a Transformer's positional encoding, does not extrapolate effectively beyond the trained range.
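A sketch of the sinusoidal timestep embedding commonly used here; the construction mirrors the Transformer positional encoding, and `dim` and the base 10000 are conventional choices, not something the notes specify:

```python
import math
import torch

def timestep_embedding(t, dim):
    """Sinusoidal embedding of integer timesteps t (shape (B,)) into (B, dim);
    the same construction as the Transformer positional encoding."""
    half = dim // 2
    freqs = torch.exp(-math.log(10000.0) * torch.arange(half, device=t.device) / half)
    args = t.float()[:, None] * freqs[None, :]   # (B, 1) * (1, half) -> (B, half)
    return torch.cat([torch.sin(args), torch.cos(args)], dim=-1)
```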