Flow Matching

Creator: Seonglae Cho
Created: 2024 Dec 28 14:28
Edited: 2026 Jan 15 17:53
Flow Matching (FM) is a training objective (loss/formulation), not a model architecture.
It presents a method to train a
CNF
without simulation, by regressing the model onto the
Vector Field
of fixed conditional probability paths. The path is generally expressed as a probability flow (
Vector Field
) that varies with time t.
This approach addresses the sampling-efficiency issues of existing diffusion models and enables a more efficient generation process by allowing diverse probability paths.
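As a concrete sketch of the objective, here is a minimal conditional flow matching loss assuming the common linear (optimal-transport-style) path x_t = (1 - t) * x_0 + t * x_1 with target velocity x_1 - x_0; the `velocity_model` interface is a placeholder for illustration, not a specific implementation from the paper.

```python
import torch

def cfm_loss(velocity_model, x1):
    """Conditional Flow Matching loss for one batch of data x1.

    Assumes a linear conditional path x_t = (1 - t) * x0 + t * x1 from
    Gaussian noise x0 to data x1, whose target velocity is x1 - x0.
    `velocity_model(x_t, t)` is a hypothetical network predicting the vector field.
    """
    x0 = torch.randn_like(x1)                      # sample from the source (noise) distribution
    t = torch.rand(x1.shape[0], device=x1.device)  # uniform time in [0, 1]
    t_expand = t.view(-1, *([1] * (x1.dim() - 1)))
    xt = (1 - t_expand) * x0 + t_expand * x1       # point on the conditional path
    target = x1 - x0                               # conditional vector field u_t(x | x1)
    pred = velocity_model(xt, t)                   # predicted velocity
    return ((pred - target) ** 2).mean()           # regress the vector field (MSE)
```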
Flow Matching
with
ODE
→ "following a map and driving in a consistent direction" while
Diffusion Model
with
SDE
→ "following a map, but random wind blows at each segment". Solver: the rule that determines how often and how precisely to apply steering in reverse direction (deterministic for ODE, stochastic for SDE). Note that diffusion models can also be sampled using ODE solvers (e.g., probability flow ODE).
Flow Matchings
 
 
 

flow-GRPO
flow_grpo (yifan123), NeurIPS 2025

Flow Matching
is ODE-based, so sampling is deterministic (lacking sample diversity), while RL requires stochastic exploration. In addition, RL data collection is expensive (many denoising steps), making it inefficient. Flow-GRPO replaces the
ODE
sampler with an
SDE
that preserves the same marginal distributions, which injects noise (enabling exploration). This makes the policy at each step Gaussian, allowing its log-probability to be computed in closed form.
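A minimal sketch of that idea, assuming a generic Euler-Maruyama-style step whose drift is built from the learned velocity and whose noise scale sigma_t is a chosen schedule (the exact marginal-preserving drift correction from the Flow-GRPO paper is not reproduced here): each step becomes a Gaussian policy whose log-probability is available in closed form.

```python
import torch
from torch.distributions import Normal

def sde_policy_step(velocity_model, x, t, dt, sigma_t):
    """One stochastic sampling step treated as a Gaussian policy.

    The mean is a hypothetical drift built from the learned velocity; the
    marginal-preserving drift used in Flow-GRPO adds a correction term that
    is omitted here for brevity. The point is that the transition
    x_next ~ N(mean, std^2) has a closed-form log-probability usable for RL.
    """
    mean = x + velocity_model(x, t) * dt          # deterministic drift part (assumed form)
    std = sigma_t * (dt ** 0.5)                   # diffusion part injects exploration noise
    dist = Normal(mean, std)
    x_next = dist.sample()
    log_prob = dist.log_prob(x_next).reshape(x.shape[0], -1).sum(-1)  # per-sample log-prob
    return x_next, log_prob
```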
During RL training, even with significantly reduced denoising steps (e.g., T=10), the reward signal is sufficient for effective learning. At inference time, the original step count (e.g., T=40) is restored to maintain final quality → cutting training-time sampling cost by 4×+. SD3.5-M improved on GenEval from 63% → 95% and on text rendering from 59% → 92%. Adding KL regularization suppresses
AI Reward Hacking
(quality/diversity collapse) while maintaining the performance gains.
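For completeness, a sketch of how those closed-form log-probabilities can feed a GRPO-style clipped update with a KL penalty against a frozen reference policy; the clip range and KL weight are placeholder hyperparameters, not the paper's settings.

```python
import torch

def grpo_loss(logp_new, logp_old, logp_ref, rewards, clip_eps=0.2, kl_coef=0.01):
    """GRPO-style clipped objective with KL regularization.

    logp_*  : (group_size,) trajectory log-probabilities under the current,
              behavior (old), and frozen reference policies.
    rewards : (group_size,) rewards for one prompt's group of samples.
    clip_eps and kl_coef are placeholder values, not the paper's settings.
    """
    # Group-relative advantage: normalize rewards within the group.
    adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    ratio = torch.exp(logp_new - logp_old)                      # importance ratio
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps)
    policy_loss = -torch.min(ratio * adv, clipped * adv).mean()
    # k3 estimator of KL(new || ref), as commonly used in GRPO-style methods.
    kl = torch.exp(logp_ref - logp_new) - (logp_ref - logp_new) - 1
    return policy_loss + kl_coef * kl.mean()
```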
 
 
 
