Diffusion Language Model

Created: 2025 Mar 4 1:15
Creator: Seonglae Cho
Edited: 2026 Jan 21 15:41
So fast for simple tasks?
Diffusion Language Models
 
 
 

A bidirectional LM is a single-step text diffusion model

Better utilizes web data with incomplete causal structure, and shows improved performance under iterative learning
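The denoising loop behind this can be sketched as iterative unmasking: start from an all-mask sequence and let a bidirectional model fill in the most confident positions over a few steps. A minimal numpy sketch under loud assumptions: `MASK`, `VOCAB`, and `toy_bidirectional_model` are placeholders (the real thing is a trained transformer), not any specific DLM's API.

```python
import numpy as np

MASK = -1      # hypothetical mask token id
VOCAB = 10     # toy vocabulary size
SEQ_LEN = 8
STEPS = 4

rng = np.random.default_rng(0)

def toy_bidirectional_model(tokens):
    """Stand-in for a bidirectional LM: per-position logits over the vocab.
    A real DLM conditions on all currently unmasked tokens in both directions."""
    return rng.normal(size=(len(tokens), VOCAB))

def sample_masked_diffusion(seq_len=SEQ_LEN, steps=STEPS):
    tokens = np.full(seq_len, MASK)
    per_step = seq_len // steps
    for _ in range(steps):
        logits = toy_bidirectional_model(tokens)
        probs = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)
        conf = probs.max(-1)
        conf[tokens != MASK] = -np.inf      # only unmask masked positions
        idx = np.argsort(conf)[-per_step:]  # most confident positions this step
        tokens[idx] = probs[idx].argmax(-1)
    return tokens

out = sample_masked_diffusion()
```

Because each step commits several tokens in parallel rather than one token per forward pass, this is also why generation can feel fast on simple outputs.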
Inception Labs API
Non-autoregressive LLMs like Diffusion/Flow models (DLLMs) learn the joint distribution of prompts and responses, allowing attackers to reverse-sample prompts given a desired target response and quickly generate jailbreak prompts. This effectively converts expensive discrete prompt search into amortized inference.
Prompts generated on JailbreakBench have low perplexity (natural-sounding) and strong transferability. They transfer particularly well to robustly trained models (LAT, Circuit Breakers, etc.) and proprietary models (GPT-5). Using guidance further increases the attack success rate (ASR). As DLLMs become more powerful, the threat of "natural" low-cost jailbreak generators may grow.
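The reverse-sampling idea reduces to the same unmasking loop with the roles swapped: clamp the target response tokens, mask only the prompt positions, and denoise those. A minimal numpy sketch, assuming a toy stand-in model; `MASK`, `invert_prompt`, and `toy_joint_model` are illustrative names, not the attack's actual implementation.

```python
import numpy as np

MASK = -1   # hypothetical mask token id
VOCAB = 10  # toy vocabulary size
rng = np.random.default_rng(1)

def toy_joint_model(tokens):
    """Stand-in for a DLLM over the joint (prompt, response) sequence."""
    return rng.normal(size=(len(tokens), VOCAB))

def invert_prompt(target_response, prompt_len=6, steps=3):
    """Denoise only the prompt region; the target response stays clamped."""
    response = np.asarray(target_response)
    tokens = np.concatenate([np.full(prompt_len, MASK), response])
    per_step = max(1, prompt_len // steps)
    for _ in range(steps):
        logits = toy_joint_model(tokens)
        probs = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)
        conf = probs.max(-1)
        conf[prompt_len:] = -np.inf     # never resample the response
        conf[tokens != MASK] = -np.inf  # only fill still-masked positions
        idx = np.argsort(conf)[-per_step:]
        tokens[idx] = probs[idx].argmax(-1)
    return tokens[:prompt_len]

prompt = invert_prompt([3, 1, 4, 1, 5])
```

Each inversion is just a few forward passes, which is what makes the attack cheap once the model is trained; guidance toward the target response would slot in where the logits are produced.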
 
 
 
