Dataset Extraction Attack

Creator
Creator
Seonglae ChoSeonglae Cho
Created
Created
2025 Nov 3 18:4
Editor
Edited
Edited
2025 Nov 11 10:51
 
 
 
 
 
 

Divergence attack (2023)

Repeated Token Phenomenon
to extract
Pretraining Dataset

Even modern large LLMs like ChatGPT allow extraction of training data (including PII) through simple prompts, and current alignment and safety techniques fundamentally fail to solve the memorization problem.

Extracting alignment data (2025) -
Synthetic Data Generation
Template attack

Models can reproduce training data used during alignment phases (SFT, RL) either verbatim or in similar form. Since chat templates (<|user|>, <|assistant|>) are introduced only during alignment, using them as prompts enables regeneration of alignment data through unconditional batch generation without context and only BOS or special token template prefix. Collecting model-generated data and reusing it for SFT/RL can restore performance similar to models trained on original data. Even in RL, regurgitation of training samples occurs during PPO/RLVR phases.
Knowledge Distillation
effectively operates as
Dataset Distillation
.
Semantic similarity (embedding similarity ≥0.95) is defined as "semantic memorization". Traditional string similarity-based detection (Levenshtein, etc.) underestimates actual memorization rates by at least 10x.
 
 
 

Recommendations