Dataset Distillation

Creator
Seonglae Cho
Created
2025 Apr 26 21:29
Edited
2025 May 20 12:09
Refs
Dataset
Synthesize a small number of data points that do not need to come from the true data distribution but that, when given to the learning algorithm as training data, yield a model approximating the one trained on the original data.
For example, the original paper shows that it is possible to compress the 60,000 MNIST training images into just 10 synthetic distilled images (one per class) and achieve close-to-original performance with only a few gradient descent steps, given a fixed network initialization.
Instead of matching the data distribution, this method creates synthetic samples that "quickly move model parameters in a good direction." The reason this works: under random initialization, each model's initial weights produce a different loss landscape, so a dataset matched to the data distribution is not guaranteed to suit them all. Directly optimizing the synthetic dataset, by backpropagating the post-update loss on real data through the learner's gradient step, produces updates that remain effective across initializations.
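A minimal sketch of this inner/outer (bilevel) loop in PyTorch, assuming a toy functional linear classifier and a random stand-in for the real MNIST batch; the model, shapes, and hyperparameters are illustrative, not the paper's exact setup. The distilled images and a learnable inner step size are updated by backpropagating the post-update loss on real data through one inner gradient step taken from a fresh random initialization.

```python
import torch
import torch.nn.functional as F

n_classes, dim = 10, 28 * 28

# Learnable distilled data: one synthetic "image" per class, plus a learnable
# inner-loop step size. Both are optimized by the outer loop.
x_syn = torch.randn(n_classes, dim, requires_grad=True)
y_syn = torch.arange(n_classes)                      # fixed labels, one per class
lr_inner = torch.tensor(0.02, requires_grad=True)
outer_opt = torch.optim.Adam([x_syn, lr_inner], lr=1e-3)

def forward(w, b, x):
    # Functional linear classifier so the inner update stays differentiable.
    return x @ w + b

for step in range(1000):
    # Fresh random initialization each outer step, so the distilled data must
    # work across initializations rather than for one fixed network.
    w0 = (torch.randn(dim, n_classes) * 0.01).requires_grad_()
    b0 = torch.zeros(n_classes, requires_grad=True)

    # Inner step: one gradient-descent update of the model on the synthetic data.
    inner_loss = F.cross_entropy(forward(w0, b0, x_syn), y_syn)
    gw, gb = torch.autograd.grad(inner_loss, (w0, b0), create_graph=True)
    w1, b1 = w0 - lr_inner * gw, b0 - lr_inner * gb

    # Outer loss: the updated model should fit real data; gradients flow back
    # through the inner update into x_syn and lr_inner.
    x_real = torch.randn(64, dim)                    # stand-in for a real MNIST batch
    y_real = torch.randint(0, n_classes, (64,))
    outer_loss = F.cross_entropy(forward(w1, b1, x_real), y_real)

    outer_opt.zero_grad()
    outer_loss.backward()
    outer_opt.step()
```

With real MNIST batches in place of the random stand-in, the same loop recovers the paper's one-image-per-class setting; averaging the outer loss over several sampled initializations makes the distilled images robust rather than tied to a single fixed network.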
 
 
 
 
 
