Unlearning Benchmark

Creator
Seonglae Cho
Created
2025 Jul 21 11:24
Edited
2025 Jul 21 11:39
Refs
Unlearning benchmarks (including TOFU) carry one major risk: improving scores by simply breaking the model. If you merely make the model brittle so it can no longer say anything about the forget set, Forget Accuracy increases, but that is far from genuinely selective forgetting. This is why Retain Accuracy is also essential, and why a combined forget-and-retain score is used.
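A minimal sketch of why a combined score guards against "breaking the model": the function and metric names below are illustrative (not TOFU's exact formula), assuming a forget score (fraction of forget-set answers successfully removed) and retain accuracy combined via a harmonic mean, which stays high only when both are high.

```python
# Hypothetical combined unlearning score (illustrative, not TOFU's metric):
# harmonic mean of a forget score and retain accuracy. Breaking the model
# tanks retain accuracy, so the combined score collapses even at perfect forgetting.

def combined_score(forget_score: float, retain_acc: float) -> float:
    """Harmonic mean: rewards models that forget the forget set
    while preserving performance on the retain set."""
    if forget_score <= 0 or retain_acc <= 0:
        return 0.0
    return 2 * forget_score * retain_acc / (forget_score + retain_acc)

# A "broken" model forgets everything but also loses the retain set:
broken = combined_score(forget_score=1.0, retain_acc=0.1)
# A genuinely selective unlearner keeps retain accuracy high:
selective = combined_score(forget_score=0.9, retain_acc=0.85)
assert selective > broken
```

The harmonic mean (rather than an arithmetic average) is the key design choice here: it is dominated by the weaker of the two metrics, so gaming one side alone cannot raise the score.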
Unlearning Benchmarks

Unlearning evaluation methods

Machine Unlearning in 2024
As our ML models today become larger and their (pre-)training sets grow to inscrutable sizes, people are increasingly interested in the concept of machine unlearning to edit away undesired things like private data, stale knowledge, copyrighted materials, toxic/unsafe content, dangerous capabilities, and misinformation, without retraining models from scratch.
NeurIPS 2023 Machine Unlearning Challenge
Website for the NeurIPS 2023 Machine Unlearning Challenge.

Recommendations