AI Confession

Creator: Seonglae Cho
Created: 2026 Jan 1 17:3
Edited: 2026 Mar 22 0:52
When models break rules or take shortcuts (
AI Reward Hacking
), the problem may go undetected as long as the result looks plausible. The approach adds a separate 'confession' output after the model's main output; the confession is rewarded only for honesty, with no penalty for admitting a violation. Across hallucinations, instruction violations, reward hacking, and scheming, confession rates for actual violations are very high (average false-negative rate ~4.4%). Like
CoT Auditing
, it serves as a transparency tool and can be used as one layer of a safety-and-honesty stack.
In reward-hacking situations, an honest confession is both easier to produce than a lie and easier to verify, making honest confession the reward-maximizing policy.
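The incentive structure above can be sketched as a toy reward function. This is a minimal illustration, not the paper's actual training setup: all function and field names are assumptions. The key property is that the confession channel is graded only on honesty, so admitting a real violation never costs reward relative to denying it.

```python
# Toy sketch of a confession-style reward scheme (names are hypothetical).

def task_reward(solved: bool) -> float:
    """Reward for the main task output, independent of the confession."""
    return 1.0 if solved else 0.0

def confession_reward(confessed: bool, actually_violated: bool) -> float:
    """Reward the confession channel only for honesty: no penalty for admitting."""
    honest = (confessed == actually_violated)
    return 1.0 if honest else 0.0

def total_reward(solved: bool, confessed: bool, actually_violated: bool) -> float:
    # Because the confession reward is additive and honesty-only,
    # confessing a real violation strictly dominates denying it.
    return task_reward(solved) + confession_reward(confessed, actually_violated)

# A model that violated and confesses earns more than one that violated and denies:
print(total_reward(True, True, True))   # → 2.0
print(total_reward(True, False, True))  # → 1.0
```

Under this additive scheme, lying can only lose confession reward and gains nothing, which is why honest confession becomes the path to reward maximization.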
How confessions can keep language models honest
We’re sharing an early, proof-of-concept method that trains models to report when they break instructions or take unintended shortcuts.
Why we are excited about confession! — LessWrong
Boaz Barak, Gabriel Wu, Jeremy Chen, Manas Joglekar …
AI cyberattack capability now exceeds that of human pen-testers
MLSN #19: Honesty, Disempowerment, & Cybersecurity
Also, a new AI safety fellowship for experienced researchers
Backtracking: Reset token as a control tool 2024
arxiv.org

SRFT: Self-Report Fine-Tuning

Hidden objective execution ability remains intact → the model is not actually better behaved, just better at confessing. However, honesty is confirmed to generalize strongly through training.
arxiv.org
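The ~4.4% false-negative figure above is the fraction of real violations the model failed to confess. A minimal sketch of computing that metric from labeled episodes (the record layout and field names here are assumptions for illustration):

```python
# Sketch: false-negative rate of confessions over labeled episodes.
# Each episode records whether a violation actually occurred and whether
# the model confessed it (field names are hypothetical).

episodes = [
    {"violated": True,  "confessed": True},
    {"violated": True,  "confessed": True},
    {"violated": True,  "confessed": False},  # missed confession = false negative
    {"violated": False, "confessed": False},
]

violations = [e for e in episodes if e["violated"]]
false_negatives = [e for e in violations if not e["confessed"]]

# Fraction of actual violations that went unconfessed
fn_rate = len(false_negatives) / len(violations)
print(f"false-negative rate: {fn_rate:.1%}")  # → 33.3%
```

With the toy data above the rate is 1/3; the reported result corresponds to this quantity averaging about 4.4% across violation categories.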