r1-interpretability
goodfire-ai • Updated 2025 May 22 18:09
The presence of an additional attention sink beyond the first token suggests that reasoning models may be fundamentally different from instruction-tuned models, assuming that instruction-tuned models do not exhibit this extra sink. Its prominence implies that R1 treats its chain-of-thought prefix as part of the input context. We may therefore mechanistically view the reasoning process as a "self-generated context" that guides the model toward its final answer.
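As a rough illustration of how such a sink can be surfaced, one can measure how much attention mass each key position absorbs, averaged over heads and query positions, and flag positions that absorb a disproportionate share. This is a minimal sketch with toy data, not the repository's actual analysis; the function name, threshold, and random attention weights are all assumptions made for illustration.

```python
import torch

def attention_sinks(attn, threshold=0.25):
    """Return key positions that absorb a disproportionate share of attention.

    attn: (num_heads, seq_len, seq_len) attention weights for one layer,
          with each query row summing to 1.
    threshold: average attention mass a position must receive to be flagged;
               the value here is an illustrative assumption, not a tuned constant.
    """
    # Average over heads, then over query positions: how much attention
    # each key position receives on average.
    received = attn.mean(dim=0).mean(dim=0)  # shape (seq_len,)
    return (received > threshold).nonzero(as_tuple=True)[0].tolist()

# Toy demo with random causally masked attention weights.
heads, seq = 4, 16
scores = torch.randn(heads, seq, seq)
mask = torch.tril(torch.ones(seq, seq)).bool()
attn = scores.masked_fill(~mask, float("-inf")).softmax(dim=-1)

# Prints whichever positions (if any) absorb more than `threshold` of the mass.
# On real R1 attention maps, one would look for high-mass positions beyond token 0,
# e.g. at the start of the chain-of-thought prefix.
print(attention_sinks(attn))
```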
ROSCOE-based labeling of Gemma 2B chains of thought on GSM8K, combined with SAE probing and analysis. Specific features activate strongly whenever the model makes arithmetic errors or logical inconsistencies, and in the sparse feature space, errors of the same type tend to cluster together. Conversely, using SAE feature patterns, it was possible to extract consistent "error signals" within the CoT. This could be a starting point for activation-based CoT debugging, as sketched below.
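A minimal sketch of what such an activation-based CoT debugger might look like, assuming one already has SAE feature activations per CoT token and a set of feature indices that the ROSCOE labels identified as error-correlated. The shapes, feature indices, and threshold below are illustrative assumptions, not values from the repository.

```python
import numpy as np

def flag_error_steps(feature_acts, error_features, threshold=0.5):
    """Flag CoT token positions where putative 'error' SAE features fire.

    feature_acts: (num_tokens, num_features) SAE feature activations for one
                  chain of thought.
    error_features: indices of features previously found (e.g. via ROSCOE
                    labels) to correlate with arithmetic or logic errors.
    threshold: activation level above which a feature counts as active;
               chosen here purely for illustration.
    """
    err = feature_acts[:, error_features]   # (num_tokens, num_error_features)
    fired = (err > threshold).any(axis=1)   # did any error feature fire at this token?
    return np.flatnonzero(fired).tolist()

# Toy demo: random low activations, with a made-up error feature firing at step 12.
acts = np.abs(np.random.randn(40, 16384)) * 0.1
acts[12, 101] = 3.0
print(flag_error_steps(acts, error_features=[101, 2048]))  # should flag step 12
```

In practice the interesting part is choosing `error_features`: here they would come from correlating SAE activations with the ROSCOE error labels, rather than being hand-picked as in this toy example.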
Chain-of-thought (CoT) reasoning does not align with the actual reasoning process. For example, models silently correct calculation and logical errors without mentioning the correction. In reverse comparison questions, models distort facts or change dates to produce consistent but biased answers. When solving Putnam problems, they take illogical steps to reach the "correct answer" quickly while hiding the flaws. Overall, the models simulate reasoning as if they already know the answer.