671B parameter DeepSeek R1
- Backtracking feature
- "Answer Quickly" feature, which cuts the thought trace short
- Self-correction feature
- Attention sink feature
We have trained the first ever sparse autoencoders (SAEs) on the 671B parameter DeepSeek R1 model and open-sourced the SAEs.
https://www.goodfire.ai/blog/under-the-hood-of-a-reasoning-model

Goodfire/DeepSeek-R1-SAE-l37 · Hugging Face
https://huggingface.co/Goodfire/DeepSeek-R1-SAE-l37
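A minimal sketch of how such an SAE reads one layer's residual-stream activations, assuming a standard ReLU SAE architecture; the dimensions below (DeepSeek R1's hidden size of 7168, a 16x expansion) are illustrative guesses, not Goodfire's published config.

```python
# Minimal sketch of a sparse autoencoder (SAE) forward pass, assuming a
# standard ReLU SAE; dimensions are illustrative, not Goodfire's actual setup.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_sae: int):
        super().__init__()
        self.W_enc = nn.Parameter(torch.randn(d_model, d_sae) * 0.01)
        self.b_enc = nn.Parameter(torch.zeros(d_sae))
        self.W_dec = nn.Parameter(torch.randn(d_sae, d_model) * 0.01)
        self.b_dec = nn.Parameter(torch.zeros(d_model))

    def forward(self, x: torch.Tensor):
        # Encode residual-stream activations into a sparse feature space
        f = torch.relu((x - self.b_dec) @ self.W_enc + self.b_enc)
        # Decode back to the residual stream
        x_hat = f @ self.W_dec + self.b_dec
        return x_hat, f

# Illustrative usage: activations taken at one layer (e.g. layer 37, matching
# the repo name "SAE-l37"), encoded, then inspected for which features fire.
sae = SparseAutoencoder(d_model=7168, d_sae=7168 * 16)  # hypothetical sizes
acts = torch.randn(4, 7168)           # stand-in for real R1 activations
recon, features = sae(acts)
print(features.gt(0).float().mean())  # fraction of active features
```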
Skip transcoders trained on every MLP layer of DeepSeek-R1-Distill-Qwen-1.5B
EleutherAI/skip-transcoder-DeepSeek-R1-Distill-Qwen-1.5B-65k · Hugging Face
https://huggingface.co/EleutherAI/skip-transcoder-DeepSeek-R1-Distill-Qwen-1.5B-65k
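A hedged sketch of the skip-transcoder idea: where an SAE reconstructs its own input, a transcoder maps each MLP's input to its output through a sparse dictionary, plus a linear skip path. The 65k dictionary size matches the repo name; the hidden size (1536 for the Qwen 1.5B distill) and the TopK value are assumptions.

```python
# Hedged sketch of a skip transcoder: predicts the MLP output from the MLP
# input via a sparse TopK code plus a linear skip term W_skip @ x.
# Shapes and the value of k are assumptions, not EleutherAI's exact config.
import torch
import torch.nn as nn

class SkipTranscoder(nn.Module):
    def __init__(self, d_model: int, d_dict: int, k: int):
        super().__init__()
        self.k = k
        self.W_enc = nn.Parameter(torch.randn(d_model, d_dict) * 0.01)
        self.b_enc = nn.Parameter(torch.zeros(d_dict))
        self.W_dec = nn.Parameter(torch.randn(d_dict, d_model) * 0.01)
        self.W_skip = nn.Parameter(torch.zeros(d_model, d_model))
        self.b_dec = nn.Parameter(torch.zeros(d_model))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Sparse code: keep only the top-k pre-activations per token
        pre = x @ self.W_enc + self.b_enc
        topk = torch.topk(pre, self.k, dim=-1)
        f = torch.zeros_like(pre).scatter_(-1, topk.indices, torch.relu(topk.values))
        # Predicted MLP output = sparse dictionary term + linear skip term
        return f @ self.W_dec + x @ self.W_skip + self.b_dec

# Illustrative: Qwen 1.5B hidden size is 1536; 65536 matches "65k" in the repo name.
tc = SkipTranscoder(d_model=1536, d_dict=65536, k=32)  # k is a guess
mlp_in = torch.randn(4, 1536)
mlp_out_hat = tc(mlp_in)
```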
Manifold Steering
Manifold_Steering
Aries-iai • Updated 2025 Dec 3 9:40
LLM overthinking lives in a low-dimensional manifold of the activation space; by aligning and intervening along this manifold, output tokens can be significantly reduced while maintaining accuracy. Manifold Steering: estimate the low-dimensional subspace of reasoning activations using PCA and steer only along it. Overthinking is not a single direction but a phenomenon bound to a low-dimensional manifold. Results: token reduction of up to ~71% across math, code, and QA tasks, with accuracy maintained or slightly improved. A sketch of the idea follows below.
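A hedged sketch of the approach described above: fit the reasoning-activation subspace with PCA, then rescale only the component of a hidden state that lies inside that manifold. The steering rule, the manifold dimension, and all variable names here are illustrative, not the authors' exact procedure.

```python
# Hedged sketch of manifold steering: PCA finds a low-dimensional subspace of
# reasoning activations; steering damps only the in-manifold component.
import numpy as np

def fit_manifold(acts: np.ndarray, m: int) -> tuple[np.ndarray, np.ndarray]:
    """PCA via SVD: return the activation mean and the top-m principal directions."""
    mu = acts.mean(axis=0)
    _, _, vt = np.linalg.svd(acts - mu, full_matrices=False)
    return mu, vt[:m]  # shapes: (d,), (m, d)

def steer(h: np.ndarray, mu: np.ndarray, basis: np.ndarray, alpha: float) -> np.ndarray:
    """Rescale the component of h that lies inside the overthinking manifold."""
    coords = (h - mu) @ basis.T           # coordinates within the subspace
    in_manifold = coords @ basis          # component of h inside the manifold
    return h + (alpha - 1.0) * in_manifold  # alpha < 1 damps overthinking

# Toy usage: 10k hidden states of width 1536, an 8-dim manifold, damping 0.5
acts = np.random.randn(10_000, 1536).astype(np.float32)
mu, basis = fit_manifold(acts, m=8)
h_steered = steer(acts[0], mu, basis, alpha=0.5)
```

With alpha = 1 the hidden state is unchanged, and with alpha = 0 the in-manifold component is removed entirely; intermediate values trade thought-trace length against accuracy.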

Seonglae Cho