AI Reasoning Length, Optimal Reasoning Length
Chain-of-Thought (CoT) length is not a case of "longer is better": accuracy follows an inverted U-curve, rising at first and then falling beyond a certain length, which implies an optimal reasoning length.
Too short (underthinking) and complex problems cannot be properly decomposed into steps; too long (overthinking) and errors accumulate, dragging performance down.
During RL training (e.g., GRPO, PPO), the average CoT length naturally shrinks → reward maximization converges toward the optimal length, revealing a simplicity bias.
https://arxiv.org/pdf/2502.07266v3
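A minimal sketch of how the inverted U can be read off empirically, assuming sampled solutions are logged as dicts with hypothetical cot_tokens and correct fields (these names are illustrative, not from the paper):

```python
import numpy as np

def accuracy_by_length(records, n_bins=10):
    """Bin sampled solutions by CoT token count and compute per-bin accuracy.
    On reasoning benchmarks this typically traces an inverted U:
    too short underthinks, too long accumulates errors."""
    lengths = np.array([r["cot_tokens"] for r in records], dtype=float)
    correct = np.array([r["correct"] for r in records], dtype=float)
    # Quantile bin edges so each bin holds roughly the same number of samples.
    edges = np.quantile(lengths, np.linspace(0.0, 1.0, n_bins + 1))
    bins = np.clip(np.searchsorted(edges, lengths, side="right") - 1, 0, n_bins - 1)
    acc = np.array([correct[bins == b].mean() if (bins == b).any() else np.nan
                    for b in range(n_bins)])
    centers = (edges[:-1] + edges[1:]) / 2
    return centers, acc

# Usage: centers, acc = accuracy_by_length(records)
#        best_length = centers[np.nanargmax(acc)]  # peak of the inverted U
```

The peak of the resulting curve gives a rough empirical estimate of the optimal CoT length for a given model and task distribution.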
Manifold Steering
LLM overthinking lives in a low-dimensional manifold of the activation space; by identifying this manifold and intervening along it, token usage can be reduced significantly while accuracy is maintained.
Manifold Steering: estimate the low-dimensional subspace of reasoning activations with PCA and steer only along it → overthinking is not a single direction but a phenomenon bound to a low-dimensional manifold.
Results: token reduction of up to ~71% across math, code, and QA tasks, with accuracy maintained or slightly improved.
https://arxiv.org/pdf/2505.22411v2
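A rough sketch of the idea as summarized above, assuming access to an (N, d) matrix of hidden states collected from overthinking traces; the subspace rank k, damping factor alpha, and intervention point are illustrative placeholders rather than the paper's exact procedure:

```python
import numpy as np

def estimate_reasoning_subspace(activations, k=8):
    """PCA sketch: take an (N, d) matrix of hidden states from overthinking
    traces and return the top-k principal directions (k, d) spanning the
    low-dimensional manifold to steer along."""
    centered = activations - activations.mean(axis=0, keepdims=True)
    # Right singular vectors = principal directions of the activation cloud.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:k]  # (k, d) orthonormal basis

def steer(hidden, basis, alpha=1.0):
    """Intervene only inside the estimated manifold: project the hidden state
    onto the k-dimensional subspace and damp that component by alpha."""
    coords = basis @ hidden          # (k,) coordinates within the subspace
    return hidden - alpha * (basis.T @ coords)
```

Steering only along the estimated subspace, rather than along a single direction, is what the note above identifies as the key difference from ordinary activation steering.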
Hot Mess of AI
When AI fails, it is more likely to fail as an inconsistent "hot mess" than as a dangerous agent consistently pursuing the wrong goal.
Model errors decompose into Bias (consistently wrong in the same way → systematic misalignment) and Variance (wrong in a different way each time → incoherent confusion); the proportion of variance in the error is defined as an incoherence metric (see the sketch after the links below).
The longer the reasoning and the harder the task, the more incoherent the errors become: the more thinking or actions taken, the more random the failures. AI overthinking significantly increases this incoherence.
The Hot Mess of AI: How Does Misalignment Scale with Model Intelligence and Task Complexity?
When AI systems fail, will they fail by systematically pursuing the wrong goals, or by being a hot mess?
We decompose the errors of frontier reasoning models into bias (systematic) and variance (incoherent) components and find that, as tasks get harder and reasoning gets longer, model failures become increasingly dominated by incoherence rather than systematic misalignment.
https://alignment.anthropic.com/2026/hot-mess-of-ai/
https://arxiv.org/abs/2601.23045
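A hedged sketch of the bias/variance decomposition behind the incoherence metric, assuming repeated scalar-scored answers per task; this is one natural reading of the metric described above, not necessarily the paper's exact definition:

```python
import numpy as np

def incoherence(samples_per_task, targets):
    """samples_per_task: list of arrays of repeated scalar-scored answers from
    the same model on the same task; targets: the corresponding correct values.
    Decompose mean squared error into bias^2 (systematic) and variance
    (incoherent) and return the variance share as the incoherence metric."""
    bias_sq, var = [], []
    for samples, target in zip(samples_per_task, targets):
        samples = np.asarray(samples, dtype=float)
        bias_sq.append((samples.mean() - target) ** 2)  # systematic component
        var.append(samples.var())                       # incoherent component
    bias_sq, var = np.mean(bias_sq), np.mean(var)
    return var / (bias_sq + var)  # 0 = purely systematic error, 1 = pure hot mess
```

Under this reading, longer reasoning and harder tasks push the ratio toward 1, matching the finding that failures become dominated by incoherence rather than systematic misalignment.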


Seonglae Cho