AI Reasoning

Creator
Seonglae Cho
Created
2024 Apr 1 6:21
Edited
2025 Jun 20 17:14

Model Generalization

Reasoning means using information beyond what is given, or building logical steps to reach conclusions that are not explicitly stated. The idea that greater compression leads to greater intelligence has strong philosophical grounding: Pretraining compresses data into generalized abstractions that connect different concepts through analogies, while reasoning is a specific Problem Solving skill that involves careful thinking to unlock various problem-solving capabilities.
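
The compression view can be made concrete: a language model's cross-entropy loss is the code length it would achieve as a compressor under arithmetic coding, so lower loss literally means better compression. A minimal sketch of this correspondence, using a small placeholder model (gpt2) rather than any model discussed in this note:

```python
# Minimal sketch: under arithmetic coding, a language model compresses text to
# roughly -log2 p(token) bits per token, which is exactly its cross-entropy
# loss expressed in bits. The model below is a small placeholder.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

text = "The derivative of x squared is two x."
enc = tok(text, return_tensors="pt")
with torch.no_grad():
    loss = model(**enc, labels=enc["input_ids"]).loss  # mean loss in nats per token

bits_per_token = loss.item() / math.log(2)
print(f"~{bits_per_token:.2f} bits per token when compressing this text with the model")
```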

Objectives

Findings

  • Procedural knowledge in documents drives their influence on reasoning traces (a simplified influence sketch follows this list)
  • For factual questions, the answer itself often shows up as highly influential, whereas for reasoning questions it does not
  • LLMs rely on procedural knowledge to learn to produce zero-shot reasoning traces
  • Evidence that code is important for reasoning
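
The findings above come from attributing a model's reasoning traces to individual pretraining documents with influence functions (the study uses EK-FAC influence functions). As a rough, first-order stand-in for that machinery, a gradient dot product between a document's loss and a query's loss (TracIn-style) conveys the idea; the model name below is a placeholder, not the model analyzed in the paper:

```python
# Simplified sketch of document-level influence: score how much a pretraining
# document's gradient aligns with the gradient of the loss on a reasoning query.
# This first-order proxy only illustrates the idea; it is not the EK-FAC
# influence-function estimator used in the study.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder model, not Command R
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def loss_grad(text: str) -> torch.Tensor:
    """Flattened gradient of the LM loss on `text` w.r.t. the model parameters."""
    enc = tok(text, return_tensors="pt")
    out = model(**enc, labels=enc["input_ids"])
    grads = torch.autograd.grad(out.loss, [p for p in model.parameters() if p.requires_grad])
    return torch.cat([g.flatten() for g in grads])

def influence(document: str, query: str) -> float:
    """Approximate influence of a pretraining document on a query completion."""
    return torch.dot(loss_grad(document), loss_grad(query)).item()
```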
AI Reasoning Types
 
 
 

Procedural Knowledge in Pretraining

In Cohere's Command R, procedural knowledge (Procedural memory) showed strong correlations in document influence across similar types of math problems (e.g., gradient calculations). During the completion phase, the influence of individual documents was smaller and more evenly distributed than for fact retrieval, suggesting that the model learns "solution procedures" rather than retrieving specific facts. And unlike question answering tasks, where answer texts frequently appeared among the top documents, answers were rarely found in the top documents for the reasoning set, supporting generalization.
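
The correlation claim is about per-document influence scores: if the model relies on a shared procedure, two queries that use that procedure (e.g., two gradient-calculation problems) should rank pretraining documents similarly. A toy illustration with hypothetical influence scores (not the paper's data):

```python
# Illustrative only: simulate per-document influence scores for two queries
# that share an underlying procedure, then measure their rank correlation.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_docs = 10_000

shared_procedure_signal = rng.normal(size=n_docs)  # documents teaching the shared procedure
influence_query_a = shared_procedure_signal + 0.5 * rng.normal(size=n_docs)
influence_query_b = shared_procedure_signal + 0.5 * rng.normal(size=n_docs)

rho, _ = spearmanr(influence_query_a, influence_query_b)
print(f"Spearman correlation of document influence across the two queries: {rho:.2f}")
```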
In particular, math and code examples contributed significantly to reasoning in the pretraining data, with code documents identified as a major source for propagating procedural solutions. StackExchange as a source appears more than ten times more often among the most and least influential documents than would be expected if influential data were sampled at random from the pretraining distribution, and other code sources as well as ArXiv & Markdown are at least twice as influential as expected under random sampling, as sketched below.
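
The "ten times more than expected" statistic is an over-representation ratio: the share of a source among the top-ranked (or bottom-ranked) influential documents divided by its share of the overall pretraining distribution. A small sketch with hypothetical counts and shares, not the paper's numbers:

```python
# Sketch of the over-representation statistic: how much more often a source
# appears among the top-k influential documents than its pretraining share
# would predict. All counts and shares below are hypothetical placeholders.
from collections import Counter

def over_representation(top_k_sources: list[str], pretraining_share: dict[str, float]) -> dict[str, float]:
    counts = Counter(top_k_sources)
    total = len(top_k_sources)
    return {src: (counts[src] / total) / pretraining_share[src] for src in pretraining_share}

pretraining_share = {"stackexchange": 0.01, "code": 0.10, "arxiv": 0.05, "web": 0.84}
top_k_sources = ["stackexchange"] * 120 + ["code"] * 250 + ["arxiv"] * 110 + ["web"] * 520

print(over_representation(top_k_sources, pretraining_share))
# A ratio well above 1.0 means the source is over-represented among influential documents.
```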
 
 

Recommendations