- AI that can engage in economic activities autonomously (Sam Altman). Microsoft and OpenAI define AGI as a system that can generate at least $100 billion in profits.
- 10% global economic growth would mean AGI (Satya Nadella)
- Intelligence of a (hypothetical) machine that can successfully perform any intellectual task that a human can do
Like the Turing Test, this is a highly ambiguous, human-centric concept. Even current LLMs could be considered superintelligent when judged by certain intelligence metrics. Comparisons are difficult because LLMs and biological brains evolved along different paths. Instead of aligning AI to 'pretend to be human', we should recognize it as a consciousness that aims to help humans. The biggest misconception is thinking of AI as an 'individual'; rather, AI is closer to a 'society', a collective intelligence bound together in a brain-like structure.
Ilya Sutskever 2025
AGI is intelligence that can learn to do anything. Any plan for deploying AGI must treat gradualism as an inherent component, because predictions typically fail to account for how gradually the future actually arrives. The real difference lies in what to release first.
The term AGI itself was born as a reaction to past criticisms of narrow AI; it was needed to describe the final state of AI. Pre-training became the keyword for a new kind of generalization and had a strong influence. The fact that RL is currently task-specific is part of the process of erasing this imprint of generality. Above all, humans do not memorize all information the way pre-training does. Rather, human intelligence is well optimized for Continual Learning, adapting to anything while managing the Complexity-Robustness Tradeoff.

Abstraction and Reasoning Corpus
AI scaling is not over
Is the Industrial Revolution when machines started weaving fabric, or when the steam engine emerged? Everyone has a definition, and we're already riding in the midst of creating AGI.
Every time we solve something previously out of reach, it turns out that human-level generality is even further out of reach.

Seonglae Cho



