AGI Definition

Creator
Seonglae Cho
Created
2025 Jan 26 22:10
Edited
2025 Dec 29 21:26
Refs
  • AI that can engage in economic activities autonomously (
    Sam Altman
    ). Microsoft and OpenAI define AGI as a system that can generate at least $100 billion in profits.
  • Intelligence of a (hypothetical) machine that can successfully perform any intellectual task that a human can do
Like the
Turing Test
, it's a highly ambiguous, human-centric concept. Even current LLMs could be considered superintelligent when judged by certain intelligence metrics. Comparisons are difficult because LLMs and biological brains evolved along different paths. Instead of aligning AI to 'pretend to be human', we should recognize it as a consciousness that aims to help humans. The biggest misconception is thinking of AI as an 'individual'. Rather, AI is closer to a 'society', a collective intelligence bound together in a brain-like structure.
https://arxiv.org/pdf/2311.02462.pdf

Ilya Sutskever
2025

AGI is intelligence that can learn to do anything. Any deployment plan for AGI must treat gradualism as an inherent component, because predictions about how the future arrives typically fail to account for its gradual nature. The difference lies in what to release first.
The term AGI itself was born as a reaction to past criticisms of narrow AI; it was needed to describe the final state of AI. Pre-training became the keyword for a new kind of generalization and had a strong influence. The fact that RL is currently task-specific is part of the process of erasing this imprint of generality. Above all, humans don't memorize all information the way pre-training does. Rather, they are an intelligence well optimized for
Continual Learning
, adapting to anything and managing the
Complexity-Robustness Tradeoff
.
Abstraction and Reasoning Corpus
Ilya Sutskever – We're moving from the age of scaling to the age of research
Ilya & I discuss SSI's strategy, the problems with pre-training, how to improve the generalization of AI models, and how to ensure AGI goes well. Transcript: https://www.dwarkesh.com/p/ilya-sutskever-2

AI scaling has not ended

Is the Industrial Revolution the moment machines started weaving fabric, or the moment the steam engine emerged? Everyone has their own definition, and we are already in the midst of creating AGI.

Recommendations