- AI that can engage in economic activities autonomously (Sam Altman). Microsoft and OpenAI define AGI as a system that can generate at least $100 billion in profits.
- 10% global economic growth means AGI (Satya Nadella)
- Intelligence of a (hypothetical) machine that can successfully perform any intellectual task that a human can do
Like the Turing Test, it's a highly ambiguous human-centric concept. Even current LLMs could be considered superintelligent when judged by certain intelligence metrics. It's difficult to make comparisons because LLMs and biological brains have evolved through different paths. Instead of aligning AI to 'pretend to be human', we should recognize it as a consciousness that aims to help humans. The biggest misconception is thinking of AI as an 'individual'. Rather, AI is closer to a 'society' or collective intelligence bound together in a brain-like structure.
Ilya Sutskever 2025
AGI is intelligence that can learn to do anything. Any plan for deploying AGI has gradualism as an inherent component, because predictions about the future typically fail to account for how gradually it actually arrives. The difference lies in what to release first.
The term AGI itself was born as a reaction to past criticisms of narrow AI; it was needed to describe the final state of AI. Pre-training became the keyword for a new kind of generalization and had a strong influence. The fact that RL is currently task-specific is part of the process of erasing this imprint of generality. Above all, humans don't memorize all information the way pre-training does. Rather, human intelligence is well optimized for continual learning, adapting to anything while managing the complexity-robustness tradeoff.
Abstraction and Reasoning Corpus
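The pre-training-versus-continual-learning contrast above can be sketched with a toy example (a hypothetical illustration, not from the source): an online learner that keeps adapting when the data distribution drifts, instead of fitting one fixed dataset once.

```python
import random

def continual_learner(stream, lr=0.1):
    """Online SGD on a 1-D linear target: the weight keeps adapting
    to each new example, a toy contrast to one-shot pre-training."""
    w = 0.0
    errors = []
    for x, y in stream:
        err = w * x - y
        w -= lr * err * x          # gradient step on squared error
        errors.append(abs(err))
    return w, errors

random.seed(0)
# The target slope drifts from 2.0 to 3.0 halfway through the stream.
xs = [random.uniform(-1, 1) for _ in range(200)]
stream = [(x, (2.0 if i < 100 else 3.0) * x) for i, x in enumerate(xs)]

w, errors = continual_learner(stream)
print(round(w, 2))  # re-converges near the new slope 3.0
```

A batch model frozen after the first 100 examples would keep predicting with slope ≈ 2.0 after the drift; the online learner closes most of that gap over the second half of the stream.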
Ilya Sutskever – We're moving from the age of scaling to the age of research
Ilya & I discuss SSI's strategy, the problems with pre-training, how to improve the generalization of AI models, and how to ensure AGI goes well.
EPISODE LINKS
* Transcript: https://www.dwarkesh.com/p/ilya-sutskever-2
* Apple Podcasts: https://podcasts.apple.com/us/podcast/dwarkesh-podcast/id1516093381?i=1000738363711
* Spotify: https://open.spotify.com/episode/7naOOba8SwiUNobGz8mQEL?si=39dd68f346ea4d49
SPONSORS
- Gemini 3 is the first model I've used that can find connections I haven't anticipated. I recently wrote a blog post on RL's information efficiency, and Gemini 3 helped me think it all through. It also generated the relevant charts and ran toy ML experiments for me with zero bugs. Try Gemini 3 today at https://gemini.google
- Labelbox helped me create a tool to transcribe our episodes! I've struggled with transcription in the past because I don't just want verbatim transcripts, I want transcripts reworded to read like essays. Labelbox helped me generate the *exact* data I needed for this. If you want to learn how Labelbox can help you (or if you want to try out the transcriber tool yourself), go to https://labelbox.com/dwarkesh
- Sardine is an AI risk management platform that brings together thousands of device, behavior, and identity signals to help you assess a user's risk of fraud & abuse. Sardine also offers a suite of agents to automate investigations so that as fraudsters use AI to scale their attacks, you can use AI to scale your defenses. Learn more at https://sardine.ai/dwarkesh
To sponsor a future episode, visit https://dwarkesh.com/advertise
TIMESTAMPS
00:00:00 – Explaining model jaggedness
00:09:39 – Emotions and value functions
00:18:49 – What are we scaling?
00:25:13 – Why humans generalize better than models
00:35:45 – Straight-shotting superintelligence
00:46:47 – SSI's model will learn from deployment
00:55:07 – Alignment
01:18:13 – "We are squarely an age of research company"
01:29:23 – Self-play and multi-agent
01:32:42 – Research taste
https://www.youtube.com/watch?v=aR20FWCCjAs

AI scaling has not ended
Is the Industrial Revolution the moment machines started weaving fabric, or the moment the steam engine emerged? Everyone has their own definition, and we are already in the midst of creating AGI.
OpenAI confronting the skepticism head-on
Jakub Pachocki, OpenAI's Chief Scientist, and Mark Chen, OpenAI's Chief Research Officer.
Sam Altman reportedly talks with them almost every day, refining the direction ahead together.
What do they think of the AI skepticism that Andrej Karpathy and Ilya Sutskever have stoked in quick succession?
Amid this flood of information, if we can help you invest an appropriate amount of 'time' and a reasonable amount of 'compute' to understand where things are heading, that will be this channel's joy.
Sincere thanks to each and every one of you who always watches!
https://youtu.be/yvFj2YuW3ak?si=Q74qBFBSLalcoXOH
https://youtu.be/ZeyHBM2Y5_4?si=895y_EQvVS6LEBn0
https://www.youtube.com/watch?v=8Fesyx3oMxM

Every time we solve something previously out of reach, it turns out that human-level generality is even further out of reach.

Seonglae Cho

