The scalability of methods is crucial in evaluating AI research.
- Data generalization is essential: methods that only work in specific environments are useless.
- Cost-benefit analysis is important: good performance alone doesn't guarantee practicality.
- Error analysis is mandatory: without understanding why a method fails, improvement is impossible.
The more inspiration you draw from nature, the more confident the top-down belief that sustains you when experiments contradict you; beauty is multifaceted (Ilya Sutskever).
Top labs

Ilya Sutskever 2025
AGI is intelligence that can learn to do anything. Any plan for deploying AGI has gradualism as an inherent component, because predictions typically fail to account for how the future actually arrives; they don't consider gradualism. The real difference between plans lies in what to release first.
The term AGI itself was born as a reaction to past criticisms of narrow AI; it was needed to describe the end state of AI. Pre-training became the keyword for a new kind of generalization and had a strong influence. The fact that RL is currently task-specific is part of the process of erasing this imprint of generality. Above all, humans don't memorize all information the way pre-training does. Rather, they are intelligences well optimized for continual learning, adapting to anything while managing the complexity-robustness tradeoff. (See also: Abstraction and Reasoning Corpus.)
Ilya Sutskever – We're moving from the age of scaling to the age of research
Ilya & I discuss SSI’s strategy, the problems with pre-training, how to improve the generalization of AI models, and how to ensure AGI goes well.
EPISODE LINKS
* Transcript: https://www.dwarkesh.com/p/ilya-sutskever-2
* Apple Podcasts: https://podcasts.apple.com/us/podcast/dwarkesh-podcast/id1516093381?i=1000738363711
* Spotify: https://open.spotify.com/episode/7naOOba8SwiUNobGz8mQEL?si=39dd68f346ea4d49
TIMESTAMPS
00:00:00 – Explaining model jaggedness
00:09:39 – Emotions and value functions
00:18:49 – What are we scaling?
00:25:13 – Why humans generalize better than models
00:35:45 – Straight-shotting superintelligence
00:46:47 – SSI’s model will learn from deployment
00:55:07 – Alignment
01:18:13 – “We are squarely an age of research company”
01:29:23 – Self-play and multi-agent
01:32:42 – Research taste
https://www.youtube.com/watch?v=aR20FWCCjAs

AI research paper dataset from Arxiv
neuralwork/arxiver · Datasets at Hugging Face
https://huggingface.co/datasets/neuralwork/arxiver
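A minimal sketch of working with this dataset. The real load goes through the Hugging Face `datasets` library (commented out, since it needs network access); the rest illustrates keyword filtering over paper records. The field names `title` and `abstract` are assumptions for illustration, not a verified schema of `neuralwork/arxiver`.

```python
# Real usage (requires the `datasets` package and network access):
# from datasets import load_dataset
# ds = load_dataset("neuralwork/arxiver", split="train")

# Offline illustration with stand-in records shaped like paper metadata.
sample = [
    {"title": "Scaling Laws for Neural Language Models",
     "abstract": "We study empirical scaling laws."},
    {"title": "A Survey of Graph Neural Networks",
     "abstract": "Graphs model relational data."},
]

def filter_by_keyword(records, keyword):
    """Keep records whose title or abstract mentions the keyword, case-insensitively."""
    kw = keyword.lower()
    return [
        r for r in records
        if kw in r["title"].lower() or kw in r["abstract"].lower()
    ]

print([r["title"] for r in filter_by_keyword(sample, "scaling")])
# → ['Scaling Laws for Neural Language Models']
```

The same filter works unchanged on a loaded `datasets` split after converting rows to dicts, which is useful for carving out topic-specific subsets of the corpus.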
Don't pivot into AI research
Many computer science students and new grads aspire to move into machine learning.
It seems exciting and sexy.
You can play a role in ushering in the coming AGI utopia.
Many strive to work in “machine learning or AI research” - a vaguely defined field that includes everything from data engineering and infrastructure to model architecture.
https://maged.com/pivot-to-ai
Who is leading in AI? An analysis of industry AI research
Artificial Intelligence (AI) research is increasingly industry-driven, making it crucial to understand company contributions to this field. We compare leading AI companies by research publications, citations, size of training runs, and contributions to algorithmic innovations. Our analysis reveals the substantial role played by Google, OpenAI and Meta. We find that these three companies have been responsible for some of the largest training runs, developed a large fraction of the algorithmic innovations that underpin large language models, and led in various metrics of citation impact. In contrast, leading Chinese companies such as Tencent and Baidu had a lower impact on many of these metrics compared to US counterparts. We observe many industry labs are pursuing large training runs, and that training runs from relative newcomers—such as OpenAI and Anthropic—have matched or surpassed those of long-standing incumbents such as Google. The data reveals a diverse ecosystem of companies steering AI progress, though US labs such as Google, OpenAI and Meta lead across critical metrics.
https://arxiv.org/html/2312.00043

Seonglae Cho