OpenAI Research Director, Chief Scientist
Ilya Sutskever Did
On a Sunday, I was programming and there was a knock on the door. Not just any knock; it was sort of an urgent knock. So I went and answered the door, and there was this young student there. He said he'd been cooking fries over the summer, but he'd rather be working in my lab. And so I said, "Well, why don't you make an appointment and we'll talk?" And so Ilya said, "How about now?" And that is sort of Ilya's character. So we talked for a bit, and I gave him a paper to read, which was the Nature paper on backpropagation.
A week later he came back and said, "I didn't understand it." I was very disappointed, since I thought he seemed like a bright guy, but it's only the chain rule. He said, "Oh no, no, I understood that. I just don't understand why you don't give the gradient to a sensible function optimizer," which took us quite a few years to think about. It kept on like that: he had very good raw intuitions about things. - Geoffrey Hinton
Ilya Sutskever 2025
AGI is intelligence that can learn to do anything. Any plan for deploying AGI has gradualism as an inherent component, because predictions typically fail to account for how gradually the future actually arrives. The real difference lies in what to release first.
The term AGI itself was born as a reaction to past criticism of narrow AI; it was needed to describe the final state of AI. Pre-training became the keyword for a new kind of generalization and had a strong influence. The fact that RL is currently task-specific is part of the process of erasing this imprint of generality. First of all, humans don't memorize all information the way pre-training does. Rather, they are intelligences well optimized for continual learning, adapting to anything and managing the complexity-robustness tradeoff. (Abstraction and Reasoning Corpus)
Ilya Sutskever – We're moving from the age of scaling to the age of research
Ilya & I discuss SSI’s strategy, the problems with pre-training, how to improve the generalization of AI models, and how to ensure AGI goes well.
𝐄𝐏𝐈𝐒𝐎𝐃𝐄 𝐋𝐈𝐍𝐊𝐒
* Transcript: https://www.dwarkesh.com/p/ilya-sutskever-2
* Apple Podcasts: https://podcasts.apple.com/us/podcast/dwarkesh-podcast/id1516093381?i=1000738363711
* Spotify: https://open.spotify.com/episode/7naOOba8SwiUNobGz8mQEL?si=39dd68f346ea4d49
𝐒𝐏𝐎𝐍𝐒𝐎𝐑𝐒
- Gemini 3 is the first model I’ve used that can find connections I haven’t anticipated. I recently wrote a blog post on RL’s information efficiency, and Gemini 3 helped me think it all through. It also generated the relevant charts and ran toy ML experiments for me with zero bugs. Try Gemini 3 today at https://gemini.google
- Labelbox helped me create a tool to transcribe our episodes! I’ve struggled with transcription in the past because I don’t just want verbatim transcripts, I want transcripts reworded to read like essays. Labelbox helped me generate the *exact* data I needed for this. If you want to learn how Labelbox can help you (or if you want to try out the transcriber tool yourself), go to https://labelbox.com/dwarkesh
- Sardine is an AI risk management platform that brings together thousands of device, behavior, and identity signals to help you assess a user’s risk of fraud & abuse. Sardine also offers a suite of agents to automate investigations so that as fraudsters use AI to scale their attacks, you can use AI to scale your defenses. Learn more at https://sardine.ai/dwarkesh
To sponsor a future episode, visit https://dwarkesh.com/advertise
𝐓𝐈𝐌𝐄𝐒𝐓𝐀𝐌𝐏𝐒
00:00:00 – Explaining model jaggedness
00:09:39 – Emotions and value functions
00:18:49 – What are we scaling?
00:25:13 – Why humans generalize better than models
00:35:45 – Straight-shotting superintelligence
00:46:47 – SSI’s model will learn from deployment
00:55:07 – Alignment
01:18:13 – “We are squarely an age of research company”
01:29:23 – Self-play and multi-agent
01:32:42 – Research taste
https://www.youtube.com/watch?v=aR20FWCCjAs

2011: character-level RNN for next-character prediction, trained on Wikipedia HTML
OpenAI co-founder Ilya Sutskever departs ChatGPT maker
OpenAI co-founder and chief scientist Ilya Sutskever is leaving the startup at the center of today's artificial intelligence boom.
https://www.reuters.com/technology/openai-co-founder-ilya-sutskever-departs-2024-05-14/

Interviews
Exclusive: Ilya Sutskever, OpenAI’s chief scientist, on his hopes and fears for the future of AI
“It’s going to be monumental, earth-shattering. There will be a before and an after.”
https://www.technologyreview.com/2023/10/26/1082398/exclusive-ilya-sutskever-openais-chief-scientist-on-his-hopes-and-fears-for-the-future-of-ai

Ilya Sutskever and the Missing Link to AGI
At GTC Digital Spring, there was an interview in which NVIDIA's Jensen Huang and OpenAI's Ilya Sutskever discussed the past, present, and future of AI.
From AlexNet to GPT, Ilya is the young scientist who has led deep learning from its very beginning, right at the forefront. What has he felt along the way, and what is he thinking about going forward?
Let's use this interview to think about what the missing link on the path to AGI really is.
Sincere thanks to each and every viewer.
https://www.youtube.com/watch?v=LQviQS24uQY&t=840

Interview with Ilya Sutskever, the Core of OpenAI
Ilya Sutskever is a core member of today's OpenAI. Through this leader of the deep learning revolution, let's reconsider where we are now and what the future holds.
Looking at OpenAI, with its transformer-based models, its RLHF approach driven by human feedback, and its habit of releasing things into society first and then "completing" them with the resulting feedback and data, it closely resembles a certain kind of company.
ChatGPT is extending its prompt chains even at this very moment.
At a moment when change keeps accelerating, how should we prepare for the future?
I hope this interview gives us a chance to examine ourselves once more.
The interviewer is Dwarkesh Patel, host of The Lunar Society podcast and a UT Austin CS graduate.
Shall we listen to this interesting conversation together?
Sincere thanks to each and every viewer!
- Links to The Lunar Society podcast -
Apple Podcasts: https://apple.co/42H6c4D
Spotify: https://spoti.fi/3LRqOBd
https://www.youtube.com/watch?v=Yf1o0TQzry8&t=1802s
https://www.youtube.com/watch?v=SGCFeIbpGlU


Seonglae Cho