OpenAI research director and chief scientist
Ilya Sutskever Did
On a Sunday, I was programming and there was a knock on the door. Not just any knock: it was sort of an urgent knock. So I went and answered the door, and there was this young student there. He said he was cooking fries over the summer, but he'd rather be working in my lab. So I said, 'Well, why don't you make an appointment and we'll talk?' And Ilya said, 'How about now?' And that is sort of Ilya's character. So we talked for a bit, and I gave him a paper to read, which was the Nature paper on backpropagation. - Geoffrey Hinton
A week later he came back and said, 'I didn't understand it.' I was very disappointed, since I thought he seemed like a bright guy and it's only the chain rule. He said, 'Oh no no, I understood that. I just don't understand why you don't give the gradient to a sensible function optimizer,' which took us quite a few years to think about. It kept on like that with him: he had very good raw intuitions about things.
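Ilya's question can be made concrete: once backpropagation yields a gradient, nothing stops you from handing it to an off-the-shelf optimizer instead of hand-rolled gradient descent. A minimal sketch, using a toy quadratic loss (the target vector and loss are illustrative assumptions, not anything from the anecdote) and SciPy's L-BFGS-B as the "sensible function optimizer":

```python
import numpy as np
from scipy.optimize import minimize

TARGET = np.array([1.0, -2.0, 3.0])  # hypothetical optimum for illustration

def loss(w):
    # Toy stand-in for a network's loss: squared distance to TARGET.
    return float(np.sum((w - TARGET) ** 2))

def grad(w):
    # Analytic gradient of the loss: the role backpropagation plays.
    return 2.0 * (w - TARGET)

# Hand the gradient (jac=) to a general-purpose optimizer.
res = minimize(loss, x0=np.zeros(3), jac=grad, method="L-BFGS-B")
print(res.x)  # converges to TARGET
```

The pattern is the same regardless of scale: the differentiation machinery supplies `jac`, and the optimizer decides how to use it.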
Ilya Sutskever 2025
AGI is intelligence that can learn to do anything. Any plan for deploying AGI has gradualism as an inherent component, because predictions about the future typically fail to account for how gradually it arrives. The real difference between plans lies in what to release first.
The term AGI itself was born as a reaction to past criticisms of narrow AI: it was needed to describe the final state of AI. Pre-training became the keyword for a new kind of generalization and had a strong influence. The fact that RL is currently task-specific is part of the process of erasing this imprint of generality. Above all, humans do not memorize all information the way pre-training does. Rather, they are intelligences well optimized for Continual Learning, adapting to anything while managing the Complexity-Robustness Tradeoff.

Seonglae Cho




