The scalability of methods is crucial when evaluating AI research
- Data generalization is essential; methods that only work in specific environments are of little use.
- Cost-benefit analysis is important; good performance alone doesn't guarantee practicality.
- Error analysis is mandatory; without understanding why a method fails, improvement is impossible.
The more inspiration from nature is present, the more confident the top-down belief that sustains you when experiments contradict you: a belief in multifaceted beauty (Ilya Sutskever)

Ilya Sutskever 2025
AGI is intelligence that can learn to do anything. Any plan for deploying AGI has gradualism as an inherent component, because predictions about the future typically fail to account for the gradual way it actually arrives. The real difference between plans lies in what to release first.
The term AGI itself was born as a reaction to past criticisms of narrow AI; it was needed to describe the end state of AI. Pre-training became the keyword for a new kind of generalization and had a strong influence. The fact that RL is currently task-specific is part of the process of erasing this imprint of generality. Above all, humans don't memorize all information the way pre-training does. Rather, human intelligence is well optimized for continual learning, adapting to anything and managing the complexity-robustness tradeoff.

Seonglae Cho