Self Play

Creator
Seonglae Cho
Created
2025 Mar 17 1:05
Edited
2025 Dec 6 23:29
Refs
Self-play has worked well in RL, but so far it has not worked as well for LLMs, largely because the self-play loop does not involve reality. The core of self-play is that agents improve by exploiting each other's choices through direct interaction with an environment, so the decisive question is whether the environment is actually in the loop. In Go, the fixed rules fully specify the environment, so self-play can close the loop; for LLMs, only the narrow slice of reality reachable through the interface of natural language is involved.
Beyond this, the models themselves lack diversity. Most modern LLMs are pretrained on nearly identical web datasets and use that data in much the same way. RL then adds preferences on top, but it mostly reshuffles preferences within the same structure rather than expanding the set of realities the model interacts with. So unlike Go, where the entire environment participates, we are only involving the parts of the world that touch natural language; to make self-play work for LLMs, the parts of reality not accessed through language or vision would need to be brought into the loop.
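To make the contrast concrete, here is a toy sketch (a hypothetical illustration, not from this note) of why self-play works when the environment's fixed rules are in the loop: two copies of the same tabular Q-learning policy play the game of Nim against each other, and the only training signal is the game's own win/loss rule. Every name here (`train`, `best_move`, the hyperparameters) is invented for the example.

```python
import random

# Nim: a pile of stones; each turn a player removes 1-3 stones.
# Fixed environment rule: whoever takes the last stone wins.

def legal_moves(pile):
    return [m for m in (1, 2, 3) if m <= pile]

def train(episodes=5000, pile_size=10, eps=0.1, alpha=0.5, seed=0):
    """Self-play: both players share and update one Q-table.
    The reward comes only from the game's fixed rules."""
    rng = random.Random(seed)
    Q = {}  # (pile, move) -> estimated value for the player to move
    for _ in range(episodes):
        pile = pile_size
        history = []  # (pile, move) for each ply, both players
        while pile > 0:
            moves = legal_moves(pile)
            if rng.random() < eps:  # exploration
                move = rng.choice(moves)
            else:  # greedy self-play move
                move = max(moves, key=lambda m: Q.get((pile, m), 0.0))
            history.append((pile, move))
            pile -= move
        # Environment adjudicates: the last mover won (+1), and the
        # outcome alternates sign going backwards through the plies.
        reward = 1.0
        for pile_state, move in reversed(history):
            key = (pile_state, move)
            Q[key] = Q.get(key, 0.0) + alpha * (reward - Q.get(key, 0.0))
            reward = -reward
    return Q

def best_move(Q, pile):
    return max(legal_moves(pile), key=lambda m: Q.get((pile, m), 0.0))
```

The point of the sketch is that nothing outside the Q-table and the rules is needed: the environment grades every game, so self-play generates its own supervision. An LLM arguing with a copy of itself has no analogous external adjudicator, which is the gap the note describes.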
Self Play Methods
Ilya Sutskever – We're moving from the age of scaling to the age of research
Dwarkesh Podcast episode with Ilya Sutskever on SSI's strategy, the problems with pre-training, generalization, and alignment; self-play and multi-agent is discussed at 01:29:23. Transcript: https://www.dwarkesh.com/p/ilya-sutskever-2

Recommendations