Prime Intellect

Creator
Seonglae Cho
Created
2024 Dec 2 22:20
Edited
2025 May 20 16:07

INTELLECT-1

INTELLECT-1 Release: The First Globally Trained 10B Parameter Model
We're excited to release INTELLECT-1, the first 10B parameter language model collaboratively trained across the globe. This represents a 10× scale-up from our previous research and demonstrates that large-scale model training is no longer confined to large corporations, but can be achieved through distributed, community-driven approaches. The next step is to scale this even further, to frontier model sizes and ultimately open-source AGI.
Prime Intellect (@PrimeIntellect) on X, October 11, 2024: "Announcing INTELLECT-1: the first-ever decentralized training of a 10B model. Scaling decentralized training 10x beyond prior efforts. Anyone can join us to build open-source AGI 🦋"

INTELLECT-2 RL 32B

INTELLECT-2: The First Globally Distributed Reinforcement Learning Training of a 32B Parameter Model
Today we are launching INTELLECT-2: the first globally decentralized reinforcement learning training run of a 32B parameter model, where anyone can permissionlessly contribute heterogeneous compute resources.
INTELLECT-2 Release: The First 32B Parameter Model Trained Through Globally Distributed Reinforcement Learning
We're excited to release INTELLECT-2, the first 32B parameter model trained via globally distributed reinforcement learning. Unlike traditional centralized training efforts, INTELLECT-2 trains a reasoning language model using fully asynchronous RL across a dynamic, heterogeneous swarm of permissionless compute contributors.
