INTELLECT-1
INTELLECT-1 Release: The First Globally Trained 10B Parameter Model
We're excited to release INTELLECT-1, the first 10B parameter language model collaboratively trained across the globe. This represents a 10× scale-up from our previous research and demonstrates that large-scale model training is no longer confined to large corporations but can be achieved through distributed, community-driven approaches. The next step is scaling this even further to frontier model sizes, and ultimately to open-source AGI.
https://www.primeintellect.ai/blog/intellect-1-release

Prime Intellect on Twitter / X
Announcing INTELLECT-1: the first-ever decentralized training of a 10B model. Scaling decentralized training 10× beyond prior efforts. Anyone can join us to build open-source AGI 🦋 — Prime Intellect (@PrimeIntellect) October 11, 2024
https://x.com/PrimeIntellect/status/1844814829154169038
INTELLECT-2 RL 32B
INTELLECT-2: The First Globally Distributed Reinforcement Learning Training of a 32B Parameter Model
Today we are launching INTELLECT-2: the first 32B parameter globally decentralized Reinforcement Learning training run where anyone can permissionlessly contribute their heterogeneous compute resources.
https://www.primeintellect.ai/blog/intellect-2
INTELLECT-2 Release: The First 32B Parameter Model Trained Through Globally Distributed Reinforcement Learning
We're excited to release INTELLECT-2, the first 32B parameter model trained via globally distributed reinforcement learning. Unlike traditional centralized training efforts, INTELLECT-2 trains a reasoning language model using fully asynchronous RL across a dynamic, heterogeneous swarm of permissionless compute contributors.
https://www.primeintellect.ai/blog/intellect-2-release
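The "fully asynchronous RL" idea above — rollout generation decoupled from policy updates, so slow or stale contributors never block training — can be sketched in miniature. This is an illustrative toy, not INTELLECT-2's actual protocol: the worker count, batch size, and reward logic are all hypothetical, and real rollouts come from language-model inference rather than random numbers.

```python
import queue
import threading
import random

# Toy sketch of asynchronous RL: workers produce rollouts against a
# possibly stale policy version while a single trainer consumes them.
# All names and numbers here are illustrative assumptions.
rollouts = queue.Queue()
policy_version = 0
lock = threading.Lock()

def worker(worker_id, n_rollouts):
    """Simulate one heterogeneous compute contributor producing rollouts."""
    for _ in range(n_rollouts):
        with lock:
            version = policy_version  # snapshot; may be stale when consumed
        reward = random.random()      # stand-in for an actual episode reward
        rollouts.put({"worker": worker_id, "policy_version": version, "reward": reward})

def trainer(total_rollouts, batch_size=4):
    """Consume rollouts asynchronously; each full batch triggers a policy update."""
    global policy_version
    consumed, batch = 0, []
    while consumed < total_rollouts:
        batch.append(rollouts.get())  # blocks until a rollout arrives
        consumed += 1
        if len(batch) == batch_size:
            with lock:
                policy_version += 1   # "update" the policy; workers pick it up lazily
            batch.clear()

workers = [threading.Thread(target=worker, args=(i, 8)) for i in range(4)]
trainer_thread = threading.Thread(target=trainer, args=(32,))
for t in workers:
    t.start()
trainer_thread.start()
for t in workers:
    t.join()
trainer_thread.join()

print(policy_version)  # 32 rollouts in batches of 4 → 8 policy updates
```

The key property the sketch shows is that workers never wait on the trainer: they tag each rollout with whatever policy version they last observed, which is exactly the staleness a fully asynchronous scheme must tolerate.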


Seonglae Cho

