QwenCoder

Creator
Seonglae Cho
Created
2024 Nov 22 19:15
Edited
2025 Aug 4 21:15
Refs
Qwen/Qwen3-Coder-30B-A3B-Instruct · Hugging Face
Qwen3-Coder: Agentic Coding in the World
Today, we’re announcing Qwen3-Coder, our most agentic code model to date. Qwen3-Coder is available in multiple sizes, but we’re excited to introduce its most powerful variant first: Qwen3-Coder-480B-A35B-Instruct, a 480B-parameter Mixture-of-Experts model with 35B active parameters that natively supports a 256K-token context length (extendable to 1M tokens with extrapolation methods), offering exceptional performance in both coding and agentic tasks. Qwen3-Coder-480B-A35B-Instruct sets new state-of-the-art results among open models on Agentic Coding, Agentic Browser-Use, and Agentic Tool-Use, comparable to Claude Sonnet 4.
Qwen2.5-Coder Series: Powerful, Diverse, Practical.
Today, we are excited to open source the “Powerful”, “Diverse”, and “Practical” Qwen2.5-Coder series, dedicated to continuously promoting the development of Open CodeLLMs. Powerful: Qwen2.5-Coder-32B-Instruct has become the current SOTA open-source code model, matching the coding capabilities of GPT-4o. While demonstrating strong and comprehensive coding abilities, it also possesses good general and mathematical skills. Diverse: building on the previously open-sourced two sizes of 1…
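
A minimal sketch of loading the Qwen/Qwen3-Coder-30B-A3B-Instruct checkpoint referenced above with Hugging Face transformers. The dtype/device settings, prompt, and generation length are illustrative assumptions; check the model card for the recommended configuration.

# Minimal sketch: load a Qwen Coder instruct checkpoint and run one chat turn.
# Assumes transformers + accelerate are installed; settings below are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3-Coder-30B-A3B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the dtype stored in the checkpoint
    device_map="auto",    # place/shard weights across available devices
)

messages = [{"role": "user", "content": "Write a quicksort function in Python."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))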