Qwen

Qwen: Qwen3 Coder 480B A35B

Qwen3-Coder-480B-A35B-Instruct is a Mixture-of-Experts (MoE) code generation model developed by the Qwen team. It is optimized for agentic coding tasks such as function calling, tool use, and long-context reasoning over repositories. The model has 480 billion total parameters, of which 35 billion are active per forward pass (8 of 160 experts). Pricing on the Alibaba endpoints is tiered by context length: requests with more than 128K input tokens are billed at the higher rate.
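The tiered billing above can be sketched as a small cost calculator. The standard rates ($0.22/1M input, $1.00/1M output) come from this listing; the rates for the higher (>128K input) tier are hypothetical placeholders, since this page does not state them.

```python
# Sketch of tiered per-request billing for Qwen3-Coder-480B-A35B-Instruct.
# Standard-tier rates come from the listing; the long-context rates below are
# HYPOTHETICAL placeholders -- the actual higher-tier prices are not shown here.

TIER_THRESHOLD = 128_000  # input tokens; above this, the higher tier applies

RATES = {
    # tier name: (input $/1M tokens, output $/1M tokens)
    "standard": (0.22, 1.00),      # from the listing
    "long_context": (0.44, 2.00),  # hypothetical higher-tier rates
}

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the dollar cost of one request under the tiered scheme."""
    tier = "long_context" if input_tokens > TIER_THRESHOLD else "standard"
    in_rate, out_rate = RATES[tier]
    return input_tokens / 1e6 * in_rate + output_tokens / 1e6 * out_rate

# A 100K-token prompt with a 5K-token completion stays in the standard tier:
print(round(request_cost(100_000, 5_000), 4))  # 0.027
```

Note that the tier is selected from input tokens only, matching the listing's "greater than 128K input tokens" rule; whether output tokens are also repriced in the higher tier is provider-specific.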

Context Window: 262K tokens
Parameters: 480B
Input Price: $0.22/1M
Output Price: $1.00/1M
Price Tier: standard
Provider: Qwen

How to Use Qwen: Qwen3 Coder 480B A35B

With CoreAI, you can start chatting with Qwen: Qwen3 Coder 480B A35B instantly, with no separate subscription needed. CoreAI bundles access to Qwen: Qwen3 Coder 480B A35B along with 300+ other AI models from Qwen and providers such as OpenAI, Anthropic, Google, and Meta.

  1. Download the CoreAI app for iOS, Android, or use the Web App
  2. Select Qwen: Qwen3 Coder 480B A35B from the model selector
  3. Start chatting, comparing, or creating with AI

More Qwen Models

Qwen: Qwen3.6 Plus Preview (free)

Qwen 3.6 Plus Preview is the next-generation evolution of the Qwen Plus series, featuring an advanced hybrid architecture that improves efficiency and…
1000K context · budget tier

Qwen: Qwen3.5-9B

Qwen3.5-9B is a multimodal foundation model from the Qwen3.5 family, designed to deliver strong reasoning, coding, and visual understanding in an effi…
256K context · budget tier

Qwen: Qwen3.5-35B-A3B

The Qwen3.5 Series 35B-A3B is a native vision-language model designed with a hybrid architecture that integrates linear attention mechanisms and a spa…
262K context · standard tier

Qwen: Qwen3.5-27B

The Qwen3.5 27B native vision-language Dense model incorporates a linear attention mechanism, delivering fast response times while balancing inference…
262K context · standard tier

Qwen: Qwen3.5-122B-A10B

The Qwen3.5 122B-A10B native vision-language model is built on a hybrid architecture that integrates a linear attention mechanism with a sparse mixtur…
262K context · standard tier

Qwen: Qwen3.5-Flash

The Qwen3.5 native vision-language Flash models are built on a hybrid architecture that integrates a linear attention mechanism with a sparse mixture-…
1000K context · budget tier

Try Qwen: Qwen3 Coder 480B A35B Now

Chat with Qwen: Qwen3 Coder 480B A35B and 300+ other AI models — all in one app.

Download App → Try on Web App