Qwen: Qwen-Max

Qwen-Max, based on Qwen2.5, delivers the strongest inference performance among [Qwen models](/qwen), especially on complex, multi-step tasks. It is a large-scale Mixture-of-Experts (MoE) model pretrained on over 20 trillion tokens and post-trained with curated Supervised Fine-Tuning (SFT) and Reinforcement Learning from Human Feedback (RLHF). Its parameter count has not been disclosed.

  - **Context Window:** 33K tokens
  - **Parameters:** N/A (undisclosed)
  - **Input Price:** $1.04 / 1M tokens
  - **Output Price:** $4.16 / 1M tokens
  - **Price Tier:** standard
  - **Provider:** Qwen
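Per-token pricing makes it easy to estimate what a single request costs: multiply input and output token counts by their respective per-million-token rates. A minimal sketch using the prices listed above (the token counts in the example are illustrative assumptions, not measured values):

```python
# Estimate the USD cost of one Qwen-Max request from the listed
# per-million-token prices. Token counts below are hypothetical.

INPUT_PRICE_PER_M = 1.04   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 4.16  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in USD for a single request."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a 2,000-token prompt with an 800-token reply
cost = request_cost(2_000, 800)
print(f"${cost:.4f}")  # → $0.0054
```

Because output tokens cost four times as much as input tokens here, long generations dominate the bill even when prompts are large.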

How to Use Qwen: Qwen-Max

With CoreAI, you can start chatting with Qwen: Qwen-Max instantly — no separate subscription needed. CoreAI bundles access to Qwen: Qwen-Max along with 300+ other AI models from Qwen and other providers like OpenAI, Anthropic, Google, Meta, and more.

  1. Download the CoreAI app for iOS, Android, or use the Web App
  2. Select Qwen: Qwen-Max from the model selector
  3. Start chatting, comparing, or creating with AI

More Qwen Models

Qwen: Qwen3.6 Plus Preview (free)

Qwen 3.6 Plus Preview is the next-generation evolution of the Qwen Plus series, featuring an advanced hybrid architecture that improves efficiency and …
1000K context · budget tier

Qwen: Qwen3.5-9B

Qwen3.5-9B is a multimodal foundation model from the Qwen3.5 family, designed to deliver strong reasoning, coding, and visual understanding in an effi…
256K context · budget tier

Qwen: Qwen3.5-35B-A3B

The Qwen3.5 Series 35B-A3B is a native vision-language model designed with a hybrid architecture that integrates linear attention mechanisms and a spa…
262K context · standard tier

Qwen: Qwen3.5-27B

The Qwen3.5 27B native vision-language Dense model incorporates a linear attention mechanism, delivering fast response times while balancing inference…
262K context · standard tier

Qwen: Qwen3.5-122B-A10B

The Qwen3.5 122B-A10B native vision-language model is built on a hybrid architecture that integrates a linear attention mechanism with a sparse mixtur…
262K context · standard tier

Qwen: Qwen3.5-Flash

The Qwen3.5 native vision-language Flash models are built on a hybrid architecture that integrates a linear attention mechanism with a sparse mixture-…
1000K context · budget tier

Try Qwen: Qwen-Max Now

Chat with Qwen: Qwen-Max and 300+ other AI models — all in one app.

Download App · Try on Web App