MiniMax AI Models

Explore all 8 MiniMax AI models available on CoreAI. Compare capabilities, pricing, and performance.

Minimax

MiniMax: MiniMax M2.7

MiniMax-M2.7 is a next-generation large language model designed for autonomous, real-world productivity and continuous improvement. Built to actively participate in its own evolution, M2.7 integrates…
205K context · standard
Minimax

MiniMax: MiniMax M2.5 (free)

MiniMax-M2.5 is a SOTA large language model designed for real-world productivity. Trained in a diverse range of complex real-world digital working environments, M2.5 builds upon the coding expertise…
197K context · budget
Minimax

MiniMax: MiniMax M2.5

MiniMax-M2.5 is a SOTA large language model designed for real-world productivity. Trained in a diverse range of complex real-world digital working environments, M2.5 builds upon the coding expertise…
197K context · standard
Minimax

MiniMax: MiniMax M2-her

MiniMax M2-her is a dialogue-first large language model built for immersive roleplay, character-driven chat, and expressive multi-turn conversations. Designed to stay consistent in tone and personality…
66K context · standard
Minimax

MiniMax: MiniMax M2.1

MiniMax-M2.1 is a lightweight, state-of-the-art large language model optimized for coding, agentic workflows, and modern application development. With only 10 billion activated parameters, it delivers…
197K context · budget
Minimax

MiniMax: MiniMax M2

MiniMax-M2 is a compact, high-efficiency large language model optimized for end-to-end coding and agentic workflows. With 10 billion activated parameters (230 billion total), it delivers near-frontier…
197K context · standard
Minimax

MiniMax: MiniMax M1

MiniMax-M1 is a large-scale, open-weight reasoning model designed for extended context and high-efficiency inference. It leverages a hybrid Mixture-of-Experts (MoE) architecture paired with a custom…
1000K context · 46B · standard
Minimax

MiniMax: MiniMax-01

MiniMax-01 combines MiniMax-Text-01 for text generation and MiniMax-VL-01 for image understanding. It has 456 billion parameters, with 45.9 billion parameters activated per inference, and can handle…
1000K context · 456B · standard
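The cards above reduce to two practical numbers per model: context window and pricing tier. As a rough illustration of how you might compare them programmatically, the sketch below picks the smallest-context model that fits a given prompt size, preferring budget-tier options. The model slugs and the table are transcribed from this listing for illustration only; they are not official API identifiers.

```python
# Context windows (thousands of tokens) and pricing tiers as shown in the
# listing above. Slugs are hypothetical, not official API model IDs.
MODELS = {
    "minimax-m2.7":      {"context_k": 205,  "tier": "standard"},
    "minimax-m2.5:free": {"context_k": 197,  "tier": "budget"},
    "minimax-m2.5":      {"context_k": 197,  "tier": "standard"},
    "minimax-m2-her":    {"context_k": 66,   "tier": "standard"},
    "minimax-m2.1":      {"context_k": 197,  "tier": "budget"},
    "minimax-m2":        {"context_k": 197,  "tier": "standard"},
    "minimax-m1":        {"context_k": 1000, "tier": "standard"},
    "minimax-01":        {"context_k": 1000, "tier": "standard"},
}

def pick_model(required_context_k: int, prefer_budget: bool = True):
    """Return the smallest-context model that fits the prompt,
    preferring budget-tier entries when prefer_budget is set.
    Returns None if no listed model has a large enough window."""
    candidates = [
        # Sort key: context size first, then non-budget tier (False sorts
        # before True, so budget models win ties), then name as tiebreaker.
        (info["context_k"],
         info["tier"] != "budget" if prefer_budget else 0,
         name)
        for name, info in MODELS.items()
        if info["context_k"] >= required_context_k
    ]
    if not candidates:
        return None
    return min(candidates)[2]
```

For example, a 150K-token job lands on one of the 197K budget models, a 500K-token job forces one of the 1000K-context models, and anything above 1000K returns None.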

Try Any AI Model Instantly

Chat with GPT-5, Claude, Gemini, and 300+ models — all in one app. Compare responses side-by-side.
