Browse 300+ AI Models

Explore the complete directory of AI models from all major providers. Find the perfect AI for coding, writing, analysis, and more.

Anthropic

Anthropic: Claude Opus 4.7 (Fast)

Fast-mode variant of [Opus 4.7](/anthropic/claude-opus-4.7) - identical capabilities with higher output speed at 6x premium pricing. Learn more in Anthropic's docs: https://platform.claude.com/docs/e
1000K context premium
Perceptron

Perceptron: Perceptron Mk1

Perceptron Mk1 (Mark One) is Perceptron's highest-quality vision-language model for video and embodied reasoning. It accepts image and video inputs paired with natural language queries, and produces
33K context standard
Inclusionai

inclusionAI: Ring-2.6-1T (free)

Ring-2.6-1T is a 1T-parameter-scale thinking model with 63B active parameters, built for real-world agent workflows that require both strong capability and operational efficiency. It is optimized for
262K context 63B budget
Google

Google: Gemini 3.1 Flash Lite

Gemini 3.1 Flash Lite is Google’s GA high-efficiency multimodal model optimized for low-latency, high-volume workloads. It supports text, image, video, audio, and PDF inputs, and is designed for light
1049K context standard
Baidu

Baidu Qianfan: CoBuddy (free)

CoBuddy is a code generation model from Baidu, optimized for coding tasks and AI Agent workflows. It features high inference throughput and low end-to-end latency, with native support for tool...
131K context budget
OpenAI

OpenAI: GPT Chat Latest

GPT Chat Latest points to OpenAI's stable API alias `chat-latest` that always resolves to the latest Instant chat model used in ChatGPT. As OpenAI rolls out new Instant model updates...
400K context premium
xAI

xAI: Grok 4.3

Grok 4.3 is a reasoning model from xAI. It accepts text and image inputs with text output, and is suited for agentic workflows, instruction-following tasks, and applications requiring high factual...
1000K context standard
Ibm-granite

IBM: Granite 4.1 8B

Granite 4.1 8B is a dense, decoder-only 8-billion-parameter language model from IBM, part of the Granite 4.1 family. It supports a 131K-token context window and is designed for enterprise tasks...
131K context 8B budget
Mistral AI

Mistral: Mistral Medium 3.5

Mistral Medium 3.5 is a dense 128B instruction-following model from Mistral AI. It supports text and image inputs with text output, and is designed for agentic workflows, coding, and complex...
262K context 128B standard
NVIDIA

NVIDIA: Nemotron 3 Nano Omni (free)

NVIDIA Nemotron™ 3 Nano Omni is a 30B-A3B open multimodal model designed to function as a perception and context sub-agent in enterprise agent systems. It accepts text, image, video, and...
256K context 30B budget
Poolside

Poolside: Laguna XS.2 (free)

Laguna XS.2 is the second-generation model in the XS size class from [Poolside](https://poolside.ai), their efficient coding agent series. It combines tool calling and reasoning capabilities with a co
131K context budget
Poolside

Poolside: Laguna M.1 (free)

Laguna M.1 is the flagship coding agent model from [Poolside](https://poolside.ai), optimized for complex software engineering tasks. Designed for agentic coding workflows, it supports tool calling an
131K context budget
~anthropic

Anthropic Claude Haiku Latest

This model always redirects to the latest model in the Anthropic Claude Haiku family.
200K context standard
~openai

OpenAI GPT Mini Latest

This model always redirects to the latest model in the OpenAI GPT Mini family.
400K context standard
~google

Google Gemini Pro Latest

This model always redirects to the latest model in the Google Gemini Pro family.
1049K context premium
~moonshotai

MoonshotAI Kimi Latest

This model always redirects to the latest model in the MoonshotAI Kimi family.
262K context standard
~google

Google Gemini Flash Latest

This model always redirects to the latest model in the Google Gemini Flash family.
1049K context standard
~anthropic

Anthropic Claude Sonnet Latest

This model always redirects to the latest model in the Anthropic Claude Sonnet family.
1000K context premium
~openai

OpenAI GPT Latest

This model always redirects to the latest model in the OpenAI GPT family.
1050K context premium
Qwen

Qwen: Qwen3.5 Plus 2026-04-20

Qwen3.5 Plus (April 2026) is a large-scale multimodal language model from Alibaba. It accepts text, image, and video input and produces text output, with a 1M token context window. This...
1000K context standard
Qwen

Qwen: Qwen3.6 Flash

Qwen3.6 Flash is a fast, efficient language model from Alibaba's Qwen 3.6 series. It supports text, image, and video input with a 1M token context window. Tiered pricing kicks in...
1000K context standard
Qwen

Qwen: Qwen3.6 35B A3B

Qwen3.6-35B-A3B is an open-weight multimodal model from Alibaba Cloud with 35 billion total parameters and 3 billion active parameters per token. It uses a hybrid sparse mixture-of-experts architectur
262K context 35B standard
Qwen

Qwen: Qwen3.6 Max Preview

Qwen3.6-Max-Preview is a proprietary frontier model from Alibaba Cloud built on a sparse mixture-of-experts architecture with approximately 1 trillion total parameters. It is optimized for agentic cod
262K context standard
Qwen

Qwen: Qwen3.6 27B

Qwen3.6 27B is a dense 27-billion-parameter language model from the Qwen Team at Alibaba, released in April 2026. It features hybrid multimodal capabilities — accepting text, image, and video inputs...
262K context 27B standard
OpenAI

OpenAI: GPT-5.5 Pro

GPT-5.5 Pro is OpenAI’s high-capability model optimized for deep reasoning and accuracy on complex, high-stakes workloads. It features a 1M+ token context window (922K input, 128K output) with support
1050K context premium
OpenAI

OpenAI: GPT-5.5

GPT-5.5 is OpenAI’s frontier model designed for complex professional workloads, building on GPT-5.4 with stronger reasoning, higher reliability, and improved token efficiency on hard tasks. It feature
1050K context premium
DeepSeek

DeepSeek: DeepSeek V4 Pro

DeepSeek V4 Pro is a large-scale Mixture-of-Experts model from DeepSeek with 1.6T total parameters and 49B activated parameters, supporting a 1M-token context window. It is designed for advanced reaso
1049K context 49B budget
DeepSeek

DeepSeek: DeepSeek V4 Flash (free)

DeepSeek V4 Flash is an efficiency-optimized Mixture-of-Experts model from DeepSeek with 284B total parameters and 13B activated parameters, supporting a 1M-token context window. It is designed for fa
1049K context 284B budget
DeepSeek

DeepSeek: DeepSeek V4 Flash

DeepSeek V4 Flash is an efficiency-optimized Mixture-of-Experts model from DeepSeek with 284B total parameters and 13B activated parameters, supporting a 1M-token context window. It is designed for fa
1049K context 284B budget
Inclusionai

inclusionAI: Ling-2.6-1T

Ling-2.6-1T is an instant (instruct) model from inclusionAI and the company’s trillion-parameter flagship, designed for real-world agents that require fast execution and high efficiency at scale. It u
262K context standard
Tencent

Tencent: Hy3 preview

Hy3 preview is a high-efficiency Mixture-of-Experts model from Tencent designed for agentic workflows and production use. It supports configurable reasoning levels across disabled, low, and high modes
262K context budget
Xiaomi

Xiaomi: MiMo-V2.5-Pro

MiMo-V2.5-Pro is Xiaomi’s flagship model, delivering strong performance in general agentic capabilities, complex software engineering, and long-horizon tasks, with top rankings on benchmarks such as C
1049K context standard
Xiaomi

Xiaomi: MiMo-V2.5

MiMo-V2.5 is a native omnimodal model by Xiaomi. It delivers Pro-level agentic performance at roughly half the inference cost, while surpassing MiMo-V2-Omni in multimodal perception across image and v
1049K context standard
Inclusionai

inclusionAI: Ling-2.6-flash

Ling-2.6-flash is an instant (instruct) model from inclusionAI with 104B total parameters and 7.4B active parameters, designed for real-world agents that require fast responses, strong execution, and
262K context 104B budget
~anthropic

Anthropic Claude Opus Latest

This model always redirects to the latest model in the Anthropic Claude Opus family.
1000K context premium
Baidu

Baidu: Qianfan-OCR-Fast

Qianfan-OCR-Fast is a domain-specific multimodal large model purpose-built for OCR. By leveraging specialized OCR training data while preserving versatile multimodal intelligence, it provides a powerf
66K context standard
Moonshotai

MoonshotAI: Kimi K2.6

Kimi K2.6 is Moonshot AI's next-generation multimodal model, designed for long-horizon coding, coding-driven UI/UX generation, and multi-agent orchestration. It handles complex end-to-end coding tasks
262K context standard
Anthropic

Anthropic: Claude Opus 4.7

Opus 4.7 is the next generation of Anthropic's Opus family, built for long-running, asynchronous agents. Building on the coding and agentic strengths of Opus 4.6, it delivers stronger performance on...
1000K context premium
Anthropic

Anthropic: Claude Opus 4.6 (Fast)

Fast-mode variant of [Opus 4.6](/anthropic/claude-opus-4.6) - identical capabilities with higher output speed at 6x premium pricing. Learn more in Anthropic's docs: https://platform.claude.com/docs/e
1000K context premium
Z-ai

Z.ai: GLM 5.1

GLM-5.1 delivers a major leap in coding capability, with particularly significant gains in handling long-horizon tasks. Unlike previous models built around minute-level interactions, GLM-5.1 can work
203K context standard
Google

Google: Gemma 4 26B A4B (free)

Gemma 4 26B A4B IT is an instruction-tuned Mixture-of-Experts (MoE) model from Google DeepMind. Despite 25.2B total parameters, only 3.8B activate per token during inference — delivering near-31B qual
262K context 26B budget
Google

Google: Gemma 4 26B A4B

Gemma 4 26B A4B IT is an instruction-tuned Mixture-of-Experts (MoE) model from Google DeepMind. Despite 25.2B total parameters, only 3.8B activate per token during inference — delivering near-31B qual
262K context 26B budget
Google

Google: Gemma 4 31B (free)

Gemma 4 31B Instruct is Google DeepMind's 30.7B dense multimodal model supporting text and image input with text output. Features a 256K token context window, configurable thinking/reasoning mode, nat
262K context 31B budget
Google

Google: Gemma 4 31B

Gemma 4 31B Instruct is Google DeepMind's 30.7B dense multimodal model supporting text and image input with text output. Features a 256K token context window, configurable thinking/reasoning mode, nat
262K context 31B budget
Qwen

Qwen: Qwen3.6 Plus

Qwen 3.6 Plus builds on a hybrid architecture that combines efficient linear attention with sparse mixture-of-experts routing, enabling strong scalability and high-performance inference. Compared to t
1000K context standard
Z-ai

Z.ai: GLM 5V Turbo

GLM-5V-Turbo is Z.ai’s first native multimodal agent foundation model, built for vision-based coding and agent-driven tasks. It natively handles image, video, and text inputs, excels at long-horizon p
203K context standard
Arcee-ai

Arcee AI: Trinity Large Thinking (free)

Trinity Large Thinking is a powerful open source reasoning model from the team at Arcee AI. It shows strong performance in PinchBench, agentic workloads, and reasoning tasks. Launch video: https://you
262K context budget
Arcee-ai

Arcee AI: Trinity Large Thinking

Trinity Large Thinking is a powerful open source reasoning model from the team at Arcee AI. It shows strong performance in PinchBench, agentic workloads, and reasoning tasks. Launch video: https://you
262K context budget

Try Any AI Model Instantly

Chat with GPT-5, Claude, Gemini, and 300+ models — all in one app. Compare responses side-by-side.

Download App → Try on Web App