Inception: Mercury Coder

Mercury Coder is the first diffusion large language model (dLLM). Applying a breakthrough discrete diffusion approach, the model runs 5-10x faster than even speed-optimized models like Claude 3.5 Haiku and GPT-4o Mini while matching their performance. Mercury Coder's speed lets developers stay in the flow while coding, with rapid chat-based iteration and responsive code-completion suggestions. On Copilot Arena, Mercury Coder ranks 1st in speed and ties for 2nd in quality. Read more in the [blog post here](https://www.inceptionlabs.ai/blog/introducing-mercury).

Context Window: 128K tokens
Parameters: N/A
Input Price: $0.25/1M tokens
Output Price: $0.75/1M tokens
Price Tier: budget
Provider: Inception
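Given the listed per-token prices, the cost of a request can be estimated with simple arithmetic. The sketch below is illustrative only (the function name and example token counts are assumptions, not part of any Inception or CoreAI API); it applies the $0.25/1M input and $0.75/1M output rates above.

```python
# Listed Mercury Coder prices (USD per 1M tokens).
INPUT_PRICE_PER_M = 0.25
OUTPUT_PRICE_PER_M = 0.75

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request at the listed rates."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: a 2,000-token prompt with a 500-token completion
# costs (2000/1M)*$0.25 + (500/1M)*$0.75 = $0.000875.
print(f"${estimate_cost(2_000, 500):.6f}")
```

At these rates, even a million-token workload (1M in, 1M out) comes to about $1.00, which is what places the model in the budget tier.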

How to Use Inception: Mercury Coder

With CoreAI, you can start chatting with Inception: Mercury Coder instantly — no separate subscription needed. CoreAI bundles access to Inception: Mercury Coder along with 300+ other AI models from Inception and other providers like OpenAI, Anthropic, Google, Meta, and more.

  1. Download the CoreAI app for iOS, Android, or use the Web App
  2. Select Inception: Mercury Coder from the model selector
  3. Start chatting, comparing, or creating with AI

More Inception Models

Inception: Mercury 2
Mercury 2 is an extremely fast reasoning LLM, and the first reasoning diffusion LLM (dLLM). Instead of generating tokens sequentially, Mercury 2 produ…
128K context, budget tier

Inception: Mercury
Mercury is the first diffusion large language model (dLLM). Applying a breakthrough discrete diffusion approach, the model runs 5-10x faster than even…
128K context, budget tier

Try Inception: Mercury Coder Now

Chat with Inception: Mercury Coder and 300+ other AI models — all in one app.

Download the app, or try it on the Web App.