
NVIDIA: Nemotron Nano 12B 2 VL

NVIDIA Nemotron Nano 2 VL is a 12-billion-parameter open multimodal reasoning model designed for video understanding and document intelligence. It introduces a hybrid Transformer-Mamba architecture, combining transformer-level accuracy with Mamba’s memory-efficient sequence modeling for significantly higher throughput and lower latency. The model accepts text and multi-image document inputs and produces natural-language outputs. It is trained on high-quality NVIDIA-curated synthetic datasets optimized for optical character recognition, chart reasoning, and multimodal comprehension. Nemotron Nano 2 VL achieves leading results on OCRBench v2 and an average score of roughly 74 across MMMU, MathVista, AI2D, OCRBench, OCR-Reasoning, ChartQA, DocVQA, and Video-MME, surpassing prior open vision-language baselines. With Efficient Video Sampling (EVS), it handles long-form videos at reduced inference cost. Open weights, training data, and fine-tuning recipes are released under a permissive NVIDIA open license, with deployment supported across NeMo, NIM, and major inference runtimes.

Context Window: 131K tokens
Parameters: 12B
Input Price: $0.20 / 1M tokens
Output Price: $0.60 / 1M tokens
Price Tier: budget
Provider: NVIDIA
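As a worked example of the listed per-token pricing ($0.20 per million input tokens, $0.60 per million output tokens), the sketch below estimates the cost of a single request; the token counts used are hypothetical, chosen only for illustration.

```python
# Listed Nemotron Nano 12B 2 VL rates from the spec card above.
INPUT_PRICE_PER_M = 0.20   # USD per 1,000,000 input tokens
OUTPUT_PRICE_PER_M = 0.60  # USD per 1,000,000 output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the listed rates."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Hypothetical example: a 20,000-token document prompt with a 1,000-token answer.
cost = request_cost(20_000, 1_000)
print(f"${cost:.4f}")  # $0.0046
```

At these rates, even long document prompts stay well under a cent per request, which is consistent with the "budget" tier label.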

How to Use NVIDIA: Nemotron Nano 12B 2 VL

With CoreAI, you can start chatting with NVIDIA: Nemotron Nano 12B 2 VL instantly — no separate subscription needed. CoreAI bundles access to NVIDIA: Nemotron Nano 12B 2 VL along with 300+ other AI models from NVIDIA and other providers like OpenAI, Anthropic, Google, Meta, and more.

  1. Download the CoreAI app for iOS, Android, or use the Web App
  2. Select NVIDIA: Nemotron Nano 12B 2 VL from the model selector
  3. Start chatting, comparing, or creating with AI
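For developers, a multimodal request to a vision-language model like this one is commonly expressed in the OpenAI-style chat format. The sketch below shows what such a request body might look like; note that this page documents access only through the CoreAI apps, so the model slug, the endpoint shape, and the `image_url` content part are all assumptions, not a documented CoreAI API.

```python
import json

# Hypothetical request body in the widely used OpenAI-style chat format.
# The model slug below is an assumption; check your provider's model list
# for the actual identifier before sending a real request.
payload = {
    "model": "nvidia/nemotron-nano-12b-2-vl",  # hypothetical slug
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Summarize the key figures in this chart."},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/chart.png"}},
            ],
        }
    ],
    "max_tokens": 512,
}
print(json.dumps(payload, indent=2))
```

Mixing `text` and `image_url` parts in a single user message is how multi-image document inputs are typically passed to vision-language models in this format.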

More NVIDIA Models

NVIDIA

NVIDIA: Nemotron 3 Super (free)

NVIDIA Nemotron 3 Super is a 120B-parameter open hybrid MoE model, activating just 12B parameters for maximum compute efficiency and accuracy…
262K context · budget
NVIDIA

NVIDIA: Nemotron 3 Super

NVIDIA Nemotron 3 Super is a 120B-parameter open hybrid MoE model, activating just 12B parameters for maximum compute efficiency and accuracy…
262K context · budget
NVIDIA

NVIDIA: Nemotron 3 Nano 30B A3B (free)

NVIDIA Nemotron 3 Nano 30B A3B is a small MoE language model with high compute efficiency and accuracy for developers building specialized agentic…
256K context · budget
NVIDIA

NVIDIA: Nemotron 3 Nano 30B A3B

NVIDIA Nemotron 3 Nano 30B A3B is a small MoE language model with high compute efficiency and accuracy for developers building specialized agentic…
262K context · budget
NVIDIA

NVIDIA: Nemotron Nano 12B 2 VL (free)

NVIDIA Nemotron Nano 2 VL is a 12-billion-parameter open multimodal reasoning model designed for video understanding and document intelligence…
128K context · budget
NVIDIA

NVIDIA: Llama 3.3 Nemotron Super 49B V1.5

Llama-3.3-Nemotron-Super-49B-v1.5 is a 49B-parameter, English-centric reasoning/chat model derived from Meta’s Llama-3.3-70B-Instruct with a 128K context…
131K context · budget

Try NVIDIA: Nemotron Nano 12B 2 VL Now

Chat with NVIDIA: Nemotron Nano 12B 2 VL and 300+ other AI models — all in one app.

Download the App · Try on the Web App