Mistral AI
Explore all 20 AI models from Mistral AI. Compare benchmarks, pricing, and capabilities across the full model lineup.
mistral.ai
Total Models: 20 · Avg Benchmark: 72.1 · Open Source: 13 · Modalities: 3
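Prices throughout this listing are quoted per million tokens, billed separately for input (prompt) and output (completion). As a quick sketch of how a single request's cost works out, the helper below is illustrative only; the function name and the 8,000/1,000 token counts are made up for the example, while the $0.50/M and $1.50/M prices are Mistral Large 3's listed rates.

```python
def request_cost_usd(input_tokens: int, output_tokens: int,
                     input_price_per_m: float, output_price_per_m: float) -> float:
    """Cost of one request given per-million-token prices."""
    return (input_tokens / 1_000_000) * input_price_per_m \
         + (output_tokens / 1_000_000) * output_price_per_m

# Mistral Large 3 at the listed $0.50/M input, $1.50/M output:
# an 8,000-token prompt plus a 1,000-token completion.
cost = request_cost_usd(8_000, 1_000, 0.50, 1.50)
print(f"${cost:.4f}")  # → $0.0055
```

Note that output tokens are typically priced 3x the input rate in this lineup, so completion length dominates the bill for generation-heavy workloads.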
All Models (20)
Mistral Large 3
Mistral's open-weight 675B-parameter MoE model with 41B active parameters, multimodal input, and a 256K context window.
Input $0.50/M · Output $1.50/M · Context 256K
Ministral 3B
Smallest Mistral model, for edge computing and extremely resource-constrained deployments.
Input $0.04/M · Output $0.10/M · Context 128K
Devstral Small 2
Coding-specialized model that outperforms Qwen 3 Coder Flash despite its smaller size.
Input $0.20/M · Output $0.60/M · Context 128K
Ministral 14B
Mid-size Mistral model bridging the gap between 8B edge models and large frontier offerings.
Input $0.15/M · Output $0.45/M · Context 128K
Magistral Small
Open-source reasoning model built on Small 3.1 with supervised fine-tuning (SFT) and reinforcement learning (RL). Efficient multilingual reasoning.
Input $0.20/M · Output $0.60/M · Context 128K
Magistral Medium
Mistral's first reasoning model, with a 50% AIME-24 improvement via scalable RL. Reasons in 8+ languages.
Input $2.00/M · Output $6.00/M · Context 128K
Mistral Medium 3
Mistral's mid-tier model, offering 90% of Claude Sonnet quality at significantly lower cost.
Input $0.40/M · Output $2.00/M · Context 131K
Mistral Small 3.1
Compact 24B model with image understanding, 128K context, and an Apache 2.0 license.
Input $0.10/M · Output $0.30/M · Context 128K
Mistral Small
Mistral's efficient model for everyday tasks. Fast and cost-effective.
Input $0.10/M · Output $0.30/M · Context 32K
Codestral 25.01
Mistral's specialized code model supporting 80+ languages, with a 256K context and fill-in-the-middle capability.
Input $0.30/M · Output $0.90/M · Context 256K
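Fill-in-the-middle (FIM) means the model completes code between a prefix (text before the cursor) and a suffix (text after it), rather than only continuing left-to-right. As a rough sketch: Mistral exposes this for Codestral through a dedicated FIM completions endpoint, and the request body below reflects my understanding of its shape (`model`, `prompt`, `suffix`); treat the exact URL and field names as assumptions to verify against the official API reference.

```python
import json

# Assumed endpoint path for Codestral's FIM API; verify before use.
FIM_URL = "https://api.mistral.ai/v1/fim/completions"

def build_fim_payload(prefix: str, suffix: str,
                      model: str = "codestral-latest",
                      max_tokens: int = 64) -> dict:
    """FIM request body: the model generates the code between prefix and suffix."""
    return {"model": model, "prompt": prefix,
            "suffix": suffix, "max_tokens": max_tokens}

payload = build_fim_payload(
    prefix="def fib(n):\n    ",
    suffix="\n    return a",
)
print(json.dumps(payload, indent=2))
# Send with e.g.:
#   requests.post(FIM_URL, json=payload,
#                 headers={"Authorization": f"Bearer {API_KEY}"})
```

Separating prefix and suffix lets an editor plugin insert a completion mid-file while keeping the code after the cursor consistent.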
Mistral Large
Mistral's flagship model with strong multilingual and code-generation capabilities.
Input $2.00/M · Output $6.00/M · Context 128K
Pixtral Large
Mistral's flagship multimodal model, built on Mistral Large with vision capabilities.
Input $2.00/M · Output $6.00/M · Context 128K
Ministral 8B
Mistral's edge-optimized model with a knowledge-dense 8B-parameter design.
Input $0.10/M · Output $0.10/M · Context 128K
Pixtral 12B
Mistral's open-source multimodal model. Processes images natively alongside text.
Input $0.10/M · Output $0.10/M · Context 128K
Mistral NeMo
Mistral's 12B open-source model, co-developed with NVIDIA as a replacement for Mistral 7B.
Input $0.04/M · Output $0.04/M · Context 128K
Codestral Mamba
Code model using the Mamba SSM architecture for linear-time inference, with theoretically unbounded context.
Input $0.10/M · Output $0.30/M · Context 256K
Codestral
Mistral's first code-focused model, with a 32K context. Supports 80+ programming languages.
Input $0.30/M · Output $0.90/M · Context 32K
Mixtral 8x22B
Mistral's large open-source MoE model with 141B total parameters (39B active). Strong coding and reasoning.
Input $0.65/M · Output $0.65/M · Context 66K
Mixtral 8x7B
The open-source MoE model that popularized the mixture-of-experts approach. Fast and efficient.
Input $0.24/M · Output $0.24/M · Context 32K
Mistral 7B
The model that launched Mistral: open-source, fast, and surprisingly capable for its 7B size.
Input $0.06/M · Output $0.06/M · Context 32K
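To compare models across the lineup, one practical approach is to filter by the context window you need and rank the remainder by a blended price. The sketch below transcribes a handful of entries from the listing above; the 3:1 input-to-output token ratio used for blending is an illustrative assumption, not a Mistral recommendation, and the `cheapest` helper is made up for this example.

```python
# A few entries transcribed from the listing above:
# (name, input $/M, output $/M, context window in K tokens)
MODELS = [
    ("Mistral Large 3",   0.50, 1.50, 256),
    ("Mistral Small 3.1", 0.10, 0.30, 128),
    ("Ministral 3B",      0.04, 0.10, 128),
    ("Mistral NeMo",      0.04, 0.04, 128),
    ("Codestral 25.01",   0.30, 0.90, 256),
]

def cheapest(min_context_k: int, input_ratio: float = 0.75):
    """Cheapest model whose context window meets the requirement.

    Blends input and output prices at an assumed 3:1 token ratio.
    """
    eligible = [m for m in MODELS if m[3] >= min_context_k]
    return min(eligible,
               key=lambda m: input_ratio * m[1] + (1 - input_ratio) * m[2])

print(cheapest(128)[0])  # → Mistral NeMo
print(cheapest(200)[0])  # → Codestral 25.01
```

The blend matters: Ministral 3B has the same input price as Mistral NeMo, but NeMo's flat $0.04/M output rate makes it cheaper under this ratio.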