# Compare Models

Benchmark scores, pricing, and capabilities for three budget-tier models, compared side by side: Claude Haiku 3.5 (Anthropic), Mistral Small (Mistral AI), and Command R 7B (Cohere).
## Benchmarks

| Benchmark | Claude Haiku 3.5 | Mistral Small | Command R 7B |
|---|---|---|---|
| MMLU | 85.2 | 81.2 | 68.0 |
| HumanEval | 88.1 | 84.8 | 58.0 |
| GSM8K | 91.6 | 88.4 | 70.0 |
| GPQA | 41.6 | 37.5 | 0.0 |
| MGSM | 88.5 | 80.1 | 72.0 |
| ARC-Challenge | 93.5 | 89.5 | 0.0 |
| HellaSwag | 89.5 | 84.0 | 76.0 |
| MATH | 69.2 | 61.0 | 0.0 |
| SWE-bench | 40.6 | 18.5 | 0.0 |
| MMMLU | 81.7 | 73.2 | 0.0 |
## Pricing

| Model | Input | Output | Blended* |
|---|---|---|---|
| Claude Haiku 3.5 | $0.80 | $4.00 | $2.40 |
| Mistral Small | $0.10 | $0.30 | $0.20 |
| Command R 7B | $0.04 | $0.08 | $0.06 |

*Blended = average of input and output price
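The blended column above is a simple average of the input and output rates. A minimal sketch of that calculation, plus a per-request cost estimate, assuming the listed prices are USD per 1 million tokens (the table does not state the unit explicitly):

```python
# Prices copied from the pricing table above.
# Assumption: rates are USD per 1M tokens.
PRICES = {
    "Claude Haiku 3.5": {"input": 0.80, "output": 4.00},
    "Mistral Small": {"input": 0.10, "output": 0.30},
    "Command R 7B": {"input": 0.04, "output": 0.08},
}

def blended_price(model: str) -> float:
    """Blended = average of input and output price, per the table footnote."""
    p = PRICES[model]
    return (p["input"] + p["output"]) / 2

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request under the per-1M-token assumption."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

for model in PRICES:
    print(f"{model}: blended ${blended_price(model):.2f} per 1M tokens")
```

For example, `request_cost("Mistral Small", 1000, 500)` charges 1,000 input tokens and 500 output tokens at their respective rates, yielding $0.00025.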
## Specifications

| Spec | Claude Haiku 3.5 | Mistral Small | Command R 7B |
|---|---|---|---|
| Context Window | 200K | 32K | 128K |
| Max Output | 8K | 4K | N/A |
| TTFT | 150ms | 140ms | N/A |
| Speed | 160 tok/s | 170 tok/s | N/A |
| Parameters | N/A | 24B | 7B |
| Architecture | Transformer | Transformer | Dense Transformer |
| Open Source | No | No | Yes |
| Tier | Budget | Budget | Budget |
## Quick Verdict

- **Best Performance:** Claude Haiku 3.5
- **Best Value:** Command R 7B
- **Fastest:** Mistral Small