GPTCrunch

Compare Models

Select up to 4 models to compare benchmarks, pricing, and capabilities side by side.

Claude Haiku 3.5 (Anthropic)

Mistral Small (Mistral AI)

Command R 7B (Cohere)

| Benchmark     | Claude Haiku 3.5 | Mistral Small | Command R 7B |
|---------------|------------------|---------------|--------------|
| MMLU          | 85.2             | 81.2          | 68.0         |
| HumanEval     | 88.1             | 84.8          | 58.0         |
| GSM8K         | 91.6             | 88.4          | 70.0         |
| GPQA          | 41.6             | 37.5          | N/A          |
| MGSM          | 88.5             | 80.1          | 72.0         |
| ARC-Challenge | 93.5             | 89.5          | N/A          |
| HellaSwag     | 89.5             | 84.0          | 76.0         |
| MATH          | 69.2             | 61.0          | N/A          |
| SWE-bench     | 40.6             | 18.5          | N/A          |
| MMMLU         | 81.7             | 73.2          | N/A          |

N/A = no published score for this benchmark.
Pricing (USD per 1M tokens)

| Model            | Input | Output | Blended* |
|------------------|-------|--------|----------|
| Claude Haiku 3.5 | $0.80 | $4.00  | $2.40    |
| Mistral Small    | $0.10 | $0.30  | $0.20    |
| Command R 7B     | $0.04 | $0.08  | $0.06    |

*Blended = average of input and output price
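The blended figures in the pricing table follow directly from that definition. A minimal sketch, assuming the simple (unweighted) average stated in the footnote; `blended_price` is a hypothetical helper, not part of any site API:

```python
def blended_price(input_price: float, output_price: float) -> float:
    """Blended price = simple average of input and output price
    (USD per 1M tokens), per the footnote above."""
    return (input_price + output_price) / 2

# Values from the pricing table (rounded to cents to sidestep float noise):
assert round(blended_price(0.80, 4.00), 2) == 2.40  # Claude Haiku 3.5
assert round(blended_price(0.10, 0.30), 2) == 0.20  # Mistral Small
assert round(blended_price(0.04, 0.08), 2) == 0.06  # Command R 7B
```

Note that some pricing pages instead weight the blend (e.g. 3:1 input-to-output); here the table values match the plain average exactly.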

| Spec                       | Claude Haiku 3.5 | Mistral Small | Command R 7B      |
|----------------------------|------------------|---------------|-------------------|
| Context Window             | 200K             | 32K           | 128K              |
| Max Output                 | 8K               | 4K            | N/A               |
| TTFT (time to first token) | 150 ms           | 140 ms        | N/A               |
| Speed                      | 160 tok/s        | 170 tok/s     | N/A               |
| Parameters                 | N/A              | 24B           | 7B                |
| Architecture               | Transformer      | Transformer   | Dense Transformer |
| Open Source                | No               | No            | Yes               |
| Tier                       | Budget           | Budget        | Budget            |

Quick Verdict

Best Performance: Claude Haiku 3.5
Best Value: Command R 7B
Fastest: Mistral Small