GPTCrunch

Compare Models

Select up to 4 models to compare benchmarks, pricing, and capabilities side by side.

Claude Haiku 3.5 (Anthropic)
Mistral Small (Mistral AI)
Granite 3.2 2B (IBM)
Benchmarks

Benchmark        Claude Haiku 3.5   Mistral Small   Granite 3.2 2B
MMLU             85.2               81.2            57.0
HumanEval        88.1               84.8            50.0
GSM8K            91.6               88.4            55.0
GPQA             41.6               37.5            0.0
MGSM             88.5               80.1            0.0
ARC-Challenge    93.5               89.5            0.0
HellaSwag        89.5               84.0            68.0
MATH             69.2               61.0            0.0
SWE-bench        40.6               18.5            0.0
MMMLU            81.7               73.2            0.0

A score of 0.0 likely indicates no published result for that benchmark rather than an actual zero.
Pricing (USD per 1M tokens)

Model              Input    Output   Blended*
Claude Haiku 3.5   $0.80    $4.00    $2.40
Mistral Small      $0.10    $0.30    $0.20
Granite 3.2 2B     $0.03    $0.06    $0.04

*Blended = average of input and output price
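The blended figure is simply the arithmetic mean of the per-1M-token input and output prices, as the footnote states. A minimal sketch (the function name is illustrative, not part of any API):

```python
def blended_price(input_price: float, output_price: float) -> float:
    """Blended price = average of input and output price, per 1M tokens."""
    return (input_price + output_price) / 2

# Prices from the table above, in USD per 1M tokens
models = {
    "Claude Haiku 3.5": (0.80, 4.00),
    "Mistral Small": (0.10, 0.30),
    "Granite 3.2 2B": (0.03, 0.06),
}
for name, (inp, out) in models.items():
    print(f"{name}: ${blended_price(inp, out):.2f}")
```

Note that a 50/50 average only reflects real spend if a workload produces roughly as many output tokens as it consumes input tokens; prompt-heavy or generation-heavy workloads will skew the effective rate.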

Spec              Claude Haiku 3.5   Mistral Small   Granite 3.2 2B
Context Window    200K               32K             128K
Max Output        8K                 4K              N/A
TTFT              150 ms             140 ms          N/A
Speed             160 tok/s          170 tok/s       N/A
Parameters        N/A                24B             2B
Architecture      Transformer        Transformer     Dense Transformer
Open Source       No                 No              Yes
Tier              budget             budget          budget
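The TTFT and speed figures in the spec table can be combined into a rough end-to-end latency estimate, assuming a constant decode rate (real throughput varies with load, prompt length, and batching; the function name is illustrative):

```python
def generation_time_s(n_tokens: int, ttft_ms: float, tok_per_s: float) -> float:
    """Rough latency: time-to-first-token plus steady-state decoding time."""
    return ttft_ms / 1000 + n_tokens / tok_per_s

# Spec-table figures for a 500-token completion
print(f"Claude Haiku 3.5: {generation_time_s(500, 150, 160):.2f} s")
print(f"Mistral Small:    {generation_time_s(500, 140, 170):.2f} s")
```

At these throughputs, TTFT is a rounding error for long completions; it only dominates for very short responses.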

Quick Verdict

Best Performance: Claude Haiku 3.5
Best Value: Granite 3.2 2B
Fastest: Mistral Small