Compare AI Models

Pricing, benchmarks, and capabilities for Claude, GPT, Gemini, Grok, DeepSeek, and Perplexity.

Last updated: April 20, 2026

AI Model Comparison

Flagship models from each provider, compared on the metrics that matter.

| Model | Provider | API Input ($/1M tokens) | API Output ($/1M tokens) | Context Window | Coding | Reasoning | Writing | Speed |
|---|---|---|---|---|---|---|---|---|
| Claude Opus 4.7 | Anthropic | $5.00 | $25.00 | 1M | ★★★★★ | ★★★★★ | ★★★★★ | Medium |
| GPT-5.4 | OpenAI | $2.50 | $15.00 | 1M | ★★★★★ | ★★★★★ | ★★★★ | Fast |
| Gemini 3.1 Pro | Google | $2.00 | $12.00 | 1M | ★★★★★ | ★★★★★ | ★★★★ | Medium |
| Grok 4.20 | xAI | $2.00 | $6.00 | 2M | ★★★★ | ★★★★★ | ★★★★ | Very Fast |
| DeepSeek V3.2 | DeepSeek AI | $0.28 | $0.42 | 128K | ★★★★★ | ★★★★★ | ★★★★ | Fast |
| Sonar Pro | Perplexity | $3.00 | $15.00 | 200K | ★★★★★ | ★★★★★ | ★★★★ | Fast |
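
The $/1M rates above only become concrete against a workload. Below is a minimal Python sketch (the model keys are illustrative labels, not official API identifiers) that converts the table's rates into a per-request dollar cost:

```python
# Per-request cost from the flagship table: rates are quoted per 1M
# tokens, so cost = tokens / 1_000_000 * rate.

PRICES = {  # (input $/1M, output $/1M), copied from the table above
    "claude-opus-4.7": (5.00, 25.00),
    "gpt-5.4": (2.50, 15.00),
    "gemini-3.1-pro": (2.00, 12.00),
    "grok-4.20": (2.00, 6.00),
    "deepseek-v3.2": (0.28, 0.42),
    "sonar-pro": (3.00, 15.00),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one API call at standard, uncached rates."""
    in_rate, out_rate = PRICES[model]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# Example: a 10K-token prompt with a 1K-token completion.
for model in PRICES:
    print(f"{model}: ${request_cost(model, 10_000, 1_000):.4f}")
```

Because output rates run well above input rates for most flagship models, long completions tend to dominate cost even when prompts are large.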

Budget-Friendly Tiers

Smaller, faster models for cost-sensitive or high-volume workloads.

| Model | Provider | API Input ($/1M tokens) | API Output ($/1M tokens) | Context | Best For |
|---|---|---|---|---|---|
| Claude Haiku 4.5 | Anthropic | $1.00 | $5.00 | 200K | Fast classification, summaries, chat |
| Claude Sonnet 4.6 | Anthropic | $3.00 | $15.00 | 1M | Balanced coding + reasoning |
| GPT-5.4-mini | OpenAI | $0.75 | $4.50 | 400K | High-volume chat, lightweight tasks |
| GPT-5.4-nano | OpenAI | $0.20 | $1.25 | 400K | Edge inference, classification |
| Gemini 3.1 Flash-Lite | Google | $0.15 | $0.60 | 1M | Cost-efficient multimodal |
| Grok 4.1 Fast | xAI | $0.20 | $0.50 | 2M | Speed-first reasoning, search |
| DeepSeek V3.2 (cached) | DeepSeek AI | $0.028 | $0.42 | 128K | Bulk processing, agents (90% cache discount; see the sketch below) |
| Sonar (basic) | Perplexity | $1.00 | $1.00 | 128K | Quick search-grounded answers |
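
Prompt caching changes the arithmetic for agent-style workloads that resend a long, unchanging prefix on every turn. Here is a minimal sketch using the DeepSeek V3.2 rates above, assuming the 90% discount applies to cache-hit input tokens (exact caching semantics vary by provider):

```python
# Effect of DeepSeek's 90% cache discount (table above): cache-hit input
# tokens bill at $0.028/1M instead of $0.28/1M; output is unchanged.

INPUT_RATE = 0.28 / 1_000_000    # $ per uncached input token
CACHED_RATE = 0.028 / 1_000_000  # $ per cache-hit input token
OUTPUT_RATE = 0.42 / 1_000_000   # $ per output token

def agent_turn_cost(cached_in: int, fresh_in: int, out: int) -> float:
    """Cost of one turn where part of the prompt is a reused (cached) prefix."""
    return cached_in * CACHED_RATE + fresh_in * INPUT_RATE + out * OUTPUT_RATE

# An agent re-sending a 50K-token system prompt + tool definitions each turn:
baseline = agent_turn_cost(0, 55_000, 2_000)        # no cache hits
with_cache = agent_turn_cost(50_000, 5_000, 2_000)  # prefix served from cache
print(f"per turn: ${baseline:.5f} uncached vs ${with_cache:.5f} cached")
```

In this scenario the cached prefix cuts the per-turn cost by roughly 4.5x; the same pattern applies to any provider that offers prefix caching, at its own discount rate.
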
Methodology

All data is sourced from official provider documentation and API pricing pages as of April 2026. Star ratings reflect consensus from public benchmarks (SWE-bench, GPQA Diamond, USAMO, HumanEval) and independent evaluations (LMSYS Arena, Artificial Analysis). Pricing reflects standard pay-as-you-go API rates for the models listed. Ratings are updated when providers release new models or change pricing. This page does not use affiliate links.