Claude Opus 4.7

Thinking · Tool Use · Vision · Structured Output

About this model

Opus 4.7 is the next generation of Anthropic's Opus family, built for long-running, asynchronous agents. Building on the coding and agentic strengths of Opus 4.6, it delivers stronger performance on...

Performance Tier

Flagship

Claude Opus 4.7 is a flagship model from Claude: the most capable in its lineup.

Best-in-class model from this provider. Highest performance across benchmarks, ideal for demanding tasks.

Pricing

This model is included in Elosia plans.
Premium

Highest cost level. A long conversation can quickly consume your monthly cap.

Type                 per 1M tokens
Input (prompt)       $5.00
Output (completion)  $25.00
Cache read           $0.50
Cache write          $6.25
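At these rates, per-request cost is simple arithmetic: tokens of each type times the per-1M price. The sketch below is a generic estimator using the table above; it is not an official billing formula, and real token counts come from the provider's tokenizer.

```python
# Estimate the dollar cost of one request at Opus 4.7's listed rates.
RATES_PER_1M = {
    "input": 5.00,        # $ per 1M input (prompt) tokens
    "output": 25.00,      # $ per 1M output (completion) tokens
    "cache_read": 0.50,   # $ per 1M cached tokens read
    "cache_write": 6.25,  # $ per 1M tokens written to cache
}

def request_cost(input_tokens=0, output_tokens=0,
                 cache_read_tokens=0, cache_write_tokens=0):
    """Return the estimated cost in dollars for a single request."""
    usage = {
        "input": input_tokens,
        "output": output_tokens,
        "cache_read": cache_read_tokens,
        "cache_write": cache_write_tokens,
    }
    return sum(RATES_PER_1M[kind] * n / 1_000_000 for kind, n in usage.items())

# A 20K-token prompt with a 2K-token reply:
# 20_000 * 5.00/1M + 2_000 * 25.00/1M = 0.10 + 0.05 = $0.15
print(f"${request_cost(input_tokens=20_000, output_tokens=2_000):.2f}")
```

Note how asymmetric the rates are: output tokens cost 5× input tokens, so long completions dominate the bill, while cache reads are 10× cheaper than fresh input.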

Capabilities

Context Length: 1.0M
Max Output Tokens: 128K
Tokenizer: Claude
Input: text, image
Output: text
Release Date: April 16, 2026
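The two limits above interact: a request must reserve room for its completion inside the context window. A minimal sketch of that check, assuming (as with most models) that output tokens count against the 1.0M context window:

```python
# Published limits for Opus 4.7 (from the capabilities table above).
CONTEXT_WINDOW = 1_000_000
MAX_OUTPUT_TOKENS = 128_000

def fits(prompt_tokens: int, requested_output_tokens: int) -> bool:
    """True if the prompt plus the requested completion fits both limits.

    Token counts here are illustrative; real counts come from the
    provider's tokenizer.
    """
    if requested_output_tokens > MAX_OUTPUT_TOKENS:
        return False
    return prompt_tokens + requested_output_tokens <= CONTEXT_WINDOW

print(fits(900_000, 64_000))   # 964K total fits in the 1.0M window
print(fits(900_000, 128_000))  # 1,028K total does not
```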

Benchmarks

General Intelligence
  MMLU: 91.5%
  GPQA Diamond: 94.2%

Mathematics
  MATH-500: Not reported

Programming
  HumanEval: Not reported
  SWE-bench Verified: 87.6%

Reasoning
  IFEval: Not reported
  ARC-AGI-2: 75.8%
  Humanity's Last Exam: 46.9%

Agentic
  SWE-bench Pro: 64.3%
  Terminal-Bench 2.0: 69.4%

Recommended Use Cases

Coding · Analysis · Research · Creative Writing

Strengths

  • Best generally-available model on real-world software engineering (SWE-bench Verified 87.6%, SWE-bench Pro 64.3%)
  • State-of-the-art graduate-level scientific reasoning (GPQA Diamond 94.2%)
  • Adaptive thinking — automatically adjusts compute to task complexity
  • Improved agentic robustness against prompt injection and malicious tool use
  • Enhanced vision: accepts images up to ~3.75 MP (3× prior Claude models)
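The adaptive-thinking strength above is exposed through the provider's Messages API. The sketch below assembles a request with extended thinking enabled; the model id "claude-opus-4-7" and the chosen budget are assumptions, so check the provider's documentation for the exact identifier and parameter values.

```python
# A minimal sketch of a Messages API request with extended thinking.
def build_request(prompt: str) -> dict:
    """Assemble keyword arguments for client.messages.create(**kwargs)."""
    return {
        "model": "claude-opus-4-7",  # assumed id, not confirmed by this page
        "max_tokens": 4_096,
        # Extended-thinking budget; must be smaller than max_tokens.
        "thinking": {"type": "enabled", "budget_tokens": 2_048},
        "messages": [{"role": "user", "content": prompt}],
    }

req = build_request("Summarize this diff and flag risky changes.")
print(req["model"])
```

Since the model adjusts its own compute to task complexity, the budget is best treated as a ceiling rather than a target.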

Limitations

  • Premium pricing ($5 / $25 per million input / output tokens)
  • MATH-500, HumanEval and IFEval not reported — Anthropic considers them saturated
  • Slower than Sonnet/Haiku for latency-sensitive queries

Resources

This model may use your data for training
