DeepSeek R1 0528

DeepSeek · Flagship
Thinking · Tool Use · Structured Output

About this model

May 28th update to the [original DeepSeek R1](/deepseek/deepseek-r1). Performance is on par with [OpenAI o1](/openai/o1), but the model is fully open-source, with open reasoning tokens. It has 671B parameters, with 37B active per inference pass.

Performance Tier

Flagship

DeepSeek R1 0528 is a flagship model from DeepSeek: the most capable in their lineup.

Best-in-class model from this provider. Highest performance across benchmarks, ideal for demanding tasks.

Pricing

This model is included in Elosia plans.

| Type | Price per 1M tokens |
|---|---|
| Input (prompt) | $0.450 |
| Output (completion) | $2.15 |
| Cache read | $0.225 |
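As a worked example of the rates above (the request sizes here are hypothetical), the cost of a single call can be estimated like this:

```python
# Estimate the dollar cost of one request at this model's per-1M-token rates.
INPUT_PER_M = 0.450       # $ per 1M input (prompt) tokens
OUTPUT_PER_M = 2.15       # $ per 1M output (completion) tokens
CACHE_READ_PER_M = 0.225  # $ per 1M cached prompt tokens

def request_cost(prompt_tokens: int, completion_tokens: int,
                 cached_tokens: int = 0) -> float:
    """Return the dollar cost of a single request.

    `cached_tokens` is the portion of the prompt served from cache,
    billed at the cheaper cache-read rate instead of the input rate.
    """
    uncached = prompt_tokens - cached_tokens
    return (
        uncached * INPUT_PER_M
        + cached_tokens * CACHE_READ_PER_M
        + completion_tokens * OUTPUT_PER_M
    ) / 1_000_000

# Hypothetical request: 10K-token prompt (4K of it cached), 2K-token reply.
print(f"${request_cost(10_000, 2_000, cached_tokens=4_000):.4f}")  # → $0.0079
```

Note how cache reads halve the cost of the repeated portion of the prompt, which matters for long system prompts reused across many calls.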

Capabilities

| Capability | Value |
|---|---|
| Context Length | 164K tokens |
| Max Output Tokens | 66K |
| Tokenizer | DeepSeek |
| Input | text |
| Output | text |
| Release Date | May 28, 2025 |
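The context and output limits above can be enforced client-side before a request is sent. A minimal sketch, assuming a rough 4-characters-per-token heuristic (not the actual DeepSeek tokenizer):

```python
CONTEXT_LIMIT = 164_000  # tokens (context length)
MAX_OUTPUT = 66_000      # tokens (max completion)

def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token.
    # Use a real tokenizer for anything billing-sensitive.
    return max(1, len(text) // 4)

def fits(prompt: str, max_completion_tokens: int) -> bool:
    """Check that the prompt plus the requested completion budget
    fit within the model's context window and output cap."""
    if max_completion_tokens > MAX_OUTPUT:
        return False
    return estimate_tokens(prompt) + max_completion_tokens <= CONTEXT_LIMIT

print(fits("hello " * 1000, 4_000))   # short prompt → True
print(fits("x" * 1_000_000, 4_000))   # ~250K estimated tokens → False
```

Checking limits before sending avoids paying for requests the API would reject or truncate.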

Benchmarks

| Category | Benchmark | Score |
|---|---|---|
| General Intelligence | MMLU | 90.5% |
| General Intelligence | MMLU-Pro | 85% |
| General Intelligence | GPQA Diamond | 87.5% |
| Mathematics | MATH-500 | 97.3% |
| Mathematics | AIME 2025 | 87.5% |
| Programming | HumanEval | 88.5% |
| Programming | SWE-bench Verified | 57.6% |
| Programming | LiveCodeBench | 73.3% |
| Reasoning | IFEval | 84% |
| Reasoning | Humanity's Last Exam | 17.7% |

Recommended Use Cases

Mathematics · Research · Analysis · Coding

Strengths

  • Exceptional mathematical reasoning (MATH-500 97.3%, AIME 2025 87.5%)
  • Strong science reasoning rivaling top proprietary models (GPQA 87.5%)
  • Open-weight reasoning model with transparent chain-of-thought
  • Excellent value — top-tier reasoning at open model pricing

Limitations

  • Slower responses due to extended thinking process
  • Less capable on software engineering tasks (SWE-bench 57.6%)
  • Creative writing and conversation less natural than chat-optimized models

Resources

This model may use your data for training
