Minimax

Minimax 2.5

Minimax · Balanced
Thinking · Tool Use · Structured Output

About this model

MiniMax-M2.5 is a SOTA large language model designed for real-world productivity. Trained across a diverse range of complex real-world digital working environments, M2.5 builds on the coding expertise of M2.1 and extends into general office work: fluently generating and operating Word, Excel, and PowerPoint files, switching context between diverse software environments, and working across different agent and human teams. Scoring 80.2% on SWE-Bench Verified, 51.3% on Multi-SWE-Bench, and 76.3% on BrowseComp, M2.5 is also more token-efficient than previous generations, having been trained to optimize its actions and output through planning.

Performance Tier

Balanced

Minimax 2.5 is a balanced model from Minimax: strong performance at a reasonable price.

Strong cost-performance ratio. Reliable for most professional use cases without premium pricing.

Pricing

This model is included in Elosia plans
Input (prompt): $0.120 per 1M tokens
Output (completion): $1.00 per 1M tokens
Cache read: $0.060 per 1M tokens
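The per-1M-token rates above can be turned into a per-request estimate. A minimal sketch, using the rates from the pricing table; the `estimate_cost` helper and the example token counts are illustrative, not part of any official SDK:

```python
# Rates from the pricing table, in USD per 1M tokens.
INPUT_RATE = 0.120       # input (prompt)
OUTPUT_RATE = 1.00       # output (completion)
CACHE_READ_RATE = 0.060  # cached prompt tokens read back

def estimate_cost(input_tokens, output_tokens, cached_tokens=0):
    """Estimated USD cost of one request; cached tokens bill at the cache-read rate."""
    fresh_input = input_tokens - cached_tokens
    return (fresh_input * INPUT_RATE
            + cached_tokens * CACHE_READ_RATE
            + output_tokens * OUTPUT_RATE) / 1_000_000

# e.g. a 100K-token prompt (40K of it cached) with a 5K-token completion:
cost = estimate_cost(100_000, 5_000, cached_tokens=40_000)  # ≈ $0.0146
```

Note how the output rate dominates: in the example above, the 5K completion tokens cost more than the 100K prompt tokens combined.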

Capabilities

Context Length: 197K
Max Output Tokens: 66K
Tokenizer: Other
Input: text
Output: text
Release Date: February 12, 2026
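The context and output limits above constrain how large a prompt can be for a given completion budget. A minimal sketch, assuming "197K" and "66K" mean 197,000 and 66,000 tokens; the helper name is illustrative:

```python
# Limits from the capabilities list (assumed exact token counts).
CONTEXT_WINDOW = 197_000
MAX_OUTPUT_TOKENS = 66_000

def max_prompt_budget(desired_output_tokens):
    """Largest prompt that still leaves room in the context window
    for the desired completion (capped at the model's output limit)."""
    output = min(desired_output_tokens, MAX_OUTPUT_TOKENS)
    return CONTEXT_WINDOW - output

# Reserving the full 66K output budget leaves 131K tokens for the prompt:
budget = max_prompt_budget(66_000)
```

Since prompt and completion share the same window, reserving the maximum output shrinks the usable prompt to roughly two-thirds of the advertised context length.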

Benchmarks

General Intelligence
  • MMLU: 88%
  • GPQA Diamond: 85%
Mathematics
  • MATH-500: 87%
Programming
  • HumanEval: 88%
  • SWE-bench Verified: 80.2%
Reasoning
  • IFEval: 85%

Recommended Use Cases

Coding · Analysis · Research · Data Extraction

Strengths

  • Top-tier software engineering (SWE-bench Verified 80.2%, rivaling Claude Opus)
  • Extremely cost-efficient: frontier-level coding at $0.12 input / $1.00 output per 1M tokens
  • Fast inference at 100 TPS with a 197K context window
  • Open-weight MoE model enabling self-hosting

Limitations

  • Weaker on general reasoning and math than top proprietary models
  • Smaller community and ecosystem compared to Claude/GPT/Gemini
  • Creative writing and conversational fluency lag behind chat-optimized models

Resources

This model may use your data for training
