GPT-5 Codex

GPT · Specialized

Thinking · Tool Use · Vision · Structured Output

About this model

GPT-5 Codex is a specialized version of GPT-5 optimized for software engineering and coding workflows. It is designed for both interactive development sessions and long, independent execution of complex engineering tasks. The model supports building projects from scratch, feature development, debugging, large-scale refactoring, and code review. Compared to GPT-5, Codex is more steerable, adheres closely to developer instructions, and produces cleaner, higher-quality code. Reasoning effort can be adjusted with the `reasoning.effort` parameter; read the [docs here](https://openrouter.ai/docs/use-cases/reasoning-tokens#reasoning-effort-level).

Codex integrates into developer environments including the CLI, IDE extensions, GitHub, and cloud tasks. It adapts reasoning effort dynamically, providing fast responses for small tasks while sustaining extended multi-hour runs for large projects. The model is trained to perform structured code reviews, catching critical flaws by reasoning over dependencies and validating behavior against tests. It also accepts multimodal inputs such as images or screenshots for UI development, and integrates tool use for search, dependency installation, and environment setup. Codex is intended specifically for agentic coding applications.
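A request that sets the reasoning effort might look like the sketch below. The endpoint URL, the `openai/gpt-5-codex` model slug, and the payload shape are illustrative assumptions; consult the linked docs for the exact values.

```python
import json

# Assumed endpoint for illustration; see the reasoning-tokens docs above.
API_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(prompt: str, effort: str = "medium") -> dict:
    """Build a chat-completions body with an adjustable reasoning.effort level."""
    if effort not in {"low", "medium", "high"}:
        raise ValueError(f"unsupported effort level: {effort}")
    return {
        "model": "openai/gpt-5-codex",  # assumed slug for this model
        "messages": [{"role": "user", "content": prompt}],
        # Higher effort trades latency for deeper reasoning on hard tasks.
        "reasoning": {"effort": effort},
    }

body = build_request("Refactor this function to remove duplication.", effort="high")
print(json.dumps(body, indent=2))
```

Posting this body with any HTTP client (plus an `Authorization` header) is all that changes between effort levels; the rest of the request is a standard chat completion.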

Performance Tier

Specialized

GPT-5 Codex is a specialized model from GPT, built for a specific domain.

Domain-specific model. Optimized for a particular task such as code generation, image creation, or web search.

Pricing

This model is included in Elosia plans
| Type | Price per 1M tokens |
| --- | --- |
| Input (prompt) | $1.25 |
| Output (completion) | $10.00 |
| Cache read | $0.125 |
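At these rates, the cost of a request follows directly from its token counts. A minimal sketch, with the rates copied from the pricing table and the helper name hypothetical:

```python
# Per-1M-token rates from the pricing table (USD).
INPUT_RATE = 1.25
OUTPUT_RATE = 10.00
CACHE_READ_RATE = 0.125

def estimate_cost(input_tokens: int, output_tokens: int, cached_tokens: int = 0) -> float:
    """Estimate request cost in USD; cached input tokens bill at the cache-read rate."""
    uncached = input_tokens - cached_tokens
    cost = (
        uncached * INPUT_RATE
        + output_tokens * OUTPUT_RATE
        + cached_tokens * CACHE_READ_RATE
    ) / 1_000_000
    return round(cost, 6)

# e.g. a 200K-token prompt (half served from cache) plus a 10K-token completion:
print(estimate_cost(200_000, 10_000, cached_tokens=100_000))  # → 0.2375
```

Note how the 10x cheaper cache-read rate dominates the savings on long, repeated prompts, which matters for multi-hour agentic runs that resend large contexts.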

Capabilities

Context Length: 400K
Max Output Tokens: 128K
Tokenizer: GPT
Input: text, image
Output: text
Release Date: September 23, 2025
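The prompt and the completion share the 400K context window, and completions are separately capped at 128K tokens. A sketch of the resulting budget check (limits copied from the table above; token counts are taken as given rather than computed from a tokenizer):

```python
CONTEXT_LENGTH = 400_000  # total tokens the model can attend to
MAX_OUTPUT = 128_000      # hard cap on completion tokens

def fits(prompt_tokens: int, max_output_tokens: int) -> bool:
    """Check a request against the model's context and output limits."""
    if max_output_tokens > MAX_OUTPUT:
        return False
    # Prompt and completion together must fit in the context window.
    return prompt_tokens + max_output_tokens <= CONTEXT_LENGTH

print(fits(300_000, 100_000))  # → True
print(fits(300_000, 128_000))  # → False: 428K exceeds the 400K window
```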

Benchmarks

Programming
HumanEval: 93%
SWE-bench Verified: 67.2%

Recommended Use Cases

Coding

Strengths

  • Dedicated coding model built on GPT-5 architecture
  • Good multi-file editing capabilities
  • More affordable than GPT-5.1/5.2 Codex variants

Limitations

  • Outperformed by newer Codex models on SWE-bench
  • Limited to coding use cases

Resources

This model may use your data for training.
