Llama 4 Maverick 17B Instruct (128E) is a high-capacity multimodal language model from Meta, built on a mixture-of-experts (MoE) architecture with 128 experts and 17 billion active parameters per forward pass.
Maverick is the flagship model of the Llama 4 family and the most capable in Meta's lineup: best-in-class performance across benchmarks, suited to demanding tasks.
| Type | Price (USD per 1M tokens) |
|---|---|
| Input (prompt) | $0.150 |
| Output (completion) | $0.600 |
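The per-token prices in the table can be turned into a quick cost estimate for a request. The sketch below is illustrative: the two price constants come from the table above, while the helper function and example token counts are assumptions for demonstration.

```python
# Cost estimate for Llama 4 Maverick using the listed per-1M-token prices.
# The two prices come from the table above; the rest is illustrative.

INPUT_PRICE_PER_M = 0.150   # USD per 1M input (prompt) tokens
OUTPUT_PRICE_PER_M = 0.600  # USD per 1M output (completion) tokens

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Return the estimated request cost in USD."""
    return (prompt_tokens * INPUT_PRICE_PER_M
            + completion_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a 2,000-token prompt with a 500-token completion.
print(f"${estimate_cost(2_000, 500):.6f}")  # → $0.000600
```

Note that output tokens cost 4× as much as input tokens, so long completions dominate the bill even for prompt-heavy workloads.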
This model may use your data for training.