A 12B parameter model with a 128k token context length built by Mistral in collaboration with NVIDIA. The model is multilingual, supporting English, French, German, Spanish, Italian, Portuguese, Chinese, Japanese, Korean, Arabic, and Hindi. It supports function calling and is released under the Apache 2.0 license.
Nemo is a compact model from Mistral, optimized for speed and low cost. It is a good fit for high-volume or simple tasks.
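Since the model supports function calling, a request typically declares the tools it may invoke. Below is a minimal sketch of such a request payload, assuming an OpenAI-compatible chat completions schema; the `get_weather` tool and the `mistral-nemo` model identifier are illustrative assumptions, not taken from this card.

```python
import json

# Illustrative function-calling request payload (schema assumed to be
# OpenAI-compatible; tool and model names are hypothetical examples).
payload = {
    "model": "mistral-nemo",  # assumed identifier for illustration
    "messages": [
        {"role": "user", "content": "What's the weather in Paris?"}
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Look up current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "city": {"type": "string"}
                    },
                    "required": ["city"],
                },
            },
        }
    ],
}

print(json.dumps(payload, indent=2))
```

The model can then respond with a tool call naming `get_weather` and JSON arguments matching the declared parameter schema, which the caller executes and feeds back as a follow-up message.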
| Type | Price per 1M tokens |
|---|---|
| Input (prompt) | $0.020 |
| Output (completion) | $0.040 |
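The listed rates make per-request cost a simple calculation: tokens times the per-million price for each side of the exchange. A small sketch using the prices above (the helper function name is ours, not part of any API):

```python
# Prices from the table above, in USD per 1M tokens.
INPUT_PRICE_PER_M = 0.020   # prompt tokens
OUTPUT_PRICE_PER_M = 0.040  # completion tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the cost in USD for one request at Nemo's listed rates."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# e.g. a 2,000-token prompt with a 500-token completion:
print(f"${request_cost(2_000, 500):.6f}")  # → $0.000060
```

At these prices, even filling the full 128k-token context on the input side costs well under a cent per request.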
Note: this model may use your data for training.