Mistral Large API
Mistral AI's flagship 123B-parameter model, competitive with GPT-4o across reasoning, coding, and multilingual tasks.
| Spec | Value |
|---|---|
| Parameters | 123B |
| Context | 128,000 tokens |
| Organization | Mistral AI |
Quick Start
Start using Mistral Large in minutes. VoltageGPU provides an OpenAI-compatible API — just change the base_url.
```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.voltagegpu.com/v1",
    api_key="YOUR_VOLTAGE_API_KEY",
)

response = client.chat.completions.create(
    model="mistralai/Mistral-Large-2",
    messages=[
        {"role": "system", "content": "You are a senior architect. Provide detailed technical analysis."},
        {"role": "user", "content": "Design a scalable event-driven microservices architecture for an e-commerce platform handling 10M daily orders."},
    ],
    max_tokens=4096,
    temperature=0.4,
)

print(response.choices[0].message.content)
```

The same request with cURL:

```bash
curl -X POST https://api.voltagegpu.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_VOLTAGE_API_KEY" \
  -d '{
    "model": "mistralai/Mistral-Large-2",
    "messages": [
      {"role": "system", "content": "You are a senior architect."},
      {"role": "user", "content": "Design a scalable event-driven architecture for e-commerce."}
    ],
    "max_tokens": 4096,
    "temperature": 0.4
  }'
```

Pricing
| Component | Price | Unit |
|---|---|---|
| Input tokens | $2 | per 1M tokens |
| Output tokens | $6 | per 1M tokens |
New accounts receive $5 free credit. No credit card required to start.
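The table above maps directly to a per-request cost estimate. A minimal sketch, with the rates hard-coded from the table (in practice, read token counts from `response.usage`):

```python
def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate USD cost at $2 per 1M input tokens and $6 per 1M output tokens."""
    return input_tokens / 1e6 * 2.00 + output_tokens / 1e6 * 6.00

# A request with a 10K-token prompt and a 2K-token completion:
print(f"${estimate_cost(10_000, 2_000):.4f}")  # $0.0320
```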
Capabilities & Benchmarks
Mistral Large 2 achieves frontier performance: MMLU (84.0%), HumanEval (92%), MATH (83%), and strong multilingual capabilities across 12+ languages. It supports function calling, JSON mode, system prompts, and fine-grained instruction following. The model excels at complex multi-turn conversations, technical writing, code review, and enterprise applications requiring high accuracy and reliability.
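Multi-turn conversations follow the standard OpenAI chat convention: append the assistant's reply to the running message list before sending the follow-up. A sketch under that assumption (the `append_turn` helper and the `VOLTAGE_API_KEY` environment-variable guard are illustrative, not part of the API):

```python
import os

def append_turn(messages: list, role: str, content: str) -> list:
    """Return the conversation with one more turn appended."""
    return messages + [{"role": role, "content": content}]

history = [{"role": "user", "content": "Outline an event-driven order pipeline."}]

if os.environ.get("VOLTAGE_API_KEY"):
    from openai import OpenAI
    client = OpenAI(base_url="https://api.voltagegpu.com/v1",
                    api_key=os.environ["VOLTAGE_API_KEY"])
    first = client.chat.completions.create(
        model="mistralai/Mistral-Large-2", messages=history)
    # Carry the assistant's answer forward so the follow-up has context.
    history = append_turn(history, "assistant", first.choices[0].message.content)
    history = append_turn(history, "user", "Now add an idempotency strategy.")
    second = client.chat.completions.create(
        model="mistralai/Mistral-Large-2", messages=history)
    print(second.choices[0].message.content)
```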
About Mistral Large
Mistral Large 2 is Mistral AI's flagship commercial model with 123 billion parameters and a 128K context window. It delivers frontier-level performance across reasoning, coding, mathematics, and multilingual tasks, competing directly with GPT-4o and Claude 3.5 Sonnet. The model supports tool use, function calling, JSON mode, and excels at complex multi-step reasoning. It natively handles dozens of languages including English, French, German, Spanish, Italian, Chinese, Japanese, Korean, Arabic, and Hindi.
Use Cases
Enterprise Applications
Build mission-critical applications requiring the highest accuracy and reliability.
Complex Reasoning
Handle multi-step reasoning tasks, legal analysis, and strategic planning.
Advanced Code Generation
Generate production-quality code with comprehensive error handling and documentation.
Multilingual Content
Create and translate content across 12+ languages with native-quality fluency.
Technical Documentation
Write detailed technical docs, API references, and architecture documents.
API Reference
Endpoint
https://api.voltagegpu.com/v1/chat/completions

Headers
| Header | Value | Notes |
|---|---|---|
| Authorization | Bearer YOUR_VOLTAGE_API_KEY | Required |
| Content-Type | application/json | Required |
Model ID
mistralai/Mistral-Large-2

Use this value as the `model` parameter in your API requests.
Example Request
```bash
curl -X POST https://api.voltagegpu.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_VOLTAGE_API_KEY" \
  -d '{
    "model": "mistralai/Mistral-Large-2",
    "messages": [
      {"role": "system", "content": "You are a senior architect."},
      {"role": "user", "content": "Design a scalable event-driven architecture for e-commerce."}
    ],
    "max_tokens": 4096,
    "temperature": 0.4
  }'
```

Mistral Large — Related Resources
Confidential Compute
Run this model on hardware-sealed GPUs with Intel TDX attestation.
Confidential AI Inference
OpenAI-compatible API with TEE-attested model serving.
Pricing
Confidential Compute and AI Inference pricing with no hidden fees.
Browse Confidential GPUs
H200, H100, B200 with hardware-sealed Intel TDX compute.
Frequently Asked Questions
How does Mistral Large compare to GPT-4o?
Mistral Large 2 delivers competitive performance with GPT-4o across most benchmarks. It excels particularly in multilingual tasks and European languages. At $2.00/M input tokens, it is priced lower than GPT-4o ($2.50/M) while offering comparable quality.
Is Mistral Large open source?
Mistral Large 2 is available under a research license that permits non-commercial use. For commercial use, you can access it through VoltageGPU's API without any licensing concerns.
What makes Mistral Large suitable for enterprises?
Mistral Large offers high accuracy, strong instruction following, tool use support, and reliable structured output. It handles complex multi-step tasks with precision and supports compliance-friendly deployment through VoltageGPU's managed API.
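For structured output, the OpenAI-style JSON mode flag can be passed through an OpenAI-compatible gateway. A sketch under that assumption (`response_format` pass-through is assumed here, and the `build_json_request` helper name is illustrative):

```python
import json
import os

def build_json_request(user_text: str) -> dict:
    """Keyword arguments for chat.completions.create using OpenAI-style
    JSON mode. Pass-through of response_format is assumed, not documented here."""
    return {
        "model": "mistralai/Mistral-Large-2",
        "messages": [
            {"role": "system", "content": "Reply only with a single JSON object."},
            {"role": "user", "content": user_text},
        ],
        "response_format": {"type": "json_object"},
    }

if os.environ.get("VOLTAGE_API_KEY"):
    from openai import OpenAI
    client = OpenAI(base_url="https://api.voltagegpu.com/v1",
                    api_key=os.environ["VOLTAGE_API_KEY"])
    resp = client.chat.completions.create(
        **build_json_request("List two risks of a single shared database, as JSON."))
    print(json.loads(resp.choices[0].message.content))
```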
Does Mistral Large support function calling?
Yes, Mistral Large fully supports function calling and tool use through the VoltageGPU API. It can handle multiple tool calls in a single response and supports parallel function execution.
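A minimal tool-use sketch in the standard OpenAI `tools` schema. The order-status tool, its local stub, and the environment-variable guard are all illustrative, not part of any real API:

```python
import json
import os

# One tool in the standard OpenAI tools schema (illustrative example).
ORDER_STATUS_TOOL = {
    "type": "function",
    "function": {
        "name": "get_order_status",
        "description": "Look up the status of an order by its ID.",
        "parameters": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
    },
}

def get_order_status(order_id: str) -> str:
    """Local stub standing in for a real order lookup."""
    return json.dumps({"order_id": order_id, "status": "shipped"})

if os.environ.get("VOLTAGE_API_KEY"):
    from openai import OpenAI
    client = OpenAI(base_url="https://api.voltagegpu.com/v1",
                    api_key=os.environ["VOLTAGE_API_KEY"])
    resp = client.chat.completions.create(
        model="mistralai/Mistral-Large-2",
        messages=[{"role": "user", "content": "Where is order A-1042?"}],
        tools=[ORDER_STATUS_TOOL],
    )
    # The model may return one or more tool calls; execute each locally.
    for call in resp.choices[0].message.tool_calls or []:
        args = json.loads(call.function.arguments)
        print(get_order_status(**args))
```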
Start using Mistral Large today
Get $5 free credit when you sign up. No credit card required. Deploy in under 30 seconds with our OpenAI-compatible API.