Llama 3.3 70B API
Meta's most capable 70B model with 128K context, competing with models 5x its size.
Parameters
70B
Context
128,000 tokens
Organization
Meta
Quick Start
Start using Llama 3.3 70B in minutes. VoltageGPU provides an OpenAI-compatible API — just change the base_url.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.voltagegpu.com/v1",
    api_key="YOUR_VOLTAGE_API_KEY",
)

response = client.chat.completions.create(
    model="meta-llama/Llama-3.3-70B-Instruct",
    messages=[
        {"role": "system", "content": "You are a senior software engineer."},
        {"role": "user", "content": "Review this code and suggest improvements:\n\ndef fib(n):\n if n <= 1: return n\n return fib(n-1) + fib(n-2)"},
    ],
    max_tokens=2048,
    temperature=0.3,
)
print(response.choices[0].message.content)
Pricing
| Component | Price | Unit |
|---|---|---|
| Input tokens | $0.70 | per 1M tokens |
| Output tokens | $0.90 | per 1M tokens |
New accounts receive $5 free credit. No credit card required to start.
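The per-token rates above make request costs easy to estimate. A minimal sketch, assuming the listed $0.70/$0.90 per-million-token rates (the helper function is illustrative, not part of the API):

```python
def cost_usd(input_tokens: int, output_tokens: int,
             in_rate: float = 0.70, out_rate: float = 0.90) -> float:
    """Estimate request cost from token counts and per-1M-token rates."""
    return input_tokens / 1e6 * in_rate + output_tokens / 1e6 * out_rate

# 1M input tokens + 500K output tokens at the listed rates:
# 0.70 + 0.45, i.e. about $1.15
print(round(cost_usd(1_000_000, 500_000), 2))
```

Actual billed usage comes from the `usage` field of each API response, so you can track spend per request.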
Capabilities & Benchmarks
Llama 3.3 70B achieves strong results across benchmarks: MMLU (86.0%), HumanEval (88.4%), MATH (77.0%), and GSM8K (91.1%). It supports tool use, structured output (JSON mode), and multilingual generation in English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai. With its 128K context window, it can process large codebases and lengthy documents in a single request.
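JSON mode can be requested through the standard OpenAI `response_format` convention. A minimal sketch, assuming the endpoint honors that parameter (the helper and prompt text are illustrative; confirm JSON-mode support against VoltageGPU's docs):

```python
def build_json_mode_request(user_prompt: str) -> dict:
    """Build chat-completion kwargs asking the model for strict JSON output."""
    return {
        "model": "meta-llama/Llama-3.3-70B-Instruct",
        "messages": [
            # JSON mode conventionally requires mentioning JSON in the prompt.
            {"role": "system", "content": "Reply only with a JSON object."},
            {"role": "user", "content": user_prompt},
        ],
        "response_format": {"type": "json_object"},
        "temperature": 0.0,
    }

kwargs = build_json_mode_request("Extract name and year from: Llama 3.3 shipped in 2024.")
print(kwargs["response_format"]["type"])  # json_object
```

Pass these kwargs to `client.chat.completions.create(**kwargs)` and parse the returned message content with `json.loads`.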
About Llama 3.3 70B
Llama 3.3 70B is Meta's most capable open-weight model in the 70B parameter class. It delivers performance competitive with much larger models including Llama 3.1 405B on many tasks. Built on Meta's latest Llama 3 architecture with grouped query attention (GQA), it supports a 128K context window and excels at instruction following, reasoning, coding, and multilingual tasks. The model was trained on over 15 trillion tokens of publicly available data and fine-tuned with RLHF for safe and helpful responses.
Use Cases
Enterprise Chatbots
Deploy production-grade conversational AI with strong safety guarantees and instruction following.
Code Generation
Generate, review, and debug code across multiple languages with high accuracy.
Document Processing
Summarize, extract information from, and analyze long documents with 128K context.
Multilingual Applications
Build applications serving users in 8+ languages with native-quality generation.
RAG Pipelines
Use as the generation component in Retrieval-Augmented Generation for knowledge-grounded responses.
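The RAG pattern above boils down to: retrieve relevant passages, then ground the prompt in them. A toy sketch with a naive keyword-overlap retriever standing in for a real embedding search (function names and the system prompt are illustrative; production pipelines would use a vector store):

```python
def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by count of lowercase words shared with the query."""
    q = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_rag_messages(query: str, passages: list[str]) -> list[dict]:
    """Stuff retrieved passages into the prompt so answers stay grounded."""
    context = "\n\n".join(passages)
    return [
        {"role": "system", "content": "Answer only from the provided context."},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {query}"},
    ]

docs = [
    "VoltageGPU pricing lists input and output token rates.",
    "Llama 3.3 70B supports a 128K context window.",
    "The API is OpenAI-compatible.",
]
messages = build_rag_messages("How much does input token pricing cost?",
                              retrieve("How much does input token pricing cost?", docs))
```

The resulting `messages` list goes straight into `client.chat.completions.create(...)`; the 128K window leaves ample room for many retrieved passages.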
API Reference
Endpoint
https://api.voltagegpu.com/v1/chat/completions
Headers
| Authorization | Bearer YOUR_VOLTAGE_API_KEY | Required |
| Content-Type | application/json | Required |
Model ID
meta-llama/Llama-3.3-70B-Instruct
Use this value as the model parameter in your API requests.
Example Request
curl -X POST https://api.voltagegpu.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_VOLTAGE_API_KEY" \
  -d '{
    "model": "meta-llama/Llama-3.3-70B-Instruct",
    "messages": [
      {"role": "system", "content": "You are a senior software engineer."},
      {"role": "user", "content": "Review this code and suggest improvements."}
    ],
    "max_tokens": 2048,
    "temperature": 0.3
  }'
Llama 3.3 70B — Related Resources
Confidential Compute
Run this model on hardware-sealed GPUs with Intel TDX attestation.
Confidential AI Inference
OpenAI-compatible API with TEE-attested model serving.
Pricing
Confidential Compute and AI Inference pricing with no hidden fees.
Browse Confidential GPUs
H200, H100, B200 with hardware-sealed Intel TDX compute.
Frequently Asked Questions
How does Llama 3.3 70B compare to Llama 3.1 405B?
Llama 3.3 70B matches the performance of Llama 3.1 405B on many benchmarks while being significantly cheaper and faster to run. On MMLU it scores 86.0% vs 405B's 88.6%. For most practical use cases, the 70B model provides excellent quality at much lower cost.
Is Llama 3.3 70B free to use commercially?
Llama 3.3 70B is released under Meta's Llama 3.3 Community License, which allows commercial use for companies with fewer than 700 million monthly active users. Through VoltageGPU's API, you can use it immediately at $0.70/M input tokens with no licensing concerns.
What context window does Llama 3.3 70B support?
Llama 3.3 70B supports a 128,000 token context window, allowing it to process approximately 96,000 words or 300 pages of text in a single request.
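The word and page figures follow from common rules of thumb. A quick arithmetic sketch, assuming roughly 0.75 words per token and 320 words per page (both approximations, not exact conversion factors):

```python
# Rough capacity arithmetic for a 128K-token context window.
CONTEXT_TOKENS = 128_000
WORDS_PER_TOKEN = 0.75   # rule-of-thumb English ratio
WORDS_PER_PAGE = 320     # rule-of-thumb page density

words = int(CONTEXT_TOKENS * WORDS_PER_TOKEN)  # 96,000 words
pages = words // WORDS_PER_PAGE                # 300 pages
print(words, pages)
```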
Does Llama 3.3 70B support function calling?
Yes, Llama 3.3 70B supports tool use and function calling through the VoltageGPU API. You can define tools using the standard OpenAI function calling format.
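A minimal sketch of a tool definition in the standard OpenAI function-calling schema. The tool name and parameters here are hypothetical examples, not part of any real API:

```python
# A hypothetical "get_weather" tool, declared in OpenAI's tool schema.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {  # JSON Schema describing the tool's arguments
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}]
print(tools[0]["function"]["name"])
```

Pass `tools=tools` to `client.chat.completions.create(...)`; if the model decides to call a tool, the call appears in `response.choices[0].message.tool_calls` with JSON-encoded arguments to parse and execute on your side.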
Start using Llama 3.3 70B today
Get $5 free credit when you sign up. No credit card required. Deploy in under 30 seconds with our OpenAI-compatible API.