Qwen3-32B Confidential API
Balanced 32B model running inside Intel TDX enclaves on H100 GPUs. The fastest confidential model in the catalog, with under 5% TDX overhead at batch=8.
Parameters
32B
Context
128,000 tokens
Organization
Qwen Team
Intel TDX Hardware Guarantee
Every token of Qwen3-32B Confidential is processed inside a hardware-isolated enclave on an NVIDIA H100. The host OS, hypervisor, and VoltageGPU platform operators cannot read prompts, completions, or model state in plaintext.
- Intel TDX
CPU-enforced trust domain isolates the entire inference VM from the host.
- AES-256 Memory
Enclave RAM is encrypted at the controller level with a key the host never sees.
- Protected PCIe
Traffic to the H100 stays inside the trust domain on a sealed PCIe path.
- Remote Attestation
Every session emits a hardware-signed quote verifiable against Intel DCAP.
Attestation Flow
How your client verifies the enclave before any prompt is sent.
- Client requests a fresh attestation quote from the enclave endpoint.
- Enclave returns a hardware-signed TDX quote (CPU firmware, TDX measurements, software identity).
- Client verifies the quote against Intel DCAP root of trust and a published reference measurement.
- Client derives an ephemeral session key bound to the verified quote.
- All subsequent prompts and completions travel inside an AES-GCM channel terminated inside the enclave.
- Audit log records the quote hash, session ID, and timestamp — never the prompt or completion bodies.
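The verification and key-derivation steps above can be sketched in a few lines of Python. This is an illustrative mock, not the VoltageGPU client: the reference measurement value, the quote dict shape, and the HMAC-based key derivation are stand-ins for the real DCAP signature-chain check and the TLS-bound key exchange.

```python
import hashlib
import hmac
import os

# Hypothetical published reference measurement (illustrative value only).
REFERENCE_MEASUREMENT = hashlib.sha256(b"qwen3-32b-tee-runtime-v1").hexdigest()

def verify_quote(quote: dict) -> bool:
    """Step 3: accept the session only if the enclave's reported measurement
    matches the published reference. A real client would also verify the
    DCAP signature chain over the raw quote bytes."""
    return hmac.compare_digest(quote["measurement"], REFERENCE_MEASUREMENT)

def derive_session_key(quote_hash: bytes) -> bytes:
    """Step 4: bind an ephemeral key to the verified quote by mixing the
    quote hash with fresh client randomness (HMAC as a simple KDF sketch)."""
    client_nonce = os.urandom(32)
    return hmac.new(client_nonce, quote_hash, hashlib.sha256).digest()

# Simulated quote returned by the enclave endpoint (step 2).
quote = {"measurement": REFERENCE_MEASUREMENT}

assert verify_quote(quote)  # step 3: measurement matches the reference
quote_hash = hashlib.sha256(repr(quote).encode()).digest()
session_key = derive_session_key(quote_hash)  # step 4: quote-bound key
print(len(session_key))  # 32 bytes, suitable as an AES-256-GCM key
```

A session that fails `verify_quote` is rejected before any prompt is sent, which is the property the flow above guarantees.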
Real TDX Overhead Benchmarks
Measured April 2026 on NVIDIA H100 with vLLM and TGI. TDX overhead is dominated by memory encryption traffic and stays under 5% at small batch sizes.
Throughput (tokens / second)
| Scenario | TDX off | TDX on | Δ |
|---|---|---|---|
| vLLM, batch=1 | 84 | 82 | -2.4% |
| vLLM, batch=8 | 410 | 392 | -4.4% |
| vLLM, batch=32 | 1620 | 1495 | -7.7% |
| TGI, batch=1 | 78 | 76 | -2.6% |
p99 latency under load (ms)
| Concurrency | p50 off | p50 on | p99 off | p99 on |
|---|---|---|---|---|
| 1 | 48 | 49 | 62 | 65 |
| 50 | 71 | 74 | 155 | 172 |
| 500 | 210 | 228 | 980 | 1180 |
p99 overhead grows with concurrency, reaching roughly +20% at concurrency = 500.
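The Δ column and the p99 deltas follow directly from the raw numbers in the two tables above; a short script to recompute them:

```python
# Recompute the Δ column of the throughput table (TDX on vs. TDX off).
throughput = {
    "vLLM, batch=1": (84, 82),
    "vLLM, batch=8": (410, 392),
    "vLLM, batch=32": (1620, 1495),
    "TGI, batch=1": (78, 76),
}
for scenario, (off, on) in throughput.items():
    print(f"{scenario}: {(on - off) / off * 100:+.1f}%")

# Recompute p99 latency overhead at each concurrency level.
p99 = {1: (62, 65), 50: (155, 172), 500: (980, 1180)}
for conc, (off, on) in p99.items():
    print(f"concurrency={conc}: p99 {(on - off) / off * 100:+.1f}%")
```

This reproduces the published deltas (-2.4%, -4.4%, -7.7%, -2.6%) and shows p99 overhead rising from +4.8% at concurrency=1 to about +20% at concurrency=500.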
vs. Confidential Cloud Competitors
How VoltageGPU's production GA confidential inference compares to the major cloud providers' confidential offerings as of April 2026.
| Provider | GPU-level TDX | Production GA | Public pricing | < 5% overhead | Notes |
|---|---|---|---|---|---|
| AWS Nitro Enclaves | ✗ | ✓ | ✓ | ✗ | CPU-only isolation. GPU memory remains visible to the host. |
| Azure Confidential GPU | ✓ | ✗ | ✗ | ✗ | Private preview. RFP-only pricing, no GA SLA. |
| GCP Confidential VMs | ✗ | ✓ | ✓ | ✓ | CPU TDX only. No GPU-level enclave for inference. |
| VoltageGPU (this product) | ✓ | ✓ | ✓ | ✓ | GA on H100 with sub-5% TDX overhead at batch=8 and public pricing. |
Free Confidential Playground
Send a prompt directly into the H100 enclave running Qwen3-32B Confidential. Every response is generated inside encrypted memory and labeled TDX-sealed.
Built for Regulated Industries
Hardware-rooted attestation gives auditors and regulators verifiable evidence that the model identity, the runtime, and the memory in use match what your policy approves — every session.
Legal
Privilege preserved: prompts and completions never leave the enclave in plaintext.
Healthcare
HIPAA-aligned: PHI sealed by hardware encryption with auditable attestation.
Financial Services
Customer data and pricing models stay inside attested memory, never observable to operators.
Defense & Government
Hardware-rooted attestation supports controlled-unclassified workloads on shared infrastructure.
Quick Start
Start using Qwen3-32B Confidential in minutes. VoltageGPU provides an OpenAI-compatible API — just change the base_url.
from openai import OpenAI
import os
client = OpenAI(
base_url="https://api.voltagegpu.com/v1",
api_key=os.environ["OPENAI_API_KEY"], # your VoltageGPU API key
)
response = client.chat.completions.create(
model="Qwen/Qwen3-32B-TEE",
messages=[
{"role": "system", "content": "You are a confidential contract analyst."},
{"role": "user", "content": "Summarize this NDA in three bullet points: ..."},
],
max_tokens=1024,
temperature=0.4,
)
print(response.choices[0].message.content)

curl -X POST https://api.voltagegpu.com/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
"model": "Qwen/Qwen3-32B-TEE",
"messages": [
{"role": "system", "content": "You are a confidential contract analyst."},
{"role": "user", "content": "Summarize this NDA in three bullet points: ..."}
],
"max_tokens": 1024,
"temperature": 0.4
}'

Pricing
| Component | Price | Unit |
|---|---|---|
| Input tokens | $0.08 | per 1M tokens |
| Output tokens | $0.24 | per 1M tokens |
New accounts receive $5 free credit. No credit card required to start.
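At these rates, per-request cost is a linear function of token counts. A quick estimator (the 20k-input / 1k-output split below is an illustrative workload, not a quoted figure):

```python
# Published per-token rates from the pricing table above.
INPUT_PER_M = 0.08    # USD per 1M input tokens
OUTPUT_PER_M = 0.24   # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for a single request at the published rates."""
    return input_tokens / 1e6 * INPUT_PER_M + output_tokens / 1e6 * OUTPUT_PER_M

# Example: a 20k-token contract summarized into a 1k-token answer.
print(f"${request_cost(20_000, 1_000):.6f}")  # $0.001840
```

Even a full 128K-context request stays around a penny of input cost at these rates.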
Capabilities & Benchmarks
Confidential inference on H100 with Intel TDX enclaves. Measured TDX overhead: -2.4% at batch=1, -4.4% at batch=8, -7.7% at batch=32 (April 2026 benchmark, vLLM). p99 latency overhead stays under +11% at concurrency=50. Supports OpenAI-compatible chat completions, function calling, streaming, and 128K context. Attestation quote available on every session via DCAP verification.
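Function calling uses the standard OpenAI-compatible request shape. A sketch of a request body with one tool attached; the `extract_clause` tool name and its schema are illustrative, not part of the API:

```python
import json

# Illustrative tool definition in the OpenAI-compatible "tools" format.
tool = {
    "type": "function",
    "function": {
        "name": "extract_clause",
        "description": "Extract a named clause from a contract.",
        "parameters": {
            "type": "object",
            "properties": {
                "clause": {"type": "string", "description": "Clause name."},
                "text": {"type": "string", "description": "Clause text."},
            },
            "required": ["clause", "text"],
        },
    },
}

# Full request body for POST /v1/chat/completions.
body = {
    "model": "Qwen/Qwen3-32B-TEE",
    "messages": [
        {"role": "user", "content": "Find the indemnification clause: ..."}
    ],
    "tools": [tool],
    "tool_choice": "auto",
}
print(json.dumps(body, indent=2))
```

When the model decides to call the tool, the response carries a `tool_calls` entry with JSON arguments instead of plain text, exactly as in the standard OpenAI chat-completions contract.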
About Qwen3-32B Confidential
Qwen3-32B Confidential is the workhorse model of VoltageGPU's confidential inference network. Every token is processed inside an Intel TDX hardware enclave on an NVIDIA H100, with AES-256 encrypted memory, Protected PCIe, and remote attestation on every session. The dense 32B architecture delivers fast first-token latency and consistent throughput across batched workloads, while the 128K context window handles full contracts, medical records, and long financial filings without truncation. Unlike sandboxed confidential offerings that ship without GPU-level isolation, VoltageGPU seals the model weights, KV cache, and inference computation behind hardware-rooted attestation. The host operating system, hypervisor, and platform operator cannot observe prompts or completions in plaintext — the attestation quote is verifiable by your application before any sensitive payload leaves your environment. Qwen3-32B is the right default when you need confidentiality plus speed: agentic chat, contract summarization, structured extraction, and tool use. For reasoning-heavy workloads switch to DeepSeek-R1 Confidential; for ultra-long 256K context switch to Qwen3.5-397B Confidential. New accounts get $5 free credit and can call the model through the standard OpenAI Python SDK — no enclave SDK, no custom client, just change the base URL.
Use Cases
Confidential Contract Review
Summarize and extract clauses from NDAs, MSAs, and M&A documents without exposing them to the host OS or platform operator.
Healthcare PHI Workflows
Process patient records, clinical notes, and discharge summaries under HIPAA-aligned controls with hardware attestation.
Financial Compliance Triage
Classify regulated communications, screen transactions, and draft suspicious-activity reports inside sealed memory.
Internal RAG over Trade Secrets
Run retrieval-augmented chat over proprietary corpora when sending plaintext to a public LLM is not acceptable.
API Reference
Endpoint
https://api.voltagegpu.com/v1/chat/completions

Headers
| Authorization | Bearer YOUR_VOLTAGE_API_KEY | Required |
| Content-Type | application/json | Required |
Model ID
Qwen/Qwen3-32B-TEE

Use this value as the model parameter in your API requests.
Example Request
curl -X POST https://api.voltagegpu.com/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
"model": "Qwen/Qwen3-32B-TEE",
"messages": [
{"role": "system", "content": "You are a confidential contract analyst."},
{"role": "user", "content": "Summarize this NDA in three bullet points: ..."}
],
"max_tokens": 1024,
"temperature": 0.4
}'

Related Models
Qwen3-32B Confidential — Related Resources
Confidential Compute
Run this model on hardware-sealed GPUs with Intel TDX attestation.
Confidential AI Inference
OpenAI-compatible API with TEE-attested model serving.
Pricing
Confidential Compute and AI Inference pricing with no hidden fees.
Browse Confidential GPUs
H200, H100, B200 with hardware-sealed Intel TDX compute.
Frequently Asked Questions
What is Intel TDX and why does it matter for AI?
Intel Trust Domain Extensions (TDX) is a hardware feature that creates a fully isolated, AES-encrypted memory enclave for an entire virtual machine. The CPU enforces isolation at the silicon level: the host operating system, hypervisor, and even VoltageGPU platform operators cannot read enclave memory in plaintext. For AI inference this means your prompts, the model weights, the KV cache, and the completions all live inside encrypted memory — only your client, after verifying the attestation quote, ever sees them.
How does VoltageGPU prove the enclave is genuine?
Every confidential session begins with a remote attestation flow. The enclave generates a hardware-signed quote describing the CPU firmware version, the TDX measurements, and the loaded software identity. Your client verifies the quote against Intel's DCAP root of trust before deriving the session key. If the quote is invalid, expired, or describes an unexpected software stack, the session is rejected and no prompt is sent.
What is the performance overhead of running inference under TDX?
On H100 with vLLM in April 2026 we measured -2.4% throughput at batch=1, -4.4% at batch=8, and -7.7% at batch=32 versus the same model running without TDX. p50 latency is essentially unchanged at low concurrency, and p99 latency rises about +11% at concurrency=50. The overhead comes from memory encryption traffic and is independent of model size.
Can I get a dedicated enclave or custom attestation policy?
Yes. The public Qwen3-32B Confidential endpoint runs on shared TDX H100s, which is sufficient for most regulated workloads. For air-gapped deployments, dedicated enclaves, custom attestation policies, or BYOK key management, contact contact@voltagegpu.com for an enterprise plan.
Which other providers offer GPU-level confidential AI today?
AWS Nitro Enclaves do not currently expose GPU-level TDX — they isolate CPU memory only, so model weights and the KV cache in GPU memory remain visible to the host. Azure Confidential GPU is in private preview with RFP-only pricing. GCP Confidential VMs cover the CPU but do not include GPU-level enclaves for production AI inference. VoltageGPU is one of the few providers with GA-stage GPU-level TDX inference and published pricing.
What compliance frameworks does this support?
The hardware attestation produced on every session is the building block for HIPAA, GDPR Article 32, SOC 2 CC6, ISO 27001 A.10, and EU AI Act high-risk system audit requirements. VoltageGPU does not retain prompts or completions by default, and audit logs of attestation quotes are available on enterprise plans.
Start using Qwen3-32B Confidential today
Get $5 free credit when you sign up. No credit card required. Deploy in under 30 seconds with our OpenAI-compatible API.