Qwen3.5-397B Confidential API
397B Mixture-of-Experts (17B active) under Intel TDX. The largest confidential model on VoltageGPU, built for long-document enterprise reasoning.
Parameters
397B total, 17B active (MoE)
Context
128,000 tokens
Organization
Qwen Team
Intel TDX Hardware Guarantee
Every token of Qwen3.5-397B Confidential is processed inside a hardware-isolated enclave on an NVIDIA H100. The host OS, hypervisor, and VoltageGPU platform operators cannot read prompts, completions, or model state in plaintext.
- Intel TDX
CPU-enforced trust domain isolates the entire inference VM from the host.
- AES-256 Memory
Enclave RAM is encrypted at the controller level with a key the host never sees.
- Protected PCIe
Traffic to the H100 stays inside the trust domain on a sealed PCIe path.
- Remote Attestation
Every session emits a hardware-signed quote verifiable against Intel DCAP.
Attestation Flow
How your client verifies the enclave before any prompt is sent.
- Client requests a fresh attestation quote from the enclave endpoint.
- Enclave returns a hardware-signed TDX quote including the loaded model identity.
- Client verifies the quote against Intel DCAP root of trust and the published reference measurement for this model.
- Client derives an ephemeral AES-GCM session key bound to the verified quote.
- Prompts and completions travel inside the sealed channel; model weights and KV cache stay in encrypted memory.
- Audit log records quote hash, session ID, and timestamp — never the prompt or completion bodies.
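The client side of this flow can be sketched with Python's standard library. The quote format, the signature check, and the key-derivation label below are illustrative stand-ins, not a published VoltageGPU SDK; a real client would verify the ECDSA-signed quote with Intel's DCAP Quote Verification Library rather than an HMAC.

```python
import hashlib
import hmac

# Published reference measurement for the model (illustrative value).
REFERENCE_MEASUREMENT = hashlib.sha256(b"Qwen/Qwen3.5-397B-A17B-TEE").hexdigest()

def verify_quote(quote: dict, dcap_root_key: bytes) -> bool:
    """Steps 2-3: check the quote signature and the attested model identity."""
    # Stand-in for DCAP signature-chain verification (real quotes are ECDSA-signed).
    expected = hmac.new(dcap_root_key, quote["measurement"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, quote["signature"]):
        return False
    # Refuse to proceed if a downgraded or substituted model was loaded.
    return quote["measurement"] == REFERENCE_MEASUREMENT

def derive_session_key(quote: dict, client_nonce: bytes) -> bytes:
    """Step 4: derive an ephemeral 256-bit AES-GCM key bound to the verified quote."""
    prk = hmac.new(client_nonce, quote["measurement"].encode(), hashlib.sha256).digest()
    return hmac.new(prk, b"voltagegpu-session-v1", hashlib.sha256).digest()  # 32 bytes
```

Because the model identity is part of the signed measurement, a substituted model fails verification before any prompt leaves the client.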
Real TDX Overhead Benchmarks
Measured April 2026 on NVIDIA H100 with vLLM and TGI. TDX overhead is dominated by memory encryption traffic and stays under 5% at small batch sizes.
Throughput (tokens / second)
| Scenario | TDX off | TDX on | Δ |
|---|---|---|---|
| vLLM, batch=1 | 62 | 60 | -3.2% |
| vLLM, batch=8 | 295 | 282 | -4.4% |
| vLLM, batch=32 | 980 | 901 | -8.1% |
| TGI, batch=1 | 58 | 56 | -3.4% |
p99 latency under load (ms)
| Concurrency | p50 off | p50 on | p99 off | p99 on |
|---|---|---|---|---|
| 1 | 68 | 70 | 92 | 98 |
| 50 | 110 | 116 | 240 | 268 |
| 500 | 340 | 372 | 1480 | 1790 |
p99 overhead grows with concurrency, reaching roughly +21% at concurrency = 500.
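The Δ column is plain relative overhead. A quick sketch to reproduce it from the raw throughput numbers above:

```python
def overhead_pct(tdx_off: float, tdx_on: float) -> float:
    """Relative throughput change when TDX is enabled, in percent."""
    return (tdx_on - tdx_off) / tdx_off * 100

# Raw throughput from the table above (tokens/second, TDX off vs. on).
bench = {
    ("vLLM", 1): (62, 60),
    ("vLLM", 8): (295, 282),
    ("vLLM", 32): (980, 901),
    ("TGI", 1): (58, 56),
}

for (engine, batch), (off, on) in bench.items():
    print(f"{engine}, batch={batch}: {overhead_pct(off, on):+.1f}%")
```

The same formula applied to the p99 latency row at concurrency 500 gives (1790 − 1480) / 1480 ≈ +21%.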
vs. Confidential Cloud Competitors
How VoltageGPU's production GA confidential inference compares to the major cloud providers' confidential offerings as of April 2026.
| Provider | GPU-level TDX | Production GA | Public pricing | < 5% overhead | Notes |
|---|---|---|---|---|---|
| AWS Nitro Enclaves | ✗ | ✓ | ✓ | ✗ | CPU isolation only; no MoE inference under hardware seal. |
| Azure Confidential GPU | ✓ | ✗ | ✗ | ✗ | Private preview, no published 397B model. |
| GCP Confidential VMs | ✗ | ✓ | ✓ | ✓ | No GPU-level enclave for large MoE models. |
| VoltageGPU (this product) | ✓ | ✓ | ✓ | ✓ | GA 397B MoE on H100 TDX with public pricing and sub-5% overhead at batch=8. |
Free Confidential Playground
Send a prompt directly into the H100 enclave running Qwen3.5-397B Confidential. Every response is generated inside encrypted memory and labeled TDX-sealed.
Built for Regulated Industries
Hardware-rooted attestation gives auditors and regulators verifiable evidence that the model identity, the runtime, and the memory in use match what your policy approves — every session.
Legal
Privilege preserved across hundreds of long contracts in a single attested session.
Healthcare
HIPAA-aligned synthesis of long clinical narratives without exposing PHI to the host.
Financial Services
Confidential due diligence and KYC enrichment under hardware-rooted attestation.
Defense & Government
Long-document intelligence synthesis on shared infrastructure with attested isolation.
Quick Start
Start using Qwen3.5-397B Confidential in minutes. VoltageGPU provides an OpenAI-compatible API — just change the base_url.
from openai import OpenAI
import os
client = OpenAI(
base_url="https://api.voltagegpu.com/v1",
api_key=os.environ["VOLTAGE_API_KEY"],
)
response = client.chat.completions.create(
model="Qwen/Qwen3.5-397B-A17B-TEE",
messages=[
{"role": "system", "content": "You are a confidential due diligence analyst."},
{"role": "user", "content": "Compare the indemnification clauses in these two MSAs and flag drift: ..."},
],
max_tokens=2048,
temperature=0.3,
)
print(response.choices[0].message.content)

curl -X POST https://api.voltagegpu.com/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $VOLTAGE_API_KEY" \
-d '{
"model": "Qwen/Qwen3.5-397B-A17B-TEE",
"messages": [
{"role": "system", "content": "You are a confidential due diligence analyst."},
{"role": "user", "content": "Compare the indemnification clauses in these two MSAs and flag drift: ..."}
],
"max_tokens": 2048,
"temperature": 0.3
}'

Pricing
| Component | Price | Unit |
|---|---|---|
| Input tokens | $0.10 | per 1M tokens |
| Output tokens | $0.60 | per 1M tokens |
New accounts receive $5 free credit. No credit card required to start.
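At these rates, a back-of-the-envelope request cost is straightforward. The sketch below hardcodes the prices from the table above; treat them as illustrative and subject to change:

```python
INPUT_PER_M = 0.10   # $ per 1M input tokens
OUTPUT_PER_M = 0.60  # $ per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a single request at the published per-token rates."""
    return input_tokens / 1e6 * INPUT_PER_M + output_tokens / 1e6 * OUTPUT_PER_M

# A full 128K-token context with a 2,048-token completion costs about 1.4 cents.
print(f"${estimate_cost(128_000, 2_048):.4f}")
```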
Capabilities & Benchmarks
Confidential inference on H100 with Intel TDX enclaves. MoE architecture: 397B total parameters, 17B active per token. Best-in-class long-document comprehension under hardware seal. Supports OpenAI-compatible chat completions, function calling, streaming, and 128K context. Multilingual coverage spans English, Chinese, French, German, Spanish, Japanese, and Korean at production quality.
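Because the endpoint is OpenAI-compatible, function calling uses the standard `tools` request shape. The tool name and schema below are hypothetical examples for the contract-review use case, not a published VoltageGPU schema:

```json
{
  "model": "Qwen/Qwen3.5-397B-A17B-TEE",
  "messages": [
    {"role": "user", "content": "Flag indemnification drift between MSA v3 and v4."}
  ],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "flag_clause_drift",
        "description": "Record a detected difference between two contract clauses.",
        "parameters": {
          "type": "object",
          "properties": {
            "clause": {"type": "string"},
            "severity": {"type": "string", "enum": ["low", "medium", "high"]},
            "summary": {"type": "string"}
          },
          "required": ["clause", "severity", "summary"]
        }
      }
    }
  ]
}
```

Streaming works the same way: add `"stream": true` to the request body, or pass `stream=True` to the Python client's `chat.completions.create`.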
About Qwen3.5-397B Confidential
Qwen3.5-397B-A17B Confidential is the largest Mixture-of-Experts model running inside Intel TDX hardware enclaves on VoltageGPU. The sparse architecture activates only 17B parameters per token, so the model delivers the world knowledge and reasoning depth of a 397B dense model while staying within H100 memory budgets under TDX encryption.

This is the model to reach for when you need confidentiality plus capacity: long-form due diligence, comparative contract analysis across many documents, multilingual regulated workflows, and complex enterprise RAG over heterogeneous corpora. The 128K context window combined with MoE expert routing produces high-quality long-document reasoning that smaller dense models cannot match.

As with every confidential model on VoltageGPU, each session begins with a fresh attestation quote signed by the H100 host CPU. The hypervisor and platform operator cannot inspect prompts, completions, or model state. The model identity is part of the attested measurement, so a downgraded or substituted model is detectable before any data is sent.

Measured TDX overhead is broadly consistent with the dense 32B model: under 5% at small batch sizes, rising to roughly 8% at heavy batch=32 throughput. The MoE expert dispatch does not materially change the encryption cost.
Use Cases
Due Diligence & M&A
Compare data rooms, surface obligation drift across hundreds of contracts, and produce sourced executive summaries under hardware seal.
Multilingual Compliance Review
Process regulated documents in mixed-language portfolios without exposing trade secrets to translation APIs.
Confidential Research Synthesis
Synthesize internal R&D notes, lab reports, and patent filings without sending plaintext to a public LLM.
Enterprise RAG over Long Corpora
Retrieval over policy manuals, regulatory archives, and internal wikis with 128K context inside sealed memory.
API Reference
Endpoint
https://api.voltagegpu.com/v1/chat/completions
Headers
| Authorization | Bearer YOUR_VOLTAGE_API_KEY | Required |
| Content-Type | application/json | Required |
Model ID
Qwen/Qwen3.5-397B-A17B-TEE
Use this value as the model parameter in your API requests.
Example Request
curl -X POST https://api.voltagegpu.com/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $VOLTAGE_API_KEY" \
-d '{
"model": "Qwen/Qwen3.5-397B-A17B-TEE",
"messages": [
{"role": "system", "content": "You are a confidential due diligence analyst."},
{"role": "user", "content": "Compare the indemnification clauses in these two MSAs and flag drift: ..."}
],
"max_tokens": 2048,
"temperature": 0.3
}'
Related Models
Qwen3.5-397B Confidential — Related Resources
Confidential Compute
Run this model on hardware-sealed GPUs with Intel TDX attestation.
Confidential AI Inference
OpenAI-compatible API with TEE-attested model serving.
Pricing
Confidential Compute and AI Inference pricing with no hidden fees.
Browse Confidential GPUs
H200, H100, B200 with hardware-sealed Intel TDX compute.
Frequently Asked Questions
What is Intel TDX and why does it matter for AI?
Intel Trust Domain Extensions (TDX) is a hardware feature that creates an AES-encrypted memory enclave for an entire virtual machine. The host OS, hypervisor, and VoltageGPU platform operators cannot read enclave memory in plaintext. For confidential AI inference this means your prompts, completions, model weights, and KV cache all live inside hardware-encrypted memory.
How does VoltageGPU prove the enclave is genuine?
Every confidential session begins with a remote attestation flow: the enclave produces a hardware-signed TDX quote describing the CPU firmware, TDX measurements, and the exact model identity. Your client verifies the quote against Intel's DCAP root of trust before deriving a session key and sending any prompt.
What is the performance overhead of TDX on a 397B MoE model?
TDX overhead is dominated by memory encryption traffic, not by parameter count, so the 397B MoE behaves similarly to the 32B dense model in our benchmarks: under 5% throughput loss at batch=1-8, rising to roughly 8% at batch=32. The MoE expert dispatch does not add material overhead inside the enclave.
Can I get a dedicated enclave for this model?
Yes. The shared endpoint is suitable for most regulated workloads. For dedicated single-tenant enclaves, custom attestation policies, BYOK key management, or air-gapped deployments, contact contact@voltagegpu.com for an enterprise plan.
Which other providers offer this scale of confidential MoE inference?
No major cloud provider offers a GA-stage 397B MoE under GPU-level TDX with public pricing today. AWS Nitro Enclaves are CPU-only. Azure Confidential GPU is in private preview with RFP pricing. GCP Confidential VMs do not cover GPU memory.
What compliance frameworks does this support?
The attestation quote produced on every session is the building block for HIPAA, GDPR Article 32, SOC 2 CC6, ISO 27001 A.10, and EU AI Act high-risk system audit evidence. Audit logs of attestation quotes are available on enterprise plans.
Start using Qwen3.5-397B Confidential today
Get $5 free credit when you sign up. No credit card required. Deploy in under 30 seconds with our OpenAI-compatible API.