DeepSeek-R1 Confidential API
Frontier reasoning model with explicit chain-of-thought, sealed inside Intel TDX enclaves on H100. For audit-grade reasoning over regulated material.
- Parameters: 671B (MoE, 37B active)
- Context: 128,000 tokens
- Organization: DeepSeek
Intel TDX Hardware Guarantee
Every token of DeepSeek-R1 Confidential is processed inside a hardware-isolated enclave on an NVIDIA H100. The host OS, hypervisor, and VoltageGPU platform operators cannot read prompts, completions, or model state in plaintext.
- Intel TDX: a CPU-enforced trust domain isolates the entire inference VM from the host.
- AES-256 memory encryption: enclave RAM is encrypted at the memory-controller level with a key the host never sees.
- Protected PCIe: traffic to the H100 stays inside the trust domain on a sealed PCIe path.
- Remote attestation: every session emits a hardware-signed quote verifiable against Intel DCAP.
Attestation Flow
How your client verifies the enclave before any prompt is sent; a minimal client-side sketch follows the steps.
1. Client requests a fresh attestation quote from the enclave endpoint.
2. Enclave returns a hardware-signed TDX quote that includes the DeepSeek-R1-0528 model identity.
3. Client verifies the quote against the Intel DCAP root of trust and the published R1 reference measurement.
4. Client derives an ephemeral AES-GCM session key bound to the verified quote.
5. Prompts, chain-of-thought reasoning, and final answers travel inside the sealed channel; the reasoning trace never leaves encrypted memory in plaintext.
6. The audit log records the quote hash, session ID, and timestamp, never the reasoning or completion bodies.
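A minimal client-side sketch of this flow, assuming a hypothetical quote endpoint and response fields. The path, field names, and reference value below are illustrative, not the documented VoltageGPU API; DCAP signature checking and the actual key agreement with the enclave are elided.

```python
# Hypothetical sketch of the attestation handshake above. Endpoint path,
# JSON field names, and the reference measurement are assumptions.
import hashlib
import os

import requests
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

R1_REFERENCE_MEASUREMENT = "<published R1-0528 measurement>"  # placeholder

# Steps 1-2: request a fresh hardware-signed TDX quote (hypothetical path).
quote = requests.get(
    "https://api.voltagegpu.com/v1/attestation/quote", timeout=10
).json()

# Step 3: after verifying the quote signature against Intel DCAP (elided;
# use Intel's DCAP quote-verification libraries in practice), pin the
# enclave measurement to the published R1-0528 reference value.
if quote["mr_td"] != R1_REFERENCE_MEASUREMENT:
    raise RuntimeError("enclave measurement mismatch; refusing to send prompts")

# Step 4: derive an ephemeral 256-bit AES-GCM session key bound to this
# exact quote by feeding its hash into HKDF's info parameter. The shared
# secret exchange with the enclave (e.g. ECDH over the attested channel)
# is elided; os.urandom stands in for the agreed key material.
session_key = HKDF(
    algorithm=hashes.SHA256(),
    length=32,
    salt=os.urandom(16),
    info=hashlib.sha256(quote["raw_quote"].encode()).digest(),
).derive(os.urandom(32))
```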
Real TDX Overhead Benchmarks
Measured April 2026 on an NVIDIA H100 with vLLM and TGI. TDX overhead is dominated by memory-encryption traffic: about 3% at batch=1, growing with batch size.
Throughput (tokens / second)
| Scenario | TDX off | TDX on | Δ |
|---|---|---|---|
| vLLM, batch=1 | 18.2 | 17.6 | -3.3% |
| vLLM, batch=8 | 92 | 87 | -5.4% |
| vLLM, batch=32 | 310 | 282 | -9.0% |
| TGI, batch=1 | 17.5 | 16.9 | -3.4% |
p99 latency under load (ms)
| Concurrency | p50 off | p50 on | p99 off | p99 on |
|---|---|---|---|---|
| 1 | 120 | 124 | 180 | 192 |
| 50 | 220 | 235 | 510 | 580 |
| 500 | 640 | 715 | 2400 | 2920 |
p99 overhead grows with concurrency, reaching roughly +22% at concurrency = 500; the snippet below recomputes the deltas.
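The Δ column is simply (on - off) / off. A throwaway Python check against the published numbers:

```python
# Recompute the overhead deltas from the two tables above.
throughput = {  # scenario: (TDX off, TDX on), tokens/s
    "vLLM, batch=1": (18.2, 17.6),
    "vLLM, batch=8": (92, 87),
    "vLLM, batch=32": (310, 282),
    "TGI, batch=1": (17.5, 16.9),
}
for scenario, (off, on) in throughput.items():
    print(f"{scenario}: {(on - off) / off:+.1%}")  # -3.3%, -5.4%, -9.0%, -3.4%

# p99 latency overhead at concurrency = 500:
print(f"p99 @ 500: {(2920 - 2400) / 2400:+.1%}")  # +21.7%
```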
vs. Confidential Cloud Competitors
How VoltageGPU's production GA confidential inference compares to the major cloud providers' confidential offerings as of April 2026.
| Provider | GPU-level TDX | Production GA | Public pricing | < 5% overhead | Notes |
|---|---|---|---|---|---|
| AWS Nitro Enclaves | ✗ | ✓ | ✓ | ✗ | CPU isolation only; GPU memory, including the reasoning trace, sits outside the enclave. |
| Azure Confidential GPU | ✓ | ✗ | ✗ | ✗ | Private preview, no published reasoning SKU. |
| GCP Confidential VMs | ✗ | ✓ | ✓ | ✓ | No GPU-level enclave for reasoning inference. |
| VoltageGPU (this product) | ✓ | ✓ | ✓ | ✓ | GA frontier reasoning under H100 TDX with public pricing. |
Free Confidential Playground
Send a prompt directly into the H100 enclave running DeepSeek-R1 Confidential. Every response is generated inside encrypted memory and labeled TDX-sealed.
Built for Regulated Industries
Hardware-rooted attestation gives auditors and regulators verifiable evidence that the model identity, the runtime, and the memory in use match what your policy approves — every session.
Legal
Reasoning trace treated as privileged work product, sealed inside the enclave.
Healthcare
Step-by-step clinical reasoning over PHI without plaintext exposure to operators.
Financial Services
Auditable multi-step deal analysis with hardware-rooted attestation of the model identity.
Defense & Government
Transparent reasoning on regulated material with attested isolation from shared infrastructure.
Quick Start
Start using DeepSeek-R1 Confidential in minutes. VoltageGPU provides an OpenAI-compatible API — just change the base_url.
Python

```python
from openai import OpenAI
import os

client = OpenAI(
    base_url="https://api.voltagegpu.com/v1",
    api_key=os.environ["VOLTAGE_API_KEY"],
)

response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1-0528-TEE",
    messages=[
        {"role": "system", "content": "You are a confidential legal reasoning assistant."},
        {"role": "user", "content": "Reason step by step: does this indemnification clause survive termination?"},
    ],
    max_tokens=4096,
    temperature=0.6,
)
print(response.choices[0].message.content)
```

cURL

```bash
curl -X POST https://api.voltagegpu.com/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
"model": "deepseek-ai/DeepSeek-R1-0528-TEE",
"messages": [
{"role": "system", "content": "You are a confidential legal reasoning assistant."},
{"role": "user", "content": "Reason step by step: does this indemnification clause survive termination?"}
],
"max_tokens": 4096,
"temperature": 0.6
}'
```
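Either way, R1 emits its chain-of-thought ahead of the final answer. Exactly how the trace is surfaced is an assumption here: many R1 deployments prefix message.content with a <think>...</think> block, so a tolerant way to split trace from answer (reusing response from the Python example above) is:

```python
import re

# Assumption: the reasoning trace arrives as a <think>...</think> prefix in
# message.content, as on many R1 deployments; adjust if this endpoint
# exposes the trace as a separate field instead.
text = response.choices[0].message.content
match = re.match(r"<think>(.*?)</think>\s*", text, flags=re.DOTALL)
if match:
    reasoning, answer = match.group(1), text[match.end():]
else:
    reasoning, answer = None, text  # no trace prefix present
print(answer)
```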
Pricing

| Component | Price | Unit |
|---|---|---|
| Input tokens | $0.45 | per 1M tokens |
| Output tokens | $2.15 | per 1M tokens |
New accounts receive $5 free credit. No credit card required to start.
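Because R1 spends most of its output on the reasoning trace, output tokens dominate the bill. A quick estimate for an illustrative request (the token counts below are assumptions, not measurements):

```python
# Estimate the cost of one request at the published per-million-token rates.
input_tokens, output_tokens = 2_000, 6_000   # illustrative sizes; trace-heavy output
cost = input_tokens / 1e6 * 0.45 + output_tokens / 1e6 * 2.15
print(f"${cost:.4f} per request")            # $0.0138
```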
Capabilities & Benchmarks
Confidential reasoning on H100 with Intel TDX enclaves. Full chain-of-thought generated and sealed inside encrypted memory. Measured TDX overhead: -3.3% throughput at batch=1, growing to -9.0% at batch=32. Best-in-class results on AIME, MATH-500, and Codeforces benchmarks. 128K context window, OpenAI-compatible API.
About DeepSeek-R1 Confidential
DeepSeek-R1-0528 Confidential brings chain-of-thought reasoning under hardware seal. The full R1-0528 weights run inside Intel TDX enclaves on H100, and the model's explicit step-by-step reasoning never leaves encrypted memory in plaintext. This is the right model when you need transparent, verifiable reasoning over regulated material: multi-step legal analysis, complex compliance triage, financial deal modeling, and scientific synthesis where you need to audit the model's logic without exposing the underlying documents.

R1 produces a visible reasoning trace before its final answer. Under TDX, that trace is generated inside the enclave and only delivered to your client after the attestation handshake. Platform operators and the host OS cannot read the intermediate reasoning, the final answer, or the documents the model is reasoning about.

TDX overhead is modestly higher than for the smaller models because R1 generates many more tokens per response, most of them in the reasoning trace. We measured a -3.3% throughput delta at batch=1, with overhead growing at larger batch sizes. Output pricing reflects the reasoning verbosity and the model's frontier-class performance.

Like every confidential model on VoltageGPU, R1 Confidential is reachable through the standard OpenAI Python SDK, with no special enclave SDK or custom protocol. Change the base URL, set the model ID, and verify the attestation quote.
Use Cases
Audit-Grade Legal Reasoning
Produce defensible step-by-step legal analyses where the reasoning trace itself is privileged work product.
Regulated Financial Modeling
Multi-step deal analysis, scenario reasoning, and compliance triage where the working is as sensitive as the answer.
Scientific & Medical Synthesis
Reason over clinical literature, lab data, and patient histories with explicit transparent steps under hardware seal.
Confidential Code Review
Reason about proprietary codebases, find bugs, and propose patches with the full reasoning trace kept inside the enclave.
API Reference
Endpoint
https://api.voltagegpu.com/v1/chat/completions

Headers

| Header | Value | Required |
|---|---|---|
| Authorization | Bearer YOUR_VOLTAGE_API_KEY | Yes |
| Content-Type | application/json | Yes |
Model ID
deepseek-ai/DeepSeek-R1-0528-TEE

Use this value as the model parameter in your API requests.
Example Request
```bash
curl -X POST https://api.voltagegpu.com/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
"model": "deepseek-ai/DeepSeek-R1-0528-TEE",
"messages": [
{"role": "system", "content": "You are a confidential legal reasoning assistant."},
{"role": "user", "content": "Reason step by step: does this indemnification clause survive termination?"}
],
"max_tokens": 4096,
"temperature": 0.6
}'
```
DeepSeek-R1 Confidential — Related Resources
Confidential Compute
Run this model on hardware-sealed GPUs with Intel TDX attestation.
Confidential AI Inference
OpenAI-compatible API with TEE-attested model serving.
Pricing
Confidential Compute and AI Inference pricing with no hidden fees.
Browse Confidential GPUs
H200, H100, B200 with hardware-sealed Intel TDX compute.
Frequently Asked Questions
What is Intel TDX and why does it matter for AI?
Intel Trust Domain Extensions (TDX) is a hardware feature that creates an AES-encrypted memory enclave for an entire virtual machine. The CPU enforces isolation at the silicon level: the host OS, hypervisor, and platform operators cannot read enclave memory in plaintext. For confidential reasoning models this is critical: the chain-of-thought trace is often more sensitive than the final answer, and TDX keeps it encrypted end to end.
How does VoltageGPU prove the enclave is genuine?
Every session begins with a remote attestation flow. The enclave produces a hardware-signed TDX quote describing the CPU firmware, TDX measurements, and the exact R1-0528 model identity. Your client verifies the quote against Intel's DCAP root of trust before deriving the session key. A downgraded or substituted model is detectable before any prompt is sent.
What is the performance overhead of running R1 under TDX?
On H100 with vLLM we measured a -3.3% throughput delta at batch=1 versus the same model running without TDX. Because R1 generates a long reasoning trace before the final answer, total wall-clock overhead per response is modestly higher than for non-reasoning models, but per-token overhead remains under 5% at batch=1.
Can I get a dedicated enclave or custom attestation policy?
Yes. The shared R1 Confidential endpoint suits most regulated reasoning workloads. For dedicated single-tenant enclaves, custom attestation policies, BYOK key management, or air-gapped deployments, email contact@voltagegpu.com about an enterprise plan.
Which other providers offer GPU-level confidential reasoning today?
No major cloud provider offers a frontier reasoning model under GA GPU-level TDX with public pricing. AWS Nitro Enclaves are CPU-only. Azure Confidential GPU is in private preview without a published reasoning SKU. GCP Confidential VMs do not cover GPU memory for inference.
What compliance frameworks does this support?
The hardware attestation produced on every session is the building block for HIPAA, GDPR Article 32, SOC 2 CC6, ISO 27001 A.10, and EU AI Act high-risk system audit evidence. VoltageGPU does not retain prompts, reasoning traces, or completions by default. Audit logs of attestation quotes are available on enterprise plans.
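As a rough illustration, an attestation audit record consistent with the flow described earlier might look like the following; the field names are assumptions, not a documented schema:

```python
# Illustrative audit record: quote hash, session ID, timestamp, and
# deliberately nothing else. Field names are assumed, not documented.
audit_record = {
    "quote_sha256": "9f2c...",            # hash of the verified TDX quote
    "session_id": "sess-01HF...",         # opaque session identifier
    "timestamp": "2026-04-02T14:31:07Z",  # RFC 3339 UTC
    # no prompts, reasoning traces, or completion bodies are retained
}
```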
Start using DeepSeek-R1 Confidential today
Get $5 free credit when you sign up. No credit card required. Deploy in under 30 seconds with our OpenAI-compatible API.