Confidential Compute

NVIDIA H100

80 GB HBM3 — Intel TDX

The workhorse of confidential AI. Inference, fine-tuning, and batch processing at the best price per GPU hour — sealed in hardware.

$2.77/gpu/hour

Hardware details

GPU: NVIDIA H100
VRAM: 80 GB HBM3
Architecture: Hopper
Memory Bandwidth: 3.35 TB/s
TDP: 700 W
Confidential Computing: Intel TDX ✓
Per-second billing: Yes ✓

What H100 is built for

Inference serving

Serve models up to 70B with quantization. 80 GB HBM3 handles most production inference workloads at low cost.
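As a rough sanity check on the "70B with quantization" claim, here is a back-of-envelope VRAM estimate. The 20% overhead factor for KV cache, activations, and CUDA context is a rule-of-thumb assumption, not a vendor figure:

```python
# Rough VRAM estimate for quantized inference on a single 80 GB H100.
# Assumption: weights dominate; add ~20% overhead for KV cache,
# activations, and CUDA context.

def inference_vram_gb(params_billion: float, bits_per_weight: int,
                      overhead: float = 0.20) -> float:
    weights_gb = params_billion * 1e9 * bits_per_weight / 8 / 1e9
    return weights_gb * (1 + overhead)

H100_VRAM_GB = 80.0
for params, bits in [(70, 4), (34, 8), (13, 16)]:
    need = inference_vram_gb(params, bits)
    fits = "fits" if need <= H100_VRAM_GB else "does not fit"
    print(f"{params}B at {bits}-bit: ~{need:.0f} GB -> {fits}")
```

By this estimate a 70B model at 4-bit needs roughly 42 GB, leaving headroom for batch size and longer contexts.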

Fine-tuning up to 30B models

LoRA and QLoRA fine-tuning of 7B-30B models. Full-parameter fine-tuning up to 13B. Cost-effective for iterative experiments.
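The fine-tuning limits above can be sketched with standard bytes-per-parameter rules of thumb (assumptions, ignoring activations, which gradient checkpointing keeps small). Note that full-parameter fine-tuning of a 13B model with a full fp32 Adam state does not fit in 80 GB; it relies on a memory-reduced optimizer (e.g. 8-bit Adam) or state sharding:

```python
# Back-of-envelope fine-tuning memory on one 80 GB H100.
# Bytes-per-parameter figures are common rules of thumb (assumptions).
GB = 1e9

def full_ft_gb(params_b: float, weight_bytes=2, grad_bytes=2, opt_bytes=12) -> float:
    # fp16 weights + fp16 grads + fp32 Adam state (master copy + 2 moments)
    return params_b * 1e9 * (weight_bytes + grad_bytes + opt_bytes) / GB

def qlora_gb(params_b: float, base_bits=4, adapter_frac=0.01) -> float:
    # 4-bit frozen base + small fp16 LoRA adapters (~1% of params, an assumption)
    base = params_b * 1e9 * base_bits / 8 / GB
    adapters = params_b * 1e9 * adapter_frac * 16 / GB
    return base + adapters

print(f"13B full fine-tune, fp32 Adam:  ~{full_ft_gb(13):.0f} GB")   # needs sharding/offload
print(f"13B full fine-tune, 8-bit Adam: ~{full_ft_gb(13, opt_bytes=2):.0f} GB")
print(f"30B QLoRA: ~{qlora_gb(30):.0f} GB")
```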

Batch processing

Offline embedding generation, document classification, and large-scale data processing. Per-second billing keeps batch jobs cheap.
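Per-second billing means a batch job is charged for its exact runtime rather than a rounded-up hour. A minimal sketch using the page's $2.77/h rate (the job durations are illustrative):

```python
# Per-second billing: cost = runtime in seconds x per-second rate x GPU count.
RATE_PER_HOUR = 2.77            # 1x H100 rate from this page
RATE_PER_SECOND = RATE_PER_HOUR / 3600

def job_cost(seconds: int, gpus: int = 1) -> float:
    return round(seconds * RATE_PER_SECOND * gpus, 4)

print(job_cost(45))             # 45-second smoke test
print(job_cost(20 * 60))        # 20-minute embedding run
print(job_cost(3 * 3600, 8))    # 3-hour job on 8 GPUs
```

A 20-minute embedding run costs well under a dollar; a full hour costs exactly the hourly rate.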

Development and prototyping

Build and test ML pipelines before scaling to H200 or B200. Same confidential environment at a lower price point.

Per-second billing. No commitment.

1x H100
$2.77
per hour
Monthly (730h)
$2,022
per GPU/month
8x H100
$22.16
per hour (all 8)
AWS p4d.24xlarge equivalent
$32.77/h
Save 32% vs AWS

Your data stays sealed in hardware

Training data and model weights stay inside Intel TDX enclaves. No snapshots, no operator access.

AES-256 memory encryption at hardware level
Protected PCIe — encrypted CPU-to-GPU bus
Zero data retention — cryptographic erasure on termination
On-chain attestation — CPU-signed verification
GDPR Article 28 compliant processing
Even the operator cannot access your data
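To illustrate what attestation buys you: after the quote's certificate chain and signature have been verified, the client compares the reported enclave measurement against a known-good value. A simplified sketch of that final step; the golden value and the field name are illustrative, and real Intel TDX verification involves the full quote-verification flow, not just this comparison:

```python
# Simplified sketch of the final attestation step: comparing the measurement
# reported in an already signature-verified quote against an expected value.
# The golden value here is made up for illustration.
import hashlib
import hmac

# Hypothetical known-good measurement of the trusted domain image (TDX uses SHA-384).
EXPECTED_MRTD = hashlib.sha384(b"known-good TD image").hexdigest()

def measurement_matches(reported_mrtd_hex: str) -> bool:
    # Constant-time compare avoids leaking how many leading bytes matched.
    return hmac.compare_digest(reported_mrtd_hex, EXPECTED_MRTD)

print(measurement_matches(EXPECTED_MRTD))   # known-good quote
print(measurement_matches("00" * 48))       # tampered or unknown image
```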

Deploy H100 in 60 seconds.

$5 free credit. Per-second billing. No commitment.