Educational Guide

What is confidential computing?

A plain-language guide to hardware enclaves, Intel TDX, and why it matters for AI. No jargon. No marketing. Just how it works.

The three states of data

Every piece of data exists in one of three states. Two of them have been solved for decades. The third is the reason confidential computing exists.

Data at rest
Encrypted on disk — solved by AES, LUKS, BitLocker
Data in transit
Encrypted over the network — solved by TLS / HTTPS
Data in use ← the gap
Unencrypted in RAM during processing — this is what confidential computing solves

When your data is being processed — by a database query, an AI model, or any application — it sits unencrypted in memory. Anyone with physical access to the server, or privileged access to the hypervisor, can potentially read it. Disk encryption and TLS do not help here. The data must be decrypted to be used.

Confidential computing closes this gap by encrypting data while it is being processed, using hardware-level isolation built into the CPU itself.
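The gap is easy to see in a few lines of code. Below, a toy XOR "cipher" stands in for real at-rest encryption (AES, LUKS, BitLocker); it is deliberately insecure and only there to show that processing data forces it into cleartext in ordinary RAM:

```python
# Toy illustration of the three states of data. The XOR "cipher" is a
# stand-in for real at-rest encryption -- never use it for actual security.

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Symmetric toy cipher: applying it twice with the same key round-trips."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"secret-key"
plaintext = b"patient record: glucose=5.4"

# Data at rest: encrypted on disk -- unreadable without the key.
at_rest = xor_cipher(plaintext, key)
assert at_rest != plaintext

# Data in use: to process it (even just to search it), the program must
# decrypt it first. The plaintext now sits in ordinary RAM, readable by any
# sufficiently privileged software or anyone with physical access.
in_use = xor_cipher(at_rest, key)
assert b"glucose" in in_use  # processing requires cleartext in memory
```

No amount of disk or network encryption changes that last step; confidential computing is about protecting the memory that `in_use` lives in.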

What is a hardware enclave?

A hardware enclave is a protected region of memory that the CPU isolates from everything else on the machine. The operating system cannot read it. The hypervisor cannot read it. The cloud operator cannot read it. Only code running inside the enclave can access the data.

This is not a software sandbox or a container. The isolation is enforced by the CPU hardware itself. The encryption keys are generated and held inside the processor. No software — no matter how privileged — can extract them.

Think of it as a sealed room inside a building. The building owner has keys to every room — except this one. Only you have the key. And the lock was built into the walls by the manufacturer, not installed by the building owner.

Different chip manufacturers implement enclaves differently. Intel's technology is Trust Domain Extensions (TDX); AMD's is Secure Encrypted Virtualization (SEV). The underlying principle is the same: CPU-level memory isolation that excludes everyone except the authorized workload.

Intel TDX explained

Intel Trust Domain Extensions (TDX) is Intel's implementation of confidential computing. It creates what Intel calls Trust Domains (TDs) — virtual machines whose memory is entirely encrypted by the CPU.

  • AES-XTS memory encryption — every byte of the Trust Domain's memory is encrypted with a per-domain key that only the CPU holds
  • Hardware-enforced isolation — the host OS and hypervisor are removed from the trust boundary entirely
  • Integrity protection — the CPU detects if enclave memory has been tampered with and halts execution
  • Production-ready — shipped in 4th Gen Xeon Scalable (Sapphire Rapids) and used by Microsoft Azure, Google Cloud, and Oracle Cloud

The key insight of TDX is that it moves the trust boundary. In a traditional cloud VM, you trust the cloud provider's hypervisor. With TDX, you trust only the CPU. The provider's software stack is outside the boundary.
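From inside the VM, a workload can check whether it actually landed in a Trust Domain before handling sensitive data. The sketch below relies on two Linux signals — a "tdx_guest" CPU flag in /proc/cpuinfo and the /dev/tdx_guest attestation device — both of which are kernel-version dependent, so treat the details as illustrative:

```python
# Minimal sketch: detect, from inside a Linux guest, whether we are running
# as a TDX Trust Domain. The cpuinfo flag and device path are exposed by
# recent kernels; on older kernels these signals may be absent.
import os

def is_tdx_guest(cpuinfo_text: str) -> bool:
    """Return True if any 'flags' line in /proc/cpuinfo lists tdx_guest."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags") and "tdx_guest" in line.split():
            return True
    return False

def running_in_td() -> bool:
    try:
        with open("/proc/cpuinfo") as f:
            flagged = is_tdx_guest(f.read())
    except OSError:
        flagged = False
    # The attestation device is a second, independent signal.
    return flagged or os.path.exists("/dev/tdx_guest")
```

Note that this only tells the guest what the kernel believes; proving it to a remote party is the job of attestation, covered below.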

NVIDIA Confidential Computing

CPU-level enclaves protect data in system memory. But AI workloads run on GPUs. Without GPU-level protection, data would be exposed the moment it moves from CPU to GPU for model inference or training. NVIDIA addresses this with Confidential Computing on its Hopper and Blackwell architectures.

  • Protected PCIe — encrypts the bus between CPU and GPU so data cannot be intercepted in transit between processors
  • GPU memory encryption — data stored in GPU HBM (high bandwidth memory) is encrypted at the hardware level
  • Supported hardware — H100, H200 (Hopper architecture) and B200 (Blackwell architecture)

When combined with Intel TDX, NVIDIA Confidential Computing creates an end-to-end sealed pipeline. Data is encrypted in system memory (TDX), encrypted on the bus (Protected PCIe), and encrypted on the GPU (HBM encryption). At no point is it exposed in cleartext to software outside the protected boundary.

This matters for AI specifically because model weights, training data, and inference inputs all pass through GPU memory. Without GPU-level protection, CPU-level enclaves alone leave a gap in the chain.
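The chain above can be expressed as a simple gate: refuse to dispatch a job unless every hop reports protection. The hop names mirror the prose; the booleans stand in for real platform queries (TDX attestation, GPU CC mode via the NVIDIA driver), so this is a sketch of the logic, not a real API:

```python
# Sketch: gate an AI job on every link of the pipeline being sealed.
# The status dict is a placeholder for real hardware/driver queries.

PIPELINE = [
    ("system memory", "Intel TDX memory encryption"),
    ("CPU-GPU bus", "Protected PCIe"),
    ("GPU memory", "HBM encryption (Hopper/Blackwell)"),
]

def unprotected_hops(status: dict) -> list:
    """Return the hops left unprotected; an empty list means end-to-end sealed."""
    return [hop for hop, _mechanism in PIPELINE if not status.get(hop, False)]

status = {"system memory": True, "CPU-GPU bus": True, "GPU memory": True}
gaps = unprotected_hops(status)
print("sealed" if not gaps else f"cleartext exposure at: {gaps}")
```

The point of the list structure is the one made in the text: dropping any single hop — say, running TDX without GPU memory encryption — reopens the data-in-use gap somewhere in the chain.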

How attestation works

Hardware enclaves solve the isolation problem. But how do you know the enclave is real? How do you know it has not been tampered with before you send your data? That is where remote attestation comes in.

Before sending any data, you can ask the CPU: “Is this enclave genuine and untampered?” The process works like this:

  • The CPU generates a cryptographic measurement of the enclave's state — its code, configuration, and security properties
  • This measurement is signed using hardware keys embedded by Intel at manufacturing time — keys that cannot be extracted or forged
  • You (or your software) verify this signed report against Intel's public attestation service
  • If verification passes, you know the enclave is genuine, running the expected code, and has not been modified
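The four steps above can be modeled in a few lines. This is a deliberately simplified model: a real TDX quote is an ECDSA-signed structure verified against Intel's attestation service, whereas here an HMAC with a "fused" key stands in for the hardware signature so the shape of the protocol is visible. All names are illustrative:

```python
# Simplified model of remote attestation. HMAC stands in for the CPU's
# hardware signature; real TDX quotes use ECDSA and Intel's attestation
# service. The point is the protocol shape, not the cryptography.
import hashlib
import hmac

HW_KEY = b"fused-at-manufacturing"  # in reality, never leaves the CPU

def measure(enclave_code: bytes, config: bytes) -> bytes:
    """Step 1: the CPU hashes the enclave's code and configuration."""
    return hashlib.sha384(enclave_code + config).digest()

def sign_report(measurement: bytes) -> bytes:
    """Step 2: the CPU signs the measurement with its embedded key."""
    return hmac.new(HW_KEY, measurement, hashlib.sha384).digest()

def verify(measurement: bytes, signature: bytes, expected: bytes) -> bool:
    """Steps 3-4: the relying party checks the signature is genuine AND the
    measurement matches the code it expected the enclave to run."""
    genuine = hmac.compare_digest(sign_report(measurement), signature)
    return genuine and measurement == expected

code, cfg = b"model-server-v1", b"debug=off"
m = measure(code, cfg)
quote = sign_report(m)

assert verify(m, quote, measure(code, cfg))        # untampered: passes
assert not verify(measure(b"evil", cfg), quote, m)  # modified code: fails
```

Note what the last line shows: changing even one byte of the enclave's code changes the measurement, so the signed report no longer matches and verification fails before any data is sent.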

This is not trust; it is verification. You do not trust the provider's promises — you verify the hardware's signed evidence. The difference matters: trust is a policy decision that can be broken silently, while a failed verification is a cryptographic check you can detect before sending any data.

Intel, AMD, and NVIDIA all support remote attestation for their respective confidential computing implementations. It is a standard part of the confidential computing stack, not an optional add-on.

Who uses confidential computing

Confidential computing is not a research project. It is deployed in production by the largest cloud providers and used across regulated industries worldwide.

Microsoft Azure
Confidential VMs with Intel TDX and AMD SEV-SNP since 2022.
Google Cloud
Confidential Computing on Compute Engine and GKE.
Confidential Computing Consortium
Linux Foundation project. Members: Intel, AMD, NVIDIA, Microsoft, Google, ARM, Huawei.
Financial services & healthcare
Banks, insurers, hospitals processing regulated data under GDPR, HIPAA, DORA, MiFID II.

VoltageGPU is one of the first platforms to offer confidential GPU compute specifically for AI workloads — Intel TDX sealed NVIDIA GPUs with per-second billing, remote attestation, and zero data retention.

Frequently asked questions

Is confidential computing slow?
No. Modern implementations like Intel TDX add approximately 2-5% overhead. For most AI workloads, the difference is negligible. The encryption and isolation happen at the hardware level, not in software, so the CPU handles it natively.
Is this the same as encryption?
Not exactly. Traditional encryption protects data at rest (on disk) and in transit (over the network). Confidential computing protects data in use — while it is being processed in memory. It closes the one gap that encryption alone cannot.
Can the cloud provider still access my data?
No. The hardware enforces isolation. The host operating system, hypervisor, and cloud operator are all excluded from the trust boundary. Only code running inside the enclave can access the data.
Is this experimental technology?
No. Microsoft Azure and Google Cloud have offered confidential VMs in production since 2022. Intel TDX shipped in 4th Gen Xeon (Sapphire Rapids). AMD SEV has been available since first-generation EPYC (Naples). NVIDIA added GPU-level confidential computing on Hopper (H100) in 2023.

Want to try confidential computing for AI?

VoltageGPU offers Intel TDX sealed GPUs with per-second billing. $5 free credit to start.

Get started free