Reference · Updated April 2026

The Confidential AI glossary

Plain-language definitions for the vocabulary you meet when you run AI on hardware-sealed infrastructure — from Intel TDX and attestation to vLLM, tokens and per-second billing.

47 terms · 6 categories · 100% plain English
A (2 terms)

API Key

Infrastructure

Per-user credential

The secret token your application presents to authenticate against the VoltageGPU API. Scoped per user, rotatable and revocable from the dashboard.

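As a sketch of how the credential is presented, using only the standard library (the base URL and the /v1/models path here are assumptions for illustration, not documented values):

```python
import json
import urllib.request

API_BASE = "https://api.voltagegpu.com"  # hypothetical base URL; use the one from your dashboard

def auth_headers(api_key: str) -> dict:
    """Bearer-token headers in the style most OpenAI-compatible APIs expect."""
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }

def list_models(api_key: str) -> dict:
    """Illustrative GET request; the endpoint path is an assumption."""
    req = urllib.request.Request(f"{API_BASE}/v1/models", headers=auth_headers(api_key))
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```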

Attestation

Confidential Compute

Cryptographic proof

A signed measurement issued by the CPU that proves a workload is running inside a genuine, unmodified hardware enclave. VoltageGPU exposes an attestation endpoint for every confidential pod so you can verify the environment before sending sensitive data.

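The verification step reduces to comparing the reported enclave measurement against a value you already trust. A minimal sketch, assuming a JSON report with a hypothetical "mrtd" field (the real endpoint schema may differ):

```python
import hmac

def measurement_ok(report: dict, expected_measurement: str) -> bool:
    """Constant-time check of the reported enclave measurement.

    The 'mrtd' key is an illustrative field name, not the actual schema.
    """
    reported = report.get("mrtd", "")
    return hmac.compare_digest(reported.lower(), expected_measurement.lower())

# In practice the report comes from the pod's attestation endpoint:
report = {"mrtd": "A1B2C3D4"}
assert measurement_ok(report, "a1b2c3d4")       # environment matches expectations
assert not measurement_ok(report, "deadbeef")   # refuse to send sensitive data
```
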
B (3 terms)

B200

GPU Hardware

Blackwell flagship

NVIDIA Blackwell data-center GPU with 192 GB HBM3e. The fastest single accelerator available on VoltageGPU for training and high-throughput inference of frontier models.


Batch Size

Performance

Parallel requests

The number of inputs processed together in a single GPU step. Larger batches raise throughput but also increase latency and VRAM use.
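The trade-off is easy to see with a toy cost model (the step costs below are illustrative numbers, not measurements):

```python
def batch_tradeoff(batch_size: int, fixed_ms: float = 20.0, per_item_ms: float = 2.0):
    """Each GPU step pays a fixed cost plus a small per-input cost.
    Returns (latency_ms, throughput_items_per_s)."""
    latency_ms = fixed_ms + per_item_ms * batch_size
    throughput = batch_size / (latency_ms / 1000.0)
    return latency_ms, throughput

for b in (1, 8, 32):
    lat, thr = batch_tradeoff(b)
    print(f"batch={b:2d}  latency={lat:4.0f} ms  throughput={thr:5.1f}/s")
```

Throughput climbs with batch size while per-request latency climbs alongside it; serving stacks pick the largest batch that still meets a latency target.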

BF16

Model Optimization

Brain Float 16

A 16-bit floating-point format with the same exponent range as FP32 but half the storage. The default mixed-precision format for training modern LLMs on NVIDIA GPUs.

C (4 terms)

Cold Start

Performance

First-request delay

The time spent loading a model into GPU memory the first time a pod serves a request. Kept-warm endpoints avoid it entirely.

Confidential VM

Confidential Compute

Hardware-isolated VM

A virtual machine whose memory and CPU state are encrypted by the processor itself. Even the host operating system and hypervisor cannot read its contents. All VoltageGPU Confidential Compute pods run inside a Confidential VM.


Context Window

AI Inference

Max input length

The maximum number of tokens an LLM can consider in a single request, counting both the prompt and the generated output. Modern models range from 8k to 1M tokens.
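Because prompt and output share the same window, the budget check is a single addition (a minimal sketch):

```python
def fits_context(prompt_tokens: int, max_new_tokens: int, context_window: int) -> bool:
    """Prompt and generated tokens both count against the window."""
    return prompt_tokens + max_new_tokens <= context_window

assert fits_context(7_000, 1_000, 8_192)       # fits an 8k window
assert not fits_context(7_500, 1_000, 8_192)   # overflows: trim the prompt or cap the output
```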

CUDA

GPU Hardware

NVIDIA compute platform

The parallel computing platform and programming model used by every major deep-learning framework to run kernels on NVIDIA GPUs.

D (1 term)

Docker

Infrastructure

Container runtime

A tool that packages an application with all its dependencies into a reproducible image. Every VoltageGPU pod boots from a Docker image.

E (1 term)

Enclave

Confidential Compute

Sealed execution zone

An isolated region of a CPU where code and data are protected from every other process on the machine — including root. Enclaves are the building block of Trusted Execution Environments.

F (2 terms)

Fine-Tuning

Model Optimization

Adapt a base model

Taking a pre-trained model and continuing training on a smaller, task-specific dataset so it learns a new style, domain or instruction format without starting from scratch.

FP16 / FP8

Model Optimization

Half and eighth precision

Reduced-precision floating-point formats used for inference and training. On H100, H200 and B200, FP8 roughly doubles throughput versus FP16 at an acceptable accuracy cost.

G (2 terms)

GGUF

Model Optimization

Quantized model format

The file format used by llama.cpp to store quantized LLM weights. Allows models to run on modest hardware at a small quality cost.

GPU

GPU Hardware

Graphics Processing Unit

A massively parallel accelerator built around thousands of small cores. Ideal for the matrix and tensor operations that dominate AI training and inference.

H (2 terms)

H100 / H200

GPU Hardware

Hopper data-center GPUs

NVIDIA Hopper-generation accelerators with 80 GB (H100) or 141 GB (H200) of HBM3/HBM3e and full Confidential Computing support. Workhorses for modern LLM inference.


HBM

GPU Hardware

High Bandwidth Memory

Stacked DRAM mounted directly on the GPU package. Delivers roughly 10× the bandwidth of traditional GDDR and is the memory technology behind every modern AI accelerator (A100, H100, H200, B200).

I (2 terms)

Inference

AI Inference

Running a trained model

The act of using a model to produce an output from an input — completing text, embedding a document, classifying an image. Distinct from training, which updates the weights.


Intel TDX

Confidential Compute

Trust Domain Extensions

Intel’s hardware confidential-computing technology. It creates a "Trust Domain" where memory is AES-encrypted with a key only the CPU knows, isolating workloads from the hypervisor and other tenants. TDX is the foundation of VoltageGPU Confidential Compute.

J (1 term)

Jupyter

Infrastructure

Interactive notebook

A browser-based environment for running Python interactively. Available on most VoltageGPU templates for exploration and prototyping.

K (1 term)

Kubernetes

Infrastructure

Container orchestration

A platform that automates deployment, scaling and networking of container workloads across a cluster.

L (3 terms)

Latency

Performance

Time to response

How long a request takes end-to-end. For LLMs this is usually split into TTFT (time to first token) and the inter-token latency that follows.

LLM

AI Inference

Large Language Model

A neural network trained on very large text corpora to generate and understand natural language. VoltageGPU exposes 20+ TEE-protected LLMs via an OpenAI-compatible API.


LoRA

Model Optimization

Low-Rank Adaptation

A parameter-efficient fine-tuning method that freezes the base weights and trains a pair of small low-rank matrices instead. 10–100× cheaper than full fine-tuning.
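The savings follow directly from the shapes: a full update touches every entry of a weight matrix, while LoRA trains only the two low-rank factors. A quick count (the dimensions are a typical example, not tied to any specific model):

```python
def trainable_params(d_in: int, d_out: int, rank: int):
    """Full fine-tuning updates d_in * d_out weights; LoRA trains
    A (d_in x rank) plus B (rank x d_out) instead."""
    full = d_in * d_out
    lora = rank * (d_in + d_out)
    return full, lora

full, lora = trainable_params(4096, 4096, 8)   # e.g. one attention projection at rank 8
print(f"full: {full:,}  lora: {lora:,}  ({full // lora}x fewer)")
```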

M (2 terms)

Memory Encryption

Confidential Compute

AES on the bus

Every byte leaving the CPU for DRAM is encrypted on the fly with AES-XTS (or AES-GCM) using a key sealed inside the processor. This defeats cold-boot, bus-probing and DMA attacks.

MoE

AI Inference

Mixture of Experts

An architecture where each token is routed to only a small subset of "expert" subnetworks, allowing very large models (e.g. Mixtral, DeepSeek-V3) to run with the compute cost of a much smaller one.

N (2 terms)

NVIDIA Confidential Computing

Confidential Compute

GPU CC mode

H100 and H200 GPUs can run in Confidential Computing mode: the GPU becomes an extension of the CPU enclave, PCIe traffic is encrypted, and GPU memory is protected from the host. VoltageGPU H100/H200 TDX pods run in this mode by default.

NVLink

GPU Hardware

GPU-to-GPU fabric

NVIDIA’s high-bandwidth interconnect linking multiple GPUs together — up to 900 GB/s on H100. Required for tensor parallelism on models that do not fit on a single card.

O (1 term)

OpenAI-Compatible API

AI Inference

Drop-in interface

An API surface that exposes endpoints with the same shape as the OpenAI SDK (/v1/chat/completions, /v1/embeddings, ...). VoltageGPU is OpenAI-compatible, so you only change the base URL and key.

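With the official SDKs this amounts to changing the base URL and key; the standard-library sketch below shows the underlying request shape instead (the base URL here is an assumption; use the one from your dashboard):

```python
import json
import urllib.request

API_BASE = "https://api.voltagegpu.com"   # assumed base URL
API_KEY = "vg-your-key"

def chat_payload(model: str, user_message: str) -> dict:
    """The same body shape the OpenAI SDK posts to /v1/chat/completions."""
    return {"model": model,
            "messages": [{"role": "user", "content": user_message}]}

def chat(model: str, user_message: str) -> str:
    """Send one chat completion request and return the reply text."""
    req = urllib.request.Request(
        f"{API_BASE}/v1/chat/completions",
        data=json.dumps(chat_payload(model, user_message)).encode(),
        headers={"Authorization": f"Bearer {API_KEY}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```
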
P (5 terms)

P95 Latency

Performance

95th-percentile delay

The response time under which 95% of requests complete. A more honest service indicator than the raw average.
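A nearest-rank computation makes the difference from the mean concrete (the sample numbers are made up):

```python
import math

def percentile(samples, pct: float) -> float:
    """Nearest-rank percentile: the value that pct% of samples fall at or below."""
    ranked = sorted(samples)
    k = math.ceil(pct / 100 * len(ranked)) - 1
    return ranked[max(k, 0)]

latencies_ms = [95, 98, 99, 100, 102, 105, 107, 110, 120, 850]  # one slow outlier
print("mean:", sum(latencies_ms) / len(latencies_ms))   # dragged up by the outlier
print("p50: ", percentile(latencies_ms, 50))            # typical request
print("p95: ", percentile(latencies_ms, 95))            # exposes the tail
```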

Per-Second Billing

Infrastructure

Pay only for runtime

VoltageGPU pods are metered by the second from the moment they reach RUNNING to the moment they stop. No rounding up to the hour.

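Prorating is simple arithmetic (the hourly rate below is a made-up example, not a published price):

```python
def session_cost(hourly_rate_usd: float, seconds_running: int) -> float:
    """Per-second metering: the hourly rate prorated to actual runtime."""
    return hourly_rate_usd / 3600 * seconds_running

# A 17-minute session on a hypothetical $2.50/h GPU:
print(f"${session_cost(2.50, 17 * 60):.4f}")   # versus $2.50 if rounded up to the hour
```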

Pod

Infrastructure

Isolated GPU instance

A single-tenant container that owns one or more GPUs for the duration of your session, with its own filesystem, SSH access and network.


Prompt Engineering

AI Inference

Shaping model input

The discipline of crafting input instructions, system messages and examples to steer a model toward the desired output without retraining it.

Protected PCIe

Confidential Compute

Encrypted CPU↔GPU link

The data path between CPU enclave and GPU is authenticated and encrypted so model weights, prompts and activations never appear in plaintext on the PCIe bus.

Q (1 term)

Quantization

Model Optimization

Shrink the weights

Compressing model weights from 16 or 32 bits down to 8, 4 or even 2 bits per value. Reduces VRAM footprint and boosts inference speed at a controlled accuracy cost.
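The VRAM math is direct: weight bytes are parameters times bits divided by eight (this counts weights only; KV cache and activations come on top):

```python
def weight_vram_gb(n_params: float, bits_per_weight: int) -> float:
    """Approximate VRAM for the weights alone (KV cache and activations excluded)."""
    return n_params * bits_per_weight / 8 / 1e9

for bits in (16, 8, 4):
    print(f"70B model at {bits}-bit: ~{weight_vram_gb(70e9, bits):.0f} GB")
```

At 4-bit, a 70B model drops from 140 GB of weights to roughly 35 GB, which is why quantization often decides whether a model fits on a single card.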

R (2 terms)

RAG

AI Inference

Retrieval-Augmented Generation

A pattern that retrieves relevant documents from a vector store at query time and injects them into the prompt. Reduces hallucinations and lets a model answer using private, up-to-date data.
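A minimal sketch of the pattern, with toy 2-d vectors standing in for a real embedding model and vector store:

```python
import math

def cosine(a, b) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(query_vec, store, top_k: int = 2):
    """Rank stored (text, embedding) pairs by similarity to the query."""
    ranked = sorted(store, key=lambda doc: cosine(query_vec, doc[1]), reverse=True)
    return [text for text, _ in ranked[:top_k]]

def build_prompt(question: str, docs) -> str:
    """Inject the retrieved documents into the prompt."""
    context = "\n".join(f"- {d}" for d in docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

store = [("Pods are billed per second.", [1.0, 0.1]),
         ("HBM is stacked DRAM.",        [0.1, 1.0]),
         ("TDX encrypts VM memory.",     [0.9, 0.3])]
print(build_prompt("How are pods billed?", retrieve([1.0, 0.0], store)))
```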

RLHF

AI Inference

Reinforcement Learning from Human Feedback

A training stage where human preference rankings are used to fine-tune a language model toward safer, more helpful answers. Used on most frontier chat models.

S (1 term)

SSH Key

Infrastructure

Public-key auth

The key pair you register on VoltageGPU so you can open a shell inside your pod without passwords. Private key stays on your laptop.

T (5 terms)

TEE

Confidential Compute

Trusted Execution Environment

A CPU feature that guarantees confidentiality and integrity of code and data at runtime, even if the host OS is compromised. Intel TDX, AMD SEV-SNP and ARM CCA are all TEE implementations.

TFLOPS

Performance

Trillion FLOPs / sec

A raw compute metric. An H100 reaches ~989 TFLOPS of BF16 tensor math (with sparsity); a B200 pushes that past 2 PFLOPS.

Throughput

Performance

Tokens or samples / sec

The steady-state amount of work a system handles per unit of time. For LLMs, measured in tokens per second aggregated across users.

Token

AI Inference

Unit of text

A chunk of text — roughly four characters in English — that a language model processes as a single step. Pricing on VoltageGPU is per million tokens, input and output counted separately.

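The rule of thumb and the pricing model both reduce to one-liners (the per-million rates below are invented for illustration, not published prices):

```python
def estimate_tokens(text: str) -> int:
    """Rough English heuristic: about four characters per token."""
    return max(1, round(len(text) / 4))

def request_cost(input_tokens: int, output_tokens: int,
                 usd_per_m_input: float, usd_per_m_output: float) -> float:
    """Input and output tokens are metered separately, per million."""
    return (input_tokens / 1e6 * usd_per_m_input
            + output_tokens / 1e6 * usd_per_m_output)

print(estimate_tokens("Attestation proves the enclave is genuine."))
print(f"${request_cost(1_200, 400, 0.50, 1.50):.4f}")  # hypothetical rates
```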

TTFT

Performance

Time to First Token

The delay between sending a prompt and receiving the first generated token. The key latency metric for streaming chat UIs.
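Measured against a simulated stream (the sleep times are stand-ins for prefill and decode, not real model timings):

```python
import time

def stream_with_ttft(token_stream):
    """Consume a token iterator, recording time-to-first-token."""
    start = time.monotonic()
    ttft = None
    tokens = []
    for tok in token_stream:
        if ttft is None:
            ttft = time.monotonic() - start   # first token arrived: this is the TTFT
        tokens.append(tok)
    return ttft, "".join(tokens)

def fake_model():
    """Simulated stream: slow first token (prefill), fast decode afterwards."""
    time.sleep(0.05)
    yield "Hello"
    for tok in (",", " world"):
        time.sleep(0.005)
        yield tok

ttft, text = stream_with_ttft(fake_model())
print(f"TTFT: {ttft * 1000:.0f} ms  output: {text!r}")
```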

V (3 terms)

vLLM

Model Optimization

High-throughput LLM server

An inference engine built around PagedAttention that keeps GPU utilization very high through continuous batching. Standard choice for production LLM serving.

Volume

Infrastructure

Persistent storage

A disk that survives pod restarts. Attach it to keep datasets, checkpoints and caches between sessions.

VRAM

GPU Hardware

GPU memory

The memory physically attached to the GPU. Determines the maximum model size you can load and, combined with bandwidth, the attainable throughput.

Z (1 term)

Zero-Trust Inference

Confidential Compute

No implicit trust

A deployment model where the inference provider — including VoltageGPU — has no technical ability to read your prompts, weights or outputs. It combines a TEE, remote attestation, and end-to-end encrypted transport.


Ready to run your model where nobody can see it?

VoltageGPU Confidential Compute keeps your weights, prompts and outputs sealed inside Intel TDX enclaves — verified by hardware attestation, billed per second.