Plain-language definitions for the vocabulary you meet when you run AI on hardware-sealed infrastructure — from Intel TDX and attestation to vLLM, tokens and per-second billing.

API Key
Per-user credential
The secret token your application presents to authenticate against the VoltageGPU API. Scoped per user, rotatable and revocable from the dashboard.
Cryptographic proof
A signed measurement issued by the CPU that proves a workload is running inside a genuine, unmodified hardware enclave. VoltageGPU exposes an attestation endpoint for every confidential pod so you can verify the environment before sending sensitive data.
Blackwell flagship
NVIDIA Blackwell data-center GPU with 192 GB HBM3e. The fastest single accelerator available on VoltageGPU for training and high-throughput inference of frontier models.
Parallel requests
The number of inputs processed together in a single GPU step. Larger batches raise throughput but also increase latency and VRAM use.
Brain Float 16
A 16-bit floating-point format with the same exponent range as FP32 but half the storage. The default mixed-precision format for training modern LLMs on NVIDIA GPUs.
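Because BF16 keeps FP32's full 8-bit exponent and simply drops the low 16 mantissa bits, a BF16 value is just the top half of the FP32 bit pattern. A minimal sketch in pure Python (truncation rather than round-to-nearest, for clarity):

```python
import struct

def fp32_to_bf16_bits(x: float) -> int:
    """BF16 bit pattern of x: the top 16 bits of its FP32 encoding (truncating)."""
    fp32_bits = struct.unpack(">I", struct.pack(">f", x))[0]
    return fp32_bits >> 16

def bf16_bits_to_fp32(bits: int) -> float:
    """Re-expand BF16 bits to FP32 by zero-filling the dropped mantissa bits."""
    return struct.unpack(">f", struct.pack(">I", bits << 16))[0]

# Same exponent range as FP32, so huge magnitudes survive the round-trip...
big = bf16_bits_to_fp32(fp32_to_bf16_bits(3.0e38))

# ...but only 7 mantissa bits (~3 decimal digits) of precision remain.
pi_bf16 = bf16_bits_to_fp32(fp32_to_bf16_bits(3.14159265))  # 3.140625
```

This is why BF16 training rarely overflows where FP16 would: the dynamic range matches FP32, and only precision is traded away.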
First-request delay
The time spent loading a model into GPU memory the first time a pod serves a request. Kept-warm endpoints avoid it entirely.
Hardware-isolated VM
A virtual machine whose memory and CPU state are encrypted by the processor itself. Even the host operating system and hypervisor cannot read its contents. All VoltageGPU Confidential Compute pods run inside a Confidential VM.
Max input length
The maximum number of tokens an LLM can consider in a single request, counting both the prompt and the generated output. Modern models range from 8k to 1M tokens.
NVIDIA compute platform
The parallel computing platform and programming model used by every major deep-learning framework to run kernels on NVIDIA GPUs.
Container runtime
A tool that packages an application with all its dependencies into a reproducible image. Every VoltageGPU pod boots from a Docker image.
Sealed execution zone
An isolated region of a CPU where code and data are protected from every other process on the machine — including root. Enclaves are the building block of Trusted Execution Environments.
Adapt a base model
Taking a pre-trained model and continuing training on a smaller, task-specific dataset so it learns a new style, domain or instruction format without starting from scratch.
Half and eighth precision
Reduced-precision floating-point formats used for inference and training. FP8 on H100/H200/B200 roughly doubles throughput versus FP16 with acceptable accuracy loss.
Quantized model format
The file format used by llama.cpp to store quantized LLM weights. Allows models to run on modest hardware at a small quality cost.
Graphics Processing Unit
A massively parallel accelerator built around thousands of small cores. Ideal for the matrix and tensor operations that dominate AI training and inference.
Hopper data-center GPUs
NVIDIA Hopper-generation accelerators with 80 GB (H100) or 141 GB (H200) of HBM3/HBM3e and full Confidential Computing support. Workhorses for modern LLM inference.
High Bandwidth Memory
Stacked DRAM mounted directly on the GPU package. Delivers roughly 10× the bandwidth of traditional GDDR and is the memory technology behind every modern AI accelerator (A100, H100, H200, B200).
Running a trained model
The act of using a model to produce an output from an input — completing text, embedding a document, classifying an image. Distinct from training, which updates the weights.
Trust Domain Extensions
Intel’s hardware confidential-computing technology. It creates a "Trust Domain" where memory is AES-encrypted with a key only the CPU knows, isolating workloads from the hypervisor and other tenants. TDX is the foundation of VoltageGPU Confidential Compute.
Interactive notebook
A browser-based environment for running Python interactively. Available on most VoltageGPU templates for exploration and prototyping.
Container orchestration
A platform that automates deployment, scaling and networking of container workloads across a cluster.
Time to response
How long a request takes end-to-end. For LLMs this is usually split into TTFT (time to first token) and the inter-token latency that follows.
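The TTFT-plus-inter-token split means a streamed response's end-to-end latency is easy to estimate (the figures below are illustrative, not measured numbers):

```python
def streaming_latency(ttft_s: float, inter_token_s: float, output_tokens: int) -> float:
    """End-to-end latency of a streamed LLM reply: time to first token,
    then one inter-token gap for each remaining token."""
    return ttft_s + (output_tokens - 1) * inter_token_s

# Hypothetical figures: 300 ms TTFT, 20 ms per token, 200-token answer.
total = streaming_latency(0.3, 0.02, 200)   # 0.3 + 199 * 0.02 = 4.28 s
```

For chat UIs, TTFT dominates perceived snappiness even when the total is several seconds, which is why the two components are reported separately.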
Large Language Model
A neural network trained on very large text corpora to generate and understand natural language. VoltageGPU exposes 20+ TEE-protected LLMs via an OpenAI-compatible API.
Low-Rank Adaptation
A parameter-efficient fine-tuning method that freezes the base weights and trains a pair of small low-rank matrices instead. 10–100× cheaper than full fine-tuning.
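The "pair of small low-rank matrices" can be sketched in a few lines (sizes here are tiny and illustrative; real models use hidden sizes in the thousands and ranks of 4–64):

```python
def matmul(X, Y):
    """Naive matrix multiply, enough for this toy example."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)] for row in X]

d, r = 6, 1   # hidden size and LoRA rank (r << d)

W = [[1.0 if i == j else 0.0 for j in range(d)] for i in range(d)]  # frozen base weight
B = [[0.0] for _ in range(d)]   # d x r up-projection, zero-init so the adapter starts as a no-op
A = [[0.1] * d]                 # r x d down-projection

delta = matmul(B, A)            # rank-r update; only A and B receive gradients
W_adapted = [[w + dw for w, dw in zip(wr, dr)] for wr, dr in zip(W, delta)]

trainable = 2 * d * r           # 12 values instead of d*d = 36
```

Zero-initializing B makes the adapted model identical to the base model at step 0, so training starts from the pre-trained behavior rather than noise.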
AES on the bus
Every byte leaving the CPU towards DRAM is encrypted on-the-fly with AES-XTS (or AES-GCM) using a key sealed inside the processor. It defeats cold-boot, bus-probing and DMA attacks.
Mixture of Experts
An architecture where each token is routed to only a small subset of "expert" subnetworks, allowing very large models (e.g. Mixtral, DeepSeek-V3) to run with the compute cost of a much smaller one.
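A toy sketch of the routing step, assuming gate scores that in a real model would come from a learned router network:

```python
def route(gate_scores, k=2):
    """Return the indices of the top-k experts for one token."""
    return sorted(range(len(gate_scores)), key=lambda i: gate_scores[i], reverse=True)[:k]

# 8 experts, but each token activates only 2 of them,
# so per-token compute scales with k rather than with the expert count.
scores = [0.1, 0.9, 0.05, 0.3, 0.7, 0.02, 0.4, 0.2]
active = route(scores)   # experts 1 and 4
```

All experts must still sit in VRAM, which is why MoE models save compute but not memory.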
GPU CC mode
H100 and H200 GPUs can run in Confidential Computing mode: the GPU becomes an extension of the CPU enclave, PCIe traffic is encrypted, and GPU memory is protected from the host. VoltageGPU H100/H200 TDX pods run in this mode by default.
GPU-to-GPU fabric
NVIDIA’s high-bandwidth interconnect linking multiple GPUs together — up to 900 GB/s on H100. Required for tensor parallelism on models that do not fit on a single card.
Drop-in interface
An API surface that exposes endpoints with the same shape as the OpenAI SDK (/v1/chat/completions, /v1/embeddings, ...). VoltageGPU is OpenAI-compatible, so you only change the base URL and key.
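Because the endpoint shapes match, switching providers amounts to changing the host in the URL and the key. A stdlib-only sketch of the request an OpenAI-style client sends (the base URL and model ID below are placeholders, not documented values):

```python
import json
import urllib.request

# Assumed values for illustration — substitute your real endpoint, key and model.
BASE_URL = "https://api.voltagegpu.com/v1"
API_KEY = "vg-..."   # never hard-code real keys in source

payload = {
    "model": "some-model-id",
    "messages": [{"role": "user", "content": "Hello"}],
}

# Same request shape the OpenAI SDK would produce — only the host differs.
req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req) would execute the call; omitted here.
```

With the official OpenAI SDK the equivalent change is passing `base_url` and `api_key` to the client constructor; no request code changes.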
95th-percentile delay
The response time that 95% of requests complete under. A more honest service indicator than the raw average.
Pay only for runtime
VoltageGPU pods are metered by the second from the moment they reach RUNNING to the moment they stop. No rounding up to the hour.
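Per-second metering is straight proration of an hourly rate (the rate below is hypothetical):

```python
def pod_cost(hourly_rate_usd: float, runtime_seconds: int) -> float:
    """Per-second metering: the hourly rate is prorated exactly,
    with no rounding up to the hour."""
    return hourly_rate_usd * runtime_seconds / 3600

# A pod at a hypothetical $2.40/h that runs for 17 minutes
# costs $0.68, not a full hour's $2.40.
cost = round(pod_cost(2.40, 17 * 60), 2)
```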
Isolated GPU instance
A single-tenant container that owns one or more GPUs for the duration of your session, with its own filesystem, SSH access and network.
Shaping model input
The discipline of crafting input instructions, system messages and examples to steer a model toward the desired output without retraining it.
Encrypted CPU↔GPU link
The data path between CPU enclave and GPU is authenticated and encrypted so model weights, prompts and activations never appear in plaintext on the PCIe bus.
Shrink the weights
Compressing model weights from 16 or 32 bits down to 8, 4 or even 2 bits per value. Reduces VRAM footprint and boosts inference speed at a controlled accuracy cost.
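The simplest variant, symmetric int8 quantization, illustrates the trade: one byte per value plus one shared scale, with error bounded by half a quantization step. A pure-Python sketch:

```python
def quantize_int8(weights):
    """Symmetric int8 quantization: map [-max|w|, +max|w|] onto [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [x * scale for x in q]

w = [0.02, -0.51, 0.33, 1.27]
q, s = quantize_int8(w)            # [2, -51, 33, 127] — one byte each
w_hat = dequantize(q, s)
err = max(abs(a - b) for a, b in zip(w, w_hat))   # at most ~scale/2
```

Production schemes (GPTQ, AWQ, the GGUF "K-quants") refine this with per-block scales and activation-aware rounding, but the mechanism is the same.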
Retrieval-Augmented Generation
A pattern that retrieves relevant documents from a vector store at query time and injects them into the prompt. Reduces hallucinations and lets a model answer using private, up-to-date data.
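The retrieve-then-inject loop in miniature — a toy ranker standing in for a real vector store, which would use embedding similarity instead of word overlap:

```python
def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Toy retrieval: rank documents by word overlap with the query.
    A production system would use embeddings and a vector index."""
    q = set(query.lower().split())
    return sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)[:k]

docs = [
    "VoltageGPU pods are billed per second.",
    "Intel TDX encrypts guest memory.",
    "Tokens are chunks of roughly four characters.",
]
context = retrieve("how is a pod billed", docs, k=1)

# Inject the retrieved text into the prompt before calling the model.
prompt = f"Answer using this context:\n{context[0]}\n\nQuestion: how is a pod billed?"
```

The model never needs retraining: grounding comes entirely from what gets placed in the prompt at query time.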
Reinforcement Learning from Human Feedback
A training stage where human preference rankings are used to fine-tune a language model toward safer, more helpful answers. Used on most frontier chat models.
Public-key auth
The key pair you register on VoltageGPU so you can open a shell inside your pod without passwords. Private key stays on your laptop.
Trusted Execution Environment
A CPU feature that guarantees confidentiality and integrity of code and data at runtime, even if the host OS is compromised. Intel TDX, AMD SEV-SNP and ARM CCA are all TEE implementations.
Trillion FLOPs / sec
A raw compute metric. H100 reaches ~989 TFLOPS of BF16 tensor math; B200 pushes that past 2 PFLOPS.
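TFLOPS translate into a compute-bound ceiling on token throughput via the standard rough rule of ~2 FLOPs per parameter per token for a forward pass:

```python
def peak_tokens_per_s(params_billion: float, tflops: float) -> float:
    """Back-of-envelope compute bound: a forward pass costs roughly
    2 FLOPs per parameter per generated token."""
    flops_per_token = 2 * params_billion * 1e9
    return tflops * 1e12 / flops_per_token

# 70B-parameter model against the ~989 BF16 TFLOPS quoted for H100.
# This is an upper bound: real single-stream decoding is usually
# memory-bandwidth bound, far below this figure.
bound = peak_tokens_per_s(70, 989)   # ≈ 7064 tokens/s
```

The gap between this ceiling and measured throughput is exactly why batching and HBM bandwidth matter so much for inference.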
Tokens or samples / sec
The steady-state amount of work a system handles per unit of time. For LLMs, measured in tokens per second aggregated across users.
Unit of text
A chunk of text — roughly four characters in English — that a language model processes as a single step. Pricing on VoltageGPU is per million tokens, input and output counted separately.
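Per-million-token pricing with separate input and output rates is simple arithmetic (the prices below are hypothetical, not VoltageGPU's published rates):

```python
def request_cost(input_tokens: int, output_tokens: int,
                 in_price_per_m: float, out_price_per_m: float) -> float:
    """Cost of one request under per-million-token pricing,
    with input and output metered separately (hypothetical prices)."""
    return (input_tokens * in_price_per_m + output_tokens * out_price_per_m) / 1_000_000

# A 1,200-token prompt plus a 300-token answer at $0.50/M in, $1.50/M out.
cost = request_cost(1200, 300, 0.50, 1.50)   # $0.00105
```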
Time to First Token
The delay between sending a prompt and receiving the first generated token. The key latency metric for streaming chat UIs.
High-throughput LLM server
An inference engine built around PagedAttention that keeps GPU utilization very high through continuous batching. Standard choice for production LLM serving.
Persistent storage
A disk that survives pod restarts. Attach it to keep datasets, checkpoints and caches between sessions.
GPU memory
The memory physically attached to the GPU. Determines the maximum model size you can load and, combined with bandwidth, the attainable throughput.
No implicit trust
A deployment model where the inference provider — including VoltageGPU — has no technical ability to read your prompts, weights or outputs. It combines a TEE, remote attestation, and end-to-end encrypted transport.
VoltageGPU Confidential Compute keeps your weights, prompts and outputs sealed inside Intel TDX enclaves — verified by hardware attestation, billed per second.