Rent NVIDIA B200 192GB Blackwell 8-GPU clusters from $45.50/hr on VoltageGPU. 1.5 TB of total HBM3e VRAM and up to 2.5x the training throughput of H100, built for frontier AI model training and trillion-parameter inference.
Starting from
$45.50/hr
~$1092.00/day
~$32,760/month (24/7)
Per-minute billing · No commitment
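The daily and monthly figures above follow directly from the hourly rate; a quick sketch of the arithmetic (assuming a 30-day month, which is what the listed monthly price implies):

```python
HOURLY_RATE = 45.50  # USD/hr for the 8x B200 cluster

daily = HOURLY_RATE * 24          # running 24/7 for one day
monthly = daily * 30              # 30-day month, matching the listed price

print(f"~${daily:,.2f}/day")      # ~$1,092.00/day
print(f"~${monthly:,.0f}/month")  # ~$32,760/month
```

With per-minute billing, a partial hour is prorated, so short experiments cost a fraction of these figures.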
| Specification | Value |
|---|---|
| VRAM | 8×192 GB HBM3e |
| Memory Type | HBM3e |
| Memory Bandwidth | 8,000 GB/s |
| CUDA Cores | 18,432 |
| Tensor Cores | 576 |
| FP16 Performance | 1,800 TFLOPS |
| FP32 Performance | 90 TFLOPS |
| TDP | 1000W (SXM) |
| Architecture | Blackwell |
| Interconnect | NVLink 5.0 / PCIe 6.0 |
| Included Storage | 2 TB NVMe SSD |
| vCPUs | 64 |
| System RAM | 512 GB DDR5 ECC |
| Manufacturer | NVIDIA |
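The headline 1.5 TB figure is the per-GPU VRAM summed across the cluster; a minimal check of the numbers in the table:

```python
GPUS = 8
VRAM_PER_GPU_GB = 192

total_gb = GPUS * VRAM_PER_GPU_GB   # aggregate VRAM across the cluster
total_tb = total_gb / 1024          # convert to TB

print(total_gb, "GB =", total_tb, "TB")  # 1536 GB = 1.5 TB
```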
See how VoltageGPU compares to other cloud GPU providers.
| Provider | Hourly Rate | Est. Monthly | VoltageGPU Savings |
|---|---|---|---|
| VoltageGPU | $45.50 | $32,760 | — |
| RunPod | $51.90 | $37,368 | 12% cheaper |
| Vast.ai | $48.50 | $34,920 | 6% cheaper |
| Lambda | $54.00 | $38,880 | 16% cheaper |
| AWS (p6 equivalent) | $72.00 | $51,840 | 37% cheaper |
Competitor pricing sourced from public pages as of March 2026. Prices may vary.
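The savings percentages in the table can be reproduced from the hourly rates, rounded to the nearest percent:

```python
VOLTAGE = 45.50  # VoltageGPU hourly rate (USD)

competitors = {
    "RunPod": 51.90,
    "Vast.ai": 48.50,
    "Lambda": 54.00,
    "AWS (p6 equivalent)": 72.00,
}

for name, rate in competitors.items():
    savings = (1 - VOLTAGE / rate) * 100   # percent cheaper than competitor
    print(f"{name}: {savings:.0f}% cheaper")
```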
Popular workloads and use cases for NVIDIA B200 192GB cloud instances:

- **Frontier model training:** Train the largest AI models with next-generation Blackwell architecture. The B200 delivers 2.5x the training performance of H100 with FP4 support.
- **Trillion-parameter inference:** Serve massive mixture-of-experts models and trillion-parameter architectures with 1.5 TB total VRAM across 8 GPUs.
- **Generative media:** Video generation, 3D content creation, and next-gen diffusion models that demand extreme memory and compute.
- **Private AI infrastructure:** Build private, high-performance AI infrastructure for sensitive workloads that require dedicated compute resources.
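As a rough sizing sketch for the trillion-parameter case (illustrative numbers, not a benchmark): storing weights at 1 byte per parameter (FP8) puts a 1T-parameter model at roughly 931 GB, which fits within the cluster's 1,536 GB of aggregate VRAM with headroom left for KV cache and activations:

```python
PARAMS = 1_000_000_000_000   # 1 trillion parameters (illustrative)
BYTES_PER_PARAM = 1          # FP8 weights, 1 byte each (assumption)

weights_gb = PARAMS * BYTES_PER_PARAM / 1024**3   # ~931 GB for weights alone
cluster_vram_gb = 8 * 192                         # 1536 GB aggregate VRAM

print(f"weights ~{weights_gb:.0f} GB vs {cluster_vram_gb} GB VRAM")
```

Higher-precision formats roughly double (FP16) or quadruple (FP32) the weight footprint, which is why FP8/FP4 support matters at this scale.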
Relative performance scores across common workload categories (B200 = 100).
Programmatically launch a B200 192GB instance with a single API call.
```bash
curl -X POST https://api.voltagegpu.com/v1/pods \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "gpu": "b200-192gb",
    "gpu_count": 8,
    "template": "pytorch-2.2",
    "storage_gb": 2000,
    "name": "my-b200-cluster"
  }'
```

Train and fine-tune the largest AI models on flagship GPUs like H200 and B200.
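The same request can be made from Python using only the standard library; a minimal sketch, where the endpoint and payload mirror the curl example and the response shape is an assumption:

```python
import json
import urllib.request

API_URL = "https://api.voltagegpu.com/v1/pods"

# Pod spec, mirroring the curl example above.
PAYLOAD = {
    "gpu": "b200-192gb",
    "gpu_count": 8,
    "template": "pytorch-2.2",
    "storage_gb": 2000,
    "name": "my-b200-cluster",
}

def launch_pod(api_key: str) -> dict:
    """POST the pod spec and return the API's JSON response."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(PAYLOAD).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)  # exact response fields depend on the API

# Usage (requires a valid key):
# pod = launch_pod("YOUR_API_KEY")
```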
- Run the largest open-source LLMs on flagship GPU clusters with multi-GPU support.
- Deploy multi-GPU clusters for large-scale AI training and research.
- See how VoltageGPU pricing and features compare to RunPod for GPU cloud.
- Compare GPU cloud pricing, features, and availability against Vast.ai.
- View pricing for all GPU models and the AI inference API with no hidden fees.
- See real-time GPU availability and deploy a pod in under 60 seconds.
Deploy a B200 192GB instance in 30 seconds. No upfront costs, no long-term contracts. Per-minute billing starting at $45.50/hr.