
On-demand NVIDIA GPUs for AI training, inference, and rendering. From consumer-grade RTX 4090 to flagship Blackwell B200 clusters. Deploy in 30 seconds, pay per minute.
| GPU | VRAM | Memory Type | Bandwidth | FP16 TFLOPS | Architecture | TDP | Price/hr | Est./month |
|---|---|---|---|---|---|---|---|---|
| RTX 4090 | 24 GB | GDDR6X | 1,008 GB/s | 165.2 | Ada Lovelace | 450W | $0.39 | $280.80 |
| RTX 4080 | 16 GB | GDDR6X | 716.8 GB/s | 97.5 | Ada Lovelace | 320W | $0.92 | $662.40 |
| A100 40GB | 40 GB | HBM2 | 1,555 GB/s | 312 | Ampere | 400W (SXM) / 250W (PCIe) | $2.62 | $1,886.40 |
| A100 80GB | 80 GB | HBM2e | 2,039 GB/s | 312 | Ampere | 400W (SXM) / 300W (PCIe) | $3.76 | $2,707.20 |
| H100 80GB | 80 GB | HBM3 | 3,350 GB/s | 989.5 | Hopper | 700W (SXM) | $6.62 | $4,766.40 |
| H200 141GB | 8×141 GB | HBM3e | 4,800 GB/s | 989.5 | Hopper | 700W (SXM) | $26.60 | $19,152.00 |
| B200 192GB | 8×192 GB | HBM3e | 8,000 GB/s | 1,800 | Blackwell | 1000W (SXM) | $45.50 | $32,760.00 |
Monthly estimates assume 24/7 usage over a 30-day month (720 hours). Actual billing is per minute, with no minimum commitment.
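The monthly figures in the table are just the hourly rate extrapolated to a 30-day month. A minimal sketch of that arithmetic, using prices from the table above (the function names are illustrative, not part of any API):

```python
# Reproduce the "Est./month" column from the per-hour rates, assuming
# 24/7 usage over a 30-day month (24 * 30 = 720 hours).
HOURS_PER_MONTH = 24 * 30  # 720

def monthly_estimate(price_per_hour: float) -> float:
    """Cost of running one instance nonstop for a 30-day month."""
    return round(price_per_hour * HOURS_PER_MONTH, 2)

def per_minute_rate(price_per_hour: float) -> float:
    """Per-minute billing rate implied by the hourly price."""
    return price_per_hour / 60

print(monthly_estimate(0.39))  # RTX 4090: 280.8
print(monthly_estimate(6.62))  # H100 80GB: 4766.4
```

Because billing is per minute, a short job costs only its actual runtime: a 45-minute fine-tune on an H100 at $6.62/hr is about $4.97, not a full hour.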
Spin up a GPU instance in 30 seconds. No credit card required to browse. Pay only for what you use, billed per minute.