The gold standard for AI training and inference. Deploy enterprise-grade GPU compute in seconds.
Train and fine-tune LLMs like GPT, LLaMA, and custom models with 80 GB of memory capacity.
Accelerate HPC workloads with double-precision (FP64) performance and massive memory bandwidth.
Distributed training with NVLink for maximum throughput and efficiency.
| Specification | A100 80GB | A100 40GB | V100 32GB | 
|---|---|---|---|
| Memory | 80 GB HBM2e | 40 GB HBM2 | 32 GB HBM2 | 
| Memory Bandwidth | 2,039 GB/s | 1,555 GB/s | 900 GB/s | 
| FP32 Performance | 19.5 TFLOPS | 19.5 TFLOPS | 15.7 TFLOPS | 
| Tensor Performance (FP16) | 312 TFLOPS | 312 TFLOPS | 125 TFLOPS | 
| NVLink | 600 GB/s | 600 GB/s | 300 GB/s | 
| Price/Hour | From $2.31 | From $1.89 | From $1.14 | 
Compared with the 40GB model, the A100 80GB doubles the memory capacity (80 GB HBM2e vs. 40 GB HBM2) and offers higher memory bandwidth, letting larger models, longer sequences, and bigger batches fit on a single GPU. It's the better choice for large language models and extensive batch processing.
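For intuition, FP16 inference needs roughly 2 bytes per parameter, while full fine-tuning with Adam needs on the order of 16 bytes per parameter. A back-of-envelope sizing sketch (our own assumptions, not a VoltageGPU tool):

```python
# Back-of-envelope GPU memory estimates. Assumptions (ours): FP16 weights
# take ~2 bytes/param for inference; full fine-tuning with Adam takes roughly
# 16 bytes/param (FP16 weights + gradients, FP32 master weights + two moment
# buffers), ignoring activations, KV cache, and framework overhead.

def memory_gb(num_params: float, bytes_per_param: float) -> float:
    """Approximate GPU memory (GB) for the given bytes-per-parameter budget."""
    return num_params * bytes_per_param / 1e9

for billions in (7, 13, 70):
    p = billions * 1e9
    print(f"{billions}B params: inference ~{memory_gb(p, 2):.0f} GB, "
          f"full fine-tune ~{memory_gb(p, 16):.0f} GB")
# A 7B model fits comfortably in 80 GB for inference (~14 GB), but full
# fine-tuning (~112 GB) already calls for multi-GPU sharding or
# memory-saving techniques such as LoRA or gradient checkpointing.
```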
Yes, our A100 instances support multi-GPU configurations with NVLink interconnect at 600 GB/s bandwidth. You can scale up to 8x A100 in a single node for distributed training.
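As an illustrative sketch of how you'd use such a node, here is a standard single-node PyTorch DistributedDataParallel setup across 8 GPUs; the model and training loop are placeholders, not VoltageGPU-provided code:

```python
# Minimal single-node DistributedDataParallel sketch for an 8x A100 node.
# Launch with: torchrun --standalone --nproc_per_node=8 train_ddp.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")  # NCCL rides NVLink when available
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(4096, 4096).cuda(local_rank)  # placeholder model
    model = DDP(model, device_ids=[local_rank])
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for _ in range(10):  # placeholder training loop
        x = torch.randn(32, 4096, device=local_rank)
        loss = model(x).square().mean()
        opt.zero_grad()
        loss.backward()   # gradients are all-reduced across the 8 GPUs here
        opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```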
Our A100 instances come with CUDA 11.8+, PyTorch 2.0+, TensorFlow 2.13+, JAX, and other popular ML frameworks pre-installed. Custom Docker images are also supported.
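A quick way to confirm the stack on a fresh instance (the version floors above are minimums; the exact preinstalled versions may differ):

```python
# Sanity-check the preinstalled ML stack and the attached GPU.
import torch

print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
print("CUDA version:", torch.version.cuda)
print("GPU:", torch.cuda.get_device_name(0))
print("VRAM (GB):", torch.cuda.get_device_properties(0).total_memory / 1e9)
```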
Billing is per hour with a minimum of 1 hour. After the first hour, billing is prorated to the minute. No long-term commitments required.
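As a worked example using the rates in the table above (the helper below is our own sketch of the stated rule, not a VoltageGPU billing API): a 45-minute job on an A100 80GB at $2.31/hour bills the 1-hour minimum ($2.31), while a 90-minute job bills 1.5 hours ($3.47).

```python
# Sketch of the billing rule described above: 1-hour minimum,
# then per-minute proration. Not an official VoltageGPU API.
def estimate_cost(minutes_used: float, hourly_rate: float) -> float:
    billable_minutes = max(minutes_used, 60)  # 1-hour minimum
    return billable_minutes / 60 * hourly_rate

print(estimate_cost(45, 2.31))   # 2.31  -> under an hour still bills 1 hour
print(estimate_cost(90, 2.31))   # 3.465 -> 1.5 hours, prorated to the minute
```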
Join thousands of AI researchers and engineers using VoltageGPU