Excellent performance
Get up to 1.9x higher performance than NVIDIA H100 Tensor Core GPUs
Expanded memory
Use 1,128 GB of total GPU memory per server, enough capacity to handle the data that GenAI workloads demand
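As a rough sanity check, the per-server total can be confirmed by summing the memory reported by each GPU. A minimal PyTorch sketch, assuming an 8-GPU server (for example, 8 x 141 GB HBM3e devices); the exact GPU model and count are assumptions, not stated above:

```python
import torch

# Sum the memory reported by every visible GPU on this server.
# Assumes CUDA devices are present; totals are reported in bytes.
total_bytes = sum(
    torch.cuda.get_device_properties(i).total_memory
    for i in range(torch.cuda.device_count())
)
print(f"Total GPU memory: {total_bytes / 1e9:,.0f} GB")
# On an assumed 8 x 141 GB configuration this prints roughly 1,128 GB.
```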
Ultra-fast interconnect
Experience 3,200 Gbps of NVIDIA Quantum-2 InfiniBand networking for low-latency connectivity between GPUs
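One common way to exercise that fabric is a multi-node all-reduce over NCCL, which routes inter-node traffic over InfiniBand when it is available. A minimal torch.distributed sketch, assuming the job is launched with torchrun across servers; the script name, node count, and tensor size are illustrative:

```python
import os
import torch
import torch.distributed as dist

# Minimal multi-GPU all-reduce; launch with torchrun, for example:
#   torchrun --nproc_per_node=8 --nnodes=2 allreduce_check.py
dist.init_process_group(backend="nccl")
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)

# 1 GiB of float32 data per rank (size is illustrative).
x = torch.ones(256 * 1024 * 1024, device="cuda")
dist.all_reduce(x)  # sums the tensor across every GPU in the job
torch.cuda.synchronize()

if dist.get_rank() == 0:
    print("all-reduce complete; value per element:", x[0].item())
dist.destroy_process_group()
```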
Optimized GPU compute
Leverage NVIDIA BlueField-3 DPUs to offload networking and storage tasks, increasing GPU utilization for model building
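GPU utilization itself is straightforward to watch while a job runs, so the effect of offloading networking and storage I/O shows up as higher sustained compute usage. A small sketch using the NVML Python bindings (the pynvml package), assuming it is installed alongside the NVIDIA driver; the sample count and interval are illustrative:

```python
import time
import pynvml

# Poll per-GPU compute utilization across all GPUs on this server.
pynvml.nvmlInit()
handles = [
    pynvml.nvmlDeviceGetHandleByIndex(i)
    for i in range(pynvml.nvmlDeviceGetCount())
]
for _ in range(5):  # five one-second samples; adjust as needed
    usage = [pynvml.nvmlDeviceGetUtilizationRates(h).gpu for h in handles]
    print("GPU utilization (%):", usage)
    time.sleep(1)
pynvml.nvmlShutdown()
```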