NVIDIA HGX H100 & HGX H200

The NVIDIA HGX H100 and HGX H200 are here, and so are supercomputer instances in the cloud.

Want to get your hands on the most powerful supercomputer for AI and Machine Learning? You’ve come to the right place.

Available at supercomputer scale, starting at $2.23/hr.

Reserve Capacity Now

The NVIDIA HGX H200 supercharges generative AI and HPC

The H200 is the first GPU with HBM3e. Its faster, larger memory fuels the acceleration of generative AI and LLMs while advancing scientific computing for HPC workloads.

  • High-performance LLM inference

    H200 doubles inference performance compared to H100 when handling LLMs such as Llama2 70B. Get the highest throughput at the lowest TCO when deployed at scale for a massive user base.

  • Industry-leading generative AI training and fine-tuning

    NVIDIA H200 GPUs feature the Transformer Engine with FP8 precision, which provides up to 5X faster training and 5.5X faster fine-tuning over A100 GPUs for large language models.
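
To make the FP8 claim concrete, here is a minimal sketch of FP8 training using NVIDIA's open-source Transformer Engine library. The layer sizes, recipe settings, and optimizer are illustrative, and an H100/H200-class GPU is required for FP8 execution:

```python
# Hedged sketch of FP8 training with NVIDIA Transformer Engine.
# Requires an H100/H200-class GPU; sizes and settings are illustrative.
import torch
import transformer_engine.pytorch as te
from transformer_engine.common.recipe import DelayedScaling, Format

# HYBRID recipe: E4M3 for forward activations/weights, E5M2 for gradients.
fp8_recipe = DelayedScaling(fp8_format=Format.HYBRID, amax_history_len=16)

layer = te.Linear(4096, 4096, bias=True).cuda()
optimizer = torch.optim.AdamW(layer.parameters(), lr=1e-4)

x = torch.randn(32, 4096, device="cuda")
with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    y = layer(x)            # matmul runs on FP8 Tensor Cores

loss = y.float().sum()      # backward happens outside the autocast context
loss.backward()
optimizer.step()
```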

Meet the leading innovations of the NVIDIA HGX H200

  • 5.5x faster fine-tuning than A100 with NVIDIA Transformer Engine
  • 1.9x greater LLM inference performance than H100
  • 141 GB of GPU memory capacity (nearly 2x that of H100)
  • 4.8 TB/s of memory bandwidth
  • 900 GB/s of GPU-to-GPU interconnect with NVIDIA NVLink

The NVIDIA HGX H100 is designed for large-scale HPC and AI workloads

The HGX H100 delivers up to 7x better efficiency in high-performance computing (HPC) applications, up to 9x faster AI training on the largest models, and up to 30x faster AI inference than the NVIDIA HGX A100. Yep, you read that right.

  • Accelerated AI workloads

    NVIDIA H100 features fourth-generation Tensor Cores and a Transformer Engine with FP8 precision that provides up to 4X faster training over the prior generation for GPT-3 (175B) models.

  • Extraordinary performance

    Built with 80 billion transistors using a cutting-edge TSMC 4N process custom tailored for NVIDIA’s accelerated compute needs, H100 features major advances to accelerate AI, HPC, memory bandwidth, interconnect, and communication at data center scale.

  • PURPOSE-BUILT FOR AI

    Fast, flexible infrastructure for optimal performance

    Our infrastructure is purpose-built to solve the toughest AI/ML and HPC challenges. You gain performance and cost savings via our bare-metal Kubernetes approach, our high-capacity data center network designs, our high-performance storage offerings, and so much more (a pod sketch follows this list).

  • NETWORK PERFORMANCE

    Avoid rocky training performance with CoreWeave’s non-blocking GPUDirect fabrics built exclusively using NVIDIA InfiniBand technology.

    CoreWeave’s NVIDIA HGX H100 and H200 supercomputer clusters are built using NVIDIA Quantum-2 InfiniBand NDR networking in a rail-optimized design, supporting NVIDIA SHARP in-network collectives.

    Training AI models is incredibly expensive, and our designs are painstakingly reviewed to make sure your training experiments leverage the best technologies to maximize your compute per dollar (see the distributed-training sketch after this list).

  • DEPLOYMENT SUPPORT

    Scratching your head with on-prem deployments? Don’t know how to optimize your training setup? Utterly confused by the options at other cloud providers?

    CoreWeave delivers everything you need out of the box to run optimized distributed training at scale, with industry-leading tools like Determined.AI and SLURM.

    Need help figuring something out? Leverage CoreWeave’s team of ML engineers at no extra cost.

  • STORAGE SOLUTIONS

    Flexible storage solutions with zero ingress or egress fees

    Storage on CoreWeave Cloud is managed separately from compute, with all-NVMe, HDD, and Object Storage options to meet your workload demands.

    Get exceptionally high IOPS per volume on our all-NVMe Shared File System tier, or leverage our NVMe-accelerated Object Storage offering to feed all your compute instances from the same storage location (an object-storage sketch also follows this list).
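
As a concrete illustration of the bare-metal Kubernetes approach mentioned above, here is a minimal sketch that requests a full 8-GPU HGX node through the standard Kubernetes Python client. The pod name, namespace, and container image are placeholders, not CoreWeave-specific values:

```python
# Hedged sketch: scheduling an 8-GPU pod via the Kubernetes Python client.
# Pod name, namespace, and image are illustrative placeholders.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running in-cluster

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="h100-smoke-test"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="trainer",
                image="nvcr.io/nvidia/pytorch:24.01-py3",  # illustrative NGC image
                command=["nvidia-smi"],
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "8"},  # one full HGX H100/H200 node
                ),
            )
        ],
    ),
)
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```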
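For the InfiniBand fabric, the key point is that NCCL collectives ride GPUDirect RDMA transparently. A minimal multi-node PyTorch sketch looks like this, assuming it is launched with torchrun or a SLURM wrapper that sets the RANK/LOCAL_RANK environment variables:

```python
# Hedged sketch: NCCL all-reduce over InfiniBand with torch.distributed.
# All-reduce is the collective that SHARP can offload into the switch fabric.
import os
import torch
import torch.distributed as dist

def main():
    dist.init_process_group(backend="nccl")   # NCCL picks up InfiniBand/GPUDirect
    torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))

    t = torch.ones(1, device="cuda")
    dist.all_reduce(t)                        # sums across all ranks
    print(f"rank {dist.get_rank()} sees {int(t.item())} (== world size)")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Launch with, for example, `torchrun --nproc_per_node=8 --nnodes=16 train.py`. Enabling SHARP offload is typically a matter of NCCL configuration (e.g., the NCCL_COLLNET_ENABLE environment variable) rather than code changes.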
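And for Object Storage, assuming the usual S3-compatible interface, feeding every instance from one storage location is a standard client call. The endpoint URL, credentials, bucket, and key below are placeholders:

```python
# Hedged sketch: pulling a dataset shard from S3-compatible Object Storage.
# Endpoint URL, credentials, bucket, and key are illustrative placeholders.
import os
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://object-storage.example.coreweave.com",  # placeholder
    aws_access_key_id=os.environ["S3_ACCESS_KEY"],
    aws_secret_access_key=os.environ["S3_SECRET_KEY"],
)
s3.download_file("training-data", "shards/shard-00000.tar",
                 "/mnt/scratch/shard-00000.tar")
```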

Ready to request capacity for HGX H100s or HGX H200s?
Available at supercomputer scale, starting at $2.23/hr*.

Reserve Now
* For a 2-year contract