GPU Compute

Among the first to market with leading AI-optimized NVIDIA GPUs

Highly Performant GPUs for AI

Access a vast portfolio of highly performant NVIDIA GPUs—all on bare metal.

CoreWeave GPU nodes, accelerated by NVIDIA and coupled with cutting-edge networking and storage, provide foundational building blocks for some of the world's largest supercomputers.

NVIDIA GB200 NVL72

Use an industry-leading platform for building and deploying the latest GenAI applications.

Enhanced performance

See up to 4x higher training performance than the H100 architecture, and up to 30x faster real-time trillion-parameter LLM inference

Access to compute

Tap 72 NVIDIA Blackwell GPUs and 36 NVIDIA Grace CPUs connected in a single rack-scale system for increased access to compute

Massive scale

Access and connect tens of thousands of NVIDIA Blackwell systems in a single site to unlock the power of 100K+ GPU megaclusters

Future-forward sustainability

Leverage liquid-cooled GPUs for cutting-edge, sustainable design—ready to support an impressive 130 kW of rack power

NVIDIA H200 Tensor Core GPUs

We were among the first to market with NVIDIA H200 Tensor Core GPUs.

Be first to market with your AI applications.

Excellent performance

Get up to 1.9x higher performance than H100 Tensor Core GPUs

Expanded memory

Use 1,128 GB of total GPU memory per server, enough capacity for the large models and datasets GenAI workloads demand

Ultra-fast interconnect

Experience 3,200 Gbps of NVIDIA Quantum-2 InfiniBand networking for low-latency connectivity between GPUs

Optimized GPU compute

Leverage NVIDIA BlueField-3 DPUs to offload networking and storage tasks, increasing GPU utilization for model building

NVIDIA H100 Tensor Core GPUs

The platform that let us break MLPerf records—and helped us build one of the world's fastest supercomputers.

Better training performance

Get up to 4x better training performance when compared to NVIDIA A100 Tensor Core GPUs

Record-breaking results

See how our platform trained an entire GPT-3 LLM workload in under 11 minutes—with 3,500 H100s

Get more out of GPUs

Use NVIDIA BlueField-3 DPUs to offload processing tasks and free up GPUs for training, experimentation, and more

NVIDIA L40S GPUs

Right-size your workloads with just the right amount of compute you need.

Great performance at a lower price

Get up to 2x better performance than A40-based instances, with 733 TFLOPS and NVIDIA BlueField-3 DPUs for offloading

Better access to compute

Leverage 8 L40S GPUs per system and Intel Xeon Platinum (Emerald Rapids) processors for GenAI workloads

Flexible configurations

Access NVIDIA L40 configurations for enhanced flexibility, right-sized to your unique compute needs

Additional GPU SKUs

Provision the compute you need, when you need it—and pick the right GPUs for your workloads.

Our vast range of GPUs can flexibly meet your needs, no matter the complexity.

All equipped with DPUs. All on bare metal.

NVIDIA HGX

  • GB200 NVL72
  • H200
  • H100 80GB
  • A100 80GB
  • A100 40GB

Coming soon

  • B200

PCI Express Compute

  • H100 PCIe
  • A100 80GB
  • A100 40GB
  • A40
  • L40
  • L40S

RTX GPUs

  • A6000
  • A5000
  • A4000
  • Quadro RTX 5000
  • Quadro RTX 4000

Get to market fast

Our platforms help ensure your teams have the cutting-edge compute power they need to drive the next AI breakthrough.

Run jobs on highly performant clusters and get more done in less time.

Consistent performance

Scale means nothing without consistent performance. Clusters on CoreWeave operate with speed and efficiency.

A resilient system

Our superclusters leverage CoreWeave Mission Control for automated health checks and cutting-edge node lifecycle management, giving your teams resilient infrastructure that recovers faster.

Resilient from the start

CoreWeave’s deep technical partnerships help ensure clusters start healthy, and our platform keeps them that way: we monitor and maintain node and fleet performance throughout the entire lifecycle for improved resilience and recovery.

Start building today

Get enhanced GPU compute capability for GenAI.