Storage

Get performant, secure, and reliable storage for AI.

<Tailor-made for AI workloads>

Don’t let storage performance slow your cluster down.

Get higher performance for containerized workloads and virtual servers.

Feed data into GPUs ASAP

GenAI models need a lot of data—and they need it fast. Handle massive datasets with reliability and ease, helping to enable better performance and faster training times.

Pick up where you left off

Reduce major delays after hardware failures. Get strategic checkpointing of intermediate results, so your teams can stay on track after interruptions.
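Checkpointing like this is typically wired into the training loop itself. A minimal, framework-free sketch of the pattern in Python, using stdlib `pickle`; the function and directory names are illustrative, not CoreWeave tooling:

```python
import pickle
from pathlib import Path

def train(steps, save_every, ckpt_dir):
    """Toy training loop that checkpoints intermediate state and
    resumes from the latest checkpoint after an interruption."""
    ckpt_dir = Path(ckpt_dir)
    ckpt_dir.mkdir(parents=True, exist_ok=True)

    # Resume from the newest checkpoint if one exists.
    existing = sorted(ckpt_dir.glob("step_*.pkl"))
    if existing:
        state = pickle.loads(existing[-1].read_bytes())
    else:
        state = {"step": 0, "loss": 1.0}

    while state["step"] < steps:
        state["step"] += 1
        state["loss"] *= 0.99  # stand-in for a real optimizer update
        if state["step"] % save_every == 0:
            # Zero-padded names keep checkpoints lexicographically sorted.
            path = ckpt_dir / f"step_{state['step']:06d}.pkl"
            path.write_bytes(pickle.dumps(state))
    return state
```

If the process is killed mid-run, calling `train` again with the same checkpoint directory picks up from the most recent saved step instead of step zero.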

Get top-tier reliability

Keep production on track and avoid massive data losses. Automated snapshots are taken every 6 hours with 3-day retention, meaning your teams can rest assured their work is saved. Plus, we manage storage separately from compute, making data easier to move and track.
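A quick arithmetic check of what that policy means in practice (a rolling window of a dozen restore points, not CoreWeave tooling):

```python
# Snapshot policy stated above: one snapshot every 6 hours, kept for 3 days.
SNAPSHOT_INTERVAL_HOURS = 6
RETENTION_DAYS = 3

snapshots_per_day = 24 // SNAPSHOT_INTERVAL_HOURS        # 4 snapshots per day
retained_snapshots = snapshots_per_day * RETENTION_DAYS  # 12 rolling snapshots

print(retained_snapshots)  # → 12
```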

Stay secure

CoreWeave Storage follows industry best practices for security. Encryption at rest and in transit, identity and access management, authentication, and role-based policies all work together to keep your data protected.

Object storage for AI

CoreWeave Object Storage provides the performance, reliability, and scale AI enterprises need for GenAI workloads.

Unlock direct access to data for accelerated performance.

Local Object Transport Accelerator (LOTA)

Get a more direct path between GPUs and data. We built LOTA, a simple and secure proxy that lives on GPU nodes, to listen and respond to Object Storage data requests.

LOTA accelerates responses by accessing data repositories directly, bypassing the Object Storage gateway and index.

LOTA Object Caching

Cache data and frequently used files on GPU nodes for quick access. LOTA caches data on the local disks of GPU nodes, spreading load across the cluster instead of hot-spotting specific nodes, for faster and easier access.

Requests still follow secure LOTA paths, ensuring proper authorization and authentication protocols. Caches are formed and deleted on a session-by-session basis.

Higher performance results

See a significant per-GPU performance boost, with throughput of up to 2 GB/s per GPU.

Enhanced uptime

Get the benefits of 99.9% uptime and nine 9’s of durability—so your teams can get models to market ultra-fast.

Greater capacity

Scale to trillions of objects and exabytes of data.

Compliance and support

CoreWeave Object Storage utilizes a standard S3 interface and supports open-source S3 SDKs.

Distributed file storage

CoreWeave distributed file storage supports the centralized asset storage and parallel computation setups GenAI requires.

A network made for AI

Our highly performant, ultra-low latency networking architecture is built from the ground up to get data to GPUs with the speed and efficiency GenAI models need to develop, train, and deploy.

Strategic partnerships

Our partnership with VAST enables us to manage and secure hundreds of petabytes of data at a time. Plus, our file storage is POSIX-compliant and suitable for shared access across multiple instances.

Ultra-fast access at scale

Leverage a petabyte-scale shared file system with up to 1 GB/s per GPU.

Dedicated Storage Cluster

Need your own cluster? Our flexible tech stack provides access to dedicated storage clusters.

With partners like VAST, CoreWeave helps your teams unlock better performance and enhanced security and isolation.

Local storage

Our Kubernetes solution uniquely allows customers to access local storage. CoreWeave supports ephemeral storage up to 60TB, depending on node type.

All physical nodes have SSD or NVMe ephemeral (local) storage. No Persistent Volume Claims are needed to allocate ephemeral storage. Simply write anywhere in the container file system.

That’s all included at no additional cost.
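In practice that means no volume-claim boilerplate: code inside the container just writes to the local file system. A trivial sketch; the path below is an arbitrary example, not a CoreWeave-mandated location:

```python
import os
import tempfile

# Inside the container, any writable path lands on the node's local
# SSD/NVMe ephemeral storage -- no volume claim required.
# tempfile.gettempdir() (usually /tmp) stands in for such a path here.
scratch = os.path.join(tempfile.gettempdir(), "scratch.bin")

with open(scratch, "wb") as f:
    f.write(os.urandom(1024))  # e.g. stage a chunk of training data locally

size = os.path.getsize(scratch)
```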

Bring your own storage

Have a storage provider you like working with for dedicated clusters? No problem.

CoreWeave works with an ecosystem of storage partners to support generative AI’s complex needs. Bring your own complete stack, and we’ll make sure it runs with our infrastructure.

Specialized storage for AI

Every layer of CoreWeave infrastructure works in tandem to power the future of GenAI.