NVIDIA (NVDA) has agreed to acquire Run:ai

To help customers make more efficient use of their AI computing resources, NVIDIA today announced that it has entered into a definitive agreement to acquire Run:ai, a Kubernetes-based workload management and orchestration software provider.

Customer AI deployments are becoming increasingly complex, with workloads distributed across cloud, edge and on-premises data center infrastructure.

Managing and orchestrating generative AI, recommender systems, search engines and other workloads requires sophisticated scheduling to optimize performance at the system level and on the underlying infrastructure.

Run:ai enables enterprise customers to manage and optimize their computing infrastructure, whether on-premises, in the cloud or in hybrid environments.

The company has built an open platform on Kubernetes, the orchestration layer for modern AI and cloud infrastructure. It supports all popular Kubernetes variants and integrates with third-party AI tools and frameworks.

Run:ai customers include some of the world’s largest enterprises across multiple industries, using the Run:ai platform to manage GPU clusters at data center scale.

“Run:ai has been working closely with NVIDIA since 2020 and we share a passion for helping our customers get the most out of their infrastructure,” said Omri Geller, co-founder and CEO of Run:ai. “We are excited to join NVIDIA and look forward to continuing our journey together.”

The Run:ai platform offers AI developers and their teams:

A centralized interface for managing shared computing infrastructure, enabling easier and faster access to complex AI workloads.
Functionality to add users, organize them into teams, provide access to cluster resources, control quotas, priorities and pools, and monitor and report on resource usage.
The ability to pool GPUs and share computing power (from fractions of GPUs to multiple GPUs or multiple nodes of GPUs running on different clusters) for separate tasks.
Efficient use of GPU cluster resources, helping customers get more out of their computing investments.
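To make the pooling and quota ideas above concrete, here is a minimal toy sketch (not Run:ai's actual implementation, and all names are hypothetical): a shared pool of GPUs where teams can claim fractions of a GPU, subject to per-team quotas.

```python
from dataclasses import dataclass, field

@dataclass
class GPUPool:
    """Toy model of a shared GPU pool with fractional allocation and quotas."""
    num_gpus: int
    quotas: dict                    # team -> max total GPU fraction allowed
    free: list = field(init=False)  # remaining fraction on each GPU
    used: dict = field(init=False)  # team -> fraction currently held

    def __post_init__(self):
        self.free = [1.0] * self.num_gpus
        self.used = {team: 0.0 for team in self.quotas}

    def allocate(self, team: str, fraction: float):
        """Try to place `fraction` of a GPU for `team`; return a GPU index or None."""
        if self.used[team] + fraction > self.quotas[team]:
            return None             # request would exceed the team's quota
        for i, avail in enumerate(self.free):
            if avail >= fraction:   # first-fit placement onto one GPU
                self.free[i] -= fraction
                self.used[team] += fraction
                return i
        return None                 # no single GPU has enough free capacity

pool = GPUPool(num_gpus=2, quotas={"research": 1.5, "prod": 1.0})
a = pool.allocate("research", 0.5)  # half a GPU for one team
b = pool.allocate("prod", 1.0)      # a whole GPU for another
c = pool.allocate("research", 1.5)  # rejected: would exceed the 1.5 quota
```

A real scheduler such as Run:ai's also spans multiple nodes and clusters, preempts and queues work, and enforces priorities; this sketch only shows the core bookkeeping that quotas and fractional sharing imply.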

NVIDIA will continue to offer Run:ai's products under the same business model for the foreseeable future, and will continue to invest in the Run:ai product roadmap as part of NVIDIA DGX Cloud, an AI platform co-engineered with leading clouds for enterprise developers, providing an integrated, full-stack service optimized for generative AI.

NVIDIA DGX and DGX Cloud customers will gain access to Run:ai's capabilities for their AI workloads, especially for large language model deployments. Run:ai's solutions are already integrated with NVIDIA DGX, NVIDIA DGX SuperPOD, NVIDIA Base Command, NGC containers and NVIDIA AI Enterprise software, among others.

NVIDIA’s accelerated computing platform and Run:ai’s platform will continue to support a broad ecosystem of third-party solutions, giving customers choice and flexibility.

Together with Run:ai, NVIDIA will enable customers to have a single fabric that can access GPU solutions anywhere. Customers can expect to benefit from better GPU utilization, improved GPU infrastructure management, and greater flexibility thanks to the open architecture.