Press "Enter" to skip to content

CIQ Joins SUSE and Ubuntu by Including CUDA in Its AI Linux Distro

The CUDA club just got a new member. Here’s how CIQ wants its Rocky Linux for AI spin to shake up how enterprises roll out GPU-hungry apps.

Today CIQ announced that it’s teamed with Nvidia to bring additional oomph to its commercial Rocky Linux spin — Rocky Linux from CIQ for AI — with the inclusion of the Nvidia CUDA Toolkit. With this move, CIQ joins the ranks of SUSE and Ubuntu, which already offer streamlined integration with the CUDA Toolkit for enterprise GPU computing.

This new partnership comes nearly a year after HPC-focused CIQ launched its fully supported Rocky Linux from CIQ, which targets enterprises willing to pay for the peace of mind that a high degree of support brings to the table. Since then, the company has expanded the line with a security-hardened edition, Rocky Linux from CIQ - Hardened, as well as Rocky Linux from CIQ for AI.

CUDA is Nvidia’s accelerated computing platform and programming model, which is behind many major AI and scientific computing breakthroughs. It includes the chipmaker’s optimized libraries and development tools that are used to develop and deploy accelerated workloads using Nvidia GPUs. This will allow CIQ to deliver professionally supported solutions built on Nvidia’s GPU capabilities, providing its enterprise users with immediate GPU acceleration while maintaining all-important licensing compliance.
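
For readers who haven't touched CUDA directly, the sketch below (not drawn from CIQ's or Nvidia's announcement, just a canonical illustration) shows the shape of the programming model: a __global__ kernel that runs across many GPU threads, plus host code that allocates device memory, launches the kernel, and copies results back. It compiles with nvcc, the compiler shipped in the CUDA Toolkit.

    // vector_add.cu -- compile with: nvcc vector_add.cu -o vector_add
    #include <cstdio>
    #include <cuda_runtime.h>

    // Each GPU thread adds one pair of elements.
    __global__ void vectorAdd(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) {
            c[i] = a[i] + b[i];
        }
    }

    int main() {
        const int n = 1 << 20;                 // one million elements
        const size_t bytes = n * sizeof(float);

        // Host (CPU) buffers.
        float *h_a = (float *)malloc(bytes);
        float *h_b = (float *)malloc(bytes);
        float *h_c = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

        // Device (GPU) buffers.
        float *d_a, *d_b, *d_c;
        cudaMalloc(&d_a, bytes);
        cudaMalloc(&d_b, bytes);
        cudaMalloc(&d_c, bytes);

        // Copy inputs to the GPU, launch the kernel, copy the result back.
        cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

        int threadsPerBlock = 256;
        int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
        vectorAdd<<<blocks, threadsPerBlock>>>(d_a, d_b, d_c, n);

        cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
        printf("c[0] = %f (expected 3.0)\n", h_c[0]);

        cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
        free(h_a); free(h_b); free(h_c);
        return 0;
    }

Bundling the Toolkit with the distribution is what puts nvcc and the runtime libraries this workflow depends on within reach without a separate manual install, which is the time saving CIQ is pointing to.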

According to Gregory Kurtzer, CIQ’s founder and CEO, users should find that the addition dramatically shortens deployment times:

“This partnership is a game-changer for the global HPC and AI ecosystem,” he said in a statement. “By integrating native Nvidia CUDA support into Rocky Linux, we eliminate deployment risks and dramatically cut time-to-production from weeks to minutes. As Rocky Linux sees rapid enterprise adoption worldwide, this collaboration enables faster innovation and reliable GPU performance at scale, from the lab to the data center to the cloud.”


CIQ’s CUDA Integration

Indeed, removing the delays associated with deploying GPU-intensive software seems to be the core of what this new partnership brings to the table for CIQ users. Shipping CUDA as part of the distribution reduces the risk of misconfiguration in GPU deployments and shortens time-to-value for customers rolling out AI workloads.

“The entire stack is tested, validated, and optimized, shrinking time-to-deployment from weeks to minutes,” CIQ explained in a statement. “This significantly reduces the operational burden for teams deploying GPU-accelerated workloads at scale, whether training large language models, running inference pipelines, or executing advanced scientific simulations.

“With a fully validated environment available out-of-the-box or at the click of the button in your favorite cloud, CIQ eliminates the need for manual installs or custom integration, unlocking the full performance of Nvidia hardware from development to production.”

Nvidia’s Way of the (AI) Future

In an Nvidia Developer blog published today, the chipmaker said that by distributing CUDA as part of its platform, CIQ joins something of a rarefied group that includes Canonical, SUSE, and the developer environment manager Flox — with more distributors on the way.

“They can now embed CUDA into their package feeds, simplifying installation and dependency resolution,” the blog’s authors said. “It’s particularly beneficial for incorporating GPU support into complex applications like PyTorch and libraries like OpenCV.”

The blog goes on to point out the advantages this integration brings to developers and (unnamed in the blog but perhaps more importantly) DevOps teams:

“Each distribution platform that redistributes CUDA will provide a few key things to help developers and enterprises stay in sync with Nvidia-distributed CUDA software.

  • Consistent CUDA Toolkit naming: Third-party packages will match Nvidia naming conventions to avoid confusion in documentation and tutorials.
  • Timely CUDA updates: Third-party packages will be updated in a timely manner after Nvidia official releases to ensure compatibility and reduce QA overhead.
  • Continued free access: CUDA itself will remain freely available—even when packaged in paid software. Distributors may charge for access to their packages or software but will not monetize CUDA specifically.
  • Comprehensive support options: You can access support via distributors and can also find help via Nvidia forums or Nvidia’s developer site, just like always.”

The third point on the list shouldn’t be overlooked: it suggests that in the age of AI, integrated CUDA is likely to become something of a baseline requirement for Linux distros aimed at servers.

That being said, for the time being CIQ joins a select gang of four distributors vying to be part of the current AI revolution.

“As part of this collaboration, CIQ will provide prebuilt Rocky Linux with CUDA images through its registries and major cloud marketplaces,” CIQ said. “These environments ensure consistency, portability, and compliance, streamlining deployment across the entire GPU development lifecycle for organizations in regulated and performance-critical sectors.”
