
CIQ Ships RLC Pro AI, a GPU‑First Take on Rocky Linux

The company behind Rocky Linux is rolling out an AI‑optimized edition that promises better GPU utilization, a validated CUDA stack, and less hand‑rolled tuning.

You might remember that we told you last May about a tech preview of a project CIQ was calling Rocky Linux from CIQ AI. It now looks like that project has finally reached general availability.

Today CIQ announced the launch of Rocky Linux from CIQ–Pro AI, the fourth flavor in its RLC line of commercial Rocky Linux‑based server operating systems. As the name implies, this edition is focused on AI and is designed to take full advantage of GPU‑rich hardware, both to help run and manage the operating system itself and to power demanding AI workloads.

This is part of a big shift that’s been happening at CIQ, which got its start about six years ago as a high-performance computing company. While HPC remains a major focus, for the last couple of years it’s been expanding to become more of a full Linux stack company, a space traditionally associated with Red Hat, SUSE, and to a lesser degree, Canonical.

That’s not surprising. The company has deep roots in the enterprise Linux space. For starters, it’s the company behind the Red Hat Enterprise Linux clone Rocky Linux, and back in the century’s first decade, Gregory Kurtzer, its CEO and founder, was one of the people behind the original RHEL clone, CentOS.

The move to address AI issues is also not surprising, since all enterprise Linux vendors have been pushing AI-focused products out the door, both for helping the operating system take advantage of underlying GPUs, and to assist with running workloads that depend on those GPUs. Outside these commercial players, however, the GPU explosion is largely being ignored at the operating system level.

“Organizations across every industry are moving GPU-accelerated workloads into production, and the operating system has become the constraint,” is how CIQ put it in a statement. “The OS underneath AI workloads determines how much performance the hardware actually delivers. For most enterprises, that performance has been left on the table.”


Remember that part about the operating system being the constraint, because it’ll be on the test.

Inside Rocky Linux from CIQ–Pro AI

The operating system is what RLC Pro AI addresses, even though CIQ’s launch materials might give the impression that running RLC Pro AI will automagically turn your existing applications into AI‑powered super apps.

What’s really happening is more grounded: you’re getting an OS and kernel stack that’s been tuned for accelerators and AI frameworks, which can deliver extra performance and capability for workloads that know how to take advantage of it. That may not seem like much, considering the scope of platforms on top of platforms that makes up most enterprise architectures, but in a way it’s everything — since the OS is ground zero, or where the stack meets the metal.

“The OS is where GPU ROI is won or lost, and the industry has ignored it for too long,” is how Gregory Kurtzer put it in his statement. “Organizations are committing hundreds of millions of dollars to GPU infrastructure and running it on operating systems that were never designed for it. RLC Pro AI simplifies and de-risks AI infrastructure investments while driving cutting-edge performance and simplicity.”


What RLC Pro AI Delivers

Here’s how CIQ summarizes what the new version of RLC Pro AI brings to the table:

  • More output from existing hardware. The CLK kernel, PyTorch flags, and CUDA configurations ship pre-configured at first boot, with no manual tuning required. Organizations running inference at scale see measurably higher throughput from the GPUs they already own, starting on day one.
  • Infrastructure economics that improve with scale, not against it. More throughput from the same hardware means fewer resources are needed to hit the same output targets. At the node level, at the cluster level, and at the fleet level, the economics of RLC Pro AI get better as deployments grow.
  • A complete, validated AI stack with CLK at the foundation. RLC Pro AI is built on CLK 6.12, the latest long-term release from upstream kernel.org. CLK delivers GPU hardware support ahead of traditional enterprise distributions.
  • Day-one GPU hardware support. RLC Pro AI delivers support for current GPU accelerators from NVIDIA immediately, so organizations can deploy on the latest hardware without waiting for the OS to catch up.
  • Consistent performance across every environment. RLC Pro AI delivers the same validated stack and the same performance profile on AWS, GCP, Azure, bare metal and sovereign on-premises infrastructure, across any GPU architecture.

RLC Pro AI became generally available today, and like other RLC versions, it’s available under a subscription model that includes support. The company has published a blog post with additional information, or you can schedule a demo. In addition, Brian Dawson, CIQ’s director of product management, will be conducting a webinar on the release on Thursday, April 2, 2026, at 2 p.m. Eastern time.
