
Red Hat Goes Shopping for AI Safety, Comes Home With Chatterbox Labs

Red Hat has quietly bought Chatterbox Labs, a boutique firm that stress‑tests AI models for trouble. Is this a signal that AI safety is moving from marketing slop to production requirement?

Red Hat’s been spending money again. On Tuesday the North Carolina-based, IBM-owned open source solutions company announced it has acquired Chatterbox Labs, an AI company focused on safety and generative AI guardrails whose technology is model-agnostic, meaning it will work with any model it’s given. Red Hat says the acquisition will add security capabilities to its AI offerings, supporting the company’s effort to build an AI-focused, open source, cloud native platform.

Details of the acquisition haven’t been released, and likely never will be. Founded in 2011 in Reading, UK, Chatterbox Labs is a small company with offices in London and New York that sources indicate has fewer than 25 employees. Danny Coleman, the company’s CEO since 2013, has changed his LinkedIn profile to indicate that starting in December he’s “AI @ Red Hat,” as has CTO Stuart Battersby.

Investing in AI security at this point is a prudent move for Red Hat. The company has spent much of the last couple of years beefing up its AI capabilities, and has released separate AI-branded versions of virtually all of its legacy platforms. Sysadmins will likely sleep better at night knowing that generative AI models with administrative privileges have effective guardrails in place.

“By integrating Chatterbox Labs into the Red Hat AI portfolio, we are strengthening our promise to customers to provide a comprehensive, open source platform that not only enables them to run any model, anywhere, but to do so with the confidence that safety is built in from the start,” Steven Huels, Red Hat’s VP AI engineering and product strategy, said in a statement. “This acquisition will help enable truly responsible, production-grade AI at scale.”

Red Hat’s AI Motivations

Red Hat’s rush to AI is being driven mainly by two factors, the biggest being customer demand. Since the advent of ChatGPT several years back, enterprises as a whole can’t seem to add AI capabilities quickly enough, both to handle tech and non-tech workloads and to run the customer service agents on their websites. According to prognosticators, the move to AI will continue to accelerate in 2026, as companies that have so far avoided buying into AI finally get on board.

The other factor is competition: in this case SUSE.

Since Dirk-Peter van Leeuwen came on board as CEO in 2023, SUSE has increasingly been flexing its competitive muscles. Lately, that includes getting serious about AI. Its flagship AI offering, SUSE AI, is an AI-focused cloud-native stack built on SUSE Linux Enterprise, Rancher Prime, and NeuVector; it focuses on secure, private GenAI with strong observability and control over LLM usage, tokens, and GPU performance.

SUSE is also positioning SUSE Linux Enterprise Server 16 as “AI-ready” Linux, with agentic AI and the Model Context Protocol embedded directly into the OS for AI-assisted administration using either Cockpit — SLES’s graphical control panel — or the command line. For containers, the focus is on Rancher Prime, its Liz AI assistant, and the SUSE AI stack.

What Chatterbox Labs Brings to the Table

How Red Hat will end up implementing Chatterbox’s technology is anybody’s guess. What we do know is how it looks now.

The flagship product being acquired is a platform called AIMI, short for AI Model Insights, which is designed to sit on top of existing AI or to be embedded into an MLOps pipeline or workflow. Simply put, it determines how likely an AI system is to cause problems and how serious those problems could be.


There are also more specialized AIMI products:

  • AIMI for Generative AI: This one provides independent risk measurements for large language models and other generative AI systems. It includes active probing and monitoring to detect issues such as prompt injection, jailbreaking, data leakage, and other vulnerabilities during inference.
  • AIMI for Predictive AI: This offering validates more traditional ML architectures across key pillars such as robustness, fairness, explainability, and performance. It lets you stress-test AI in unusual situations, check how it behaves when inputs are slightly altered, spot where it may be biased, and see how likely it is to leak or reveal private data.

Chatterbox Labs offers other products as well, notably a guardrails product that can analyze prompts and responses for insecure, toxic, or biased content, allowing organizations to block or remediate problematic behavior before and during production use. For leadership and risk teams, it offers executive dashboards that present a portfolio‑level view of AI model risk across an organization.

“As AI systems proliferate across every aspect of business and society, we cannot allow safety to become a proprietary black box,” Battersby said in a statement. “It is critical that AI guardrails are not merely deployed; they must be rigorously tested and supported by demonstrable metrics. Chatterbox Labs has pioneered this discipline from the early days of predictive AI through to the agentic systems of tomorrow.”

For now, Chatterbox Labs’ products remain under proprietary licenses, although Red Hat says it intends to make the software open source “over time.”
