Dive Brief:
- Red Hat has acquired London-based Chatterbox Labs, an AI company that provides model testing and generative AI guardrails, Red Hat announced Tuesday. Terms of the deal were not immediately disclosed.
- The acquisition will add automated AI risk assessment, safety testing and governance capabilities to Red Hat’s AI portfolio, the company said.
- “Enterprises are moving AI from the lab to production with great speed, which elevates the urgency for trusted, secure and transparent AI deployments,” Steven Huels, vice president of AI engineering and product strategy at Red Hat, wrote in a blog post announcing the deal.
Dive Insight:
Red Hat's latest acquisition signals that safety, trust and governance are becoming table stakes for AI providers vying for enterprise spending. The deal adds transparency and governance capabilities to Red Hat's portfolio.
Chatterbox Labs’ AI risk metrics are designed to help organizations validate their data and models against international regulations, offering model-agnostic testing for robustness, fairness and explainability.
As part of the deal, Red Hat will gain “security for AI” capabilities, including generative AI guardrails, model transparency, predictive AI validation and executive dashboards.
The acquisition also builds on the October launch of Red Hat AI 3, which introduced capabilities for agentic AI and the Model Context Protocol, an open standard developed by Anthropic that has seen rapid adoption.
For Red Hat, such offerings fill a critical gap as companies push AI from experimentation to production and begin to rely on systems that interact directly with customers and core business processes, raising concerns around bias, model vulnerabilities and regulatory exposure.
The move also comes as AI governance is struggling to keep pace with adoption.
An EY survey published in October found that more than three in five organizations have suffered AI risk-related losses of at least $1 million, but that governance offers a measurable buffer: organizations with defined responsible AI principles experienced 30% fewer risks than those without formal frameworks.
However, awareness is only slowly translating into tangible action: a Collibra survey found that fewer than half of technology decision-makers have formalized governance policies and frameworks.