Red Hat Accelerates AI Trust And Security With Chatterbox Labs Acquisition
(MENAFN - Mid-East Info) Acquisition to add critical safety testing and guardrails to Red Hat AI, enabling responsible, production-grade AI at scale
December 2025 – Red Hat, the world's leading provider of open source solutions, announced it has acquired Chatterbox Labs, a pioneer in model-agnostic AI safety and generative AI (gen AI) guardrails. The acquisition adds critical "security for AI" capabilities to the Red Hat AI portfolio, strengthening the company's efforts to deliver a comprehensive, open source enterprise AI platform built for the hybrid cloud.

The announcement builds on a year of rapid innovation for Red Hat AI, following the introduction of Red Hat AI Inference Server and the launch of Red Hat AI 3. Customers around the world and across industries are adopting Red Hat AI to drive innovation through generative, predictive and agentic AI applications. As enterprises move from experimentation to production, they face a complex challenge: deploying models that are not only powerful but also demonstrably trustworthy and safe. Safety and guardrail capabilities are table stakes for modern machine learning operations (MLOps).

This focus on security and trust reflects Red Hat and IBM's commitment to helping clients adopt a security-first mindset as they scale AI responsibly across hybrid cloud environments. The integration of Chatterbox Labs' technology creates a unified platform where safety is built in, strengthening Red Hat's ability to enable production AI workloads with any model, on any accelerator, anywhere.

Addressing the unintended consequences of AI:

Founded in 2011, Chatterbox Labs brings critical technology and expertise in AI safety and transparency. Its experience in quantitative AI risk has been lauded by global independent think tanks and policymakers, and the acquisition brings key machine learning technology to Red Hat. Chatterbox Labs delivers automated, customized AI security and safety testing capabilities, providing the factual risk metrics that enterprise leaders need to approve the deployment of AI to production.
The technology offers a robust, model-agnostic approach to validating data and models through:
- AIMI for gen AI: delivering independent quantitative risk metrics for large language models (LLMs).
- AIMI for predictive AI: validating any AI architecture across key pillars, including robustness, fairness and explainability.
- Guardrails: pinpointing and remedying insecure, toxic or biased prompts before models go into production.
"Enterprises are moving AI from the lab to production with great speed, which elevates the urgency for trusted, secure and transparent AI deployments. Chatterbox Labs' innovative, model-agnostic safety testing and guardrail technology is the critical 'security for AI' layer that the industry needs. By integrating Chatterbox Labs into the Red Hat AI portfolio, we are strengthening our promise to customers to provide a comprehensive, open source platform that not only enables them to run any model, anywhere, but to do so with the confidence that safety is built in from the start. This acquisition will help enable truly responsible, production-grade AI at scale."

"As AI systems proliferate across every aspect of business and society, we cannot allow safety to become a proprietary black box. It is critical that AI guardrails are not merely deployed; they must be rigorously tested and supported by demonstrable metrics. Chatterbox Labs has pioneered this discipline from the early days of predictive AI through to the agentic systems of tomorrow. By joining Red Hat, we can bring these validated, independent safety metrics to the open source community. This transparency allows businesses to verify safety without lock-in, enabling a future where we can all benefit from AI that is secure, scalable and open." – Stuart Battersby, Ph.D., co-founder and Chief Technology Officer, Chatterbox Labs

Additional Resources:
- Red Hat to acquire Chatterbox Labs: Frequently Asked Questions
- Learn more about Red Hat
- Get more news in the Red Hat newsroom
- Read the Red Hat blog
- Follow Red Hat on Twitter
- Join Red Hat on Facebook
- Watch Red Hat videos on YouTube
- Follow Red Hat on LinkedIn
Legal Disclaimer:
MENAFN provides the information "as is" without warranty of any kind. We do not accept any responsibility or liability for the accuracy, content, images, videos, licenses, completeness, legality, or reliability of the information contained in this article. If you have any complaints or copyright issues related to this article, kindly contact the provider above.