Tuesday, 02 January 2024 12:17 GMT

OpenAI Labels AI’s False Responses as “Hallucinations”

(MENAFN) OpenAI, the creator of ChatGPT, has publicly addressed the ongoing challenge of AI models producing believable but incorrect information, a phenomenon the company calls “hallucinations.”

In a statement released Friday, OpenAI said that current AI models are trained to prioritize producing an answer, however unlikely, over admitting they do not know. The company explained that this behavior stems from the way “standard training and evaluation procedures” are designed.

Despite advances, including its newest flagship model, GPT-5, OpenAI acknowledged that language models still “confidently generate an answer that isn’t true,” underscoring that the issue remains pervasive.

A recent study cited by OpenAI attributes the problem to the way AI performance is typically assessed: models that guess, even without evidence, score higher than those that acknowledge uncertainty. Under standard accuracy-based evaluations, an “I don’t know” response earns zero points, while a guess earns full credit whenever it happens to be right, so guessing is never penalized relative to admitting ignorance.
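
To make that incentive concrete, the short Python sketch below compares the expected score of a model that always guesses with one that abstains when unsure, under plain accuracy grading. It is a minimal illustration, not OpenAI’s actual evaluation code, and the 20% confidence figure is an assumed, hypothetical value.

# Illustrative sketch only: a toy accuracy-based scoreboard showing why
# guessing beats abstaining. The probability below is a hypothetical
# assumption, not a figure from OpenAI's statement.

def expected_score(p_correct: float, abstain: bool) -> float:
    """Expected points for one question under plain accuracy grading.

    A correct answer earns 1 point; a wrong answer and an "I don't know"
    response both earn 0, so abstaining can never out-score guessing.
    """
    return 0.0 if abstain else p_correct

if __name__ == "__main__":
    p = 0.2  # assume the model is only 20% confident in its best guess
    print("always guess:", expected_score(p, abstain=False))  # 0.2
    print("admit unsure:", expected_score(p, abstain=True))   # 0.0

Even at 20% confidence, guessing yields a higher expected score than abstaining, which is the scoring imbalance the study highlights.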

“Fixing scoreboards can broaden adoption of hallucination-reduction techniques,” OpenAI’s statement concluded. However, the company cautioned that “accuracy will never reach 100% because, regardless of model size, search and reasoning capabilities, some real-world questions are inherently unanswerable.”


Legal Disclaimer:
MENAFN provides the information “as is” without warranty of any kind. We do not accept any responsibility or liability for the accuracy, content, images, videos, licenses, completeness, legality, or reliability of the information contained in this article. If you have any complaints or copyright issues related to this article, kindly contact the provider above.
