Tuesday, 02 January 2024 12:17 GMT

'Users Should Be Wary Of Taking Answers They Receive From AI Systems For Granted'


(MENAFN - The Peninsula)

DOHA: Artificial Intelligence is no longer confined to the imagination or to fantasy films. Today, it can detect cancerous tumours, translate spoken languages, compose music, write text messages, recommend holiday destinations, and even chat with us.

Because of this, people have become increasingly reliant on AI. Some have found it a valuable companion and a good listener that does not argue with or judge them, while others have come to see it as a way of solving their problems or keeping their secrets - often without realising the potential risks. Many have grown used to taking the advice of these highly intelligent systems in all aspects of their lives, from choosing outfits and meals to making decisions that affect their present and future. Sometimes, they even rephrase their questions repeatedly to get the answers they want, seeing AI as a remedy for doubt and uncertainty.

However, while AI is reshaping human behaviours, responsibilities, and choices, it is still far from perfect. In fact, it sometimes fabricates answers simply to please the questioner. Dr. Wajdi Zaghouani, an Associate Professor at Northwestern University in Qatar, one of Qatar Foundation's (QF) partner universities, defines AI hallucination as the production of information that appears to be true but is, in fact, false or fabricated. “Imagine it as someone confidently telling you a story that seems believable, but the events of that story are completely wrong.

“I've seen some fascinating cases in my research. One common example is when AI systems generate fake academic citations; they create paper titles that sound legitimate, with realistic author and journal names, but the papers don't exist.

“During my work in Arabic Natural Language Processing, I came across systems that generate fake Arabic proverbs which sound authentic, but have no basis in the culture. They capture the linguistic style perfectly, yet produce entirely fictional cultural content,” he adds.

It's a perspective that raises critical questions about AI systems. What if the machine is making mistakes? Why should we avoid involving it in big decisions? Who holds responsibility for these mistakes? Does it feel pressure as humans do? Does it avoid saying “I don't know”? Or is it simply drowning in an endless flood of data?

Reflecting on what lies behind AI errors, and how they can be addressed, Dr. Zaghouani says: “Unfortunately, large language models like ChatGPT or Claude are essentially very sophisticated pattern-matching machines. They learn from massive amounts of text and become good at predicting which word should come next in a sentence.

“But they don't actually 'know' facts the way we do. When an AI system generates a fake name or an incorrect fact, it's because the patterns in its training data suggest that's what should come next.
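The pattern-matching idea Dr. Zaghouani describes can be illustrated with a deliberately tiny sketch: a bigram model that only counts which word follows which in its training text, then generates a plausible-sounding "citation" with no notion of whether any such publication exists. (The corpus and helper names here are invented for illustration; real language models use neural networks trained on vastly larger data, but the failure mode is analogous.)

```python
from collections import Counter, defaultdict

# Toy "training data": a handful of citation-like sentences.
corpus = (
    "the study was published in the journal of applied linguistics . "
    "the study was published in the journal of modern science . "
    "the paper appeared in the journal of applied linguistics ."
).split()

# Count, for each word, which words follow it and how often.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def next_word(word):
    """Pick the most frequent continuation: pure pattern matching,
    with no check on whether the resulting claim is true."""
    return follows[word].most_common(1)[0][0]

# Generate a fluent but ungrounded "citation", one word at a time.
word, out = "the", ["the"]
for _ in range(8):
    word = next_word(word)
    out.append(word)
print(" ".join(out))
# The output reads like a real reference, yet the model never
# consulted any list of journals that actually exist.
```

The point of the sketch is that the generator is rewarded only for producing a statistically likely next word, which is exactly why fabricated paper titles or proverbs can sound authentic while being entirely fictional.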

“AI is excellent for general knowledge questions, creative tasks, writing assistance, planning trips, and explaining concepts – basically, anything where you can easily verify the answer or where being slightly wrong isn't catastrophic.”

So how can we benefit from AI without being deceived by its results? “Users should be wary of facts provided without cited sources, especially dates, numbers, or quotes,” explains Dr. Zaghouani. “They should also watch out for information that seems too convenient or perfectly fits what they want to hear. If they are researching something controversial and AI provides exactly the evidence they were hoping for, they should double-check it.” Essentially, users of AI systems should be wary of taking the answers they receive for granted or treating them as beyond question, and should take care when gathering information through AI, as the issue extends beyond ethics to the realm of more serious consequences.


