
Why You Shouldn't Trust AI With Your Mental Health


(MENAFN - Khaleej Times)

AI “friends” are quickly taking over the most intimate corner of our lives: mental health. Chatbots can now offer anything from comfort and support to full-on therapy sessions.

And while users may be enticed by accessibility and low-to-no cost, serious questions remain about their safety and effectiveness: can simulated empathy, unregulated crisis response, and hidden algorithms provide adequate mental health support for people in vulnerable situations?


“AI chatbots are here to stay, and they will undoubtedly become an important part of how people seek general guidance, whether for work, wellbeing, or everyday life,” said Dr. Saliha Afridi, clinical psychologist and managing director of The LightHouse Arabia.

Dr. Afridi acknowledges that they can be useful for psychoeducation and coping strategies for people with regulated emotional and psychological functioning.

Though large language model chatbots like ChatGPT and other AI companions are not marketed as replacements for therapy, a growing body of evidence shows that many people worldwide are using them as emotional support tools.

“They're convenient in that they offer immediate responses when sometimes all you need is to 'vent' or 'figure out what to do' in a particular situation,” Dr. Afridi said.

But when users are vulnerable, the stakes are much higher.

In Raine v. OpenAI, the parents of a 16-year-old boy who died by suicide allege that interactions with ChatGPT played a role in his death. The case is among the first wrongful-death claims to implicate a generative AI service.

“Companies that develop these chatbots have an ethical obligation to communicate these risks clearly,” Dr. Afridi said. “From misinformation to inappropriate advice, if vulnerable people turn to AI for mental health advice, there is the potential that AI can amplify their confusion, and deepen their distress or delusional beliefs.”

General-purpose LLMs and chatbots draw from the internet without safeguards. Using them for therapeutic purposes can therefore carry risks, from hallucinations to what clinicians describe as “AI psychosis”.

But in regions like the Gulf, where stigma, cost and access remain significant barriers to care, those risks intersect with an urgent need for more accessible solutions. It's against this backdrop that Takalam, the UAE-based mental health app that combines smart tools with access to licensed counsellors in a private, secure platform, has launched Aila, the Arab world's first AI well-being companion.

How does it work?

Aila's design is rooted in evidence-based frameworks like Cognitive Behavioural Therapy (CBT), positive psychology, and mindfulness principles, but the language she uses is conversational and approachable.

“With Aila, we essentially restricted the LLM to a controlled environment, instructing it not to draw from the open internet but only from a specific, evidence-based dataset,” explained Dr. Khalifa Almeqbaali, a psychiatrist, psychotherapist and medical advisor at Takalam who worked on creating Aila. “This makes her safer for therapeutic use because she can only answer based on the approved, research-backed approaches.”
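
Takalam has not published Aila's architecture, but the approach Dr. Almeqbaali describes is consistent with retrieval over a closed, vetted corpus: the model is only allowed to answer from passages found in an approved dataset, and declines otherwise. The sketch below is purely illustrative; the corpus, the scoring, and names such as answer_from_corpus are assumptions made for the example, not Takalam's implementation.

# Illustrative sketch only (not Takalam's code): answering from a closed,
# vetted corpus instead of the open internet. All names are hypothetical.

def relevance(query, passage):
    """Crude keyword-overlap score between the user's query and a passage."""
    q, p = set(query.lower().split()), set(passage.lower().split())
    return len(q & p)

def answer_from_corpus(query, approved_corpus, llm, top_k=3):
    """Answer only from the approved dataset; otherwise defer to a counsellor."""
    ranked = sorted(approved_corpus, key=lambda doc: relevance(query, doc), reverse=True)
    passages = [doc for doc in ranked[:top_k] if relevance(query, doc) > 0]
    if not passages:
        return "I don't have vetted guidance on that. A licensed counsellor can help."
    prompt = (
        "Answer using ONLY the passages below. If they do not cover the question, "
        "say so and suggest speaking with a counsellor.\n\n"
        + "\n\n".join(passages)
        + "\n\nUser question: " + query
    )
    return llm(prompt)  # 'llm' is any text-generation callable; no specific vendor assumed

The key design choice such a setup illustrates is the hard gate: if nothing relevant exists in the vetted material, the assistant declines and redirects rather than improvising.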

And founder Khawla Hammad is clear: “We never designed Aila to replace human care. Her value lies in accessibility and immediacy.”

Takalam was originally launched in 2020 as an online counselling platform that connected users via video, audio, or text with licensed mental health professionals.

The new chatbot feature was developed because Takalam users reported wanting access to help outside their scheduled appointments; AI bridges that gap.

“People are naturally gravitating towards chatbots for help, and with Aila, we wanted to ensure there was an option that was trained with the necessary guardrails to safely guide users towards enhanced care,” Hammad said.

Aila was built on international best practices and in alignment with UAE guidelines relating to AI, according to Hammad. The chatbot is said to be able to identify when, and what type of, professional support is needed, guiding users to connect with Takalam's certified, real-life counsellors. “She works alongside human counsellors rather than in place of them,” Hammad said.

Safety features

If Aila detects risk language, she steers toward de-escalation, but is not meant to deal with active crisis situations.

If an active crisis is detected, Aila immediately stops engaging in the conversation, and the chatbot feature activates a 24-hour lockout. Aila shares a supportive message and directs the user towards immediate in-person emergency help.

“These measures were designed with clinical experts as a safeguard. It's there to reinforce that in a crisis, urgent human support is the safest and most effective option. Our intention is not to withdraw support but to encourage people to take the step that could support them by reaching out to real-world care.”
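
Takalam has not released the mechanics behind this behaviour, but the flow the company describes (de-escalate on risk language, stop and lock the chat for 24 hours on an active crisis, and point the user to real-world help) can be pictured roughly as below. The keyword lists, timings and function names are assumptions for illustration only, not the app's actual logic.

from datetime import datetime, timedelta

# Hypothetical trigger phrases; a real system would use far more robust detection.
RISK_TERMS = {"hopeless", "can't cope", "no way out"}
CRISIS_TERMS = {"kill myself", "end my life"}
LOCKOUT = timedelta(hours=24)

locked_until = {}  # user_id -> time when the chatbot becomes available again

def handle_message(user_id, text, now=None):
    """Route a message: normal reply, de-escalation, or crisis lockout."""
    now = now or datetime.utcnow()
    if now < locked_until.get(user_id, datetime.min):
        return "The chat is paused. Please contact emergency services or a counsellor now."
    lowered = text.lower()
    if any(term in lowered for term in CRISIS_TERMS):
        locked_until[user_id] = now + LOCKOUT          # stop engaging for 24 hours
        return ("You deserve immediate, in-person support. Please reach out to "
                "emergency services or someone you trust right now.")
    if any(term in lowered for term in RISK_TERMS):
        return deescalate(text)                        # steer toward grounding and calm
    return normal_reply(text)

def deescalate(text):
    return "That sounds really heavy. Let's slow down and take this one step at a time."

def normal_reply(text):
    return "Thanks for sharing. Tell me more about what's on your mind."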

For Dr. Afridi, however, a chatbot's ability to deal with crisis situations is only part of what people should be concerned about.

“What we should be equally concerned about is how replacing human connection with machines can lead to greater isolation and loneliness overall – factors that contribute to a multitude of other mental health issues and a decrease in general wellbeing.

“AI bots are designed to mirror you and validate you, to tell you what you want without being nuanced or critical. So if people begin spending significant amounts of time with their AI bots because they find real relationships inevitably involve conflict, disagreement, and frustration, they may start to prefer the predictable safety and validation of bots over the complexity of humans. And that, over time, will erode our capacity for genuine intimacy, empathy, and resilience.”

...




Khaleej Times

