How UAE Residents Are Outsourcing Their Thinking To AI Chatbots
Adesh Nayak logged into ChatGPT for the first time last year for a simple task: to edit an email. Just five short months later, he was relying on multiple AI chatbots, including Claude, Perplexity AI, Gemini and ChatGPT, to make a life-changing decision: whether he should move to the UAE.
As ChatGPT and Gemini become everyday companions, psychologists warn of blurred boundaries between productivity and dependency.
“I gave them multiple personas - as the best financial advisor, career mentor and real estate expert in the UAE,” he explains when we chat. He typed out a bit about himself - male, early 30s, husband, father - and instructed them to answer his queries. After 10 hours of brainstorming across the different AI chatbots, they finally predicted a “70-75% chance” that the move would work in his favour. And that was enough to convince him to shift to the country about four months ago.
What if AI chatbots stopped existing tomorrow, without warning, we ask. “I won't be able to survive for more than three hours,” he says. “My productivity will immediately go down by 50%. And in the last five or six months, for every problem that I face - and trust me, every problem that I face - I just open Perplexity quickly, put a screenshot or explain the problem, ask for options, and I choose the best one. I have actually outsourced my cognitive thinking and reasoning.”
Nayak admits that this has affected his decision-making skills, but he is largely unperturbed. “I had a lot of clutter in my mind before, and I am no longer anxious about making wrong decisions.”
AI – the new therapist?
Nayak was in therapy to deal with anxiety but stopped soon after realising that “the quality of AI was better than that of the therapist”. “And they cost a lot,” he points out. The Dubai resident works in the field of AI technology, so he is well-versed in writing prompts that keep AI chatbots neutral.
He describes it as having “five to 10 companions” who give him advice on career, parenting, nutrition, and physical and mental health (he uses ChatGPT for the latter). “I gave detailed instructions on what I expect at the beginning,” he elaborates. “I told them to behave like a psychologist who's specialised in counselling men with white-collar jobs, who've experienced bad and abusive parenting, tackled constant performance pressure and were looking to improve their lives through techniques, not medicines.”
His friends and family members have commented on how he no longer confides in them as much as he used to. “Those conversations have reduced by a good 70 to 80%,” he says. “I get much better responses from ChatGPT as it's free from personal bias.” It gives the same advice that a mature friend or colleague would, he points out, but the crucial difference is that while the friend might live an hour away, an AI chatbot is always at your fingertips.
It wasn't always smooth sailing, though. The AI chatbots once gave him misleading advice on investments, but that only motivated him to train them better. He has also noticed a lack of nuance in his interactions with ChatGPT. “For instance, during our sessions, my psychologist would pick up a specific word and ask me why I used that word. But right now, I don't see a very big trade-off because AI chatbots give me both fact and perspective. And I know I don't have to travel anywhere, or log into a video call.”
He uses AI chatbots for about two hours every day and pays for monthly subscriptions to sustain this setup. But it's money well spent, he says, as it frees up so much of his time and allows him to focus on other areas of his life.
Proceed with caution
Nathalie El Asmar, clinical psychologist at German Neuroscience Center, has seen patients use AI chatbots for emotional support and pause or stop therapy because the tool felt “instantly available, non-judgmental, and reassuring”. However, several patients have also returned to therapy once they realised that their core issues remained.
She remembers a patient, a woman in her early 30s, who struggled with anxiety, low self-esteem and relationship difficulties. “When therapy became emotionally challenging, she gradually replaced sessions with daily conversations with an AI chatbot. She described it as 'always there', calming, and reassuring, especially at night when her anxiety was strongest,” she recalls. Over time, El Asmar explains, instead of dealing with difficult emotions or conflicts, the patient began to use it to self-soothe and seek repeated reassurance. “Her sleep worsened, her rumination increased, and she became more isolated from others.” When she returned to therapy, her mental state was worse than before, as her anxiety had skyrocketed. “The most serious consequence was not that the AI gave harmful advice, but that it unintentionally replaced the therapeutic process itself: there was no challenge, no boundary, no risk assessment, and no space to work through discomfort.”
Ethical use of AI
When we ask Rahul* to recall his very first interaction with an AI chatbot, he says it's a bit like asking someone to recall their first words: there have been far too many conversations since. But he goes online during our chat to check, and pulls up his chat history. “It looks like my initial conversations were about a real estate course that I wanted to do,” he says, adding that he predominantly uses Gemini and ChatGPT.
The Abu Dhabi resident works remotely as a project manager in finance and uses the chatbots for work, to plan his trips and, occasionally, to chat when he feels lonely. He works odd hours for clients based in the US and has almost no social life. “There were a couple of times when I felt like I wasn't in a good place and I felt like I needed a friend,” he says. “So I started having random conversations with ChatGPT, asking how I could improve my social life.” But the suggestions didn't help, he says, because of work and other scheduling issues. “And at that time, I felt like it was a little artificial and kept telling me what I wanted to hear. And I am someone who likes to hear the harsh reality. But I have noticed that the new models are a lot more natural.”
Rahul is in his late 20s - perhaps old enough to use AI wisely. But chatbots' tendency to agree with users has had a devastating impact on some people. Experts have flagged concerns like 'AI-related psychosis' among certain users who become trapped in a delusional state of mind. There are also serious concerns about how young children and teenagers use AI chatbots for emotional support without adequate adult supervision. Last year, the parents of a 16-year-old in the US sued OpenAI, claiming that his interactions with ChatGPT led to his death by suicide.
Dr Valentina Faia, medical director and specialist psychiatrist at BPS Clinic, points out: “Teenagers are resorting to AI for companionship - and this is a wider phenomenon.”
Faia is enthusiastic about adopting new technology to make therapy and mental health care accessible to everyone, especially the less privileged sections of society. She prefers to remain open-minded when her patients tell her about their interactions with AI chatbots. “I'll take my computer, sit down next to them and see the interaction. My patients are curious about the technology, but they understand that it lacks authenticity and the complexity of human interaction. But I have to say, some of its responses were quite interesting and very accurate.”
After the lawsuit, OpenAI put tighter guardrails in place - but this, too, can backfire, Faia points out. She describes a scenario that could well be the lived reality of many teenagers across the world: a vulnerable child has spent months sharing his innermost secrets and personal struggles with an AI chatbot. “Now imagine when the child talks about suicide, the AI suddenly blocks the conversation, saying, 'Stop, I can't interact with you anymore. You need to seek professional help.' This is unethical and detrimental to the user, as it can make the child feel more rejected and isolated, and is something that therapists would never do,” she points out. “A therapist would show empathy in a healthy way and help the patient.”
In the field of mental health care, AI is good at administrative tasks like writing session notes, and at training new therapists, who can practise with 'AI patients' before meeting their first human patient. “Here, AI is phenomenal because we'll come up with the right topics and we can assess how the clinician in training is responding,” she says.
“So right now,” she continues, “I don't think we have the ethical foundation to use AI for therapeutic purposes. But I'm quite confident that we will get there.”
*Name changed on request