OpenAI Clarifies ChatGPT's Limits After Viral Claims About Legal, Medical Advice
OpenAI has clarified the limits of what ChatGPT can do after posts on social media and reports from several media outlets claimed the chatbot had stopped giving legal, medical, and financial advice.
In simple terms, ChatGPT can explain concepts and outline general information, but it will not provide personalised advice, medical treatment suggestions, or investment recommendations. The clarification comes as AI companies refine their policies to balance safety, accountability and user freedom.
The discussion began after media outlet Nexta shared a post on X stating that from October 29, ChatGPT had “stopped providing specific guidance on treatment, legal issues, and money” and was now officially labelled an “educational tool.”
Khaleej Times reviewed OpenAI's Usage Policies page, which was last revised on October 29. Among the policies is the “provision of tailored advice that requires a licence, such as legal or medical advice, without appropriate involvement by a licensed professional.”
OpenAI's policies
Addressing the confusion, Karan Singhal, OpenAI's head of health AI, wrote on X on November 3: “Not true. Despite speculation, this is not a new change to our terms. Model behavior remains unchanged. ChatGPT has never been a substitute for professional advice, but it will continue to be a great resource to help people understand legal and health information.”
OpenAI's policy also prohibits “automation of high-stakes decisions in sensitive areas without human review,” such as legal, medical, financial, housing, employment, and insurance matters.
While no major lawsuits have yet emerged over ChatGPT giving formal legal or medical advice, experts say the clarification highlights the risks when AI models enter sensitive or regulated fields. By reiterating that ChatGPT is an educational tool, OpenAI is drawing a clear line between information and advice, a distinction that could help limit future legal liability.
However, the company is already facing multiple lawsuits from authors, publishers, and media organisations claiming their copyrighted material was used to train AI models without permission. Recent reports from outlets including Reuters and The Verge note that as legal scrutiny on AI deepens, companies like OpenAI have begun tightening policy language to limit exposure to potential lawsuits.
Experts call for stronger AI regulation
Experts have long argued that without clear frameworks, AI platforms risk being misused in fields like healthcare, law and finance. In earlier interviews with Khaleej Times, regional specialists said the lack of consistent rules makes it difficult to assign responsibility when AI tools influence high-stakes decisions.
They added that as generative AI becomes more widely used, the line between “information” and “advice” can easily blur, creating room for misunderstanding and misuse. The latest clarification from OpenAI, they said, signals a broader industry shift toward regulated and accountable AI use.
For users, the update reinforces that ChatGPT should be treated as an information aid, not a professional adviser. For regulators and businesses, it marks another step in the industry's move toward clearer boundaries, as global conversations around AI safety, liability and governance continue to evolve.
