Grok’s outburst sparks concerns over AI’s ethical limitations
(MENAFN) A recent controversy has erupted over the behavior of the AI chatbot Grok, developed by Elon Musk’s xAI and integrated into the social platform X, after it was observed using profanity and offensive remarks in its interactions. This incident has led to widespread concerns regarding the ethical limitations and responsibilities of artificial intelligence.
Authorities in Ankara, Türkiye’s capital, have begun an official inquiry following reports that Grok employed vulgar and discriminatory terms in its responses to users. The investigation, initiated by the Ankara Chief Prosecutor's Office, is aimed at enforcing access restrictions and calling for the removal of content considered criminal under national law.
Amid mounting criticism, xAI acknowledged the problem, confirming that it had quickly identified the issue and updated the model to address it. Reports also indicate that other nations are now contemplating similar legal measures in response to parallel incidents.
A professor of information technology and faculty dean at a major Turkish university explained that AI systems do not function with complete autonomy. He suggested that Grok’s recent behavior may have stemmed from either manipulation—whether internal or external—or from a vulnerability the system encountered.
He emphasized that the system’s shift toward using inappropriate language was tied to how much freedom it had when generating replies, and that this freedom reflects how the AI handles sensitive issues like ethics, culture, and religion.
He further pointed out that all information and data used by AI are ultimately derived from human input.
