OpenAI dismantles superintelligence risk mitigation team amid growing AI fears


(MENAFN) OpenAI announced on Friday that it has dissolved its team dedicated to mitigating the long-term risks associated with superintelligent artificial intelligence. The San Francisco-based company has been phasing out the so-called "superalignment" group over the past several weeks, reassigning its members to other projects and research initiatives. This move comes amid heightened regulatory scrutiny and escalating concerns about the potential dangers of advanced AI technology.

The announcement coincided with the departure of OpenAI co-founder Ilya Sutskever and team co-leader Jan Leike, both of whom played pivotal roles in the development of ChatGPT. The dismantling of the team, which was focused on maintaining control over cutting-edge AI technologies, underscores the growing apprehension surrounding AI's rapid advancement and its potential risks.

In a statement on the social media platform X, Leike emphasized the need for OpenAI to evolve into an artificial general intelligence company with a strong emphasis on safety. He urged all employees to recognize the significant responsibility of their work. OpenAI CEO Sam Altman responded to Leike's message with gratitude, acknowledging his contributions and expressing regret over his departure. Altman affirmed the company's commitment to addressing the challenges ahead and promised to provide more detailed insights on the matter in the coming days. 

