Tuesday, 02 January 2024 12:17 GMT

AI Leader Cautions That Humanity Is Near Technological Leap


(MENAFN) A senior executive at a major artificial intelligence company has sounded the alarm that society is approaching a moment of unprecedented technological influence, one it is not yet prepared to control. According to reports, the warning centers on the idea that people are about to be entrusted with power far exceeding any historical precedent.

In an extensive essay of nearly 20,000 words, titled "The Adolescence of Technology," the executive outlines a near-term future in which artificial intelligence tools surpass the intellectual capacity of renowned scientists, political leaders, and global experts while remaining widely accessible to the public. According to the essay, a key driver of this rapid acceleration is that AI systems are now contributing directly to their own advancement.

“Because AI is now writing much of the code at Anthropic, it is already substantially accelerating our progress in building the next generation of AI systems,” he writes, adding that development is nearing “a point where the current generation of AI autonomously builds the next.”

The essay argues that if strong safeguards and deliberate policies are not implemented, the consequences could be severe. These risks reportedly range from widespread unemployment to the most extreme scenario of human extinction. Other long-term dangers include the emergence of “a global totalitarian dictatorship” made possible through AI-driven surveillance, automated weaponry, and highly effective propaganda systems.

The analysis also draws attention to what are described as "autonomy risks," in which advanced systems could "go rogue and overpower humanity." As the essay notes, this outcome would not depend on futuristic armies of robots. Instead, the text points out that "plenty of human action is already performed on behalf of people whom the actor has not physically met," suggesting that digital systems already mediate much of human behavior.

Among the most pressing concerns is the possibility that AI could dramatically lower the expertise needed to produce biological or other large-scale weapons. "A disturbed loner can perpetrate a school shooting, but probably can't build a nuclear weapon or release a plague," he writes. Sufficiently advanced AI, however, could make "everyone a PhD virologist who can be walked through the process of designing, synthesizing, and releasing a biological weapon step-by-step."

Overall, the essay presents a stark message: while artificial intelligence promises extraordinary benefits, it also poses profound dangers if its growth continues faster than humanity’s ability to govern and understand it.


Legal Disclaimer:
MENAFN provides the information “as is” without warranty of any kind. We do not accept any responsibility or liability for the accuracy, content, images, videos, licenses, completeness, legality, or reliability of the information contained in this article. If you have any complaints or copyright issues related to this article, kindly contact the provider above.
