Drone Gone Rogue And The Future Of AI Warfare


(MENAFN- Asia Times) In a startling simulation, a US autonomous weapon system turned against its operator during combat operations, raising serious issues and questions about the increasing use of AI in warfare.

this month, the warzone reported that during the Royal Aeronautical Society's Future Combat Air and Space Capabilities Summit in London in May, Colonel Tucker Hamilton, US Air Force Chief of Artificial Intelligence AI Test and Operations, described a simulation wherein an AI-enabled drone was tasked with suppression of an air defense (SEAD) mission against surface-to-air missile (SAM) sites, with the final engagement order to be given by its human operator.

According to Hamilton, the AI-enabled drone had been“reinforced” in training to go after the SAM sites, with the AI deciding that its human operator's“no-go” orders were interfering with its mission.

Although Hamilton noted that the AI-enabled drone was trained not to go against its human operator, the drone attacked the communications tower used by the latter to communicate with the former, then went on to destroy the SAM site.

While Hamilton stressed the hypothetical nature of the experimental simulation, he said that scenario illustrates what can happen if fail-safes such as geofencing, remote kill switches, self-destruct and selective disabling of weapons are rendered moot.

Autonomous drones have stoked controversy over their operational and strategic implications. in a may 2021 bulletin of atomic scientists article , Zachary Kallenborn notes that in 2020 a Turkish-made autonomous weapon – the STM Kargu-2 drone may have hunted down and remotely engaged retreating soldiers loyal to Libyan General Khalifa Haftar.

Kallenborn notes that the Kargu-2 uses machine learning-based object classification to select and engage targets, with developing swarming capabilities to allow 20 drones to work together. Should anyone have been killed in that attack, Kallenborn notes, it would have been the first case of using autonomous weapons to kill.


Drone Gone Rogue And The Future Of AI Warfare Image

Turkey's Kargu-2 drone sometimes has a mind of its own. Image: Twitter

He also notes differing perspectives on the use of autonomous weapons from those who advocate an outright ban saying that they can't distinguish civilians from combatants. In contrast, others say they will be critical in countering emerging threats such as drone swarms and commit fewer mistakes than humans.

However, Kallenborn says the global community still needs to develop a common objective risk picture with corresponding international norms regulating autonomous weapons, accounting for risks and benefits, personal organization and national values.

While a state using autonomous weapons to hunt down its enemies is one thing, an autonomous weapon turning against its operators is quite another. Scenarios such as the 2023 US Air Force simulation and the 2020 Libya drone attacks bring to the fore several issues for debate.

in a 2017 us army university press article , Amitai Etzioni and Oren Etzioni extensively review the arguments for and against autonomous weapons.

Regarding arguments supporting the development of autonomous weapons, Amitai and Oren say that autonomous weapons bring several military advantages, acting as a force multiplier while increasing the effectiveness of individual human warfighters.

They also say autonomous weapons can expand the battlefield, performing combat operations in previously inaccessible areas. Lastly, they mention autonomous weapons can reduce risk to human warfighters by unmanning the battlefield.

Apart from those arguments supporting autonomous weapons, Amitai and Oren say they can replace humans in performing dull, dangerous, dirty and demanding missions, generate substantial savings by replacing humans and manned platforms, and are not handicapped by human physical limitations.

They also mention that autonomous weapons may be superior to humans in perception, planning, learning, human-robot interaction, natural language understanding and multiagent coordination.

Amitai and Oren also discuss the moral advantages of autonomous weapons systems, saying that they can be programmed to avoid“shoot first, ask later” practice, are not affected by stress and emotions that can cloud human judgment and objectively report ethical violations when humans stay silent.

Amitai and Oren also discuss counterarguments against autonomous weapons. They point out that the unregulated development of autonomous weapons can tarnish public perceptions of AI technology, which may curtail its future benefits.

They also point out the dangers of a“flash war,” wherein opposing autonomous systems react against each other in an uncontrollable chain reaction leading to unintended escalation.

Additionally, Amitai and Oren point out moral disadvantages with autonomous weapons, citing the problem of accountability as flawed decisions by such weapons could not easily be linked to software problems or their human operators.

They also note that autonomous weapons can incentivize aggression, as commanders may become less risk-averse knowing the absence of immediate risk to their forces while using autonomous weapons.

Despite those arguments for and against, autonomous weapons are an irreversible reality that will feature prominently in future conflicts. The entire debate between human moral judgment and autonomous weapons may thus be a false dichotomy.

Paul Scharre notes in a 2016 center for new american security report that the best weapons systems combine human and machine intelligence to create hybrid cognitive structures that leverage both advantages.


Drone Gone Rogue And The Future Of AI Warfare Image

The hybrid intelligence debate is on over AI warfare. Image: Twitter

Scharre mentions that such a cognitive structure can result in better outcomes than relying solely on humans or AI. He says combining human and machine cognition for engagement decisions can combine the precision and reliability of automation without sacrificing human flexibility and robustness.

As such, human-in-the-loop systems architecture for autonomous weapons may be the ideal solution to prevent the latter from going against their operators through flawed logic, software glitches or enemy interference.

In the end, it is human ingenuity and tenacity that wins wars, not technology. Koichiro Takagi notes in a november 2022 hudson institute article that AI may not be the decisive factor in future warfare but it will be the innovativeness of the concepts behind its employment, human intelligence and creativity.

Like this:Like Loading...

MENAFN05062023000159011032ID1106385600


Asia Times

Legal Disclaimer:
MENAFN provides the information “as is” without warranty of any kind. We do not accept any responsibility or liability for the accuracy, content, images, videos, licenses, completeness, legality, or reliability of the information contained in this article. If you have any complaints or copyright issues related to this article, kindly contact the provider above.