
Let's Not Overstate AI's Autonomous War Threat
While she said the technology “heralds extraordinary promise” in fields such as health and education, she also warned that its potential use in nuclear weapons and unmanned systems challenges the future of humanity.
This idea – that AI warfare poses a unique threat – often features in public calls to safeguard this technology. But it is clouded by various misrepresentations of both the technology and warfare.
This raises two questions: will AI actually change the nature of warfare? And is it really unaccountable?
How is AI being used in warfare?

AI is by no means a new technology – the term was originally coined in the 1950s. It has since become an umbrella term that encompasses everything from large language models to computer vision to neural networks – all of which are very different.
Generally speaking, applications of AI analyze patterns in data to infer, from inputs such as text prompts, how to generate outputs such as predictions, content, recommendations or decisions. But the underlying ways these systems are trained are not always comparable, despite them all being labelled as “AI.”
The use of AI in warfare ranges from wargaming simulations used for training soldiers through to the more problematic AI decision-support systems used for targeting, such as the Israel Defense Forces' use of the “Lavender” system, which allegedly identifies suspected members of Hamas or other armed groups.
Broad discussions of AI in the military domain capture both of these examples, yet it is only the latter that sits at the point of life-and-death decision-making. It is this point that dominates most of the moral debates about AI in the context of warfare.
Is there really an accountability gap?

Arguments about who, or what, is held liable when something goes wrong extend to both civil and military applications of AI. This predicament has been labelled an “accountability gap.”
Interestingly, this accountability gap – which is fuelled by media reports about “killer robots” that make life-and-death decisions in war – is rarely debated when it comes to other technologies.
For example, legacy weapons such as unguided missiles and landmines involve no human oversight or control during the deadliest portion of their operation. Yet no one asks whether the unguided missile or the landmine was at fault.
Similarly, the Robodebt scandal in Australia saw misfeasance on the part of the federal government, not the automated system it relied on to tally debts.
So why do we ask if AI is at fault? Like any other complex system, AI systems are designed, developed, acquired and deployed by humans. In military contexts, there is the added layer of command and control: a hierarchy of decision-making for achieving military objectives.
AI does not exist outside of this hierarchy. The idea of independent decision-making on the part of AI systems is clouded by a misunderstanding of how these systems actually work – and of the processes and practices that led to a system being used in a particular application.
While it's correct to say that AI systems cannot be held accountable, it's also superfluous. No inanimate object can be, or ever has been, held accountable in any circumstance – be it an automated debt-recovery system or a military weapon system.
The argument about a system's accountability is neither here nor there because, ultimately, decisions – and responsibility for those decisions – always sit at the human level.
It always comes back to humans

All complex systems, including AI systems, exist across a system lifecycle: a structured and systematic process of taking a system from initial conception through to its ultimate retirement.
Humans make conscious decisions at every stage of that lifecycle – planning, design, development, implementation, operation and maintenance. These decisions range from technical engineering requirements through to regulatory compliance and operational safeguards.

What this lifecycle structure creates is a chain of responsibility with clear intervention points. It means that when an AI system is deployed, its characteristics – including its faults and limitations – are the product of cumulative human decision-making.
AI weapon systems used for targeting are not making decisions on life and death. The people who consciously chose to use those systems in that context are.
So when we talk about regulating AI weapon systems, what we're really regulating are the humans involved in the lifecycle of those systems.
The idea of AI changing the nature of warfare clouds the reality of the roles humans play in military decision-making. While this technology has presented, and will continue to present, challenges, those challenges always come back to people.
Zena Assaad is a senior lecturer in the School of Engineering, Australian National University.
This article is republished from The Conversation under a Creative Commons license. Read the original article.