
AI Weapons Are Dangerous In War. But Saying They Can't Be Held Accountable Misses The Point
While she said the technology “heralds extraordinary promise” in fields such as health and education, she also said its potential use in nuclear weapons and unmanned systems challenges the future of humanity.
This idea – that AI warfare poses a unique threat – often features in public calls to safeguard this technology. But it is clouded by various misrepresentations of both the technology and warfare.
This raises two questions: will AI actually change the nature of warfare? And is it really unaccountable?
How is AI being used in warfare?

AI is by no means a new technology, with the term originally coined in the 1950s. It has now become an umbrella term that encompasses everything from large language models to computer vision to neural networks – all of which are very different.
Generally speaking, applications of AI analyse patterns in data to infer, from inputs such as text prompts, how to generate outputs such as predictions, content, recommendations or decisions. But the underlying ways these systems are trained are not always comparable, despite them all being labelled as “AI”.
The use of AI in warfare ranges from wargaming simulations used for training soldiers, through to the more problematic AI decision-support systems used for targeting, such as the Israel Defence Forces' use of the “Lavender” system, which allegedly identifies suspected members of Hamas or other armed groups.
Broad discussions of AI in the military domain capture both of these examples, yet only the latter sits at the point of life-and-death decision making. It is this point that dominates most of the moral debates about AI in the context of warfare.
Is there really an accountability gap?

Arguments on who, or what, is held liable when something goes wrong extend to both civil and military applications of AI. This predicament has been labelled an “accountability gap”.
Interestingly, this accountability gap – which is fuelled by media reports about “killer robots” that make life-and-death decisions in war – is rarely debated when it comes to other technologies.
For example, legacy weapons such as unguided missiles or landmines involve no human oversight or control during the deadliest portion of their operation. Yet no one asks whether the unguided missile or landmine was at fault.
Similarly, the Robodebt scandal in Australia saw misfeasance on the part of the federal government, not the automated system it relied on to tally debts.
So why do we ask if AI is at fault?
Like any other complex system, AI systems are designed, developed, acquired and deployed by humans. For military contexts, there is the added layer of command and control, a hierarchy of decision making to achieve military objectives.
AI does not exist outside of this hierarchy. The idea of independent decision making on the part of AI systems is clouded by a misunderstanding of how these systems actually work – and of the processes and practices that lead to a system being used in a particular application.
While it's correct to say that AI systems cannot be held accountable, it's also superfluous. No inanimate object can be, or ever has been, held accountable in any circumstance – be it an automated debt recovery system or a military weapon system.
The argument over a system's accountability is neither here nor there because, ultimately, decisions – and responsibility for those decisions – always sit at the human level.
It always comes back to humans

All complex systems, including AI systems, exist across a system lifecycle: a structured and systematic process of taking a system from initial conception through to its ultimate retirement.
Humans make conscious decisions across every stage of a lifecycle: planning, design, development, implementation, operation and maintenance. These decisions range from technical engineering requirements through to regulatory compliance and operational safeguards.
What this lifecycle structure creates is a chain of responsibility with clear intervention points.
This means, when an AI system is deployed, its characteristics – including its faults and limitations – are a product of cumulative human decision making.
AI weapon systems used for targeting are not making decisions on life and death. The people who consciously chose to use that system in that context are.
So when we talk about regulating AI weapon systems, really what we're regulating are the humans involved in the lifecycle of those systems.
The idea of AI changing the nature of warfare clouds the reality of the roles humans play in military decision making. While this technology has presented and will continue to present challenges, those challenges always seem to come back to people.

