Artificial intelligence and war without humans


(MENAFN - Asia Times) It's a simple fact, says General John M Murray: we're going to have to learn to trust artificial intelligence on the battlefield.

Speaking from Austin, Texas, at 'The Future Character of War and the Law of Armed Conflict' online event, Murray provided a future battle scenario involving the rapid advance of artificial intelligence in the US military and the ethical challenges it presents.

'If you think about things like a swarm of, let's say a hundred semi-autonomous or autonomous drones, some lethal, some sensing, some jamming, some in command and control — think back to the closing ceremony of the Seoul Olympics.

'Is it within a human's ability to pick out which ones have to be engaged and then make 100 individual engagement decisions against a drone swarm?' said Murray, commander of Army Futures Command (AFC).

'And is it even necessary to have a human in the loop if you're talking about effects against an unmanned platform or against a machine?

'Once you acquire a drone swarm, and they are three kilometers out, you're not going to have 30 seconds to stand around a map board and make those decisions.'

According to Murray, a 39-year veteran of the US Army, this ethical debate is not going on in Russia, China or anywhere else.

So what does this mean for Pentagon futurists and war planners seeking to prepare for and anticipate enemy attacks and threats in coming years?

'When you talk about things like decision dominance and the speed and the increased ranges that we're working on, this is not something you want to be second or third in,' Murray warned.
'What I like to tell people is now is the time to be having these debates, because ultimately, as the United States Army, we operate under policy and we are bound by policy that is established by our lawmakers.'

Nor are there any easy answers as to whether a human should be in the loop, on the loop or off the loop. Says Murray: 'It's more nuanced than just that simple question.'

The general recounted his days in a mechanized unit, to compare the 'old school' Army with present day capabilities.




Gen. John Murray: 'Once you acquire a drone swarm, and they are three kilometers out, you're not going to have 30 seconds to stand around a map board and make those decisions.' Credit: US Army.

'I remember, as part of the skills test, we used — and we probably still do use — flash cards, just like I learned my math tables.

'And it's a T-72, it's a T-80, it's a T-90, it's a Sheridan, it's an Abrams tank, it's a Chieftain, it's a Challenger, it's a Merkava.

'If a gunner got 80% correct, we put that 19-year-old young man … on a 120 millimeter smooth bore cannon and turned them loose.

'We have algorithms today on the platforms we're experimenting with; they can get to about 98.8% accuracy, right?'

According to Murray, artificial intelligence actually has the ability to make better decisions about what is or what is not a valid target — with a human in the loop — because that algorithm just makes a recommendation.
It automatically scans, it identifies and makes a call — just like some of the facial recognition software that exists today.
'I think that's what the power of artificial intelligence is … it's about enabling human decision makers, and I think that holds for the near term and probably far into the future when that decision involves taking another human life.

'If you've been to Iraq and Afghanistan, you're familiar with what's called C-RAM … I mean C-RAM engages off a radar hit, and it shoots down incoming rockets, artillery and mortar rounds.

'And if it senses a conflict in its radar image, it shuts down, then re-engages when that conflict goes away. Our Patriots, our air defense systems, have the ability to be put on auto engagement, and it's a human decision to take it off auto engagement.

'This is open for debate, but I fundamentally believe that there are cases, nuanced cases where humans just won't be able to keep up with the threat that's presented.'
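
Taken together, Murray's remarks sketch a simple engagement policy: the algorithm classifies and recommends, only unmanned threats are ever engaged automatically, a human decision is what enables or disables that auto-engagement mode, and the system holds fire whenever the radar picture shows a conflict. The short Python sketch below is a purely illustrative rendering of that policy under those assumptions; every name in it (Track, decide, UNMANNED_THREATS, the 0.95 confidence threshold) is hypothetical and has no connection to actual C-RAM, Patriot or Army Futures Command software.

from dataclasses import dataclass

# Hypothetical track record produced by a sensor/classifier pipeline.
@dataclass
class Track:
    track_id: int
    label: str            # classifier's best guess, e.g. "rocket", "drone", "tank"
    confidence: float     # classifier confidence in that label, 0.0 to 1.0
    radar_conflict: bool  # e.g. a friendly aircraft crossing the radar picture

# Illustrative set of unmanned threat categories eligible for auto engagement.
UNMANNED_THREATS = {"rocket", "artillery", "mortar", "drone"}

def decide(track: Track, auto_engage_enabled: bool) -> str:
    """Return an engagement decision for one track.

    auto_engage_enabled is itself a human decision, as with putting
    Patriot or C-RAM into auto-engagement mode."""
    # Hold fire while the radar picture shows a conflict; revisit once it clears.
    if track.radar_conflict:
        return "hold fire (radar conflict)"
    # Automatic engagement only against unmanned threats, only in auto mode,
    # and only above an (assumed) confidence threshold.
    if auto_engage_enabled and track.label in UNMANNED_THREATS and track.confidence >= 0.95:
        return f"engage automatically ({track.label}, {track.confidence:.1%})"
    # Everything else is just a recommendation; a human makes the call.
    return f"recommend to operator: {track.label} ({track.confidence:.1%})"

if __name__ == "__main__":
    incoming = [
        Track(1, "rocket", 0.988, radar_conflict=False),
        Track(2, "drone", 0.970, radar_conflict=True),
        Track(3, "tank", 0.930, radar_conflict=False),
    ]
    for t in incoming:
        print(f"Track {t.track_id}: {decide(t, auto_engage_enabled=True)}")

The point of the sketch is only the division of labor: the algorithm scans, identifies and recommends at machine speed, while the choice to delegate any part of the engagement decision remains a deliberate human, and ultimately policy, decision.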

In the larger picture, Murray believes it's not just about the technology, or even about building trust in these complex systems; it's about having the capability and readiness so that 'our opponents wake up each and every day saying today's not the day.

'They just, they don't want to take that on, you know, they're not certain today's the day that they could win. So that's, that's really what AFC is about.'




To date, the C-RAM Intercept LPWS capability is credited with more than 375 successful intercepts of rockets and mortar rounds fired at high-value theater assets, with no fratricides or collateral damage. Credit: US Army.

Ironically, Murray admits that while there has been much discussion about future operational environments, they've probably got it wrong.
'That's one of the things that I always say when you talk about the future, first thing you have to do is admit you're probably going to be wrong.

'But … you just have to be more right than wrong. And as we think about this, we think about things like hyperactivity, things occurring at incredible speed. We think of things like sensor saturation.'

As well, the ability to hide is pretty much gone.

Mao Zedong once said, 'The guerrilla must move amongst the people as a fish swims in the sea.'

There may be some truth to that, but hiding from the array of sensors that are going to be on the future battlefield — from space, from the air and on the ground — is going to be almost impossible.

'Urbanization is going to drive us to more and more mega-cities worldwide,' Murray added.
'So the possibility that we will be fighting in urban areas is very, very high. We think about things like the amount of information that's going to be available to a commander.

'It's only going to exponentially increase over time in terms of how much information the commander is going to have to consider.'

There is no doubting that machines have faster reflexes, can keep track of several things at once and are not troubled by the fatigue or fear that can lead to poor decisions in combat.

So what is the concept for the United States Army as part of the joint force fighting alongside its closest allies and partners? What will that concept look like?
'If you think about a sensor-rich battlefield, and you think about the speed at which decisions are going to have to be made, you hear General McConville, who talks about five key words,' he said.

'Speed, range and convergence — and that's convergence of effects across all five war-fighting domains — and decision dominance. Those are the things that we need to achieve overmatch and maintain overmatch going into the future.
'And speed is really interesting from a war-fighting perspective, because this is what we get paid to do: how fast we can deliver lethal effects in not only a legal, but an ethical way.

'How can algorithms, artificial intelligence, machine learning and eventually quantum increase that level of speed?
'There will be mistakes made, absolutely. Just as in the chaos of war, there will always be mistakes made.'

While the Pentagon line parallels that of Gen. Murray, European regulators have taken a different route entirely.

Recently, the European Parliament resolved: 'The decision to select a target and take lethal action using an autonomous weapon system must always be made by a human exercising meaningful control and judgment, in line with the principles of proportionality and necessity.'

In other words, autonomous weapons making their own decisions should be outlawed.

And while debate continues to swirl around this topic, at this rate, large-scale AI-powered swarm weapons may be used in action before the debate is concluded.

The big question is which nations will have them first.

— with files from National Interest and Forbes Magazine


