Tuesday 22 April 2025 09:07 GMT

Human Oversight Not Good Enough For AI War Machines


(Asia Times) As artificial intelligence (AI) becomes more powerful – even being used in warfare – there's an urgent need for governments, tech companies and international bodies to ensure it's safe. And a common thread in most agreements on AI safety is a need for human oversight of the technology.

In theory, humans can operate as safeguards against misuse and potential hallucinations (where AI generates incorrect information). This could involve, for example, a human reviewing content that the technology generates (its outputs).

However, as a growing body of research and several real-life examples of military use of AI demonstrate, there are inherent challenges to the idea of humans acting as an effective check on computer systems.

Many of the regulatory efforts on AI thus far already contain language promoting human oversight and involvement. For instance, the EU's AI Act stipulates that high-risk AI systems – for example, those already in use that automatically identify people using biometric technology such as a retina scanner – must be separately verified and confirmed by at least two people who possess the necessary competence, training and authority.

