
Human Oversight Not Good Enough For AI War Machines
In theory, humans can act as safeguards against misuse and hallucinations (instances in which AI generates incorrect information). This could involve, for example, a human reviewing the content the technology generates (its outputs).
However, as a growing body of research and several real-life examples of military use of AI demonstrate, there are inherent challenges to the idea of humans acting as an effective check on computer systems.
Many of the efforts thus far to create regulations for AI already contain language promoting human oversight and involvement. For instance, the EU's AI Act stipulates that high-risk AI systems – for example, those already in use that automatically identify people using biometric technology such as retina scanning – need to be separately verified and confirmed by at least two humans who possess the necessary competence, training and authority.
