AI Must Not Ignore Human Rights
(MENAFN- Gulf Times) In July, the Trump administration held an event titled "Winning the AI Race," where it unveiled its AI Action Plan. The plan is meant to enhance American leadership in AI. But since neither the plan nor the administration's earlier AI announcements mentioned human rights, it's fair to question what it even means for the US to "win" the AI race.
Many in Washington and Silicon Valley simply assume that American technology is inherently – almost by definition – aligned with democratic values. As OpenAI CEO Sam Altman told Congress this past May, "We want to make sure democratic AI wins over authoritarian AI." This may be a good sentiment, but new technological systems don't protect human rights by default. Policymakers and companies must take proactive steps to ensure that AI deployment meets certain standards and conditions – as already happens in many other industries.
Recent reports from the UN Working Group on Business and Human Rights, the UN Human Rights Council, and the Freedom Online Coalition have called attention to the fact that governments and companies alike bear responsibility for assessing how AI systems will affect people's rights. Existing international frameworks require all businesses to respect human rights and avoid causing or contributing to human-rights abuses through their activities. But AI companies, for the most part, have failed to acknowledge and reaffirm those responsibilities.
These calls to action reaffirm obligations already shouldered by other industries. Most large companies know that they must conduct human-rights impact assessments before procuring or deploying new systems; integrate human-rights due diligence into product design and business decisions; include contractual safeguards to prevent misuse; and provide meaningful remedies when harms occur.
The challenge with AI is not that the standards are unclear. It is that too many companies – and governments – are acting as if the standards do not apply.
Unlike traditional goods or infrastructure, AI systems can be transferred digitally and deployed with little public scrutiny. Under a sovereign AI plan, whereby a government develops and controls the AI systems available to its people, such systems can rapidly be turned into instruments of state power. As US companies ramp up their international dealmaking – with strong backing from the Trump administration – the failure to include human-rights commitments marks a dangerous turn.
It is also a missed opportunity. Access to America's world-leading technologies could be used as leverage, both to advance human rights-respecting applications and to protect the technology from abuse. There is no need to start from scratch here. The UN Guiding Principles on Business and Human Rights – which have been endorsed by the US and many allies – require companies to avoid infringing on human rights and to address any harms they may cause.
The OECD Guidelines for Multinational Enterprises go even further, requiring due diligence across operations and supply chains. And the Global Network Initiative (GNI), launched by leading tech companies 17 years ago, has established principles for protecting users' privacy and free expression in high-risk markets, with member companies regularly assessed for compliance (our organisation was a founding partner).
Companies like Coca-Cola, Volkswagen, and Estée Lauder already apply these frameworks or submit to independent oversight, as do some tech firms. But the tech industry, for the most part, has failed to endorse these responsibilities explicitly for AI, even though the need is even greater there. As a first step, leading AI companies that are not yet part of GNI could benefit from its framework and network.
If AI companies end up in a race to the bottom, what hope will there be for basic human-rights protections? If winning the AI race means abandoning the values that set us apart from authoritarian rivals, it will be a Pyrrhic victory. Our future will not be secured just because the technology is American.
It is not too late for governments and companies to commit to applying long-established human-rights standards to AI. The tools are already available; there is no excuse not to use them. – Project Syndicate
- Alexandra Reeve Givens, former director of the National Artificial Intelligence Office, is Head of the Center for Democracy & Technology. Karen Kornbluh, former US ambassador to the Organisation for Economic Co-operation and Development, is Senior Fellow and Director of the Digital Innovation and Democracy Initiative at the German Marshall Fund of the US.