Tuesday, 02 January 2024 12:17 GMT

Can Anthropic Be Blacklisted By Trump Admin? Defense Production Act Threat Looms Over AI Firm - What's The Law?


(MENAFN- Live Mint) A standoff has escalated between the US Defense Department and AI company Anthropic over demands to allow broader military use of its technology.

Defense Secretary Pete Hegseth reportedly gave Anthropic until Friday (February 27) to agree to unrestricted military use of its AI systems or risk losing its government contract.

Defense officials warned they could terminate the partnership and designate the company as a “supply chain risk,” a move that could affect its broader business relationships. They also floated the possibility of invoking a Cold War-era law to expand government authority over the technology.

Anthropic CEO Dario Amodei responded that the company “cannot in good conscience accede” to the Pentagon's demand for unrestricted use, arguing that the proposed contract language would undermine safety safeguards.

What is the Defense Production Act?

The Defense Production Act (DPA), signed in 1950 during the Korean War, grants the federal government broad powers to prioritize and direct private industry production in the interest of national defense.

The law allows the president to:

- Require companies to prioritize government contracts deemed essential to national defense.

- Provide loans, incentives, or other financial tools to expand production of critical goods.

- Enter into voluntary agreements with private industry to meet emergency needs.

Over time, the DPA has been used not only during wars but also in national emergencies, including pandemics, natural disasters, and supply shortages.

What it can - and can't - do to Anthropic

If invoked, the DPA could potentially:

- Require the company to prioritize Pentagon contracts.

- Compel cooperation in adapting its AI systems for defense needs.

- Increase federal leverage in shaping production priorities.

However, the law does not automatically override all corporate governance or eliminate legal limits. It is typically used to prioritize production, not to permanently seize intellectual property or force unrestricted commercial deployment without legal processes.

Will Anthropic be blacklisted?

One of the most consequential threats raised was the possibility of labeling the company a “supply chain risk.”

Such a designation is commonly used for foreign adversaries or security-sensitive entities. If applied, it could:

- Complicate or block partnerships with other technology firms.

- Limit integration into government systems.

- Create reputational and compliance risks in private-sector deals.

Ethical concerns over AI use

Anthropic said it sought assurances that its chatbot Claude would not be used for mass surveillance or fully autonomous weapons.

In a statement, the company argued the new contract language was “framed as compromise [but] paired with legalese that would allow those safeguards to be disregarded at will.”

Pentagon spokesperson Sean Parnell said the military seeks to use Anthropic's Claude AI technology only for lawful purposes, stating it has “no interest in using AI to conduct mass surveillance of Americans” or to develop fully autonomous weapons.

Defense official Emil Michael criticized Amodei publicly, accusing him of trying to control military decision-making.




Live Mint

