By Andrea Shalal, Jeffrey Dastin and Ryan Patrick Jones
WASHINGTON, Feb 27 (Reuters) – U.S. President Donald Trump said on Friday he was directing every federal agency to immediately cease work with artificial intelligence lab Anthropic, adding there would be a six-month phaseout for the Defense Department and other agencies that use the company’s products.
“I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology. We don’t need it, we don’t want it, and will not do business with them again!” Trump said in a post on Truth Social.
Trump’s directive came amid a weeks-long feud between the Pentagon and the San Francisco-based startup over how the military could use AI in war.
Spokespeople for Anthropic, which has a $200 million contract with the Pentagon, did not immediately respond to a request for comment.
Trump’s decision stopped short of threats issued by the Pentagon, including that it could invoke the Defense Production Act to require Anthropic’s compliance. The Pentagon had also said it considered making Anthropic a supply-chain risk, a designation that previously targeted businesses tied to foreign adversaries.
But Trump vowed further action if Anthropic did not cooperate with the phaseout, warning he would use “the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow.”
WEAPONS, SURVEILLANCE CONCERNS
The setback comes as Anthropic, a leader in AI, races to win a fierce competition to sell novel technology to businesses and government, particularly for national security, ahead of its widely expected initial public offering. The company has said it has not finalized an IPO decision.
At the same time, the battle over technological guardrails has raised concerns that the Department of Defense, when deploying AI for national-security missions, would be bound by U.S. law but little else, regardless of the safety or ethics terms embraced by the technology’s developers.
Anthropic had sought guarantees that its AI would not be used for fully autonomous weapons or for mass domestic surveillance – applications in which the Pentagon has said it had no interest.
Anthropic was the first frontier AI lab to put its models on classified networks via cloud provider Amazon.com and the first to build customized models for national security customers, the startup has said.
Its product Claude is in use across the intelligence community and armed services.
U.S. Senator Mark Warner, a Democrat and vice chairman of the Select Committee on Intelligence, criticized the action taken by Trump, a Republican.
“The president’s directive to halt the use of a leading American AI company across the federal government, combined with inflammatory rhetoric attacking that company, raises serious concerns about whether national security decisions are being driven by careful analysis or political considerations,” Warner said.
The conflict is the latest eruption in a saga that dates back at least to 2018. That year, employees at Alphabet’s Google protested the Pentagon’s use of the company’s AI to analyze drone footage, straining relations between Silicon Valley and Washington. A rapprochement ensued, with companies including Amazon and Microsoft jousting for defense business, and still more CEOs pledging cooperation last year with the Trump administration.
But theoretical “killer robots” have remained a concern among human-rights and technology activists. At the same time, Ukraine and Gaza have become theaters for increasingly automated systems on the battlefield.
(Reporting by Jeffrey Dastin in San Francisco, Ryan Patrick Jones in Toronto, Andrea Shalal in Washington and Ismail Shakil in Ottawa; Writing by Daphne Psaledakis; Editing by Caitlin Webber and Matthew Lewis)