
Just a few days ago we reported on an unprecedented decision by the U.S. administration that cut Anthropic off from government contracts in favor of OpenAI. However, it turns out that completely cutting the military off from advanced language models is not as simple as signing a decree. According to a Wall Street Journal report, just hours after the ban was imposed, American commanders were using Claude's model for critical operations in the Middle East.
The standoff between Anthropic and Washington has become one of the most fascinating conflicts at the intersection of technology and national security. On the one hand, Donald Trump publicly banned the use of the company's tools, calling its creators "left-wing lunatics". On the other hand, as the Wall Street Journal established, Anthropic's models played a key role in recent military operations in Iran conducted by U.S. Central Command (CENTCOM).
Claude on the virtual battlefield
What exactly does the military use an advanced chatbot for? According to the Journal's sources, Claude helped military analysts rapidly assess incoming intelligence, accurately identify targets, and run advanced simulations of conflict scenarios. A similar use of the technology was reported earlier, during the operation to capture Nicolás Maduro in Venezuela.
Why did the army use the forbidden tool? The answer lies in legal and technological loopholes. The Claude model turned out to be so deeply integrated with the Department of Defense's classified systems (the contract was valued at an estimated $200 million) that the administration had to set a six-month transition period for its complete withdrawal from use. In crisis situations, the military still reaches for whatever works best.
AI ethics versus Pentagon requirements
At the root of the entire dispute is not technology but ethics. Anthropic's CEO, Dario Amodei, set hard conditions for further cooperation with the Pentagon. The company demanded contract provisions that would categorically prohibit the use of Claude for mass surveillance of American citizens and for powering fully autonomous weapons (systems that decide to attack without human involvement). According to the company's leadership, the current generation of AI is not reliable enough to be entrusted with human life.
The Pentagon rejected these demands. The Department of Defense took the position that a private corporation will not dictate to the U.S. Armed Forces the rules for using software on the battlefield. The military even threatened to put Anthropic on the list of entities posing a ‘supply chain threat’ (a sanction usually applied to companies from hostile states), before ultimately withdrawing from the contract.
"We have never objected to circumstantial military operations or tried to limit the usage of our technology temporarily. We realize that decisions are made by the Department of defence and not by private companies," he wrote in Amodei's statement, recalling that his company had previously resigned from highly lucrative contracts in China in the name of US security.
Government takeover?
However, the most worrying aspect of this case for the entire technology industry is the ace up the sleeve that the American administration is holding. Defense Secretary Pete Hegseth and Donald Trump himself have suggested that the Defense Production Act could be applied to Anthropic.
It is a powerful legal tool that allows the President of the United States to deem a given product absolutely critical to national security and to force a private company to supply services to the government, under threat of serious civil and criminal consequences.
Applying this law to the creators of artificial intelligence would set an extremely dangerous precedent. It would mean that technology companies developing LLMs could at any time be legally forced to make their models available to the military, regardless of their own policies, moral objections, or concerns about the safety of the technology itself.
If the article "The official ban is not enough. U.S. Army continues to use 'forbidden' artificial intelligence from Anthropic" does not display correctly in your RSS reader, see it on iMagazine.












