
A clash of ideals with a war machine. According to U.S. media, the U.S. Department of Defense is considering abandoning Anthropic's tools. The reason? The creators of the powerful Claude model have the nerve to impose hard ethical boundaries on the military regarding autonomous weapons and the surveillance of citizens.
According to information obtained by Axios (and widely commented on by, among others, Gizmodo), relations between the Pentagon and Anthropic have taken a serious hit. An anonymous DoD official reportedly said that of all the AI companies cooperating with the government, Anthropic is by far the most "ideological".
That is evidently a flaw in the eyes of the American army. Just last year, Anthropic boasted about winning a $200 million contract with the Pentagon, calling it a "new chapter in supporting US national security." But now it turns out the company is not going to give the military a blank check.
Drones Without Human Conscience
The bone of contention is two specific areas the company treats as absolute red lines: fully autonomous weapons (e.g. drone swarms) and mass domestic surveillance.
Dario Amodei, founder and CEO of Anthropic, laid out his concerns plainly on a recent New York Times podcast. He noted that the foundation of constitutional safeguards in military structures is that, at the end of the chain of command, there always stands a human being. A human who, in extreme circumstances, can (and should) refuse to carry out an illegal or criminal order.
"With fully autonomous weapons, we do not necessarily have these safeguards," Amodei warned. Removing the human element and handing the decision to kill over to algorithmic systems is simply a step that Anthropic refuses to lend its hand to.
Surveillance on steroids
The second problem concerns privacy. Amodei outlined a terrifying, though technically feasible, scenario. Cameras and microphones in public spaces are nothing new. The difference is that, until now, no government has had the physical capacity to analyze this gigantic stream of data on an ongoing basis.
With the aid of advanced AI models such as Claude, a system could transcribe street conversations in real time, combine them with facial recognition systems, and rapidly build profiles of citizens (e.g. flagging opposition members or protest participants).
Venezuelan Incident
The flashpoint that reportedly enraged the Pentagon was a set of questions Anthropic addressed to Palantir (a major technology integrator for the American army). According to Axios' sources, Anthropic asked whether its software had played any role in the US kinetic strike in Venezuela on 3 January, during which shots were fired and people were injured.
Although Anthropic denies interfering with "current operations" in this way, the mere fact of asking such questions was received by the military as a public insinuation of disapproval of military activities.
So why doesn't the Pentagon simply break off the deal and go to a competitor? The answer, which came from a military source, is brutally honest: because models from other companies are technologically "a step behind" Claude. The army thus faces a classic dilemma: work with the best tool, whose creators ask uncomfortable questions, or settle for worse software from companies whose morality ends where the government's money begins.
If the article "Pentagon furious at Anthropic. The creators of artificial intelligence refuse to blindly obey the army" does not display correctly in your RSS reader, see it on iMagazine.
