Art thou human, or art thou machine?

As of this writing, Kuwait has shot down three U.S. F-15s in “friendly fire” incidents. Reports say all crew members are safe, whatever that means for airmen ejecting into an airstream at hundreds of miles per hour. Four U.S. servicemembers died in an Iranian missile attack on a tactical operations center in Kuwait. A “squirter” slipped through air defenses, says former Fox News weekend host Pete Hegseth.
The New Republic’s Siva Vaidhyanathan wonders just who is minding the war: humans or AI? Commenting on the reported attack on a girls’ school in southern Iran, Vaidhyanathan asks (without citing confirming facts) whether autonomous weapons are in play. The question arises in the wake of the Pentagon’s dispute with Anthropic over the use of its AI technology to operate fully autonomous killing machines. Yes, the kind featured in dystopian movies.
The Pentagon insists its contracts with AI firms sanction “any lawful use” of the technology. Anthropic balked. The Trump administration’s definition of “lawful” is as suspect as its adherence to any law with which it disagrees:
To understand what “any lawful use” means in practice, it helps to understand what it is designed to eliminate: The possibility that a private company could tell the United States military how its technology may or may not be used. In the Pentagon’s view, once a tool is purchased, the buyer sets the terms of its application. The vendor’s values, safety commitments, and ethical frameworks become, at the moment of transaction, irrelevant. The military has its own lawyers. It has its own review processes. It has its own standards. And given the degradation of legal safeguards and restrictions on the entire executive branch in the last year, almost any act of depravity or mass murder could be ruled “lawful” by a Pentagon that has purged itself of its most moral and ethical lawyers and leaders and a Supreme Court devoted to maximizing Trump’s autocracy.
The same logic—that internal military review is sufficient to govern the deployment of powerful technologies—underwrote the expansion of the NSA surveillance state revealed by Edward Snowden. It underwrote the algorithmic targeting programs in Yemen and Somalia, where AI-assisted kill lists generated strikes that killed the wrong people with a regularity that official reviews consistently declined to examine.
Just picture a self-driving taxi with a missile strapped to the hood.
In his February 26 statement, [Anthropic CEO Dario] Amodei said: “Today, frontier AI systems are simply not reliable enough to power fully autonomous weapons.” This technical conclusion is shared by a significant portion of the AI research community, grounded in the basic observation that large language models hallucinate, misidentify, and fail in ways that are not fully predictable. In a military context, an unpredictable failure is a dead civilian with no accountable author.
“And your point is?” asks Pete Hegseth. His Pentagon “wants an AI that will do what it is told without the inconvenience of a conscience embedded in its terms of service,” writes Vaidhyanathan.
So Hegseth declared Anthropic a “supply chain risk to national security” and immediately barred any U.S. military contractor, supplier, or partner from doing business with the company. Naturally, Sam Altman’s OpenAI, maker of ChatGPT, stepped forward and accepted the “any lawful use” language.
I’ve long argued that corporate “persons” possess appetite and instinct but no soul and no conscience. Anthropic, at least, retains some memory of them.