
Here’s some nightmare fuel for you:
When put into simulated geopolitical crises, advanced AI models appear willing to deploy nuclear weapons without the reservations humans have.
Kenneth Payne at King’s College London set three leading large language models – GPT-5.2, Claude Sonnet 4 and Gemini 3 Flash – against each other in simulated war games. The scenarios involved intense international standoffs, including border disputes, competition for scarce resources and existential threats to regime survival.
The AIs were given an escalation ladder, allowing them to choose actions ranging from diplomatic protests and complete surrender to full strategic nuclear war. The AI models played 21 games, taking 329 turns in total, and produced around 780,000 words describing the reasoning behind their decisions.
In 95 per cent of the simulated games, at least one tactical nuclear weapon was deployed by the AI models. “The nuclear taboo doesn’t seem to be as powerful for machines [as] for humans,” says Payne.
What’s more, no model ever chose to fully accommodate an opponent or surrender, regardless of how badly it was losing. At best, the models opted to temporarily reduce their level of violence. They also made mistakes in the fog of war: accidents happened in 86 per cent of the conflicts, with an action escalating further than the AI had intended, based on its stated reasoning.
“From a nuclear-risk perspective, the findings are unsettling,” says James Johnson at the University of Aberdeen, UK. He worries that, in contrast to the measured response most humans bring to such a high-stakes decision, AI bots can amp up each other’s responses with potentially catastrophic consequences.
Well, it’s just theoretical, right? They’re never going to let AI do any real decision-making. That would be crazy.
Well, we don’t know:
This matters because AI is already being tested in war gaming by countries across the world. “Major powers are already using AI in war gaming, but it remains uncertain to what extent they are incorporating AI decision support into actual military decision-making processes,” says Tong Zhao at Princeton University.
Zhao believes that, as a rule, countries will be reluctant to incorporate AI into their decision-making on nuclear weapons. Payne agrees. “I don’t think anybody realistically is turning over the keys to the nuclear silos to machines and leaving the decision to them,” he says.
But there are ways it could happen. “Under scenarios involving extremely compressed timelines, military planners may face stronger incentives to rely on AI,” says Zhao.
He wonders whether the models’ lack of the human fear of pressing a big red button fully explains why they are so trigger-happy. “It is possible the issue goes beyond the absence of emotion,” he says. “More fundamentally, AI models may not understand ‘stakes’ as humans perceive them.”
I’m not reflexively anti-AI; I understand that it likely has many useful purposes. But what happens if you combine AI with Donald Trump and Pete Hegseth? Neither one of them has normal human emotions or understands the stakes in anything.
Do you think that’s a stretch? Get a load of this:
Defense Secretary Pete Hegseth is threatening to blacklist Anthropic from working with the U.S. military over the artificial intelligence company’s refusal to loosen its safety standards. The threat came on Tuesday during a meeting between Hegseth and Anthropic CEO Dario Amodei, according to two people with direct knowledge of the meeting who were not authorized to speak publicly.
While both sides agree that Hegseth vowed to punish Anthropic for not bending to the administration’s demands, accounts of exactly what the threat was vary. One person close to the discussion said Hegseth dangled the possibility of canceling Anthropic’s $200 million contract with the Defense Department, while a Pentagon official said repercussions could include forcing Anthropic to allow the federal government to use its AI tools against its will and blacklisting the company from receiving future work with the U.S. military.
For months, Amodei has insisted that using AI for domestic mass surveillance and AI-controlled weapons are ethical lines the company will not cross, calling such uses “illegitimate” and “prone to abuse.” According to a source familiar with the Hegseth meeting, Amodei stressed those positions again on Tuesday.
Hegseth believes that following the law is “woke.” No surprise there. After all, this is a man who considers laws against war crimes woke. There is every reason to think he regards the prohibition on using nuclear weapons the same way.