Letter to the editor: Iran war reveals the West’s coming AI dilemma
OPINION | The Washington Times
The conflict in Iran is not just a regional war, but the opening salvo of an algorithmic arms race. On the first day of the campaign, American forces struck 1,000 targets, relying on Palantir’s Maven AI and tools derived from Anthropic’s Claude to sift intelligence and identify targets at unprecedented speed.
You might expect Washington to guard this advantage fiercely. Instead, it’s moving to blacklist Anthropic from defense programs after the company refused to allow its systems to support autonomous weapons or mass surveillance.
The dispute reflects a deeper shift in military AI. The Pentagon wants the freedom to develop agentic systems capable of executing operations at machine speed, but Anthropic insists that humans must remain accountable.
As warfare moves toward autonomy, a question becomes unavoidable: If AI helps select a target, who bears responsibility when it’s wrong and lives are lost?
That question matters because the technological arms race is already underway. Russia is waging a persistent hybrid campaign against Europe using AI-powered attacks on critical energy infrastructure to automate reconnaissance and coordinate disruptions across networks.
China is moving too, adapting Western AI models, including Meta’s LLaMA, for military use.
But Western enthusiasm for AI masks a serious weakness. As British AI expert Sachin Dev Duggal has argued, today’s large language models are optimized for plausibility rather than truth. They can hallucinate sources, misinterpret data and produce confident but incorrect conclusions.
In a battlefield environment where decisions carry lethal consequences, that vulnerability is dangerous.
This is why researchers are exploring alternatives like neuro-symbolic AI. By combining neural pattern recognition with rule-based reasoning, these systems aim to produce results that are transparent and verifiable. Duggal’s SeKondBrain project illustrates this approach in the civilian world, building knowledge structures that anchor AI answers in evidence rather than probability.
Applied to defense, this could help track sabotage campaigns across infrastructure, cyber networks and logistics chains while explaining their reasoning to the commanders who ultimately authorize action.
Anthropic’s red lines may frustrate the Pentagon, but in an algorithmic arms race waged by rival powers, restraint may prove the West’s most valuable strategic advantage yet.
MAURIZIO GERI
Brussels, Belgium