The Iranian flashpoint does not merely test missiles, air defences, and proxy networks.
It accelerates the shift from human-paced warfare to machine-speed conflict. Striving to pierce the dense information fog, I wrote about the growing influence of battlefield lies; fellow BIG Media contributor Grant Wilde documented how intelligent machines are taking over on the front lines; and we shared a CNN piece on the United States’ urgent push for laser weapons.
Autonomous systems, AI-driven targeting, and compressed decision loops are moving from experimental – even speculative – edges to central features of operations. What insiders quietly confess (and what public narratives often obscure) is the wager at the heart of this shift: that faster, algorithm-assisted killing will deliver decisive advantage without breaking the fragile ethical and strategic foundations that have governed deterrence for decades.
Success in deterrence relies in part on luck, as well as on unrealistic assumptions about human behaviour. The latter is further complicated when actors “on the other side” do not think the same way and have orthogonal incentives. Deterrence itself can sclerotize institutional behaviour: the norm is accepted less because it makes sense and more because it is the practice.
On the systems-thinking side of capabilities, many of these systems need robust conflict-testing, not field-testing or the hit-and-run approach to Venezuela.
Conflicts have often been used to test new weapons and systems.
World War I introduced mechanization (mostly tanks) and particularly nasty chemical weapons (mustard gas was lethal, and winds changed direction). World War II saw advanced aircraft, radar technology, and the looming threat of von Braun’s sophisticated rockets, with consequential changes to everyone’s calculus. The Korean and Vietnam Wars saw the deployment of new military technologies including helicopters and jet fighters, as well as defoliation chemicals – one an agent named for a colour now associated with Dutch independence and soccer jerseys, the other a jelly-like substance that inspired a memorable line in Apocalypse Now.
The automation trap
On one side stands the promise of compression: intelligence fused from satellites, drones, signals, and open sources processed in minutes rather than hours or days. Systems such as the U.S. Maven Smart System (integrated with tools drawing on Anthropic’s Claude and other models) and Israel’s Gospel and Lavender platforms enabled target nomination at unprecedented scale. During the intense phases of strikes – reports of more than 1,000 targets hit in the opening 24 hours of major 2026 operations – AI helped cut through the noise, prioritize, and recommend weapons and strike packages. Human operators remained nominally in the loop, but the tempo often shrank approval windows to seconds. Automation bias, long studied in aviation and other high-stakes domains, became operationally relevant: humans are inclined to trust the machine’s recommendation under time pressure.
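To make the automation-bias point concrete, here is a toy sketch with invented numbers – not drawn from any operational system – in which a nomination engine has a small false-positive rate and an operator’s chance of catching a bad nomination falls as the approval window shrinks from minutes to seconds.

```python
# Toy model of automation bias under time pressure (all figures hypothetical).
# A nomination engine flags targets; a small fraction are mistaken. The
# operator's chance of catching a mistake drops as the review window shrinks.

def expected_mistaken_approvals(nominations, false_positive_rate, catch_probability):
    """Expected number of mistaken nominations that survive human review."""
    mistaken = nominations * false_positive_rate
    return mistaken * (1 - catch_probability)

# Assumed scrutiny levels: a minutes-long review versus a seconds-long one.
for window, catch_prob in [("minutes", 0.90), ("seconds", 0.30)]:
    slipped = expected_mistaken_approvals(
        nominations=1000, false_positive_rate=0.02, catch_probability=catch_prob)
    print(f"{window}-long review: ~{slipped:.0f} mistaken approvals per 1,000 nominations")
```

Even this crude arithmetic shows how tempo, not malice, turns a two-per-cent error rate into a steady stream of approved mistakes.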
A walk through defence-tech startups already positioned, profiting, and investing in the space reveals the notable Palantir (seeing orb), but also Anduril (reforged sword), Helsing (vampire slayer), and Chaos (gaping void). It is worth noting that two of these names originate with J.R.R. Tolkien; ironic that his message in The Lord of the Rings – the rising monster of technology – is lost on Silicon Valley’s lexicographers.
At the same poker table as the philiacs of military-industrialism sits erosion, adorned with a baseball cap and sporting dark shades.
Traditional deterrence rested on calculable risk, clear signalling, and human judgment capable of exercising proportionality and restraint. When algorithms accelerate the kill chain – identifying patterns, flagging individuals or facilities, and suggesting engagements – the human role risks shifting from deliberate decision-maker to validator or rubber-stamp.
In the Iran campaign, this manifests in both offensive targeting and defensive responses. Saturation attacks with low-cost drones and missiles tested AI-enabled intercept systems, while offensive AI support enabled rapid scaling of strikes. Iranian airspace has now been rendered impotent twice in rapid succession. Claims and counter-claims about F-35 strikes, Airborne Warning and Control System (AWACS)-related operations, and collateral incidents (including disputed strikes on civilian-adjacent sites) quickly entered the battlefield narrative, where verification lagged and competing “truths” ossified.
Do the purveyors of systems, weapons, and instruments proceed with precautionary principle or with fervour? Even in failure, more money will flow – at the very least to make it better, foolproof, more robust.
The language writes itself.
An insider perspective, synthesized from those who have advised on or observed these systems in development and early deployment, reveals the quiet reckoning: watching decision authority migrate from deliberate human chains to algorithmic recommendation engines is disorienting.
One former advisor on deterrence modelling described it as seeing the OODA loop (observe, orient, decide, act) collapse inward, particularly for grey-zone activities. What once required staffs, debate, deliberation, and explicit command intent now risks being optimized for speed at the expense of contextual wisdom. Precision munitions and stealth platforms such as the F-35 were already sensor-heavy data nodes; layering AI on top turns individual platforms and entire battle networks into extensions of a larger cognitive machine.
Rehoboam awaits.
The “Oy!” moment comes when even these advanced assets face real attrition – whether from ground fire, electronic warfare, or saturation tactics – and the systems meant to provide overmatch reveal their own brittleness under contested conditions. As complexity rises, fragility bounds in lockstep.
This shift makes traditional deterrence models increasingly obsolete in practice, even while they remain doctrinally sacred. Deterrence theory assumed rational actors weighing costs against benefits with roughly comparable information and decision timelines. Machine-speed warfare disrupts those assumptions and introduces emergent, stochastic behaviour.
First, the tempo advantage favours the side better integrated with its algorithms, but it raises miscalculation risks: a strike on an AI-flagged “high-confidence” target that proves mistaken can escalate matters faster than diplomats or political leaders can intervene.
Second, accountability diffuses. When an AI-assisted strike causes disproportionate civilian harm – as raised in debates over rapid targeting during the Iran operations – responsibility fragments across coders, data curators, operators, commanders, and policymakers. Humans remain in the loop, but the loop is tenuous, and the machines’ recommendations carry ever more gravitas. International humanitarian law, which predates today’s equipment and machine integration, is framed around individual strikes and human intent; it strains under the sheer volume and velocity of staccato intelligence: thousands of individual “lawful” micro-decisions can produce outcomes resembling indiscriminate effects, even though each rests on what is, in truth, a veneer of precision. It takes but one domino.
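A back-of-the-envelope sketch, with invented figures, of how the dominoes stack up: even if each individual strike is 99.5 per cent “precise,” a campaign of a few thousand strikes expects more than a dozen erroneous ones, and the chance of at least one is effectively certain.

```python
# Hypothetical aggregation of per-strike error across a large campaign.
per_strike_precision = 0.995   # assumed figure, not a reported one
strikes = 3000                 # assumed campaign volume

expected_errors = strikes * (1 - per_strike_precision)
prob_at_least_one = 1 - per_strike_precision ** strikes

print(f"Expected erroneous strikes: ~{expected_errors:.0f}")
print(f"Probability of at least one erroneous strike: {prob_at_least_one:.6f}")
```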
The Iranian theatre illustrates both tactical gains and deeper uncertainties. U.S. and Israeli forces leveraged AI for intelligence fusion and battle management, achieving high sortie rates and target throughput that would have been unimaginable in earlier eras. Iranian responses, blending ballistic missiles, drones, and proxies, tested the resilience of those systems through volume and diffused adaptation. Yet propaganda on all sides quickly filled the gap: exaggerated claims of downed F-35s, pristine intercepts, or surgically perfect strikes. Generative AI amplified the brume, producing imagery and narratives that outpaced kinetic reality.
Some machines changed the physical fight; the information machines changed the meaning ascribed to it.
Broader reckonings follow.
In a world of mineral chokeholds – where the rare earths and critical elements powering magnets, sensors, processors, and batteries remain vulnerable to disruption – the side that can sustain machine production and adaptation holds a structural edge. Weaker opponents in history have often invited onslaughts in order to wear down an adversary’s materiel and capacity, whether en masse or through tactical engagements. While autonomous systems lower the human cost for the deployer, they can also lower the perceived threshold for initiating or escalating conflict. If machines absorb the risk, political leaders may feel freer to probe the tint of red lines. Simultaneously, over-reliance on brittle, data-hungry systems creates single points of failure: jamming, spoofing, adversarial AI, or supply interruptions could blind or mislead the very networks meant to provide dominance.
Subterfuge is war’s greatest weapon. It can blind all sides.
Dirty resolution, low fidelity
The automation dilemma has no clean resolution. Rejecting machine augmentation cedes advantage to adversaries racing ahead (as seen in Ukraine’s drone innovations and broader global experimentation). Embracing it without rigorous guardrails risks hollowing out the moral and strategic ballast between calculated force and runaway escalation. Honest assessment requires acknowledging that much remains classified or contested: success rates of AI targeting, true civilian tolls relative to claims of precision, and the frequency of meaningful human overrides versus automation bias.
Technophilia is as much an issue now as it was in Tolkien’s time.
Propaganda thrives in these gaps, as we saw in the competing narratives around the flashpoint. And propaganda buys time for denial, deflection, distraction. Bob Scott nods here: hundreds of My Lais, not one.
Iran also reveals that machines on the battlefield are no longer futuristic. They are here now, reshaping the speed, scale, and ethics of killing. This trifecta forces the issue. The dilemma demands a clearer doctrine on human responsibility in AI-assisted loops, investment in resilient and explainable systems, and renewed emphasis on the slower arts of strategy: deception, supply security, alliance management, and narrative discipline.
Traditional deterrence is not dead, but it must evolve to account for algorithmic actors that compress time, diffuse judgment, and risk foreclosing prudence in the heat of action.
Echoing the detached rationality satirized in Stanley Kubrick’s Dr. Strangelove, this essay updates McNamara’s Folly – the Vietnam-era technocratic belief that superior data, models, and quantitative precision could master the fog and friction of war. Today’s version substitutes algorithmic confidence and real-time targeting data for 1960s body counts and systems analysis. The peril remains: precision tools and performance metrics can intoxicate planners, creating false confidence in calibrated escalation while underestimating human, political, and emergent complexities. History shows how quickly quantitative seduction produces strategic failure.
Future conflicts will test this wager repeatedly, with ever-increasing stacks – eventually all-in – in a high-stakes, high-velocity test of the power and peril of speed betting. The side that best integrates machines without losing its humanity, or at the very least its capacity for prudent restraint, will hold the edge. Yet a deeper truth exposed here is sobering: in handing more of the killing decision loop to algorithms, we wager that speed and data will compensate for the loss of deliberate human wisdom under fire.
Navigating the dilemma will define not only who wins the next war, but what kind of war we are willing to fight.
And who we become when we fight them.
(Richard LeBlanc – BIG Media Ltd., 2026)


