War at the speed of AI

Randy David

While United States President Donald Trump has been epically vague about the goals of his unauthorized war against Iran (The Atlantic says he has offered at least 10 rationales), the one thing he has stressed is America’s unmatched military power in lethality, speed, and precision. His Defense Secretary Pete Hegseth boasts that Operation Epic Fury is the “most precise aerial operation in history.”

Day one of the war began with surgical strikes aimed at decapitating Iran’s leadership. Before this could be confirmed, Trump went on Truth Social to announce the killing of supreme leader Ayatollah Ali Khamenei and other key figures.

Largely unmentioned was the bombing that same day of a girls’ elementary school that left about 175 dead, most of them students and teachers. Initial reports said the school stood near a naval facility of the Islamic Revolutionary Guards Corps (IRGC) in the southern town of Minab near the Strait of Hormuz.

A reconstruction by The New York Times later suggested that the school occupied a compound once used by the IRGC but partitioned for civilian use about 15 years ago. The Pentagon declined to comment, saying the matter remains under investigation.

Part of that inquiry will likely examine whether the intelligence and targeting systems used in the strikes relied on outdated information. As the Times noted in its March 5 report, “one question is likely to be whether the school strike was a mistake or whether it was targeted based on outdated information.”

Attention has therefore turned to the role of artificial intelligence (AI) in modern warfare. Some reports say the AI tool Claude, developed by the tech company Anthropic, was embedded in systems used by Palantir, a contractor supplying data analytics to the US government. Its more famous rival is ChatGPT, developed by OpenAI.

Wall Street Journal reporter Marcus Weisgerber wrote on Feb. 28: “Within hours of declaring that the federal government will end its use of artificial intelligence tools made by tech company Anthropic, President Trump launched a major attack in Iran with the help of those very same tools.”

Claude was reportedly used for intelligence assessment, target analysis, and battle simulations during the strikes. Hours before the operation, however, the Pentagon declared Anthropic a “supply chain risk to national security” because of restrictions in the company’s terms of use.

Anthropic’s conditions prohibit mass surveillance of American citizens and the use of its AI models in fully autonomous weapons with no human oversight. While the Biden administration had accepted these terms, Trump officials argued that private firms should not impose limits on how the United States military uses their technology. Anthropic refused to lift the restrictions.

Trump ordered federal agencies to stop using Anthropic tools and barred Pentagon contractors from working with the company. The Pentagon reportedly shifted to systems linked to OpenAI’s ChatGPT and Elon Musk’s AI model Grok, after supposedly finding these to be less restrictive in their terms of use. But because these tools are embedded in larger software platforms, such transitions cannot happen overnight.

Anthropic maintains that Claude was never used to control weapons systems or make lethal decisions without human oversight. Still, the broader picture is troubling. A Fortune magazine headline on March 3 captured this concern thus: “Trump’s strike on Iran and the new breed of AI wars mean bombs can drop faster than the speed of thought.”

AI can synthesize vast streams of intelligence—from satellite imagery to intercepted communications—compressing the military “kill chain,” from identifying a target to destroying it. Experts say strikes on the scale seen in Iran would have been difficult without such tools.

Yet the same speed that enables these operations also magnifies their risks. If the Minab school bombing proves to have been based on faulty or outdated data, it will be a grim reminder of how lethal such errors can be.

But there’s a deeper danger. It is not simply that machines can simulate human reasoning; it is that we may find it too easy to outsource judgment to systems whose fluency and analytical power impress us.

Sociologist Niklas Luhmann observed that modern technology compresses the time between decision and action. Artificial intelligence appears to be doing exactly that in warfare. Responsibility remains human, but the judgment that guides it may no longer be fully our own.

—————-

public.lives@gmail.com

© 2025 Inquirer Interactive, Inc.
All Rights Reserved.
