
Things to know about Pentagon’s Project Maven

AFP

WASHINGTON, USA—A Pentagon artificial intelligence (AI) program called Project Maven is at the center of the US strikes against Iran and potentially one of the most consequential transformations of modern warfare.

What is it?

Project Maven is the Pentagon’s flagship AI program, launched in 2017 as a narrow experiment to help military analysts make sense of the torrent of drone footage pouring in from conflict zones.

Operators were drowning in imagery, searching frame by frame for objects of interest that might appear for only a moment before vanishing. Maven was built to find the needle in the haystack.

Eight years later, the program has evolved into something far more expansive: an AI-assisted targeting and battlefield management system that has vastly accelerated what is known in war-making as the kill chain—the process from initial detection to destruction.

How does it work?

Maven functions like both the air traffic control of battle and its cockpit.

Aalok Mehta, director of the CSIS Wadhwani AI Center, described the system as “essentially an overlay” that fuses sensor data, enemy troop intelligence, satellite imagery and information on troop deployment.

In practice, that means rapidly scanning satellite feeds to detect troop movements or identify targets, while also “taking a snapshot of the operational theater” to determine the best course of action for striking a specific target.

A Pentagon official described how Maven “magically” turns an observed threat into a targeting workflow, weighing available assets and presenting a commander with options.

The emergence of ChatGPT was another leap forward, broadening the use of the technology to a far greater range of users who can interact with Maven in natural language.

For now, this capability is supplied by Anthropic’s Claude—though that arrangement is coming to a bitter end after the Pentagon bristled at the AI lab’s demand that its model not be used for fully automated strikes or the tracking of US citizens.

Why did Google say no?

The ethical question was a factor in Maven’s early years, when Google was the program’s original AI contractor.

In 2018, more than 3,000 employees signed an open letter protesting the company’s involvement.

© 2025 Inquirer Interactive, Inc.
All Rights Reserved.
