The official described this as an example of how things might work but wouldn't confirm or deny whether it reflects how AI systems are currently being used.
Other outlets have reported that Anthropic's Claude has been integrated into existing military AI systems and used in operations in Iran and Venezuela, but the official's comments add insight into the precise role chatbots may play, particularly in accelerating the search for targets. They also clarify how the military is deploying two different AI technologies, each with distinct limitations.
Since at least 2017, the US military has been working on a "big data" initiative called Maven. It uses older forms of AI, particularly computer vision, to analyze the oceans of data and imagery collected by the Pentagon. Maven might take hundreds of hours of aerial drone footage, for instance, and algorithmically identify targets. A 2024 report from Georgetown University showed soldiers using the system to select and vet targets, which sped up the process of getting those targets approved. Soldiers interacted with Maven through an interface with a battlefield map and dashboard, which could highlight potential targets in one color and friendly forces in another.
The official's comments suggest that generative AI is now being added as a conversational chatbot layer—one the military may use to find and analyze data more quickly as it makes decisions like which targets to prioritize.
Generative AI systems, like those that underpin ChatGPT, Claude, and Grok, are a fundamentally different technology from the AI that has primarily powered Maven. Built on large language models, they are much less battle-tested. And while Maven's interface forced users to directly inspect and interpret data on the map, the outputs produced by generative AI models are easier to access but harder to verify.
Using generative AI for such decisions is reducing the time required in the targeting process, the official added, but he didn't provide details when asked how much additional speed is possible if humans are required to spend time double-checking a model's outputs.
The military's use of AI systems has come under increased public scrutiny following the recent strike on a girls' school in Iran in which more than 100 children died. Multiple news outlets have reported that the strike was from a US missile, though the Pentagon has said it is still under investigation. And while it has been reported that Claude and Maven were involved in targeting decisions in Iran, there is no evidence yet to explain what role generative AI systems played, if any. A report on Wednesday said a preliminary investigation found outdated targeting data to be partly responsible for the strike.
