Where OpenAI’s technology could show up in Iran

It’s unclear what OpenAI’s motivations are. It’s not the first tech giant to embrace military contracts it had once vowed never to enter into, but the speed of the pivot was notable. Perhaps it’s mostly about money; OpenAI is spending heavily on AI training and is on the hunt for more revenue (from sources including ads). Or perhaps Altman truly believes the ideological framing he often invokes: that liberal democracies (and their militaries) need access to the most powerful AI in order to compete with China.

The more consequential question is what happens next. OpenAI has decided it’s comfortable operating right in the messy heart of combat, just as the US escalates its strikes against Iran (with AI playing a bigger role in them than ever before). So where exactly could OpenAI’s tech show up in this fight? And which applications will its customers (and employees) tolerate?

Targets and strikes

Though its Pentagon agreement is in place, it’s unclear when OpenAI’s technology will be ready for classified environments, because it must be integrated with other tools the military uses (Elon Musk’s xAI, which recently struck its own deal with the Pentagon, is expected to go through the same process with its AI model Grok). But there’s pressure to do this quickly because of the controversy around the technology in use so far: After Anthropic refused to permit its AI to be used for “any lawful use,” President Trump ordered the military to stop using it, and Anthropic was designated a supply chain risk by the Pentagon. (Anthropic is fighting the designation in court.)

If the Iran conflict is still underway by the time OpenAI’s tech is in the system, what could it be used for? A recent conversation I had with a defense official suggests it might look something like this: A human analyst could feed a list of potential targets into the AI model and ask it to analyze the data and prioritize which to strike first. The model could account for logistics information, like where particular planes or supplies are located. It could analyze a large number of different inputs in the form of text, image, and video.

A human would then be responsible for manually checking these outputs, the official said. But that raises an obvious question: If a person is really double-checking the AI’s outputs, how is it speeding up targeting and strike decisions?

For years the military has been using another AI system, called Maven, which can handle tasks like automatically analyzing drone footage to identify possible targets. It’s likely that OpenAI’s models, like Anthropic’s Claude, will offer a conversational interface on top of that, allowing users to ask for interpretations of intelligence and suggestions for which targets to strike first.

It’s hard to overstate how new this is: AI has long done analysis for the military, drawing insights out of oceans of data. But using generative AI’s advice about which actions to take in the field is being tested in earnest for the first time in Iran.

Drone defense

At the end of 2024, OpenAI announced a partnership with Anduril, which makes both drones and counter-drone technologies for the military. The agreement said OpenAI would work with Anduril to do time-sensitive analysis of drones attacking US forces and help take them down. An OpenAI spokesperson told me at the time that this didn’t violate the company’s policies, which prohibited “systems designed to harm others,” because the technology was being used to target drones and not people.
