Phase two of military AI has arrived


As I also write in my story, this push raises alarms from some AI safety experts about whether large language models are fit to analyze subtle pieces of intelligence in situations with high geopolitical stakes. It also accelerates the US toward a world where AI is not just analyzing military data but suggesting actions, such as generating lists of targets. Proponents say this promises greater accuracy and fewer civilian deaths, but many human rights groups argue the opposite.

With that in mind, here are three open questions to keep your eye on as the US military, and others around the world, bring generative AI to more parts of the so-called "kill chain."

What are the limits of "human in the loop"?

Talk to as many defense-tech companies as I have and you'll hear one phrase repeated quite often: "human in the loop." It means that the AI is responsible for particular tasks, and humans are there to check its work. It's meant to be a safeguard against the most dismal scenarios (AI wrongfully ordering a deadly strike, for instance) but also against more trivial mishaps. Implicit in this idea is an admission that AI will make mistakes, and a promise that humans will catch them.

But the complexity of AI systems, which pull from thousands of pieces of data, makes that a herculean task for humans, says Heidy Khlaaf, who is chief AI scientist at the AI Now Institute, a research organization, and previously led safety audits for AI-powered systems.

"'Human in the loop' is not always a meaningful mitigation," she says. When an AI model relies on thousands of data points to draw conclusions, "it wouldn't really be possible for a human to sift through that amount of information to determine if the AI output was erroneous." As AI systems rely on more and more data, this problem scales up.

Is AI making it easier or harder to know what should be classified?

In the Cold War era of US military intelligence, information was captured through covert means, written up into reports by experts in Washington, and then stamped "Top Secret," with access restricted to those with the proper clearances. The age of big data, and now the advent of generative AI to analyze that data, is upending the old paradigm in a number of ways.

One specific problem is called classification by compilation. Imagine that hundreds of unclassified documents each contain a separate detail of a military system. Someone who managed to piece them together could reveal important information that on its own would be classified. For years, it was reasonable to assume that no human could connect the dots, but this is exactly the kind of thing that large language models excel at.

With the mountain of data growing each day, and AI constantly creating new analyses, "I don't think anyone's come up with great answers for what the appropriate classification of all these products should be," says Chris Mouton, a senior engineer at RAND, who recently tested how well suited generative AI is for intelligence and analysis. Underclassifying is a US security concern, but lawmakers have also criticized the Pentagon for overclassifying information.
