The State of AI: How war could be changed forever


Helen Warrell, investigations reporter 

It’s July 2027, and China is on the brink of invading Taiwan. Autonomous drones with AI targeting capabilities are primed to overpower the island’s air defenses as a series of crippling AI-generated cyberattacks cut off energy supplies and key communications. Meanwhile, a vast disinformation campaign waged by an AI-powered pro-Chinese meme farm spreads across global social media, muting the outcry at Beijing’s act of aggression.

Scenarios like this one have brought a dystopian edge to the debate about the use of AI in warfare. Military commanders hope for a digitally enhanced force that is faster and more accurate than human-directed combat. But there are fears that as AI assumes an increasingly central role, these same commanders will lose control of a conflict that escalates too quickly and lacks ethical or legal oversight. Henry Kissinger, the former US secretary of state, spent his final years warning about the coming catastrophe of AI-driven warfare.

Grasping and mitigating these risks is the military priority (some would say the “Oppenheimer moment”) of our age. One emerging consensus in the West is that decisions around the deployment of nuclear weapons should not be outsourced to AI. UN secretary-general António Guterres has gone further, calling for an outright ban on fully autonomous lethal weapons systems. It is vital that regulation keep pace with evolving technology. But amid the sci-fi-fueled excitement, it is easy to lose track of what is actually possible. As researchers at Harvard’s Belfer Center point out, AI optimists often underestimate the challenges of fielding fully autonomous weapon systems. It is entirely possible that the capabilities of AI in combat are being overhyped.

Anthony King, director of the Strategy and Security Institute at the University of Exeter and a key proponent of this argument, suggests that rather than replacing humans, AI will be used to improve military insight. Even if the character of war is changing and remote technology is refining weapon systems, he insists, “the complete automation of war itself is just an illusion.”

Of the three current military use cases of AI, none involves full autonomy. It is being developed for planning and logistics; for cyber warfare (in sabotage, espionage, hacking, and information operations); and, most controversially, for weapons targeting, an application already in use on the battlefields of Ukraine and Gaza. Kyiv’s troops use AI software to direct drones able to evade Russian jammers as they close in on sensitive sites. The Israel Defense Forces have developed an AI-assisted decision support system known as Lavender, which has helped identify around 37,000 potential human targets in Gaza.


There is clearly a danger that the Lavender database replicates the biases of the data it is trained on. But military personnel carry biases too. One Israeli intelligence officer who used Lavender claimed to have more faith in the fairness of a “statistical mechanism” than in that of a grieving soldier.

Tech optimists designing AI weapons even deny that specific new controls are needed to govern their capabilities. Keith Dear, a former UK military officer who now runs the strategic forecasting company Cassi AI, says existing laws are more than sufficient: “You make sure there is nothing in the training data that might cause the system to go rogue … when you are confident, you deploy it, and you, the human commander, are responsible for anything they may do that goes wrong.”
