Silicon Valley players are poised to profit. One of them is Palmer Luckey, the founder of the virtual-reality headset company Oculus, which he sold to Facebook for $2 billion. After Luckey’s highly public ousting from Meta, he founded Anduril, which focuses on drones, cruise missiles, and other AI-enhanced technologies for the US Department of Defense. The company is now valued at $14 billion. My colleague James O’Donnell interviewed Luckey about his latest pet project: headsets for the military.
Luckey is increasingly convinced that the military, not consumers, will see the value of mixed-reality hardware first: “You’re going to see an AR headset on every soldier, long before you see it on every civilian,” he says. In the consumer world, any headset company is competing with the ubiquity and ease of the smartphone, but he sees entirely different trade-offs in defense. Read the interview here.
Using AI for military purposes is controversial. Back in 2018, Google pulled out of the Pentagon’s Project Maven, an attempt to build image recognition systems to improve drone strikes, following staff walkouts over the ethics of the technology. (Google has since returned to offering services for the defense sector.) There has been a long-standing campaign to ban autonomous weapons, also known as “killer robots,” which powerful militaries such as the US have refused to agree to.
But the voices that boom even louder belong to an influential faction in Silicon Valley, such as Google’s former CEO Eric Schmidt, who has called for the military to adopt and invest more in AI to get an edge over adversaries. Militaries around the world have been very receptive to this message.
That’s good news for the tech sector. Military contracts are long and lucrative, for a start. Most recently, the Pentagon reportedly purchased services from Microsoft and OpenAI for search, natural-language processing, machine learning, and data processing. In the interview with James, Palmer Luckey says the military is a perfect testing ground for new technologies. Soldiers do as they’re told and aren’t as picky as consumers, he explains. They’re also less price-sensitive: militaries don’t mind paying a premium to get the latest version of a technology.
But there are serious dangers in adopting powerful technologies prematurely in such high-risk areas. Foundation models pose serious national security and privacy threats by, for example, leaking sensitive information, argue researchers at the AI Now Institute and Meredith Whittaker, president of the secure-messaging organization Signal, in a new paper. Whittaker, who was a core organizer of the Project Maven protests, has said that the push to militarize AI is really more about enriching tech companies than improving military operations.
Despite calls for stricter rules around transparency, we’re unlikely to see governments restrict their defense sectors in any meaningful way beyond voluntary ethical commitments. We’re in the age of AI experimentation, and militaries are playing with the highest stakes of all. And because of the military’s secretive nature, tech companies can experiment with the technology without the need for transparency or even much accountability. That suits Silicon Valley just fine.
Deeper Learning
How Wayve’s driverless cars will meet one of their biggest challenges yet