Firms like Flock and Axon sell suites of sensors (cameras, license plate readers, gunshot detectors, drones) and then offer AI tools to make sense of that ocean of information (at last year's conference I saw schmoozing between countless AI-for-police startups and the chiefs they sell to on the expo floor). Departments say these technologies save time, ease officer shortages, and help cut down on response times.
Those sound like worthy goals, but this pace of adoption raises an obvious question: Who makes the rules here? When does the use of AI cross over from efficiency into surveillance, and what kind of transparency is owed to the public?
In some cases, AI-powered police tech is already driving a wedge between departments and the communities they serve. When the police in Chula Vista, California, became the first in the country to get special waivers from the Federal Aviation Administration to fly their drones farther than normal, they said the drones would be deployed to solve crimes and get people help sooner in emergencies. They've had some successes.
But the department has also been sued by a local media outlet alleging that it has reneged on its promise to make drone footage public, and residents have said the drones buzzing overhead feel like an invasion of privacy. An investigation found that the drones were deployed more often in poor neighborhoods, and for minor issues like loud music.
Jay Stanley, a senior policy analyst at the ACLU, says there's no overarching federal law that governs how local police departments adopt technologies like the tracking software I wrote about. Departments often have the leeway to try these tools first and see how their communities react after the fact. (Veritone, which makes the tool I wrote about, said it couldn't name or connect me with departments using it, so the details of how it's being deployed by police are not yet clear.)