The Expanding Frontier of AI and the Data It Demands
Artificial intelligence is rapidly changing how we live, work and govern. In public health and public services, AI tools promise more efficiency and faster decision-making. But beneath the surface of this transformation is a growing imbalance: our ability to gather data has outpaced our ability to manage it responsibly.
This is more than a tech challenge; it’s a privacy crisis. From predictive policing software to surveillance tools and automatic license plate readers, data about individuals is being amassed, analyzed and acted upon at unprecedented speed. And yet, most residents do not know who owns their data, how it’s used or whether it’s being safeguarded.
I’ve seen this up close. As a former FBI Cyber Special Agent and now the CEO of a leading public safety tech company, I’ve worked in both the federal government and the private sector. One thing is clear: if we don’t fix the way we handle data privacy now, AI will only make existing problems worse. And one of the biggest problems? Walled gardens.
What Are Walled Gardens, and Why Are They Dangerous in Public Safety?
Walled gardens are closed systems where one company controls the access, flow and usage of data. They’re common in advertising and social media (think platforms like Facebook, Google and Amazon), but increasingly, they’re showing up in public safety too.
Public safety companies play a key role in modern policing infrastructure; however, the proprietary nature of some of these systems means they aren’t always designed to interact fluidly with tools from other vendors.
These walled gardens may offer powerful functionality like cloud-based bodycam footage or automated license plate readers, but they also create a monopoly over how data is stored, accessed and analyzed. Law enforcement agencies often find themselves locked into long-term contracts with proprietary systems that don’t talk to one another. The result? Fragmentation, siloed insights and an inability to respond effectively in the community when it matters most.
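To make the lock-in concrete, here is a minimal sketch of what “systems that don’t talk to one another” looks like in practice. The vendor formats below are hypothetical; the point is that every pairing of proprietary systems forces an agency to write and maintain translation code like this just to see its own data in one place.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class PlateRead:
    """One license plate detection, in a single agency-owned shape."""
    plate: str
    seen_at: datetime
    camera_id: str
    source: str

def from_vendor_a(record: dict) -> PlateRead:
    # Hypothetical Vendor A nests the plate under "vehicle" and uses epoch seconds.
    return PlateRead(
        plate=record["vehicle"]["plate"],
        seen_at=datetime.fromtimestamp(record["ts"], tz=timezone.utc),
        camera_id=record["cam"],
        source="vendor_a",
    )

def from_vendor_b(record: dict) -> PlateRead:
    # Hypothetical Vendor B flattens everything and uses ISO-8601 strings.
    return PlateRead(
        plate=record["plate_text"],
        seen_at=datetime.fromisoformat(record["captured_at"]),
        camera_id=record["camera"],
        source="vendor_b",
    )
```

With a shared, open schema, neither translation function would need to exist, and a new vendor could plug in without months of custom integration work.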
The Public Doesn’t Know, and That’s a Problem
Most people don’t realize just how much of their personal information is flowing into these systems. In many cities, your location, vehicle, online activity and even emotional state can be inferred and tracked through a patchwork of AI-driven tools. These tools may be marketed as crime-fighting upgrades, but in the absence of transparency and regulation, they can easily be misused.
And it’s not just that the data exists, but that it exists in walled ecosystems controlled by private corporations with minimal oversight. For example, license plate readers are now in hundreds of communities across the U.S., collecting data and feeding it into proprietary networks. Police departments often don’t even own the hardware; they rent it, meaning the data pipeline, analysis and alerts are dictated by a vendor rather than by public consensus.
Why This Should Raise Red Flags
AI needs data to operate. But when data is locked inside walled gardens, it can’t be cross-referenced, validated or challenged. That means decisions about who’s pulled over, where resources go or who’s flagged as a threat are being made on partial, sometimes inaccurate information.
The danger? Poor decisions, potential civil liberties violations and a growing gap between police departments and the communities they serve. Transparency erodes. Trust evaporates. And innovation is stifled, because new tools can’t enter the market unless they conform to the constraints of these walled systems.
Consider a scenario in which a license plate recognition system incorrectly flags a stolen vehicle based on outdated or shared data. Without the ability to verify that information across platforms or audit how the decision was made, officers may act on false positives. We’ve already seen incidents where flawed technology led to wrongful arrests or escalated confrontations. These outcomes aren’t hypothetical; they’re happening in communities across the country.
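Here is a hedged sketch of the kind of safeguard interoperability makes possible: before anyone acts on a hotlist hit, the hit is checked for staleness, corroborated against a second source, and the decision is logged so it can be audited or challenged later. The field names and the `second_source` interface are illustrative assumptions, not any vendor’s real API.

```python
from datetime import datetime, timedelta, timezone

# Stale hotlist entries are a known source of false positives, so cap their age.
MAX_HOTLIST_AGE = timedelta(hours=24)

def confirm_hit(hit: dict, second_source, audit_log: list) -> bool:
    """Allow action on a plate hit only if it is fresh and independently corroborated.

    Assumes hit["listed_at"] is a timezone-aware datetime and second_source is
    any object exposing a still_reported_stolen(plate) check (hypothetical).
    """
    age = datetime.now(timezone.utc) - hit["listed_at"]
    corroborated = second_source.still_reported_stolen(hit["plate"])
    allowed = age <= MAX_HOTLIST_AGE and corroborated

    # Record how the decision was reached so it can be audited or challenged later.
    audit_log.append({
        "plate": hit["plate"],
        "hotlist_age_hours": round(age.total_seconds() / 3600, 1),
        "corroborated": corroborated,
        "action_allowed": allowed,
        "checked_at": datetime.now(timezone.utc).isoformat(),
    })
    return allowed
```

None of this is possible when the hotlist, the corroborating source and the audit trail live in three sealed systems.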
What Law Enforcement Actually Needs
Instead of locking data away, we need open ecosystems that support secure, standardized and interoperable data sharing. That doesn’t mean sacrificing privacy. On the contrary, it’s the only way to ensure privacy protections are actually enforced.
Some platforms are working toward this. For example, FirstTwo offers real-time situational awareness tools that emphasize responsible integration of publicly available data. Others, like ForceMetrics, are focused on combining disparate datasets such as 911 calls, behavioral health records and prior incident history to give officers better context in the field. But crucially, these systems are built with public safety needs and community respect as a priority, not an afterthought.
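As a generic illustration of that kind of context-building (not a description of FirstTwo’s or ForceMetrics’ actual implementation), the sketch below joins active 911 calls with prior incident history by address, so a responder sees both in one view. All field names are assumed for the example.

```python
from collections import defaultdict

def build_context(calls_911: list[dict], incidents: list[dict]) -> dict:
    """Attach prior incident history to each active 911 call, keyed by address."""
    history = defaultdict(list)
    for incident in incidents:
        history[incident["address"]].append(incident["summary"])

    context = {}
    for call in calls_911:
        context[call["call_id"]] = {
            "address": call["address"],
            "nature": call["nature"],
            "prior_incidents": history.get(call["address"], []),
        }
    return context
```

The join itself is trivial; what makes it rare in practice is getting the two datasets out of their respective silos in compatible form.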
Building a Privacy-First Infrastructure
A privacy-first approach means more than redacting sensitive information. It means limiting access to data unless there’s a clear, lawful need. It means documenting how decisions are made and enabling third-party audits. It means partnering with community stakeholders and civil rights groups to shape policy and implementation. These steps strengthen both security and overall legitimacy.
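In code, “limiting access unless there’s a clear, lawful need” can be as simple as refusing any read that isn’t tied to a documented purpose and case number, and logging every attempt, including denials, for third-party auditors. A minimal sketch, with assumed purpose categories and field names:

```python
from datetime import datetime, timezone

# Illustrative allow-list; real categories would come from policy and statute.
ALLOWED_PURPOSES = {"active_investigation", "court_order", "public_records_request"}

def read_record(store: dict, record_id: str, requester: str,
                purpose: str, case_number: str, audit_log: list):
    """Release a record only for a documented, lawful purpose; log every attempt."""
    granted = purpose in ALLOWED_PURPOSES and bool(case_number)

    # Denials are logged too, so third-party auditors see the full picture.
    audit_log.append({
        "record_id": record_id,
        "requester": requester,
        "purpose": purpose,
        "case_number": case_number,
        "granted": granted,
        "at": datetime.now(timezone.utc).isoformat(),
    })

    if not granted:
        raise PermissionError(f"no documented lawful need for record {record_id}")
    return store[record_id]
```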
Despite the technological advances, we’re still operating in a legal vacuum. The U.S. lacks comprehensive federal data privacy laws, leaving agencies and vendors to make up the rules as they go. Europe has GDPR, which offers a roadmap for consent-based data usage and accountability. The U.S., by contrast, has a fragmented patchwork of state-level policies that don’t adequately address the complexities of AI in public systems.
That needs to change. We need clear, enforceable standards for how law enforcement and public safety organizations collect, store and share data. And we need to include community stakeholders in the conversation. Consent, transparency and accountability must be baked into every level of the system, from procurement to implementation to daily use.
The Bottom Line: Without Interoperability, Privacy Suffers
In public safety, lives are on the line. The idea that one vendor could control access to mission-critical data and restrict how and when it’s used isn’t just inefficient; it’s unethical.
We need to move beyond the myth that innovation and privacy are at odds. Responsible AI means more equitable, effective and accountable systems. It means rejecting vendor lock-in, prioritizing interoperability and demanding open standards. Because in a democracy, no single company should control the data that decides who gets help, who gets stopped or who gets left behind.