Campaign ads can already get a bit messy and controversial.
Now imagine you’re targeted with a campaign ad in which a candidate voices strong positions that sway your vote, and the ad isn’t even real. It’s a deepfake.
This isn’t some futuristic hypothetical; deepfakes are a real, pervasive problem. We’ve already seen AI-generated “endorsements” make headlines, and what we’ve heard so far only scratches the surface.
As we approach the 2024 U.S. presidential election, we’re entering uncharted territory in cybersecurity and information integrity. I’ve worked at the intersection of cybersecurity and AI since both were nascent fields, and I’ve never seen anything like what’s happening right now.
The rapid evolution of artificial intelligence, specifically generative AI and the resulting ease of creating realistic deepfakes, has transformed the landscape of election threats. This new reality demands a rethinking of our basic assumptions about election security and voter education.
Weaponized AI
You don’t have to take my personal experience as proof; there’s plenty of evidence that the cybersecurity challenges we face today are evolving at an unprecedented rate. In the span of just a few years, we have witnessed a dramatic transformation in the capabilities and methodologies of potential threat actors. This evolution mirrors the accelerated development we have seen in AI technologies, but with a concerning twist.
Case in point:
- Rapid weaponization of vulnerabilities. Today’s attackers can quickly exploit newly discovered vulnerabilities, often faster than patches can be developed and deployed. AI tools further accelerate this process, shrinking the window between vulnerability discovery and exploitation.
- Expanded attack surface. The widespread adoption of cloud technologies has significantly broadened the potential attack surface. Distributed infrastructure and the shared responsibility model between cloud providers and users create new vectors for exploitation if not properly managed.
- Outdated traditional security measures. Legacy security tools like firewalls and antivirus software are struggling to keep pace with these evolving threats, especially when it comes to detecting and mitigating AI-generated content.
Look Who’s Talking
In this new threat landscape, deepfakes represent a particularly insidious challenge to election integrity. Recent research from Ivanti puts numbers to the threat: more than half of office workers (54%) are unaware that advanced AI can impersonate anyone’s voice. This lack of awareness among potential voters is deeply concerning as we approach a critical election cycle.
The sophistication of today’s deepfake technology allows threat actors, both foreign and domestic, to create convincing fake audio, video and text content with minimal effort. A simple text prompt can now generate a deepfake that is increasingly difficult to distinguish from real content. This capability has serious implications for the spread of disinformation and the manipulation of public opinion.
Challenges in Attribution and Mitigation
Attribution is one of the most significant challenges we face with AI-generated election interference. While we have historically associated election interference with nation-state actors, the democratization of AI tools means that domestic groups, driven by various ideological motivations, can now leverage these technologies to influence elections.
This diffusion of potential threat actors complicates our ability to identify and mitigate sources of disinformation. It also underscores the need for a multi-faceted approach to election security that goes beyond traditional cybersecurity measures.
A Coordinated Effort to Uphold Election Integrity
Addressing the challenge of AI-powered deepfakes in elections will require a coordinated effort across multiple sectors. Here are the key areas where we need to focus our efforts:
- Shift-left security for AI systems. We need to apply the principles of “shift-left” security to the development of AI systems themselves. This means incorporating security considerations from the earliest stages of AI model development, including the potential for misuse in election interference.
- Enforcing secure configurations. AI systems and platforms that could potentially be used to generate deepfakes must ship with robust, secure configurations by default. This includes strong authentication measures and restrictions on the types of content that can be generated (see the configuration sketch after this list).
- Securing the AI supply chain. Just as we focus on securing the software supply chain, we need to extend this vigilance to the AI supply chain. This includes scrutinizing the datasets used to train AI models and the algorithms employed in generative AI systems.
- Enhanced detection capabilities. We need to invest in and develop advanced detection tools that can identify AI-generated content, particularly in the context of election-related information. This will likely involve leveraging AI itself to combat AI-generated disinformation (see the detection sketch after this list).
- Voter education and awareness. A vital component of our defense against deepfakes is an informed electorate. We need comprehensive education programs to help voters understand the existence and potential impact of AI-generated content, and to provide them with tools to critically evaluate the information they encounter.
- Cross-sector collaboration. The tech sector, particularly IT and cybersecurity firms, must work closely with government agencies, election officials and media organizations to create a united front against AI-driven election interference.
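To make the secure-configuration point concrete, here is a minimal sketch in Python of what secure-by-default settings for a generative AI service might look like. Every name here (the GenerationServiceConfig class, its flags and thresholds) is a hypothetical illustration, not any real vendor’s API: the point is simply that restrictive settings should be the default and weakening them should be explicit and auditable.

```python
from dataclasses import dataclass, field

# Hypothetical secure-by-default settings for a generative AI service.
# Every default errs on the side of restriction; operators must opt out
# explicitly rather than opt in to safety.
@dataclass(frozen=True)
class GenerationServiceConfig:
    require_mfa: bool = True                  # strong authentication on by default
    api_key_rotation_days: int = 90           # force periodic credential rotation
    allow_voice_cloning: bool = False         # impersonation features off by default
    watermark_outputs: bool = True            # tag generated media for later detection
    rate_limit_per_minute: int = 10           # throttle bulk content generation
    blocked_content_categories: frozenset = field(
        default_factory=lambda: frozenset(
            {"political_impersonation", "synthetic_endorsement"}
        )
    )

    def audit(self) -> list[str]:
        """Return a list of settings that weaken the secure baseline."""
        findings = []
        if not self.require_mfa:
            findings.append("MFA disabled")
        if self.allow_voice_cloning:
            findings.append("voice cloning enabled")
        if not self.watermark_outputs:
            findings.append("output watermarking disabled")
        return findings

# Usage: a default instance passes the audit; a weakened one does not.
print(GenerationServiceConfig().audit())                          # []
print(GenerationServiceConfig(allow_voice_cloning=True).audit())  # ['voice cloning enabled']
```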
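On the detection side, one commonly discussed heuristic is that machine-generated text tends to score unusually low perplexity under a similar language model. Below is a minimal sketch of that idea using the open GPT-2 model via Hugging Face transformers; the threshold is an illustrative assumption, this is nowhere near a production detector, and audio and video deepfake detection is considerably harder.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under GPT-2; lower can suggest machine-generated text."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy over tokens
    return float(torch.exp(loss))

def flag_if_suspicious(text: str, threshold: float = 40.0) -> bool:
    # The threshold is an illustrative assumption; real systems calibrate
    # against labeled corpora rather than picking a fixed cutoff.
    return perplexity(text) < threshold
```

In practice, serious detection systems combine many signals (provenance metadata, watermarks, forensic artifacts) rather than relying on any single score like this one.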
What’s Now, and What’s Next
As we implement these strategies, it’s crucial that we continuously measure their effectiveness. This will require new metrics and monitoring tools specifically designed to track the impact of AI-generated content on election discourse and voter behavior.
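As one example of what such measurement could look like, here is a minimal sketch of a rolling metric tracking the share of reviewed election-related items that a detector flags as AI-generated. The class name, window size and alert threshold are all illustrative assumptions standing in for whatever detector and data pipeline an organization actually runs.

```python
from collections import deque

class FlaggedContentRate:
    """Rolling share of reviewed items flagged as AI-generated (hypothetical metric)."""

    def __init__(self, window: int = 1000):
        self.results = deque(maxlen=window)  # 1 = flagged, 0 = clean

    def record(self, flagged: bool) -> None:
        self.results.append(1 if flagged else 0)

    @property
    def rate(self) -> float:
        return sum(self.results) / len(self.results) if self.results else 0.0

# Usage: feed detector verdicts in; alert when the rolling rate spikes.
metric = FlaggedContentRate(window=500)
for verdict in (True, False, False, True):
    metric.record(verdict)
if metric.rate > 0.25:  # alert threshold is an illustrative assumption
    print(f"Spike in suspected AI-generated content: {metric.rate:.0%}")
```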
We must also be prepared to adapt our strategies rapidly. The field of AI is evolving at a breakneck pace, and our defensive measures must evolve just as quickly. This may involve leveraging AI itself to create more robust and adaptable security measures.
The challenge of AI-powered deepfakes in elections represents a new chapter in cybersecurity and information integrity. To address it, we must think beyond traditional security paradigms and foster collaboration across sectors and disciplines. The goal: to harness the power of AI for the benefit of democratic processes while mitigating its potential for harm. This is not just a technical challenge but a societal one that will require ongoing vigilance, adaptation and cooperation.
The integrity of our elections, and by extension the health of our democracy, depends on our ability to meet this challenge head-on. It is a responsibility that falls on all of us: technologists, policymakers and citizens alike.