When the generative-AI boom first kicked off, one of the biggest concerns among pundits and experts was that hyperrealistic AI deepfakes could be used to influence elections. But recent research from the Alan Turing Institute in the UK shows that those fears may have been overblown. AI-generated falsehoods and deepfakes appear to have had no effect on election results in the UK, France, and the European Parliament, as well as other elections around the world so far this year.
Instead of using generative AI to interfere in elections, state actors such as Russia are relying on well-established techniques, such as social bots that flood comment sections, to sow division and create confusion, says Sam Stockwell, the researcher who conducted the study. Read more about it from me here.
But one of the most consequential elections of the year is still ahead of us. In just over a month, Americans will head to the polls to choose either Donald Trump or Kamala Harris as their next president. Are the Russians saving their GPUs for the US elections?
So far, that doesn’t seem to be the case, says Stockwell, who has been monitoring viral AI disinformation around the US elections too. Bad actors are “still relying on these well-established methods that have been used for years, if not decades, around things such as social bot accounts that try to create the impression that pro-Russian policies are gaining traction among the US public,” he says.
And when they do try to use generative-AI tools, the efforts don’t seem to pay off, he adds. For example, one disinformation campaign with strong ties to Russia, called Copy Cop, has been attempting to use chatbots to rewrite real news stories about Russia’s war in Ukraine to reflect pro-Russian narratives.
The issue? They’re forgetting to remove the prompts from the articles they publish.
In the short term, there are a few things the US can do to counter more immediate harms, says Stockwell. For example, some states, such as Arizona and Colorado, are already conducting red-teaming workshops with election polling officials and law enforcement to simulate worst-case scenarios involving AI threats on Election Day. There also needs to be heightened collaboration between social media platforms, their online safety teams, fact-checking organizations, disinformation researchers, and law enforcement to make sure that viral influence operations can be exposed, debunked, and taken down, he says.
But while state actors aren’t leaning on deepfakes, that hasn’t stopped the candidates themselves. Most recently, Donald Trump used AI-generated images implying that Taylor Swift had endorsed him. (Soon after, the pop star endorsed Harris instead.)