Deepfakes and Navigating the New Era of Synthetic Media


Remember “fake news”? The term has been used (and abused) so extensively at this point that it might be hard to recall what it originally referred to. But the concept has a very specific origin. Ten years ago, journalists began sounding the alarm about an influx of purported “news” sites pushing false, often outlandish claims about politicians and celebrities. Many readers could immediately tell these sites were illegitimate.

But many more lacked the critical tools to recognize this. The result was the first stirrings of an epistemological crisis that is now engulfing the web, one that has reached its most frightening manifestation with the rise of deepfakes.

Next to even a passable deepfake, the “fake news” websites of yore seem tame. Worse yet, even people who consider themselves to have relatively high levels of media literacy are prone to being fooled. Synthetic media created with deep learning algorithms and generative AI has the potential to wreak havoc on the foundations of our society. According to Deloitte, this year alone it could cost businesses more than $250 million through phony transactions and other forms of fraud. Meanwhile, the World Economic Forum has called deepfakes “one of the most worrying uses of AI,” pointing to the potential of “agenda-driven, real-time AI chatbots and avatars” to facilitate new strains of ultra-personalized (and ultra-effective) manipulation.

The WEF’s suggested response to this problem is a sensible one: it advocates a “zero-trust mindset,” one that brings a degree of skepticism to every encounter with digital media. If we want to distinguish between the authentic and the synthetic going forward, especially in immersive online environments, such a mindset will be increasingly essential.

Two approaches to combating the deepfake crisis

Combating the rampant disinformation bred by synthetic media will require, in my view, two distinct approaches.

The first involves verification: providing a straightforward way for everyday web users to determine whether the video they’re watching is authentic. Such tools are already widespread in industries like insurance, given the potential for bad actors to file false claims abetted by doctored videos, photographs, and documents. Democratizing these tools, making them free and easy to access, is a vital first step in this fight, and we’re already seeing significant movement on this front.
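To make the idea of verification concrete, here is a minimal sketch, in Python, of the simplest possible check: comparing a downloaded clip against a hash the publisher has posted in a manifest. The file names and the JSON manifest layout are assumptions made purely for illustration; this is not a description of any particular verification product.

    import hashlib
    import json

    def sha256_of_file(path: str) -> str:
        """Compute the SHA-256 digest of a file, reading it in chunks."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def verify_against_manifest(media_path: str, manifest_path: str) -> bool:
        """Check a media file against a publisher-provided manifest.

        The manifest is assumed (for this sketch) to be a JSON object
        mapping file names to expected SHA-256 hex digests.
        """
        with open(manifest_path) as f:
            manifest = json.load(f)
        expected = manifest.get(media_path)
        return expected is not None and expected == sha256_of_file(media_path)

    if __name__ == "__main__":
        # Placeholder file names; a mismatch means the clip was altered after
        # the publisher attested to it, or came from somewhere else entirely.
        print(verify_against_manifest("statement.mp4", "publisher_manifest.json"))

Real provenance schemes, such as cryptographically signed metadata embedded in the file itself, are more involved than this, but the underlying check is the same: compare what you received against what the source attests to having published.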

The second step is less technological in nature, and thus more of a challenge: namely, raising awareness and fostering critical thinking skills. In the aftermath of the original “fake news” scandal, in 2015, nonprofits across the country drew up media literacy programs and worked to spread best practices, often partnering with local civic institutions to empower everyday residents to spot falsehoods. Of course, old-school “fake news” is child’s play next to the most advanced deepfakes, which is why we need to redouble our efforts on this front and invest in education at every level.

Advanced deepfakes require advanced critical thinking

Of course, these educational initiatives were somewhat easier to undertake when the disinformation in question was text-based. With fake news sites, the telltale signs of fraudulence were often obvious: janky website design, rampant typos, bizarre sourcing. With deepfakes, the signs are far more subtle, and very often impossible to notice at first glance.

Accordingly, web users of all ages need to retrain themselves to scrutinize digital video for deepfake indicators. That means paying close attention to a number of factors. For video, that might mean unnaturally blurry areas and shadows; unnatural-looking facial movements and expressions; too-perfect skin tones; inconsistent patterns in clothing and in movements; lip-sync errors; and so on. For audio, that might mean voices that sound too pristine (or obviously digitized), a lack of human-feeling emotional tone, odd speech patterns, or unusual phrasing.
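As a small illustration of how one of these signals can be quantified rather than just eyeballed, here is a minimal Python sketch, using OpenCV and a placeholder file name, that flags frames whose sharpness (measured by Laplacian variance) is unusually low. It is a toy heuristic for the “blurry areas” tell mentioned above, not a deepfake detector.

    import cv2  # OpenCV, assumed to be installed

    def flag_blurry_frames(video_path: str, threshold: float = 100.0):
        """Yield (frame_index, sharpness) for frames below a sharpness threshold.

        Sharpness is estimated as the variance of the Laplacian, a common blur
        measure; the threshold is arbitrary and depends heavily on the content.
        """
        capture = cv2.VideoCapture(video_path)
        index = 0
        while True:
            ok, frame = capture.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
            if sharpness < threshold:
                yield index, sharpness
            index += 1
        capture.release()

    if __name__ == "__main__":
        # "clip.mp4" is a placeholder path used only for this sketch.
        for frame_index, sharpness in flag_blurry_frames("clip.mp4"):
            print(f"Frame {frame_index}: low sharpness ({sharpness:.1f})")

Real detection systems combine many such low-level signals with learned models; no single heuristic like this is reliable on its own.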

In the short term, this sort of self-training can be highly useful. By asking ourselves, over and over again, Does this look suspicious?, we sharpen not merely our ability to detect deepfakes but our critical thinking skills in general. That said, we’re rapidly approaching a point at which not even the best-trained eye will be able to separate fact from fiction without outside assistance. The visual tells, the irregularities mentioned above, will be technologically smoothed over, such that wholly manufactured clips will be indistinguishable from the genuine article. What we will be left with is our situational intuition: our ability to ask ourselves questions like Would such-and-such a politician or celebrity really say that? Is the content of this video plausible?

It’s in this context that AI-detection platforms become so essential. With the naked eye rendered irrelevant for deepfake detection purposes, these platforms can serve as definitive arbiters of reality, guardrails against the epistemological abyss. When a video looks real but somehow seems suspicious, as will happen more and more often in the coming months and years, these platforms can keep us grounded in the facts by confirming the baseline veracity of whatever we’re looking at. Ultimately, with technology this powerful, the only thing that can save us is AI itself. We need to fight fire with fire, which means using good AI to root out the technology’s worst abuses.
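To show how such a platform might fit into an everyday workflow, here is a minimal sketch of a client that uploads a clip to a hypothetical detection service and acts on the returned score. The endpoint URL, field names, response format, and threshold are all assumptions made for illustration; real platforms define their own APIs and scoring schemes.

    import requests  # third-party HTTP client, assumed to be installed

    # Hypothetical endpoint used only for this sketch; not a real service.
    DETECTION_ENDPOINT = "https://example-detector.invalid/v1/analyze"

    def check_clip(path: str, api_key: str) -> dict:
        """Upload a video clip and return the service's verdict as a dict.

        Assumes the service returns JSON like {"synthetic_probability": 0.93}.
        """
        with open(path, "rb") as f:
            response = requests.post(
                DETECTION_ENDPOINT,
                headers={"Authorization": f"Bearer {api_key}"},
                files={"media": f},
                timeout=60,
            )
        response.raise_for_status()
        return response.json()

    if __name__ == "__main__":
        verdict = check_clip("suspicious_clip.mp4", api_key="YOUR_KEY_HERE")
        # Treat high scores as "needs human review", not as absolute proof.
        if verdict.get("synthetic_probability", 0.0) > 0.8:
            print("Likely synthetic; flag for review before sharing.")
        else:
            print("No strong signs of manipulation detected.")

The last comment in the sketch is the important design point: even these platforms return probabilities rather than certainties, so their output should inform the zero-trust mindset rather than replace it.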

Crucially, acquiring these skills need not be a cynical or negative process. Fostering a zero-trust mindset can instead be seen as an opportunity to sharpen your critical thinking, intuition, and awareness. By asking yourself, over and over again, certain key questions (Does this make sense? Is this suspicious?), you heighten your ability to confront not merely fake media but the world writ large. If there is a silver lining to the deepfake era, this is it. We’re being forced to think for ourselves and to become more empirical in our day-to-day lives, and that can only be a good thing.
