What we’ve been getting wrong about AI’s truth crisis


On Thursday, I reported the first confirmation that the US Department of Homeland Security, which houses immigration agencies, is using AI video generators from Google and Adobe to make content that it shares with the public. The news comes as immigration agencies have flooded social media with content to support President Trump’s mass deportation agenda—some of which appears to be made with AI (like a video about “Christmas after mass deportations”).

But I received two types of reactions from readers that say just as much about the epistemic crisis we’re in.

One was from people who weren’t surprised, because on January 22 the White House had posted a digitally altered photo of a woman arrested at an ICE protest, one that made her appear hysterical and in tears. Kaelan Dorr, the White House’s deputy communications director, didn’t respond to questions about whether the White House altered the photo but wrote, “The memes will continue.”

The second was from readers who saw no point in reporting that DHS was using AI to edit content shared with the public, because news outlets were apparently doing the same. They pointed to the fact that the news network MS Now (formerly MSNBC) shared a picture of Alex Pretti that was AI-edited and appeared to make him look more handsome, a fact that led to many viral clips this week, including one from Joe Rogan’s podcast. Fight fire with fire, in other words? A spokesperson for MS Now told Snopes that the news outlet aired the image without knowing it was edited.

There is no reason to collapse these two cases of altered content into the same category, or to read them as evidence that truth no longer matters. One involved the US government sharing a clearly altered photo with the public and declining to answer whether it was intentionally manipulated; the other involved a news outlet airing a photo it should have known was altered but taking some steps to disclose the error.

What these reactions reveal instead is a flaw in how we were collectively preparing for this moment. Warnings about the AI truth crisis revolved around a core thesis: that not being able to tell what’s real will destroy us, so we need tools to independently verify the truth. My two grim takeaways are that these tools are failing, and that while verifying the truth remains essential, it is no longer capable on its own of producing the societal trust we were promised.
