The most important AI flops of 2024


AI slop infiltrated almost every corner of the web

Generative AI makes creating reams of text, images, videos, and other kinds of material a breeze. Because it takes just a few seconds from entering a prompt into your model of choice to getting a result, these models have become a quick, easy way to produce content on an enormous scale. And 2024 was the year we started calling this (often poor-quality) media what it is: AI slop.

Because AI slop is so cheap and easy to make, it can now be found in virtually every corner of the internet: from the newsletters in your inbox and books sold on Amazon, to ads and articles across the web and shoddy images in your social media feeds. The more emotionally evocative these images are (wounded veterans, crying children, a signal of support in the Israel-Palestine conflict), the more likely they are to be shared, generating higher engagement and ad revenue for their savvy creators.

AI slop isn’t just annoying: its rise poses a real problem for the future of the very models that helped produce it. Because those models are trained on data scraped from the web, the growing number of junky websites full of AI garbage means there is a very real danger that models’ output and performance will get steadily worse.

AI art is warping our expectations of real events

2024 was also the year that the effects of surreal AI images began seeping into our real lives. Willy’s Chocolate Experience, a wildly unofficial immersive event inspired by Roald Dahl’s Charlie and the Chocolate Factory, made headlines around the world in February after its fantastical AI-generated marketing materials gave visitors the impression it would be far grander than the sparsely decorated warehouse its producers had actually created.

Similarly, hundreds of people lined the streets of Dublin for a Halloween parade that didn’t exist. A Pakistan-based website had used AI to create a list of events in the city, which was shared widely across social media ahead of October 31. Although the SEO-baiting site (myspirithalloween.com) has since been taken down, both events illustrate how misplaced public trust in AI-generated material online can come back to haunt us.

Grok allows users to create images of just about any scenario

The overwhelming majority of major AI image generators have guardrails (rules that dictate what AI models can and can’t do) to prevent users from creating violent, explicit, illegal, and other kinds of harmful content. Sometimes these guardrails are simply meant to ensure that nobody makes blatant use of others’ intellectual property. But Grok, an assistant made by Elon Musk’s AI company, xAI, ignores virtually all of these principles, in line with Musk’s rejection of what he calls “woke AI.”
