
Generative algorithms and cracked mirrors

Enrique Dans
IMAGE: A broken mirror standing in a surreal landscape in brownish tones. Credit: 愚木混株 (Cdd20) — Pixabay

The number of ChatGPT users is falling for the first time since its launch, with 10% fewer visits worldwide in June. At the same time, we are seeing more and more cases of AI spam: web pages written by generative algorithms, some of them not even bothering to delete the very recognizable “sorry, as an artificial intelligence-based language model, I can’t generate…” or the final “in short…” paragraphs.

We face a first-order conceptual absurdity rooted in an already disastrous situation: the content-creation industry, with pages produced in factories where people copied, mixed and pasted from other pages to generate a constant flow of content destined to be indexed and host ads, or to become link generators sold to the highest bidder. SEO has already ruined the web and filled the world with page farms, and now the arrival of generative algorithms controlled by lunatics promises to finish the job, effectively taking over the web.

We have a basic problem: we do not know which pages generative algorithms are being trained on, but from the kind of errors often present in their responses, it seems clear that there are few criteria. In which case, who should decide which pages are chosen to feed generative algorithms?

Google hinted at this years ago: the idea of creating some kind of “authority index” to decide which pages meet reasonably rigorous criteria and which are garbage, lies, conspiracy or outright stupidity is one option, but it suffers from many problems. The first is subjectivity: whoever makes these decisions would obtain, if they managed to standardize their criteria, enormous power that comes, as Peter Parker’s Uncle Ben would say, with great responsibility. Second, cultural aspects: what is indisputable or true in a given cultural context may not be so in another. And finally, interest: choosing some content over others can be done with the aim of preserving the truth, or, as has usually happened, to make more money.

Since the launch of ChatGPT at the end of last November, the web has been filling up with automatically generated junk content, and if no attention is paid to the training criteria of these algorithms (which are free to use, let’s not forget, not out of the goodness of OpenAI’s heart, but because the company gets more data for its…
