
Three years ago, ChatGPT was born. It amazed the world and ignited unprecedented investment and excitement in AI. Today, ChatGPT is still a toddler, but public sentiment across the AI boom has turned sharply negative. The shift began when OpenAI released GPT-5 this summer to mixed reviews, mostly from casual users who, unsurprisingly, judged the system by its surface flaws rather than its underlying capabilities.
Since then, pundits and influencers have declared that AI progress is slowing, that scaling has “hit the wall,” and that the entire field is just another tech bubble inflated by bluster and hype. In fact, many influencers have latched onto the dismissive phrase “AI slop” to diminish the remarkable images, documents, videos and code that frontier AI models generate on command.
This attitude is not just mistaken, it’s dangerous.
It makes me wonder, where were all these “experts” on irrational technology bubbles when electric scooter startups were touted as a transportation revolution and cartoon NFTs were being auctioned for hundreds of thousands of dollars? They were probably too busy buying worthless land in the metaverse or adding to their positions in GameStop. But when it comes to the AI boom, which is easily the most significant technological and economic transformation of the last 25 years, journalists and influencers can’t write the word “slop” enough times.
Do we protest too much? After all, by any objective measure AI is wildly more capable than the overwhelming majority of computer scientists predicted only five years ago, and it is still improving at a surprising pace. The impressive leap demonstrated by Gemini 3 is only the latest example. At the same time, McKinsey recently reported that 20% of organizations already derive tangible value from genAI. And a recent Deloitte survey indicates that 85% of organizations boosted their AI investment in 2025, with 91% planning to increase it again in 2026.
This doesn’t fit the “bubble” narrative or the dismissive “slop” language. As a computer scientist and research engineer who began working with neural networks back in 1989 and has tracked progress through cold winters and hot booms ever since, I find myself amazed almost daily by the rapidly increasing capabilities of frontier AI models. When I talk with other professionals in the field, I hear similar sentiments. If anything, the pace of AI advancement leaves many experts feeling overwhelmed and, frankly, somewhat scared.
The dangers of AI denial
So why is the public buying into the narrative that AI is faltering, that the output is “slop,” and that the AI boom lacks authentic use cases? Personally, I believe it’s because we’ve fallen into a collective state of AI denial, latching onto the narratives we want to hear in the face of strong evidence to the contrary. Denial is the first stage of grief, and thus a reasonable response to the very disturbing prospect that we humans may soon lose cognitive supremacy here on planet Earth. In other words, the overblown AI bubble narrative is a societal defense mechanism.
Believe me, I get it. I’ve been warning about the destabilizing risks and demoralizing impact of superintelligence for well over a decade, and I too feel AI is getting too smart too fast. The fact is, we’re rapidly headed toward a future where widely available AI systems will be able to outperform most humans at most cognitive tasks, solving problems faster, more accurately and, yes, more creatively than any individual can. I emphasize “creativity” because AI denialists often insist that certain human qualities (particularly creativity and emotional intelligence) will always be out of reach of AI systems. Unfortunately, there is little evidence to support this view.
On the creativity front, today’s AI models can generate content faster and with more variation than any individual human. Critics argue that true creativity requires inner motivation. I resonate with that argument but find it circular — we are defining creativity by how we experience it rather than by the quality, originality or usefulness of the output. Also, we simply don’t know whether AI systems will develop internal drives or a sense of agency. Either way, if AI can produce original work that rivals most human professionals, the impact on creative jobs will still be devastating.
The AI manipulation problem
Our human edge in emotional intelligence is even more precarious. It’s likely that AI will soon be able to read our emotions faster and more accurately than any human, tracking subtle cues in our micro-expressions, vocal patterns, posture, gaze and even respiration. And as we integrate AI assistants into our phones, glasses and other wearable devices, these systems will monitor our emotional reactions throughout our day, building predictive models of our behaviors. Without strict regulation, which looks increasingly unlikely, these predictive models could be used to target us with individually optimized influence that maximizes persuasion.
This is known as the AI manipulation problem, and it suggests that emotional intelligence may not give humanity an advantage. In fact, it could be a significant weakness, fostering an asymmetric dynamic in which AI systems can read us with superhuman accuracy while we can’t read AI at all. When you talk with photorealistic AI agents (and you will), you’ll see a smiling façade designed to look warm, empathic and trustworthy. It will look and feel human, but that’s just an illusion, and it could easily sway your perspectives. After all, our emotional reactions to faces are visceral reflexes shaped by millions of years of evolution on a planet where every interactive human face we encountered was actually human. Soon, that will no longer be true.
We’re rapidly heading toward a world where many of the faces we encounter will belong to AI agents hiding behind digital facades. In fact, these “virtual spokespeople” could easily have appearances designed for each of us based on our prior reactions — whatever best gets us to let down our guard. And yet many insist that AI is just another tech cycle.
That is wishful thinking. The massive investment pouring into AI isn’t driven by hype — it’s driven by the expectation that AI will permeate every aspect of daily life, embodied as intelligent actors we engage with throughout our day. These systems will assist us, teach us and influence us. They will reshape our lives, and it will happen faster than most people think.
To be clear, we are not witnessing an AI bubble filling with empty gas. We’re watching a new planet form, a molten world rapidly taking shape, and it will solidify into a new AI-powered society. Denial will not stop this. It will only leave us less prepared for the risks.
Louis Rosenberg is an early pioneer of augmented reality and a longtime AI researcher.
