Inevitably, these conversations take a turn: AI is having all these ripple effects, but as the technology gets better, what happens next? That’s often when they look at me, expecting a forecast of either doom or hope.
I probably disappoint, if only because predictions for AI are getting harder and harder to make.
Despite that, we have, I must say, a fairly good track record of making sense of where AI is headed. We’ve just published a sharp list of predictions for what’s next in 2026 (where you can read my thoughts on the legal battles surrounding AI), and the predictions on last year’s list all came to fruition. But every holiday season, it gets harder and harder to work out the impact AI may have. That’s mostly due to three big unanswered questions.
For one, we don’t know if large language models will continue getting incrementally smarter in the near future. Since this particular technology is what underpins nearly all the excitement and anxiety in AI right now, powering everything from AI companions to customer service agents, its slowdown would be a pretty big deal. Such a big deal, in fact, that we devoted a whole slate of stories in December to what a new post-AI-hype era might look like.
Number two, AI is pretty abysmally unpopular with the public. Here’s just one example: Nearly a year ago, OpenAI’s Sam Altman stood next to President Trump to excitedly announce a $500 billion project to build data centers across the US in order to train bigger and bigger AI models. The pair either didn’t guess or didn’t care that many Americans would staunchly oppose having such data centers built in their communities. A year later, Big Tech is waging an uphill battle to win over public opinion and keep building. Can it win?
The response from lawmakers to all this frustration is deeply muddled. Trump has pleased Big Tech CEOs by moving to make AI regulation a federal rather than a state issue, and tech companies are now hoping to codify this into law. But the group that wants to protect kids from chatbots ranges from progressive lawmakers in California to the increasingly Trump-aligned Federal Trade Commission, each with distinct motives and approaches. Will they be able to put aside their differences and rein AI companies in?
If the gloomy holiday dinner table conversation gets this far, someone will say: Hey, isn’t AI being used for objectively good things? Making people healthier, unearthing scientific discoveries, better understanding climate change?
Well, kind of. Machine learning, an older type of AI, has long been used in all sorts of scientific research. One branch, called deep learning, forms part of AlphaFold, a Nobel Prize–winning tool for protein prediction that has transformed biology. Image recognition models are getting better at identifying cancerous cells.
