3. AI is power hungry and getting hungrier.
You’ve probably heard that AI is power hungry. But a lot of that reputation comes from the amount of electricity it takes to train these giant models, though giant models only get trained every so often.
What’s changed is that these models are now being used by hundreds of millions of people every day. And while using a model takes far less energy than training one, the energy costs ramp up massively with those kinds of user numbers.
ChatGPT, for example, has 400 million weekly users. That makes it the fifth-most-visited website in the world, just after Instagram and ahead of X. Other chatbots are catching up.
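To get a rough sense of the scale, here’s a minimal back-of-envelope sketch in Python. The only figure taken from this piece is the 400 million weekly users; the queries-per-day and energy-per-query numbers are illustrative assumptions, not measured or reported values.

```python
# Back-of-envelope: why small per-query energy costs add up at scale.
# Only WEEKLY_USERS comes from the article; the rest are assumptions.

WEEKLY_USERS = 400_000_000       # ChatGPT's reported weekly user count
QUERIES_PER_USER_PER_DAY = 1     # hypothetical average
WH_PER_QUERY = 3.0               # hypothetical energy per query, in watt-hours

daily_queries = WEEKLY_USERS * QUERIES_PER_USER_PER_DAY
daily_energy_mwh = daily_queries * WH_PER_QUERY / 1_000_000  # Wh -> MWh

# A typical US household uses roughly 30 kWh of electricity per day.
households = daily_energy_mwh * 1_000 / 30

print(f"~{daily_energy_mwh:,.0f} MWh per day, about {households:,.0f} US homes")
```

Under those assumptions the total comes to about 1,200 MWh per day, roughly the daily electricity use of 40,000 US homes. Change any of the assumed inputs and the total moves with it, which is exactly the point: at this many users, even a few watt-hours per query becomes grid-scale demand.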
So it’s no surprise that tech companies are racing to build new data centers in the desert and revamp power grids.
The truth is, we’ve been in the dark about exactly how much energy it takes to fuel this boom, because none of the major companies building this technology have shared much information about it.
That’s starting to change, however. Several of my colleagues spent months working with researchers to crunch the numbers for some open-source versions of this tech. (Do check out what they found.)
4. Nobody knows exactly how large language models work.
Sure, we know how to build them. We know how to make them work really well—see no. 1 on this list.
But how they do what they do is still an unsolved mystery. It’s as if these things have arrived from outer space and scientists are poking and prodding them from the outside to figure out what they really are.
It’s incredible to think that never before has a mass-market technology used by billions of people been so little understood.
Why does that matter? Well, until we understand them better, we won’t know exactly what they can and can’t do. We won’t know how to control their behavior. We won’t fully understand hallucinations.
5. AGI doesn’t mean anything.
Not long ago, talk of AGI was fringe, and mainstream researchers were embarrassed to bring it up. But as AI has gotten better and far more lucrative, serious people are happy to insist they’re about to create it. Whatever it is.
AGI—or artificial general intelligence—has come to mean something like: AI that can match the performance of humans on a wide range of cognitive tasks.
But what does that mean? How do we measure performance? Which humans? How wide a range of tasks? And performance on cognitive tasks is just another way of saying intelligence—so the definition is circular anyway.
Essentially, when people refer to AGI, they now tend to just mean AI, but better than what we have today.
There’s this absolute faith in the progress of AI. It’s gotten better in the past, so it will continue to get better. But there’s zero evidence that this will actually play out.
So where does that leave us? We’re building machines that are getting very good at mimicking some of the things people do, but the technology still has serious flaws. And we’re only just figuring out how it actually works.
Here’s how I think about AI: We have built machines with humanlike behavior, but we haven’t shrugged off the habit of imagining a humanlike mind behind them. This leads to exaggerated assumptions about what AI can do and plays into the wider culture wars between techno-optimists and techno-skeptics.
It’s right to be amazed by this technology. It’s also right to be skeptical of many of the things said about it. It’s still very early days, and it’s all up for grabs.