AGI is suddenly a dinner table topic


First, let’s get the pesky business of defining AGI out of the way. In practice, it’s a deeply hazy and changeable term shaped by the researchers or companies set on building the technology. But it generally refers to a future AI that outperforms humans on cognitive tasks. Which humans and which tasks we’re talking about makes all the difference in assessing AGI’s achievability, safety, and impact on labor markets, war, and society. That’s why defining AGI, though an unglamorous pursuit, is not pedantic but actually quite important, as illustrated in a new paper published this week by authors from Hugging Face and Google, among others. In the absence of that definition, my advice when you hear AGI is to ask yourself what version of the nebulous term the speaker means. (Don’t be afraid to ask for clarification!)

Okay, on to the news. First, a new AI model from China called Manus launched last week. A promotional video for the model, which is built to handle “agentic” tasks like creating websites or performing analysis, describes it as “potentially, a glimpse into AGI.” The model is doing real-world tasks on crowdsourcing platforms like Fiverr and Upwork, and the head of product at Hugging Face, an AI platform, called it “the most impressive AI tool I’ve ever tried.”

It’s not clear just how impressive Manus actually is yet, but against this backdrop (the idea of agentic AI as a stepping stone toward AGI), it was fitting that columnist Ezra Klein dedicated his podcast on Tuesday to AGI. It’s also a sign that the concept is moving quickly beyond AI circles and into the realm of dinner table conversation. Klein was joined by Ben Buchanan, a Georgetown professor and former special advisor for artificial intelligence in the Biden White House.

They discussed lots of things, including what AGI would mean for law enforcement and national security, and why the US government finds it essential to develop AGI before China, but the most contentious segments were about the technology’s potential impact on labor markets. If AI is on the cusp of excelling at lots of cognitive tasks, Klein said, then lawmakers had better start wrapping their heads around what a large-scale transition of labor from human minds to algorithms will mean for workers. He criticized Democrats for largely not having a plan.

We could consider this to be inflating the fear balloon: the suggestion that AGI’s impact is imminent and sweeping. Following close behind and puncturing that balloon with a giant safety pin, then, is Gary Marcus, a professor of neural science at New York University and an AGI critic who wrote a rebuttal to the points made on Klein’s show.

Marcus points out that recent news, including the underwhelming performance of OpenAI’s new GPT-4.5, suggests that AGI is much more than three years away. He says core technical problems persist despite decades of research, and efforts to scale training and computing capacity have reached diminishing returns. Large language models, dominant today, may not even be the thing that unlocks AGI. He says the political domain doesn’t need people raising the alarm about AGI, arguing that such talk actually benefits the companies spending money to build it more than it helps the public good. Instead, we need more people questioning claims that AGI is imminent. That said, Marcus is not doubting that AGI is possible. He’s merely doubting the timeline.

Just after Marcus tried to deflate it, the AGI balloon got blown up again. Three influential people published a paper called “Superintelligence Strategy”: Google’s former CEO Eric Schmidt, Scale AI’s CEO Alexandr Wang, and the Center for AI Safety’s director, Dan Hendrycks.

By “superintelligence,” they mean AI that “would decisively surpass the world’s best individual experts in nearly every intellectual domain,” Hendrycks told me in an email. “The cognitive tasks most pertinent to safety are hacking, virology, and autonomous-AI research and development—areas where exceeding human expertise could give rise to severe risks.”
