“There are a lot of consequences of AI,” he said. “But the one I think about the most is automated research. When we look at human history, a lot of it is about technological progress, about humans building new technologies. The point when computers can develop new technologies themselves seems like an important, um, inflection point.
“We already see these models help scientists. But when they can work on longer horizons—once they’re able to determine research programs for themselves—the world will feel meaningfully different.”
For Chen, that ability for models to work by themselves for longer is key. “I mean, I do think everyone has their own definitions of AGI,” he said. “But this idea of autonomous time—just the amount of time that the model can spend making productive progress on a difficult problem without hitting a dead end—that’s one of the big things that we’re after.”
It’s a bold vision—and far beyond the capabilities of today’s models. But I was nonetheless struck by how Chen and Pachocki made AGI sound almost mundane. Compare this with how Sutskever responded when I spoke to him 18 months ago. “It’s going to be monumental, earth-shattering,” he told me. “There will be a before and an after.” Faced with the immensity of what he was building, Sutskever switched the focus of his career from designing better and better models to figuring out how to control a technology that he believed would soon be smarter than himself.
Two years ago Sutskever set up what he called a superalignment team that he would co-lead with another OpenAI safety researcher, Jan Leike. The claim was that this team would funnel a full fifth of OpenAI’s resources into figuring out how to control a hypothetical superintelligence. Today, most of the people on the superalignment team, including Sutskever and Leike, have left the company and the team no longer exists.
When Leike quit, he said it was because the team had not been given the support he felt it deserved. He posted this on X: “Building smarter-than-human machines is an inherently dangerous endeavor. OpenAI is shouldering an enormous responsibility on behalf of all of humanity. But over the past years, safety culture and processes have taken a backseat to shiny products.” Other departing researchers shared similar statements.
I asked Chen and Pachocki what they make of such concerns. “A lot of these things are highly personal decisions,” Chen said. “You know, a researcher can sort of, you know—”
He started again. “They might have a belief that the field is going to evolve in a certain way and that their research is going to pan out and is going to bear fruit. And, you know, maybe the company doesn’t reshape in the way that you want it to. It’s a very dynamic field.”