At the time, few people beyond the insular world of AI research knew about OpenAI. But as a reporter covering the ever‑expanding boundaries of artificial intelligence, I had been following its movements closely.
Until that year, OpenAI had been something of a stepchild in AI research. It had an outlandish premise that AGI could be attained within a decade, when most non‑OpenAI experts doubted it could be attained at all. To much of the field, it had an obscene amount of funding despite little direction and spent too much of the money on marketing what other researchers frequently snubbed as unoriginal research. It was, for some, also an object of envy. As a nonprofit, it had said that it had no intention to chase commercialization. It was a rare intellectual playground without strings attached, a haven for fringe ideas.
But in the six months leading up to my visit, the rapid slew of changes at OpenAI signaled a major shift in its trajectory. First was its confusing decision to withhold GPT‑2 and brag about it. Then its announcement that Sam Altman, who had mysteriously departed his influential perch at YC, would step in as OpenAI’s CEO with the creation of its new “capped‑profit” structure. I had already made my arrangements to visit the office when it subsequently revealed its deal with Microsoft, which gave the tech giant priority for commercializing OpenAI’s technologies and locked it into exclusively using Azure, Microsoft’s cloud‑computing platform.
Each new announcement garnered fresh controversy, intense speculation, and growing attention, beginning to reach beyond the confines of the tech industry. As my colleagues and I covered the company’s progression, it was hard to grasp the full weight of what was happening. What was clear was that OpenAI was beginning to exert meaningful sway over AI research and the way policymakers were learning to understand the technology. The lab’s decision to revamp itself into a partially for‑profit business would have ripple effects across its spheres of influence in industry and government.
So late one night, at the urging of my editor, I dashed off an email to Jack Clark, OpenAI’s policy director, whom I had spoken with before: I would be in town for two weeks, and it felt like the right moment in OpenAI’s history. Could I interest them in a profile? Clark passed me on to the communications head, who came back with an answer. OpenAI was indeed ready to reintroduce itself to the public. I would have three days to interview leadership and embed inside the company.
Brockman and I settled into a glass meeting room with the company’s chief scientist, Ilya Sutskever. Sitting side by side at a long conference table, they each played their part. Brockman, the coder and doer, leaned forward, a little on edge, ready to make a good impression; Sutskever, the researcher and philosopher, settled back into his chair, relaxed and aloof.
I opened my laptop and scrolled through my questions. OpenAI’s mission is to ensure beneficial AGI, I began. Why spend billions of dollars on this problem and not something else?
Brockman nodded vigorously. He was used to defending OpenAI’s position. “The reason that we care so much about AGI and that we think it’s important to build is because we think it can help solve complex problems that are just out of reach of humans,” he said.