OpenAI has spun off the internal committee responsible for safety reviews before new AI model releases into an independent body overseen by the board of directors, and CEO Sam Altman is out of the picture.
On the 16th (local time), OpenAI announced on its website that it will convert its ‘Safety and Security Committee’ into an independent organization supervised by the board of directors.
The new organization will be chaired by Carnegie Mellon University professor Zico Kolter and include Quora CEO Adam D’Angelo, retired U.S. Army general Paul Nakasone, and former Sony vice president Nicole Seligman, all of whom are members of OpenAI’s board of directors.
Although the full list of committee members was not disclosed, CEO Sam Altman is said to have been excluded.
The organization was set up in May to address the risks posed by new models and future technologies, and the committee has the authority to oppose a model’s launch. At the time, however, it included six internal members, including CEO Altman.
OpenAI said at the time that the committee would “address safety issues for the next-generation frontier model over the next three months,” leading to speculation that GPT-5 might be released at the end of August. However, OpenAI confirmed that the committee’s first task was o1, which was released last week.
It also explained that the committee will continue to receive regular reports on technical evaluations of current and future models, as well as ongoing post-launch monitoring reports.
In other words, critics point out that although CEO Altman has formally stepped down, nothing has changed in substance.
TechCrunch noted that “even without Altman, there’s little indication that the Safety and Security Committee will make difficult decisions that seriously affect OpenAI’s commercial roadmap.” Indeed, most of the board members leading the committee are said to be pro-Altman.
The move was reportedly prompted by recent staff departures and by questions raised by U.S. senators about the company’s policies. Five senators sent a letter to Altman in July criticizing the company’s lack of AI safety measures.
In any case, OpenAI continues to add safeguards to address safety concerns raised ahead of the launch of GPT-5. Last month, it announced that the U.S. AI Safety Institute, a government agency, would conduct preliminary testing of its next model.
Reporter Park Chan cpark@aitimes.com