OpenAI, Internal Whistleblower Requests Government Investigation… “AI Safety Takes a Backseat”


(Photo = Shutterstock)

OpenAI continues to face internal issues related to AI safety. Following the revelation that ‘GPT-4o’, released in May, was rushed out in disregard of the company’s own safety processes, employees have now asked government authorities to void contracts that prohibit internal whistleblowing on safety.

The Washington Post (WP) reported on the twelfth (local time), citing three sources who requested anonymity, that OpenAI rushed through its internal safety testing procedures to meet the release date of GPT-4o.

According to the report, OpenAI compressed the safety testing period to just one week despite opposition from employees. It also prepared a fallback plan to release a previous version that had already been deemed safe if testing surfaced a problem. It is even claimed that the company planned a party to celebrate the release of GPT-4o before testing was complete.

Regarding this, an OpenAI official admitted, “A week is enough to complete the tests, but it is true that there was pressure.” He added, “It’s true that this wasn’t the best approach,” and “We’re completely rethinking our testing methods.”

OpenAI issued an official statement saying, “While the company didn’t take any shortcuts in its safety processes, we recognize that the team was under pressure.” It added, “We conducted extensive internal and external testing, and initially put some multimedia features on hold to continue our safety work.”

The company has made several pledges and policies regarding AI safety. In July of last year, it issued a voluntary ‘AI Safety Pledge’ alongside six major companies, and in February of this year, it joined the ‘AI Safety Consortium’ led by the White House along with 200 companies. In May, it announced that it would launch a ‘Safety Committee’ comprising internal personnel and board members, and that it would release new products only after 90 days of testing and board approval.

Nevertheless, despite these repeated pledges, safety has effectively been put on the back burner. This is why safety-related personnel have recently left the company one after another, saying they were disappointed in it. In addition, in June, nine current and former employees issued an open letter urging, “Don’t block AI risk warnings.”

(Photo = Shutterstock)

Then, on the thirteenth, WP reported that it had obtained a document sent to the U.S. Securities and Exchange Commission (SEC) by some OpenAI employees, alleging that the company had required employees to sign contracts that would penalize them if they leaked internal secrets.

In the documents, sent to the SEC earlier this month, the employees urged the government to investigate allegations that OpenAI forced them to sign nondisclosure agreements in violation of the government’s whistleblower protection policy.

The issue was first raised in May, when the internal safety team was disbanded. At the time, CEO Sam Altman said he would fix it, and OpenAI said it had “already made significant changes to its exit procedures to remove the non-disparagement clause.”

The series of events comes amid criticism that OpenAI, founded as a nonprofit with an altruistic mission, now prioritizes profit over safety, and that the company’s strict confidentiality agreements have long troubled employees and regulators.

Such concerns are likely to bolster lawmakers pushing for stronger AI regulation. They point out that AI companies in the U.S. largely operate in a legal vacuum, and that policymakers cannot effectively craft new AI policies without the help of whistleblowers who can explain the potential threats posed by the fast-moving technology.

Meanwhile, it is not known whether the SEC has launched a formal investigation. However, in sharing the document with Congress, the whistleblowers reportedly emphasized that “we must take swift and aggressive action to address these illegal contracts.”

Reporter Im Dae-jun ydj@aitimes.com
