OpenAI employees accused the company of suppressing warnings about the risks of AI

A group of whistleblowers from OpenAI filed a complaint with the US Securities and Exchange Commission (SEC), accusing the organization of illegally prohibiting employees from warning regulators about the serious risks that artificial intelligence may pose to humanity.

Image source: Elizabeth Frantz/For The Washington Post

OpenAI forced employees to sign overly restrictive employment contracts, severance agreements and non-disclosure agreements, The Washington Post reported, citing anonymous sources. These documents could have exposed workers to penalties if they independently contacted federal regulators about OpenAI’s activities.

The implicit message of the employment contract was: “We don’t want employees talking to federal regulators,” said one whistleblower, who spoke on condition of anonymity for fear of retaliation. “I don’t think AI companies can build technology that is safe and in the public interest if they shield themselves from scrutiny and dissent,” he added.

However, Hannah Wong, an OpenAI spokeswoman, responded to the allegations by saying the company’s whistleblower policy protects employees’ rights to disclose information and that the company welcomes discussions about the impact of technology on society. “We believe a serious discussion about this technology is critical and have already made changes to our termination process to remove non-disclosure clauses,” she said.

Stephen Kohn, a lawyer representing the whistleblowers, stressed that such agreements threaten employees with criminal prosecution if they report any violations to the authorities, which contradicts federal laws and regulations protecting whistleblowers. He also noted that the agreements contained no exceptions for disclosing information about violations of the law, which is itself a breach of SEC regulations.

The whistleblower letter comes amid growing concerns that OpenAI, originally founded as a non-profit organization with an altruistic mission, is now putting profit ahead of safety in developing its technology. The latest AI model behind ChatGPT was reportedly released before the company had completed its own safety review, because management wanted to meet a deadline.

The unnamed OpenAI employees called on the SEC to conduct a thorough investigation and take action against OpenAI, including imposing fines for each improper contract and requiring all past and present employees to be notified of their right to confidentially and anonymously report any violations of the law to the SEC.

San Francisco lawyer Chris Baker, who recently won a $27 million settlement for Google employees in a lawsuit over the company’s employment agreements, said the fight against NDAs in Silicon Valley has been going on for a long time. However, “employers understand that sometimes the damage from leaks is much greater than from lawsuits, and they are willing to take the risk,” Baker said.
