A group of whistleblowers from OpenAI filed a complaint with the US Securities and Exchange Commission (SEC), accusing the organization of illegally prohibiting employees from warning regulators about the serious risks that artificial intelligence may pose to humanity.

Image source: Elizabeth Frantz/For The Washington Post

OpenAI required employees to sign overly restrictive employment contracts, severance agreements, and non-disclosure agreements, The Washington Post reported, citing anonymous sources. These agreements could expose employees to penalties if they contacted federal regulators about OpenAI’s activities on their own initiative.

The employment contract carried an implicit message: “We don’t want employees talking to federal regulators,” said one whistleblower, who spoke on condition of anonymity for fear of retaliation. “I don’t think AI companies can build technology that is safe and in the public interest if they shield themselves from scrutiny and dissent,” he added.

However, Hannah Wong, an OpenAI spokeswoman, responded to the allegations by saying the company’s whistleblower policy protects employees’ rights to disclose information and that the company welcomes discussions about the impact of technology on society. “We believe a serious discussion about this technology is critical and have already made changes to our termination process to remove non-disclosure clauses,” she said.

In turn, whistleblower lawyer Stephen Kohn stressed that such agreements threaten employees with criminal prosecution if they report any violations to authorities, and that this contradicts federal laws and regulations protecting whistleblowers. He also noted that the agreement did not contain exceptions for the disclosure of certain information about violations of law, which is also a violation of SEC regulations.

The whistleblower letter comes amid growing concern that OpenAI, originally founded as a non-profit with an altruistic mission, now puts profit ahead of safety when developing its technology. The company reportedly released its latest AI model powering ChatGPT before completing its own safety review, because management wanted to meet a launch deadline.

The unnamed OpenAI employees called on the SEC to conduct a thorough investigation and take action against OpenAI, including imposing fines for each improper contract and requiring all past and present employees to be notified of their right to confidentially and anonymously report any violations of the law to the SEC.

San Francisco lawyer Chris Baker, who recently won a $27 million settlement from Google for its employees in a lawsuit over an employment agreement, said the fight against NDAs in Silicon Valley has been going on for a long time. Still, “employers understand that sometimes the damage from leaks is much greater than from lawsuits, and they are willing to take the risk,” Baker said.
