Whistleblowers at OpenAI have filed a complaint with the Securities and Exchange Commission calling for an investigation into allegations that the artificial intelligence company illegally barred employees from warning regulators about the significant risks its technology poses to humanity.
The whistleblowers say OpenAI issued overly restrictive employment, severance and non-disclosure agreements that could have led to penalties for employees who raised concerns about OpenAI with federal regulators, according to a seven-page letter sent to SEC commissioners earlier this month that referenced the formal complaint and was obtained exclusively by The Washington Post.
According to the letter, OpenAI required employees to sign employment agreements that waived their federal rights to whistleblower compensation and obliged them to obtain the company’s consent before disclosing information to federal authorities. The non-disparagement clauses in those agreements included no exemption for disclosing securities violations to the SEC.
The letter said these overly broad agreements violate long-standing federal laws and regulations designed to protect whistleblowers who want to anonymously reveal damaging information about their companies without fear of retaliation.
“These contracts sent the message that we don’t want our employees to talk to federal regulators,” said one of the whistleblowers, who spoke on the condition of anonymity for fear of retaliation. “I don’t think AI companies can build technology that is safe and in the public interest if they protect themselves from oversight and dissent.”
“Our whistleblower policy protects employees’ rights to make protected disclosures. Additionally, we believe a rigorous debate about this technology is essential, and have already made important changes to our exit process and removed non-disparagement clauses,” OpenAI spokesperson Hannah Wong said in a statement.
The whistleblower letter comes amid concerns that OpenAI, which began as a nonprofit with an altruistic mission, is prioritizing profits over safety in developing its technology. The Washington Post reported Friday that OpenAI rushed to release the latest AI model that powers ChatGPT to meet a May release date set by company leaders, despite employees’ concerns that the company had “failed to adhere” to its own safety testing protocols, which it says protect the AI from catastrophic harms, such as teaching users to build biological weapons or helping hackers develop new kinds of cyberattacks. “We did not skip any safety processes, but we recognize the release was stressful for the team,” OpenAI spokeswoman Lindsay Held said in a statement.
Tech companies’ strict non-disclosure agreements have long troubled workers and regulators. During the #MeToo movement and nationwide protests over the killing of George Floyd, workers warned that such legal agreements limit their ability to report sexual misconduct or racism. Meanwhile, regulators worry that the clauses silence tech employees who could flag misconduct in the opaque tech industry amid allegations that companies’ algorithms promote content that undermines elections, public health and child safety.
Rapid advances in artificial intelligence have heightened policymakers’ concerns about the power of the tech industry and sparked a flurry of calls for regulation. In the United States, AI companies operate largely in a legal vacuum, and policymakers say they cannot effectively craft new AI policy without the help of whistleblowers, who can explain the potential threats posed by the rapidly advancing technology.
“OpenAI’s policies and practices appear to have a chilling effect on the right of whistleblowers to speak out and receive fair compensation for protected disclosures,” Sen. Chuck Grassley (R-Iowa) said in a statement to The Washington Post. “If the federal government is to stay one step ahead in artificial intelligence, OpenAI’s non-disclosure agreements must be changed.”
A copy of the whistleblower letter, addressed to SEC Chairman Gary Gensler, was sent to Congress. The Washington Post obtained a copy of the letter from Grassley’s office.
The formal complaint referenced in the letter was filed with the SEC in June. Stephen Kohn, an attorney representing the OpenAI whistleblowers, said the SEC has responded to the complaint.
It is unclear whether the SEC has opened an investigation. The SEC declined to comment.
The letter said the SEC should take “swift and aggressive” action to address these unlawful contracts because they potentially concern the entire AI industry and may violate an October White House executive order calling on AI companies to safely develop their technology.
“At the heart of these enforcement efforts is the recognition that insiders must be able to freely report concerns to federal authorities,” the letter said. “Employees are best positioned to detect and warn of the types of dangers referenced in the Executive Order, and they are also best positioned to help ensure that AI benefits humanity, rather than works against it.”
Those agreements threatened employees with criminal prosecution if they reported violations of trade secret laws to federal authorities, Kohn said. Employees were instructed to keep corporate information secret and were threatened with “severe sanctions” without being granted the right to report such information to the government, he said.
“We’re just getting started when it comes to AI oversight,” Kohn said. “We need employees to step up and we need OpenAI to be open.”
The SEC should require OpenAI to submit all employment, severance and investor agreements that contain confidentiality clauses so regulators can ensure they do not violate federal law, the letter said. Federal regulators should require OpenAI to notify all past and present employees of any violations the company has committed and inform them of their right to report violations of the law confidentially and anonymously to the SEC. The SEC should also fine OpenAI under federal securities law for each “improper agreement” and direct the company to correct the “chilling effect” of its past practices, according to the whistleblower letter.
Several technology employees, including Facebook whistleblower Frances Haugen, have filed complaints with the SEC, which established its whistleblower program in the wake of the 2008 financial crisis.
The fight against Silicon Valley’s use of non-disclosure agreements to create an “information monopoly” is a long game, said Chris Baker, a San Francisco lawyer who won a $27 million settlement in December for Google employees who alleged the company used onerous non-disclosure agreements to thwart whistleblowing and other protected activities. Now, he said, tech companies are responding with more subtle ways to deter speech.
“Employers are willing to take the risk because they’re learning that the costs of a breach can be much higher than the costs of litigation,” Baker said.