California State Capitol in Sacramento.
California’s Safe and Secure Innovation for Cutting-Edge Artificial Intelligence Models Act (aka SB-1047) has generated a lot of headlines and discussion about the overall “safety” of large-scale artificial intelligence models. However, critics worry that the bill’s excessive focus on existential threats from future AI models could severely limit research and development of today’s more mundane, non-threatening uses of AI.
Introduced by state Senator Scott Wiener, SB-1047 passed the California Senate by a vote of 32-1 in May and appears on track for a final vote in the state Assembly in August. Provisions in the bill would require companies with AI models of sufficient scale (currently pegged to a training cost of $100 million, with a rough computational-power threshold derived from that figure) to put testing procedures and systems in place to prevent and respond to “safety incidents.”
The bill lays out a legal definition of a safety incident and focuses on defining a range of “significant harm” that an AI system could cause, including harm resulting in “multiple casualties or damages of at least $500 million,” such as “the development or use of chemical, biological, radiological or nuclear weapons” (Skynet, anyone?) or “precise instructions to carry out a cyberattack against critical infrastructure.” The bill also covers “other significant harm to public safety and security” of a severity comparable to the harms explicitly listed.
Creators of AI models cannot be held liable for harm caused by sharing “publicly available” information from outside the model. For example, simply asking a large language model to summarize The Anarchist Cookbook probably wouldn’t violate the law. Instead, the bill seems most interested in future AI that could create “new threats to public safety and security.” Rather than humans using AI to brainstorm harmful ideas, SB-1047 focuses on the idea of AI “autonomously engaging in actions outside of user requests” while acting with “limited human oversight and intervention.”
Could California’s new bill stop WOPR?
To prevent this sci-fi eventuality, anyone who trains a sufficiently large model would be required to “implement the ability to promptly perform a complete shutdown,” along with policies for when such a shutdown would be performed and other precautions and testing. The bill also focuses on AI actions that would require “intent, recklessness, or gross negligence” if performed by a human, suggesting a degree of agency that does not exist in today’s large language models.
Killer AI attack?
The bill’s language perhaps reflects a particular concern of Dan Hendrycks, co-founder of the Center for AI Safety (CAIS), which helped draft the bill. In a 2023 Time magazine article, Hendrycks made a maximally existential argument that “evolutionary pressures will likely lead AI to adopt behaviors that promote self-preservation,” putting humanity “on a path toward being supplanted as the dominant species on Earth.”
If Hendrycks is right, bills like SB-1047 seem like common-sense precautions, though in practice they may not go far enough. Supporters of the bill, including AI experts Geoffrey Hinton and Yoshua Bengio, agree with Hendrycks that the bill is a necessary step to prevent catastrophic damage from advanced AI systems.
“AI systems that exceed a certain level of capability can pose significant risks to democracy and public safety,” Bengio said in a statement supporting the bill. “Therefore, they must be properly tested and have appropriate safeguards in place. This bill provides a practical approach to accomplishing this and is a major step toward the requirements that I have recommended to lawmakers.”
“If we see power-seeking behavior here, it’s not coming from AI systems, it’s coming from AI pessimists.”
Technology policy expert Dr. Nirit Weiss-Blatt
But critics argue that AI policy shouldn’t be guided by outlandish fears about futuristic systems that are closer to science fiction than current technology. “SB-1047 was originally drafted by nonprofits like Dan Hendrycks’ Center for AI Safety, who believe in an apocalypse caused by sentient machines,” Daniel Jeffries, a prominent voice in the AI community, told Ars. “You can’t start from this premise and craft a sane, ‘light touch’ safety bill.”
“If we see power-seeking behavior here, it’s not coming from AI systems, it’s coming from AI pessimists,” added tech policy expert Nirit Weiss-Blatt. “They’re harboring fictitious fears and trying to pass fictitious legislation that, according to many AI experts and open source advocates, could undermine California’s and the United States’ technological advantage.”