California’s SB 1047: A Controversial Step Towards Regulating AI Safety
Update: On August 15, California’s Appropriations Committee passed SB 1047 with significant amendments.
As AI capabilities advance rapidly, California lawmakers are moving to head off potential disasters arising from artificial intelligence. The state’s proposed legislation, SB 1047, would impose safety requirements on the largest AI systems to avert serious harm to society. The bill passed the state Senate in August and now awaits the decision of Governor Gavin Newsom, who will either sign it into law or veto it.
While many would agree with the intent of SB 1047, it has sparked significant backlash from various factions within Silicon Valley, including venture capitalists, tech giants, researchers, and startup founders. Amid a wave of proposed AI regulations across the country, California’s initiative has emerged as one of the most contentious.
What Does SB 1047 Encompass?
SB 1047 seeks to prevent large AI models from being used to inflict "critical harms" on humanity. The bill outlines examples of such harms, including a malicious actor using an AI model to develop a weapon that causes mass casualties, or to conduct a cyberattack causing more than $500 million in damages. For scale, the CrowdStrike outage, which was a faulty software update rather than an attack, reportedly cost upwards of $5 billion.
Under this legislation, companies creating AI models deemed "large," those that cost at least $100 million to train and use at least 10^26 floating-point operations during training, would bear the responsibility for ensuring sufficient safety measures are in place. Few companies currently produce models of this scale, but industry leaders such as OpenAI, Google, and Microsoft are expected to cross these thresholds soon.
Additionally, the bill addresses open-source models: the original developer remains responsible for derivatives unless a subsequent developer spends at least an additional $10 million modifying the model, at which point responsibility shifts to that developer. The coverage rules are sketched in code below.
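To make these thresholds concrete, here is a minimal sketch in Python of how the bill’s coverage rules, as described above, might be expressed. The function names, constants, and inputs are hypothetical illustrations for readers, not part of the bill’s text or any real compliance tool, and the bill’s actual definitions are more nuanced.

```python
# Hypothetical sketch of SB 1047's coverage thresholds as described above.
# Names and structure are illustrative only, not a real compliance API.

COVERED_TRAINING_COST_USD = 100_000_000   # bill threshold: >= $100M to train
COVERED_TRAINING_FLOP = 1e26              # bill threshold: >= 10^26 operations
DERIVATIVE_MODIFICATION_USD = 10_000_000  # >= $10M shifts responsibility

def is_covered_model(training_cost_usd: float, training_flop: float) -> bool:
    """A model is covered if it meets both the cost and compute thresholds."""
    return (training_cost_usd >= COVERED_TRAINING_COST_USD
            and training_flop >= COVERED_TRAINING_FLOP)

def responsible_developer(modification_cost_usd: float) -> str:
    """For an open-source derivative, the original developer stays responsible
    until a downstream developer spends at least $10M on modifications."""
    if modification_cost_usd >= DERIVATIVE_MODIFICATION_USD:
        return "downstream developer"
    return "original developer"

if __name__ == "__main__":
    # A frontier-scale model: a $150M training run using 2 * 10^26 operations.
    print(is_covered_model(1.5e8, 2e26))   # True -> subject to SB 1047
    # A fork fine-tuned for $5M: responsibility stays with the original dev.
    print(responsible_developer(5e6))      # "original developer"
```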
Safety Protocols and Oversight
The proposed legislation requires developers to implement comprehensive safety protocols, including an "emergency stop" capability to shut a model down, testing procedures to assess the risk of critical harms, and annual evaluations by third-party auditors to verify compliance with safety standards. The bill does not demand absolute certainty that critical harms will be prevented; rather, it requires "reasonable assurance" that these safeguards are effective.
Oversight will be provided by a newly established agency, the Board of Frontier Models, responsible for certifying AI models that meet the outlined criteria. This board will consist of nine members, including representatives from the AI sector, academia, and the open-source community, appointed by the governor and legislature.
Developers must submit annual certifications detailing the risks associated with their AI models and compliance with SB 1047’s requirements. Should an AI-related safety incident occur, developers must report it within 72 hours. Violations of the safety protocols could result in the California attorney general seeking injunctions, potentially halting operations or training of non-compliant models.
Support and Opposition
Supporters, including California State Senator Scott Wiener, argue that the legislation aims to proactively address the risks associated with AI, drawing lessons from past oversights related to social media and data privacy. Wiener emphasizes the need for California to lead in establishing regulatory frameworks for handling advanced technologies, especially in light of stagnant federal legislation on this front.
Prominent figures in AI, such as Geoffrey Hinton and Yoshua Bengio, have also expressed support for SB 1047, emphasizing the necessity of prioritizing AI safety. The Center for AI Safety sponsors the bill, highlighting the urgent need to address existential risks posed by AI technology.
Conversely, many stakeholders within Silicon Valley, including influential venture firms like a16z, oppose the bill. Critics argue it could stifle innovation, with startups in particular struggling to absorb the regulatory burden of compliance. They warn that as training costs rise, the bill’s fixed dollar threshold will capture an ever-wider set of companies, and they contend that punishing developers for the actions of bad actors who misuse their models is misguided.
Renowned AI researcher Fei-Fei Li has warned that SB 1047 could harm California’s burgeoning AI ecosystem, while others, including Meta’s chief AI scientist Yann LeCun, argue the legislation is premised on an unfounded fear of existential threats.
Looking Ahead
As SB 1047 awaits Governor Newsom’s decision, its future remains uncertain. Even if enacted, the law would not take full effect immediately, as the Board of Frontier Models is not set to be formed until 2026. Should the bill pass, legal challenges are anticipated from various sectors, echoing the pushback that has met other sweeping technology regulations in California.
This ongoing debate reflects the complexities of balancing technological innovation with necessary safeguards—an effort that will likely define the landscape of AI policy in the years to come.