Technology - September 19, 2025

California’s Narrowed AI Safety Bill Targets Big Tech Companies: A Potential Check on Power in a Booming Industry

The California State Senate has approved a new AI safety bill, SB 53, sending it to Governor Gavin Newsom for consideration. The bill targets large AI companies with annual revenues exceeding $500 million and imposes new regulatory requirements on those corporations.

In contrast to last year’s vetoed AI safety bill, SB 1047, SB 53 is narrower in scope and concentrates on these large companies. The legislation requires them to publish safety reports for their AI models and to report incidents to the government. It also gives employees at these labs a channel for raising safety concerns without fear of retaliation, even if they have signed non-disclosure agreements.

Max Zeff and Kirsten Korosec discussed SB 53 in the latest episode of a popular podcast, highlighting its potential impact on AI regulation. Max argued that the bill’s focus on big companies, along with its endorsement by AI company Anthropic, improves its chances of becoming law.

During their conversation about AI safety and state-level legislation, Max stated: “AI companies are becoming some of the most powerful entities globally, making this bill a potential check on their power.” He further explained that SB 53 introduces meaningful regulations for AI labs, requiring them to disclose safety reports for their models and report incidents to the government.

Kirsten Korosec pointed out the significance of California as a hub for AI activity, with every major AI company either based there or maintaining a significant presence. However, she questioned whether the bill’s various exceptions and carve-outs make it more complicated.

Max agreed that parts of the bill are complicated but emphasized that it is designed to exclude small startups. He noted that it applies specifically to AI developers generating more than $500 million annually, targeting major companies like OpenAI and Google DeepMind while leaving run-of-the-mill startups largely unaffected.

Anthony added that a startup falling below the specified revenue threshold still has to share safety information, though not as extensively as larger corporations. He also highlighted the current political landscape around AI regulation, with the federal administration advocating minimal intervention and potentially barring state-level AI regulation in future funding bills.

The episode closed with a discussion of the potential implications of this latest development in AI safety legislation and the importance of watching closely how it unfolds.