AI - September 8, 2025

Anthropic Endorses California’s SB 53, Pushing for First-Ever AI Transparency Regulations Amid Industry Opposition

In a notable development, Anthropic has endorsed Senate Bill 53 (SB 53), landmark legislation from California Senator Scott Wiener that would impose first-of-their-kind transparency requirements on the world’s largest AI model developers. The endorsement marks a significant milestone in the bill’s path toward approval.

Anthropic’s endorsement gives a much-needed boost to SB 53 at a moment when prominent tech groups, including the Consumer Technology Association (CTA) and the Chamber of Progress, are lobbying against it. In a blog post, Anthropic pointed to the growing power of AI technology and stressed the urgency of proactive governance.

If passed, SB 53 would require leading AI model developers, including OpenAI, Anthropic, Google, and xAI, to develop safety frameworks and publish safety and security reports before deploying powerful AI models. The bill would also extend whistleblower protections to employees who raise safety concerns.

SB 53 focuses on mitigating “catastrophic risks,” defined as incidents that kill at least 50 people or cause more than a billion dollars in damages. It targets extreme AI threats, such as the use of AI models to help create biological weapons or carry out cyberattacks, rather than nearer-term concerns like AI deepfakes or sycophancy.

California’s Senate has approved an earlier version of SB 53, but the bill still needs a final vote before it can head to Governor Gavin Newsom’s desk. Newsom has yet to comment publicly on the bill, though he vetoed Senator Wiener’s previous AI safety bill, SB 1047.

Bills regulating frontier AI model developers remain controversial, with both Silicon Valley and the Trump administration warning that such rules could hamper American innovation in the race against China. Some critics also argue that state-level AI regulation violates the Commerce Clause of the Constitution, which bars state laws that reach beyond a state’s borders and burden interstate commerce.

However, Anthropic co-founder Jack Clark contends that the tech industry will soon develop powerful AI systems and cannot afford to wait for federal regulation. “We have long advocated for a federal standard,” said Clark, “but in its absence, SB 53 provides a robust framework for AI governance that can no longer be ignored.”

OpenAI’s chief global affairs officer, Chris Lehane, sent Governor Newsom a letter in August expressing concern about regulation that might drive startups out of California. While the letter did not mention SB 53 by name, critics say it makes misleading claims about the bill and about AI policy more broadly. Notably, SB 53 is designed to regulate only the world’s largest AI companies, those with gross revenue exceeding $500 million.

Despite facing criticism, policy experts view SB 53 as a more measured approach compared to previous AI safety bills. Dean Ball, a senior fellow at the Foundation for American Innovation and former White House AI policy adviser, believes SB 53 has a strong chance of becoming law. Ball, who was critical of SB 1047, commends the bill’s drafters for showing respect for technical realities and exercising legislative restraint.

Senator Wiener said SB 53 was heavily influenced by an expert policy panel that Governor Newsom convened to advise California on AI regulation, co-led by Stanford researcher and AI pioneer Fei-Fei Li. Most AI labs already maintain some version of the internal safety policy SB 53 would require, but those commitments are self-imposed, and companies sometimes fall short of them. SB 53 would turn the requirements into state law, with penalties for non-compliance.

Recently, California lawmakers amended SB 53 to remove a section that would have required third-party audits of AI model developers. Tech companies have resisted third-party audits in other AI policy debates, calling them overly burdensome.