California's new AI safety law suggests that regulation and innovation aren't mutually exclusive.

This week, California Gov. Gavin Newsom signed SB 53, the AI transparency and safety bill, demonstrating that state-level rules don’t necessarily impede AI advancement.

That’s according to Adam Billen, vice president of public policy at Encode AI, a youth-led advocacy group, speaking on today’s episode of Equity.

Billen told TechCrunch, “The truth is that policymakers understand the need for action and have learned from other issues how to create laws that genuinely safeguard innovation — which I do value — while ensuring product safety.”

Fundamentally, SB 53 is the first law of its kind in the U.S.: it requires major AI developers to disclose their safety and security protocols, specifically how they prevent their models from causing catastrophic harm, such as cyberattacks on critical infrastructure or the creation of bio-weapons. The law also requires companies to adhere to those protocols, with enforcement by the Office of Emergency Services.

Billen told TechCrunch, “Companies already perform the actions we require in this bill. They conduct model safety testing and release model cards. Are some companies starting to cut corners? Yes, and that’s why bills like this matter.”

Billen also pointed out that certain AI companies have a stated policy of easing safety measures when faced with competition. For instance, OpenAI has publicly said it might “adjust” its safety standards if a rival AI lab releases a potentially dangerous system without similar protections. Billen contends that legislation like SB 53 can hold companies to their safety pledges, preventing them from compromising under competitive or financial pressure.

Although there was less public outcry against SB 53 than against its forerunner, SB 1047, which Newsom rejected last year, Silicon Valley and most AI labs have generally argued that AI regulation is detrimental to progress and will ultimately hamper the U.S. in its competition with China.

This is why organizations like Meta, venture capitalists like Andreessen Horowitz, and influential figures like OpenAI president Greg Brockman are collectively investing significant amounts in super PACs to support pro-AI politicians in state elections. It’s also why these groups advocated earlier this year for an AI moratorium that would have prohibited states from regulating AI for a decade.

Encode AI led a coalition of over 200 groups to defeat the proposal, but Billen believes the battle isn’t over. Senator Ted Cruz, who supported the moratorium, is pursuing a new strategy to achieve federal preemption of state laws. In September, Cruz introduced the SANDBOX Act, which would allow AI firms to seek waivers to temporarily circumvent certain federal regulations for up to 10 years. Billen also anticipates a future bill creating a federal AI standard that would be promoted as a compromise but would effectively override state laws.

He cautioned that limited federal AI legislation could “eliminate federalism for the most critical technology of our era.”

Billen stated, “If you suggested SB 53 should replace all state laws on AI and its potential risks, I’d argue that’s not a good idea because this bill addresses a specific set of issues.”