The specter of Silicon Valley haunts AI safety proponents.

Silicon Valley figures including David Sacks, the White House AI & Crypto Czar, and Jason Kwon, OpenAI’s chief strategy officer, ignited controversy this week with online remarks about organizations that advocate for AI safety. In separate instances, each suggested that some champions of AI safety may not be as principled as they appear, instead acting in their own self-interest or at the direction of wealthy figures behind the scenes.

According to AI safety groups that spoke with TechCrunch, the accusations from OpenAI and Sacks represent Silicon Valley’s latest, though not unprecedented, effort to discourage critics of the industry. In 2024, some venture capital firms spread claims that SB 1047, a proposed California AI safety law, would send startup founders to jail. The Brookings Institution described the rumor as a “misrepresentation” of the bill, but Governor Gavin Newsom ultimately vetoed it anyway.

Whether or not Sacks and OpenAI intended to discourage criticism, their actions have had the effect of frightening a number of AI safety advocates. To protect their organizations from potential retaliation, many nonprofit leaders who were contacted by TechCrunch during the past week requested anonymity.

This controversy highlights the escalating tension in Silicon Valley between the goals of building AI responsibly and creating a widely adopted consumer product—a topic that my colleagues Kirsten Korosec, Anthony Ha, and I explore in this week’s episode of the Equity podcast. We also delve into a newly enacted AI safety law in California aimed at regulating chatbots, as well as OpenAI’s policies regarding erotica in ChatGPT.

On Tuesday, Sacks posted on X, alleging that Anthropic—an organization that has voiced concerns about AI’s potential to cause unemployment, cyberattacks, and significant harm to society—is simply engaging in fearmongering to push for legislation that would benefit itself and overwhelm smaller startups with regulatory burdens. Anthropic stood alone among major AI labs in endorsing California’s Senate Bill 53 (SB 53), a law establishing safety reporting requirements for major AI companies, which was signed last month.

Sacks was reacting to a widely circulated essay by Anthropic co-founder Jack Clark about his fears regarding AI. Clark originally delivered the essay as a speech at the Curve AI safety conference in Berkeley a few weeks earlier. To those in attendance, it came across as a genuine expression of a technologist’s concerns about the products he builds, but Sacks saw it differently.

Sacks claimed that Anthropic is executing a “sophisticated regulatory capture strategy,” although it is worth considering that a truly sophisticated strategy likely wouldn’t involve antagonizing the federal government. In a subsequent post on X, Sacks pointed out that Anthropic has “consistently positioned itself as an opponent of the Trump administration.”

Also this week, Jason Kwon, OpenAI’s chief strategy officer, explained on X why the company has issued subpoenas to AI safety nonprofits, including Encode, an organization that advocates for responsible AI policy. (A subpoena is a formal legal demand for documents or testimony.) Kwon said that after Elon Musk sued OpenAI, alleging the ChatGPT maker had strayed from its nonprofit mission, the company grew suspicious that several organizations were simultaneously opposing its restructuring. Encode filed an amicus brief in support of Musk’s lawsuit, and other nonprofits publicly criticized OpenAI’s restructuring.

“This raised transparency questions about who was funding them and whether there was any coordination,” said Kwon.

NBC News reported this week that OpenAI issued broad subpoenas to Encode and six other nonprofits that had criticized the company, seeking their communications related to two of OpenAI’s primary adversaries, Musk and Meta CEO Mark Zuckerberg. OpenAI also requested Encode’s communications pertaining to its support for SB 53.

According to a leading figure in AI safety who spoke with TechCrunch, a growing divide exists between OpenAI’s government affairs team and its research division. While OpenAI’s safety researchers routinely publish reports outlining the potential dangers of AI systems, its policy team lobbied against SB 53, saying it preferred standardized regulation at the federal level.

Joshua Achiam, OpenAI’s head of mission alignment, publicly addressed his company’s decision to subpoena nonprofits in a post on X this week.

“At what is possibly a risk to my whole career I will say: this doesn’t seem great,” said Achiam.

Brendan Steinhauser, the CEO of the Alliance for Secure AI, an AI safety nonprofit (which has not received a subpoena from OpenAI), informed TechCrunch that OpenAI appears to believe its critics are part of a Musk-orchestrated conspiracy. However, he contends that this is untrue, noting that many within the AI safety community are critical of xAI’s safety protocols, or lack thereof.

“On OpenAI’s part, this is meant to silence critics, to intimidate them, and to dissuade other nonprofits from doing the same,” said Steinhauser. “For Sacks, I think he’s concerned that [the AI safety] movement is growing and people want to hold these companies accountable.”

Sriram Krishnan, the White House’s senior policy advisor for AI and a former a16z general partner, contributed to the discussion this week with a social media post of his own, characterizing AI safety advocates as disconnected. He encouraged AI safety organizations to engage with “people in the real world using, selling, adopting AI in their homes and organizations.”

A recent Pew study revealed that approximately half of Americans express more concern than enthusiasm regarding AI, though the specifics of their concerns remain unclear. Another recent study provided further details, indicating that American voters are more concerned about job displacement and deepfakes than the catastrophic risks often emphasized by the AI safety movement.

Addressing these safety concerns may hinder the rapid expansion of the AI sector—a trade-off that concerns many in Silicon Valley. Given the significant role of AI investment in supporting the U.S. economy, the apprehension surrounding excessive regulation is understandable.

However, after years of largely unregulated advancement in AI, the movement for AI safety appears to be gaining significant traction as we approach 2026. The defensive actions taken by Silicon Valley against groups focused on safety may suggest that these groups are having an impact.