For the first time, Washington is nearing a decision on AI regulation. The central conflict is over regulatory authority, not the technology itself.
In the absence of a significant federal AI standard for consumer protection, states have advanced numerous bills to shield residents from AI-related risks, such as California’s SB-53 and Texas’s Responsible AI Governance Act, which bans the deliberate misuse of AI systems.
Tech giants and Silicon Valley startups contend that these state laws create an impractical, fragmented system that hinders innovation.
“It will impede our progress in the competition with China,” Josh Vlasto, co-founder of the pro-AI PAC Leading the Future, stated to TechCrunch.
The industry, along with some of its allies in the White House, advocates for either a comprehensive national standard or no regulation at all. Amid this high-stakes battle, new initiatives have surfaced to prevent states from passing their own AI laws.
House legislators are reportedly attempting to use the National Defense Authorization Act (NDAA) to obstruct state AI laws. Simultaneously, a leaked draft of a White House executive order indicates strong support for overriding state AI regulations.
A broad preemption eliminating states’ ability to regulate AI is unpopular in Congress, which overwhelmingly opposed a similar freeze earlier in the year. Lawmakers have asserted that blocking states without a federal standard would leave consumers vulnerable and allow tech companies to operate unchecked.
To establish a national standard, Rep. Ted Lieu (D-CA) and the bipartisan House AI Task Force are developing a package of federal AI bills covering consumer-protection issues such as fraud, healthcare, transparency, child safety, and catastrophic risk. Given its scope, such a comprehensive package could take months or years to pass, which is why the current push to limit state authority is a central point of contention in AI policy.
The battle lines: NDAA and the EO

Recent efforts to prevent states from regulating AI have intensified.
House Majority Leader Steve Scalise (R-LA) mentioned to Punchbowl News that Congress has considered inserting language into the NDAA to stop states from regulating AI. Politico reported that Congress aimed to finalize the defense bill before Thanksgiving. A source familiar with the situation informed TechCrunch that negotiations have centered on limiting the scope to potentially preserve state authority over areas such as children’s safety and transparency.
In parallel, a leaked White House EO draft reveals the administration’s possible preemption strategy. The EO, reportedly on hold, would establish an “AI Litigation Task Force” to challenge state AI laws in court, instruct agencies to assess state laws considered “onerous,” and steer the Federal Communications Commission and Federal Trade Commission toward national standards that override state rules.
Significantly, the EO would grant David Sacks – Trump’s AI and Crypto Czar and co-founder of VC firm Craft Ventures – shared authority in creating a uniform legal framework. This would give Sacks direct influence over AI policy, exceeding the usual role of the White House Office of Science and Technology Policy and its leader, Michael Kratsios.
Sacks has publicly supported blocking state regulation and maintaining minimal federal oversight, favoring industry self-regulation to “maximize growth.”
The patchwork argument
Sacks’s stance reflects the views of much of the AI industry. Several pro-AI super PACs have recently emerged, investing hundreds of millions of dollars in local and state elections to oppose candidates who favor AI regulation.
Leading the Future – supported by Andreessen Horowitz, OpenAI president Greg Brockman, Perplexity, and Palantir co-founder Joe Lonsdale – has accumulated over $100 million. This week, Leading the Future launched a $10 million campaign urging Congress to develop a national AI policy that supersedes state laws.
“When attempting to foster innovation in the tech sector, it’s untenable to have a constant stream of laws emerging from individuals who may lack the necessary technical expertise,” Vlasto told TechCrunch.
He contended that a patchwork of state regulations would “slow our progress in the race against China.”
Nathan Leamer, executive director of Build American AI, the PAC’s advocacy branch, confirmed the group supports preemption even without specific federal consumer protections for AI. Leamer argued that existing laws, such as those addressing fraud or product liability, are adequate for handling AI-related harms. While state laws often aim to prevent issues proactively, Leamer prefers a reactive approach: allowing companies to move quickly and addressing problems in court later.
No preemption without representation

Alex Bores, a New York Assembly member running for Congress, is among Leading the Future’s primary targets. He sponsored the RAISE Act, mandating that large AI labs implement safety plans to prevent significant harms.
“I am a believer in the potential of AI, which underscores the importance of implementing reasonable regulations,” Bores told TechCrunch. “Ultimately, the AI that will succeed in the marketplace will be trustworthy AI, and the market often undervalues or misplaces short-term incentives on safety investments.”
Bores supports a national AI policy but argues that states can act more rapidly to address emerging threats.
Indeed, states can act faster: as of November 2025, 38 states have enacted over 100 AI-related laws this year, largely focused on deepfakes, transparency and disclosure, and government use of AI. (A recent study found that 69% of these laws impose no requirements on AI developers.)
Congress’s own record supports the argument that states move faster: hundreds of AI bills have been introduced, but few have passed. Since 2015, Rep. Lieu has introduced 67 bills that were referred to the House Science Committee; only one became law.
More than 200 lawmakers signed an open letter opposing preemption in the NDAA, arguing that “states serve as laboratories of democracy” and must “retain the flexibility to confront new digital challenges as they arise.” Approximately 40 state attorneys general also issued an open letter opposing a ban on state AI regulation.
Cybersecurity expert Bruce Schneier and data scientist Nathan E. Sanders – authors of Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship – argue that the concern about a fragmented system is exaggerated.
They note that AI companies already adhere to stricter EU regulations and that most industries find ways to operate under varying state laws. The true motivation, they assert, is to avoid accountability.
What could a federal standard look like?
Lieu is drafting a comprehensive bill exceeding 200 pages, which he hopes to introduce in December. It addresses various issues, including fraud penalties, deepfake protections, whistleblower protections, compute resources for academia, and mandatory testing and disclosure for large language model companies.
The final provision would require AI labs to test their models and publish the results – a practice most currently undertake voluntarily. While Lieu has not yet introduced the bill, he said it would not direct any federal agency to review AI models itself. This contrasts with a similar bill introduced by Sens. Josh Hawley (R-MO) and Richard Blumenthal (D-CT), which would establish a government-run evaluation program for advanced AI systems before deployment.
Lieu acknowledged that his bill would be less stringent but have a greater chance of becoming law.
“My objective is to get something enacted into law this term,” Lieu stated, noting that House Majority Leader Scalise is openly opposed to AI regulation. “I’m not drafting a bill based on my ideal scenario; I’m trying to create a bill that could pass a Republican-controlled House, a Republican-controlled Senate, and a Republican-controlled White House.”
