The Senate is swiftly moving toward passage of a dangerous 10-year ban on state artificial intelligence regulations, with no plan to replace them with federal protections. This provision, part of the massive budget bill that Congress is considering, would prevent states from enforcing laws that limit or regulate AI models, AI systems, and automated decision systems for the next decade, with certain exceptions.
Under the version of the provision released by the Senate Commerce Committee on Wednesday, states that don’t abide by the regulation freeze could lose out on billions in federal funds allocated for developing AI infrastructure and expanding rural communities’ access to high-speed internet service. The Senate parliamentarian has since ruled that to pass with a simple majority, the provision can affect only the $500 million in new AI-related funding provided by the reconciliation bill.
Senate Republicans have pledged to pass the budget megabill by July 4, meaning time is running out for senators to eliminate the AI provision. Once passed, the bill will return to the House for consideration. If enacted, it could stop the enforcement of more than 149 existing laws passed in over 40 states plus the District of Columbia since 2019. These laws have had substantial support from both Republican- and Democratic-led legislatures.
The AI provision has rightfully provoked serious opposition. Advocacy groups, state attorneys general, and state lawmakers from across the political spectrum are collectively warning of the grave dangers of restricting AI regulation. States would effectively be barred from protecting their residents from the threat of emerging AI technologies, with virtually no federal legislation in place to counter those harms.
It is hard to predict the role AI will play in the years, let alone the decade, to come, as it is already reshaping many areas of life, including education, social media, health care, and the legal profession. But as the Brennan Center has warned in recent years, this technology poses significant risks to our elections, from enabling the spread of misinformation to facilitating attacks on election officials and infrastructure. Halting AI regulation would leave elections vulnerable, with high stakes for American democracy.
The state of AI regulation
Congress has so far been slow to regulate AI. It recently passed the Take It Down Act, the first federal law aimed at protecting individuals from the spread of nonconsensual intimate AI-generated images. But Congress has yet to pass comprehensive federal legislation regulating AI.
In the absence of effective federal regulation, states have begun to fill the gap. Since 2019, states have passed over a hundred laws that create safeguards in response to AI threats. Nearly 100 AI-related bills were passed in 2024, and more than 1,000 have been introduced across the country during the 2025 legislative sessions. And 25 states have passed laws that regulate AI usage in political campaigns and elections, including laws that curb deceptive media and help deter the intentional dissemination of deepfakes with strong potential to suppress votes.
The regulation ban proposed by Congress would put a full stop to this state progress, rendering even modest regulations unenforceable.
Misleading anti-regulatory narratives
Supporters of the regulation freeze argue that it’s necessary to prevent a “patchwork of laws” that impose compliance burdens on AI developers and companies, making it harder for the U.S. to “win the AI race” against China. These propositions, however, rest on misleading and weak evidence.
States have long served as laboratories of experimentation and regulators of emerging technologies. For instance, California led the way on data privacy with the California Consumer Privacy Act of 2018, and Illinois implemented the Biometric Information Privacy Act in 2008, both establishing statewide standards without derailing innovation. And in the context of AI, states have not deterred innovation but rather have shaped responsive policies that strengthen U.S. democracy without overburdening developers.
We analyzed key AI laws tracked by the National Conference of State Legislatures since 2019 and found that most regulate the abuse of AI technology by users rather than focusing on AI development or design. While states should pass robust regulations aimed at AI developers and AI companies, to date few have done so, even for high-risk AI applications. Because only a modest number of state laws impose meaningful restraints on AI developers and companies, pausing regulations for a decade would do little to alleviate the regulatory burdens supporters are concerned about.
That isn’t stopping anti-regulatory forces from pushing for this provision, though. One of the leading voices in deregulating AI is venture capitalist David Sacks, who serves as the White House’s first AI and cryptocurrency czar and is the cofounder of the AI tool company Glue. Sacks has declared AI regulation to be detrimental to U.S. interests and supports partnering with the United Arab Emirates to prevent “push[ing] them into the arms of China.” During a Senate AI hearing in May, Sen. Ted Cruz (R-TX) emphasized that “the way to beat China in the AI race is to outrace them in innovation, not saddle AI developers with European style regulations,” referring to the European Union’s robust AI regulation policy.
Yet framing the debate as a zero-sum struggle against China or other countries oversimplifies the serious risks and potential benefits of an emerging technology. It risks stifling legitimate oversight by casting it as unpatriotic and leans on narratives that have historically masked xenophobic sentiment.
Future legal fights
As currently drafted, the provision leaves significant room for legal ambiguities that are likely to result in a flood of litigation as courts scramble to interpret the federal legislation and how it interacts with state authority.
At first glance, the budget bill’s AI provision seems to fall neatly under express preemption, the doctrine under which Congress explicitly displaces state regulation with federal law. But the AI moratorium is an atypical overreach. Congress has previously overridden state regulation in areas such as broadcast media, aviation, copyright, product labeling, and medical device safety. In each case, it created new federal rules to govern those areas, replacing the state regulations it was preempting. Here, however, Congress has enacted virtually no substantive regulation of AI, so barring states from acting would simply create a regulatory void.
Additionally, the scope of the AI provision is uncertain. While it seeks to broadly prevent state regulation of AI models, AI systems, and automated decision systems, it is unclear whether the ban would primarily affect laws that target AI developers and providers or whether it would extend to laws governing AI users, such as laws that limit the intentional dissemination of AI-generated deepfakes.
Read broadly, the provision would grant wide-reaching protection to big tech companies, AI developers and providers, and individuals who misuse AI technology. If this provision were to go into effect, developers would remain largely untouched by any effective federal oversight. That is no accident: the provision expressly allows the enforcement of state laws that facilitate the deployment or operation of AI, and it is part of a larger effort by the Trump administration and its allies to chip away at AI safety rules.
• • •
With no comprehensive federal regulatory framework in place, a ban on state AI regulation effectively cedes the field to private actors with little accountability. Consequences could be manifold. As AI usage becomes more embedded in our elections — in areas from voter outreach to deliberate disinformation campaigns — the stakes for our democracy grow exponentially. States have both the constitutional authority and the duty to secure their elections. Hampering states’ ability to respond swiftly and responsibly to emerging AI threats leaves our elections exposed at a critical time. Congress must remove the AI moratorium provision from the budget bill to protect the American public.