
States Take the Lead on Regulating Artificial Intelligence

Trends in state legislation could guide Congress’s approach to regulating this new technology.

Last Updated: November 6, 2023
Published: November 1, 2023

This year has seen an upsurge in tools that allow the public to use generative artificial intelligence to produce text, images, audio, and video, frequently in the voice of real people. Federal legislators have been both astounded and concerned by this progress, and many are clamoring to craft an appropriate regulatory response. Consensus has yet to emerge, but Congress can look to state legislatures — often referred to as the laboratories of democracy — for inspiration regarding how to address the opportunities and challenges posed by AI.

This year, Congress has held committee hearings and proposed many bills to address the perceived threats and promises of this new technology. Senate Majority Leader Chuck Schumer (D-NY) and a bipartisan team of senators have each announced frameworks to guide forthcoming AI legislation. To underscore the extraordinary nature of this issue, on September 13, Schumer assembled two-thirds of the Senate, major technology CEOs, and labor and civil rights leaders for a closed-door AI Insight Forum. Participants — including tech mogul Elon Musk, OpenAI founder Sam Altman, and the Center for Humane Technology’s Tristan Harris — overwhelmingly agreed that AI regulation is necessary.

As our colleagues Faiza Patel and Ivey Dyson have noted, the European Union is steps ahead of the United States and has already provided a potential model for federal regulation with its AI Act. Meanwhile, closer to home, 30 states have passed more than 50 laws over the last five years to address AI in some capacity, with attention greatly increasing year over year.

No existing state legislation matches the ambitious European proposals. Instead, states have entered the space incrementally, passing laws that tackle discrete policy concerns or establish entities to study the potential consequences of AI and make policy recommendations. While much of the enacted legislation does not directly address AI’s impact on elections, laws in California, Texas, Minnesota, and Washington do, and more such laws have been proposed.

This summer, the National Conference of State Legislatures published a report aimed at building consensus around common definitions of AI terms, circulating best practices in AI regulation, and increasing awareness of key risks. The report cautions that “there will be a race to the bottom for AI if no guardrails are offered.”

State legislation often foreshadows federal solutions. Several trends have emerged from state-level AI regulation that merit attention from Washington, DC.

General Legislation Addressing AI

Building Infrastructure to Increase Understanding of AI

Though most states have yet to regulate AI, at least 12 — including Alabama, California, Colorado, Connecticut, Illinois, Louisiana, New Jersey, New York, North Dakota, Texas, Vermont, and Washington — have enacted laws that delegate research obligations to government or government-organized entities to increase institutional knowledge of AI and better understand its possible consequences.

The assembly of experts in bodies such as task forces, advisory boards, commissions, or councils can be the first step toward regulatory action. These bodies, some designed to operate only temporarily, are created to study AI and related technologies and submit reports containing policy recommendations that span a range of subjects, including employment, health care, education, and elections.

While some of these efforts may merely postpone targeted regulation, others have sparked concrete action. For example, evaluation by Vermont’s temporary Artificial Intelligence Task Force led to the establishment of the state’s Division of Artificial Intelligence, which now conducts a yearly inventory of the use and impacts of AI systems within state government.

It remains to be seen how productive recently created bodies will prove to be. In 2023, Texas tasked an Artificial Intelligence Advisory Council with studying AI systems used by state agencies and their impact on constitutional or legal rights and assessing the need for a state code of ethics to guide AI adoption by state government. Its report on these matters is not due until December 2024. By February 2024, Connecticut’s new working group of AI experts will submit a report containing best practices and recommendations for the ethical and equitable use of AI, an assessment of the White House’s Blueprint for an AI Bill of Rights, and recommendations for the development and adoption of similar state-level protections.

Congress has exhibited its own interest in increasing support for development and global competitiveness related to AI and technology. This summer, it proposed bills seeking to establish an independent Artificial Intelligence Commission to conduct research and develop policy recommendations on the risks and opportunities of AI, to create an Office of Global Competition Analysis to analyze how U.S. technological innovation policy compares with that of other countries, and to appoint an emerging technology lead within each agency to facilitate technological sophistication and interagency coordination.

Protecting Data Privacy

State lawmakers have successfully legislated on a few key subject areas. Of most consistent interest: consumer data privacy. At least 12 states — including California, Colorado, Connecticut, Delaware, Indiana, Iowa, Montana, Nevada, Oregon, Tennessee, Texas, and Virginia — have regulated how entities can use automated processing systems to profile consumers based on their personal data. While AI is not explicitly mentioned, the automated decision-making addressed in these laws includes the use of algorithms such as AI and machine learning.

Virginia passed its omnibus privacy bill in 2021. The state’s Consumer Data Protection Act applies, with exceptions, to persons controlling or processing the personal data of more than 25,000 consumers. It requires data “controllers” to provide consumers with privacy notices and honor certain consumer rights, such as the right to opt out of data profiling. It also categorizes profiling practices based on risk level, requiring entities to conduct assessments of practices that pose a “heightened risk of harm” and compelling compliance with disclosure requests from the attorney general. Violations are subject to injunctive relief and civil penalty.

Many of the data protection acts subsequently passed by other states closely mirror Virginia’s approach, indicating a willingness to repurpose successful regulatory templates. The diverse ideological and political majorities of these states suggest potential for bipartisan agreement on this type of regulation.

Combatting Discriminatory AI — Especially in Hiring Practices

There is substantial evidence that without proper guardrails, the use of algorithmic decision-making tools and AI can exacerbate existing forms of societal discrimination. Some states — California, Colorado, Connecticut, Massachusetts, New Jersey, and Rhode Island — and the District of Columbia are proposing and passing legislation to ensure that the adoption of AI does not perpetuate bias against protected classes. While some AI applications are receiving attention from legislatures across the political spectrum, combatting discriminatory AI has primarily been a focus for Democratic-led states.

Colorado prohibits algorithmic discrimination in the insurance space. In 2021, the state required insurers to disclose and conduct risk management of any use of algorithms and predictive modeling in order to better guarantee equitable insurance coverage. A 2023 Massachusetts proposal aimed at “preventing a dystopian work environment” would, among other things, prohibit electronic surveillance of employees through the use of AI-based facial, gait, and emotion recognition technologies.

Proposed legislation in the District of Columbia would prohibit entities that use algorithms to make service eligibility determinations (i.e., data brokers and service providers) from making determinations based on protected characteristics, subject to civil action by the attorney general and individuals suffering relevant discrimination. It would also require covered entities to provide individuals with a notice detailing how their algorithms use personal data prior to making any eligibility determinations.

Other jurisdictions have regulated the use of AI in hiring practices. For example, New York City passed a law in 2021 that restricts the use of automated decision systems in the screening of candidates by requiring employers to conduct bias audits, publish results, and notify candidates of the use of such tools, subject to civil penalty. States such as California, Illinois, Maryland, Massachusetts, and New Jersey have also taken up the issue of AI-based hiring practices.

Legislation Addressing AI in Elections

Cracking Down on Deceptive Media and Deepfakes

Several states have passed (California, Texas, Minnesota, and Washington) or introduced (New York, New Jersey, and Michigan) laws to either ban or mandate disclosure of the manipulation of media that might deceive the public about candidates for political office or otherwise influence the outcome of an election. Below, we distinguish these laws based on several features: whether the regulation bans deceptive content outright or requires disclosure; whether AI is addressed explicitly or implicitly; the types of content regulated; who can bring a claim and against whom; the types of liability imposed; and the limitations and entities exempted under these laws.

To understand how these regulations operate, we take California’s law as an example. California prohibits the distribution of “materially deceptive media” — defined as images, videos, or audio depicting a candidate for office that “falsely appear . . . to be authentic” and transmit a “fundamentally different understanding or impression” than reality — with the intent to injure a candidate’s reputation or deceive voters. The law does not explicitly mention AI, though some AI-based deceptive content would trigger a violation. The restriction comes with noteworthy limitations. For example, it requires deceptive intent, applies only to depictions of a candidate distributed within 60 days of an election, and permits media that clearly discloses that the content was manipulated.

Washington’s 2023 law and New York’s recently proposed bill require similar disclosure, whereas the laws in Texas and Minnesota set forth outright bans of election-related deepfakes, which Texas defines as media created “to depict a real person performing an action that did not occur in reality.”

Unlike California’s law, the bill in New York does explicitly name AI-generated content as the focus of its regulation. Its proposed legislation targets political communications that incorporate synthetic media, defined as images, videos, text, or recordings intentionally “created or modified through the use of artificial intelligence.” Washington also targets synthetic media, but its definition of the term encompasses “the use of generative adversarial network techniques or other digital technology” rather than solely AI.

Meanwhile, Minnesota and Texas prohibit deepfakes, implicating AI and other technologies capable of manipulating digital media. And while most state restrictions cover all digital formats (text, audio, and visual), Texas limits its regulation to video deepfakes only. A 2023 proposed amendment, however, would add a section to regulate “altered image[s]” which have been “manipulated to change the physical appearance of an individual or depict an individual performing an action that did not occur.”

States also regulate different subject matter. California and Washington restrict false depictions only of political candidates, whereas Texas and Minnesota prohibit media created with the “intent to injure a political candidate or influence the result of an election,” regardless of its subject. The latter criterion could more expansively include deepfakes that misrepresent the conduct of election officials.

Separately, proposed legislation in New Jersey amends the state’s identity theft statute to include impersonation or false depiction through the use of “artificially generated speech, transcription of speech, or text” that is deceptive and “substantially likely to cause perceptible individual or societal harm.” While such harm is not limited to elections, it does include “the alteration of a public policy debate or election” and “improper interference in an official proceeding.”

Finally, the type of liability varies by law as well. Minnesota, Texas, and the New Jersey bill establish the creation and dissemination of a prohibited deepfake as a crime punishable by imprisonment or fine. By contrast, California, Washington, and the New York bill impose only civil liability. Washington and New York do not specify who may bring a claim for damages or injunctive relief, while California restricts private rights of action to the political candidate depicted by the deceptive media.

Critics have argued that some of these laws infringe upon freedom of expression. Debate continues over whether regulating political deepfakes will chill protected speech and whether existing protections, such as defamation law, are sufficient to cover abuses.

In an apparent effort to overcome First Amendment challenges, states have limited whom their regulations apply to and when. For example, the restrictions of Texas, California, and Minnesota apply only to content created or distributed with malintent, such as to injure candidates, influence elections, or deceive voters. They also apply only within 30, 60, and 90 days of an election, respectively. Some states also exempt entities such as print publications, radio and television broadcasters, or, in California, those engaging in satire or parody. Notably, these acts do not limit themselves to entities typically covered by campaign finance laws such as the Federal Election Campaign Act, which primarily regulates electoral actors, including candidates, campaigns, and political committees, that meet established financial requirements (i.e., a candidate that receives contributions of a certain value). Instead, state AI restrictions broadly encompass interference from individuals acting outside traditional political structures and do not impose spending requirements.

States are actively experimenting with how to craft effective regulations. Just last month, Michigan proposed several bills that iterate upon the ideas of California’s legislation. One proposal regulates “materially deceptive media,” incorporating California’s exemption for satire and parody while applying criminal instead of civil liability and extending the time limitation from 60 to 90 days before an election. The bill requires not only that an actor intend to influence election outcomes but also that the distribution of the technically enhanced media be “reasonably likely to cause that result.” Additionally, an amendment to the state’s campaign finance law focuses on political actors and committees by criminalizing the creation, publication, or original distribution of political advertisements that fail to disclose AI enhancement.

The majority of bills that explicitly cover AI in elections focus on the manipulation of media. However, Arizona, a state known for entertaining oft-debunked election narratives, provides an exception: in 2023, the state’s conservative legislature passed a bill that would have prohibited election officials from using election machines that employ AI to scan ballots, process affidavits, or tabulate votes. This proposal was motivated by fear of algorithmic interference with the individual right to vote, despite there being no evidence of such intrusion in the state. Arizona’s Democratic governor vetoed the bill, writing that it was focused on “challenges that do not currently face our State.”

• • •

The rapid and widespread introduction of AI has elicited calls from inside and outside Congress for a robust federal regulatory response that balances the risks and promise of this powerful new technology. Though it remains to be seen what Congress is capable of passing, recent regulations from the states offer clues about what to expect. The experiences of these new state laws, as well as their ability to survive legal challenges, will no doubt inform Congress’s approach. States are leading this work, and their ideas belong in the conversation alongside those of Congress and the EU.