Legislation Addressing AI in Elections
Cracking Down on Deceptive Media and Deepfakes
Several states have passed laws (California, Texas, Minnesota, and Washington) or introduced bills (New York, New Jersey, and Michigan) that either ban, or mandate disclosure of, manipulated media that might deceive the public about candidates for political office or otherwise influence the outcome of an election. Below, we distinguish these laws along several dimensions: whether the regulation bans deceptive content outright or requires disclosure; whether AI is addressed explicitly or implicitly; the types of content regulated; who can bring a claim and against whom; the types of liability imposed; and the limitations imposed and entities exempted under these laws.
To understand how these regulations operate, we take California’s law as an example. California prohibits the distribution of “materially deceptive media” — defined as images, videos, or audio depicting a candidate for office that “falsely appear . . . to be authentic” and transmit a “fundamentally different understanding or impression” than reality — with the intent to injure a candidate’s reputation or deceive voters. The law does not explicitly mention AI, though some AI-based deceptive content would trigger a violation. The restriction comes with noteworthy limitations. For example, it requires deceptive intent, applies only to depictions of a candidate distributed within 60 days of an election, and permits media that clearly discloses that the content was manipulated.
Washington’s 2023 law and New York’s recently proposed bill similarly require disclosure of manipulated media, whereas the laws in Texas and Minnesota impose outright bans on election-related deepfakes, which Texas defines as media created “to depict a real person performing an action that did not occur in reality.”
Unlike California’s law, the bill in New York does explicitly name AI-generated content as the focus of its regulation. Its proposed legislation targets political communications that incorporate synthetic media, defined as images, videos, text, or recordings intentionally “created or modified through the use of artificial intelligence.” Washington also targets synthetic media, but its definition of the term encompasses “the use of generative adversarial network techniques or other digital technology” rather than solely AI.
Meanwhile, Minnesota and Texas prohibit deepfakes, implicating AI and other technologies capable of manipulating digital media. And while most state restrictions cover all digital formats (text, audio, and visual), Texas limits its regulation to video deepfakes only. A proposed 2023 amendment, however, would add a section regulating “altered image[s]” that have been “manipulated to change the physical appearance of an individual or depict an individual performing an action that did not occur.”
States also regulate different subject matter. California and Washington restrict false depictions only of political candidates, whereas Texas and Minnesota prohibit media created with the “intent to injure a political candidate or influence the result of an election,” regardless of its subject. The latter criteria could more expansively include deepfakes that misrepresent the conduct of election officials.
Alternatively, proposed legislation in New Jersey amends the state’s identity theft statute to include impersonation or false depiction through the use of “artificially generated speech, transcription of speech, or text” that is deceptive and “substantially likely to cause perceptible individual or societal harm.” While such harm is not limited to elections, it does include “the alteration of a public policy debate or election” and “improper interference in an official proceeding.”
Finally, the type of liability varies by law as well. Minnesota, Texas, and the New Jersey bill establish the creation and dissemination of a prohibited deepfake as a crime punishable by imprisonment or a fine. California, Washington, and the New York bill, by contrast, impose only civil liability. Washington and New York do not specify who may bring a claim for damages or injunctive relief, while California restricts the private right of action to the political candidate depicted by the deceptive media.
Critics have argued that some of these laws infringe upon freedom of expression. Debate continues over whether regulating political deepfakes will chill protected speech and whether existing protections, such as defamation law, are sufficient to address abuses.
In an apparent effort to overcome First Amendment challenges, states have set boundaries on whom the regulations cover and when they apply. For example, the restrictions in Texas, California, and Minnesota apply only to content created or distributed with malicious intent, such as to injure candidates, influence elections, or deceive voters. They also apply only within 30, 60, and 90 days of an election, respectively. Some states also exempt entities such as print publications, radio and television broadcasters, or, in California, those engaging in satire or parody. Notably, these acts do not limit themselves to entities typically covered by campaign finance laws such as the Federal Election Campaign Act, which primarily regulates electoral actors, including candidates, campaigns, and political committees, that meet established financial thresholds (e.g., a candidate who receives contributions of a certain value). Instead, state AI restrictions broadly encompass interference from individuals acting outside traditional political structures and do not impose spending thresholds.
States are actively experimenting with how to craft effective regulations. Just last month, Michigan proposed several bills that iterate upon the ideas of California’s legislation. One proposal regulates “materially deceptive media,” incorporating California’s exemption for satire and parody while applying criminal instead of civil liability and extending the time limitation from 60 to 90 days before an election. The bill requires not only that an actor intend to influence election outcomes but also that the distribution of the technically enhanced media be “reasonably likely to cause that result.” Additionally, an amendment to the state’s campaign finance law focuses on political actors and committees by criminalizing the creation, publication, or original distribution of political advertisements that fail to disclose AI enhancement.
The majority of bills that explicitly cover AI in elections focus on the manipulation of media. However, Arizona, a state known for entertaining oft-debunked election narratives, provides an exception: in 2023, the state’s conservative legislature passed a bill that would have prohibited election officials from using election machines that employ AI to scan ballots, process affidavits, or tabulate votes. This proposal was motivated by fear of algorithmic interference with the individual right to vote, despite there being no evidence of such intrusion in the state. Arizona’s Democratic governor vetoed the bill, writing that it was focused on “challenges that do not currently face our State.”
• • •
The rapid and widespread introduction of AI has elicited calls from inside and outside Congress for a robust federal regulatory response that balances the risks and promise of this powerful new technology. Though it remains to be seen what Congress is capable of passing, recent regulations from the states offer clues about what to expect. The track record of these new state laws, including their ability to survive legal challenges, will no doubt inform Congress’s approach. States are leading this work, and their ideas belong in the conversation alongside those of Congress and the EU.