
Regulating AI Deepfakes and Synthetic Media in the Political Arena

Policymakers must prevent manipulated media from being used to undermine elections and disenfranchise voters.

Published: December 5, 2023
View the entire AI and Democracy series

The early 2020s will likely be remembered as the beginning of the deepfake era in elections. Generative artificial intelligence now has the capability to convincingly imitate elected leaders and other public figures. AI tools can synthesize audio in any person’s voice and generate realistic images and videos of almost anyone doing anything — content that can then be amplified using other AI tools, like chatbots. The proliferation of deepfakes and similar content poses particular challenges to the functioning of democracies because such communications can deprive the public of the accurate information it needs to make informed decisions in elections. 

Recent months have seen deepfakes used repeatedly to deceive the public about statements and actions taken by political leaders. Specious content can be especially dangerous in the lead-up to an election, when time is short to debunk it before voters go to the polls. In the days before Slovakia’s October 2023 election, deepfake audio recordings that seemed to depict Michal Šimečka, leader of the pro-Western Progressive Slovakia party, talking about rigging the election and doubling the price of beer went viral on social media.

Other deepfake audios that made the rounds just before the election included disclaimers that they were generated by AI, but the disclaimers did not appear until 15 seconds into the 20-second clips. At least one researcher has argued that this timing was a deliberate attempt to deceive listeners. Šimečka’s party ended up losing a close election to the pro-Kremlin opposition, and some commenters speculated that these late-circulating deepfakes affected the final vote.

In the United States, the 2024 election is still a year away, but Republican primary candidates are already using AI in campaign advertisements. Most famously, Florida Gov. Ron DeSantis’s campaign released AI-generated images of former President Donald Trump embracing Anthony Fauci, who has become a lightning rod among Republican primary voters because of the Covid-19 mitigation policies he advocated. 

Given the astonishing speed at which deepfakes and other synthetic media (that is, media created or modified by automated means, including with AI) have developed over just the past year, we can expect even more sophisticated deceptive communications to make their way into political contests in the coming months and years. In response to this evolving threat, members of Congress and state legislators across the country have proposed legislation to regulate AI.

As of this writing, federal lawmakers from both parties have introduced at least four bills specifically targeting the use of deepfakes and other manipulated content in federal elections and at least four others that address such content more broadly. At the state level, new laws banning or otherwise restricting deepfakes and other deceptive media in election advertisements and political messages have passed in recent years in states as ideologically diverse as California, Minnesota, Texas, and Washington. Federal and state regulators may also take action. Recently, for example, the advocacy group Public Citizen petitioned the Federal Election Commission (FEC) to amend its regulations to prohibit the use of deepfakes by candidates in certain circumstances.

However, even as policymakers move to update laws and regulations in the face of new and more advanced types of manipulated media, they must be mindful of countervailing considerations. Most important among these considerations is the reality that manipulated content can sometimes serve legitimate, nondeceptive purposes, such as the creation of satire or other forms of commentary or art. These types of expression have inherent value and, in the United States, merit considerable legal protection under the First Amendment. Even outright deception with no redeeming artistic or other licit purpose, while typically entitled to less constitutional protection, cannot under U.S. law be prohibited simply for its own sake. The government must still provide an independent justification for any restriction and demonstrate that the restriction is appropriately tailored to its stated goal.

These constraints are not a reason to shy away from enacting new rules for manipulated media, but in pursuing such regulation, policymakers should be sure to have clear and well-articulated objectives. Further, they should carefully craft new rules to achieve those objectives without unduly burdening other expression.

Part I of this resource defines the terms deepfake, synthetic media, and manipulated media in more detail. Part II sets forth some necessary considerations for policymakers, specifically:

  • The most plausible rationales for regulating deepfakes and other manipulated media when used in the political arena. In general, the necessity of promoting an informed electorate and the need to safeguard the overall integrity of the electoral process are among the most compelling rationales for regulating manipulated media in the political space.
  • The types of communications that should be regulated. Regulations should reach synthetic images and audio as well as video. Policymakers should focus on curbing or otherwise limiting depictions of events or statements that did not actually occur, especially those appearing in paid campaign ads and certain other categories of paid advertising or otherwise widely disseminated communications. All new rules should have clear carve-outs for parody, news media stories, and potentially other types of protected speech.
  • How such media should be regulated. Transparency rules — for example, rules requiring a manipulated image or audio recording to be clearly labeled as artificial and not a portrayal of real events — will usually be easiest to defend in court. Transparency will not always be enough, however; lawmakers should also consider outright bans of certain categories of manipulated media, such as deceptive audio and visual material seeking to mislead people about the time, place, and manner of voting.
  • Who regulations should target. Both bans and less burdensome transparency requirements should primarily target those who create or disseminate deceptive media, although regulation of the platforms used to transmit deepfakes may also make sense.

What Are Deepfakes and Other Synthetic and Manipulated Media?

The term deepfake refers to videos, images, or audio of a real or fictitious person’s likeness or voice generated or substantially modified using deep learning, a subset of machine learning that drives many AI applications. Technological advances have enabled increasingly sophisticated deepfakes that can now mimic the voices and facial features of prominent public figures and private citizens with alarming accuracy. Deepfakes are a type of manipulated media, which encompasses everything from synthetic media to media altered with more basic tools like Photoshop or dubbing software.

Our focus in this resource is on synthetic and other manipulated media that convey events, speech, or conduct that did not actually occur, or individuals who do not exist, in a manner that would be convincing to a reasonable viewer or listener.

As a recent paper by the RAND Corporation notes, new technology is making deepfakes and other synthetic media easier to produce and disseminate inexpensively. Many developers now offer tools that the average person can use to make convincing content with only a short video or audio clip of a real person. One professor was able to create a deepfake of himself in eight minutes for only $11. These technologies will only get cheaper and more accessible. Meanwhile, text-to-image tools have become increasingly sophisticated and are widely available. This increased effectiveness, coupled with wide accessibility, threatens to undermine the electorate’s ability to distinguish fact from fiction when making decisions about whether and how to participate in democratic processes.

Urgent Considerations for Policymakers

In weighing potential regulation of AI deepfakes and other manipulated media, legislators and regulators should consider the following questions: What is the overall goal for regulation? What sort of conduct should be covered? How should this conduct be regulated? And, finally, who should be included — in particular, should regulation target only those creating or disseminating AI deepfakes or also the online and other platforms that facilitate their transmission?

Why Regulate?

A threshold question for any new area of regulation is why. In our view, promoting a more informed electorate and safeguarding the integrity of the electoral process are the most compelling objectives for restricting manipulated media. Other valid aims include shielding candidates and election workers and curbing harmful disinformation more broadly, but pursuing those goals could risk unduly infringing on protected types of expression.

Promoting an informed electorate. One of the strongest cases for regulating AI deepfakes and other manipulated media is to prevent, or at least mitigate, the harmful effects of deceptive campaign advertising and similar communications that are clearly designed to influence how voters cast their ballots. 

Although campaign ads are a form of protected political speech, the Supreme Court has long permitted their regulation. As the Court observed in the seminal 1976 campaign finance decision of Buckley v. Valeo, “In a republic where the people are sovereign, the ability of the citizenry to make informed choices among candidates for office is essential, for the identities of those who are elected will inevitably shape the course that we follow as a nation.”

Deepfake depictions of candidates or other people or events in campaign ads and similar communications undeniably threaten the pivotal public interest of having an informed electorate. That interest has long justified a variety of transparency rules requiring those who disseminate campaign ads to disclose their identities and sources of funding; it would almost certainly justify requirements to label deepfakes in such ads. And because many deepfakes, unlike other forms of political speech, are inherently deceptive and therefore entitled to less constitutional protection, a strong case can be made for outright prohibiting at least a subset of them, such as deepfake depictions of candidates disseminated by their opponents.

Safeguarding the electoral process. There is also a strong case for regulating the dissemination of deceptive images and audio that seek to disenfranchise voters or undermine the integrity of the electoral process. Regulation should focus at a minimum on communications that purposely mislead voters about where, when, and how to vote, and on those designed to cast doubt on the legitimacy of an electoral result by purporting to convey fraud or other illegality that never actually occurred.

Many voter intimidation and deception tactics are already illegal. As lawmakers have long recognized, such tactics can be just as effective as outright denying someone access to the voting booth. While the problem long predates the advent of sophisticated AI technology, deepfakes and other synthetic images and audio that mimic election officials or other trusted sources in order to disseminate false information can dramatically amplify voter deception campaigns. These communications have minimal (if any) redeeming value and the potential to do considerable harm, which likely justifies prohibiting them.

Deepfakes and similar media that attack the integrity of the electoral process through false depictions of fraud and other illegality are also a major threat. Not only do false assertions of fraud undermine confidence in our elections, but they also increasingly drive election-related violence, such as the January 6, 2021, attack on the U.S. Capitol, which was part of a broader effort to overturn the 2020 presidential election result. These lies remain a potent force in American politics, helping to sustain a broad election denial movement that has sought to intimidate voters and election workers and, increasingly, to infiltrate election offices and gain control over the machinery of elections. Sophisticated AI deepfakes have the power to supercharge already rampant conspiracy theories. Instilling confidence in elections and preventing election-related violence are powerful justifications for labeling and other transparency requirements at a minimum, and arguably for outright prohibitions.

Protecting candidates and election workers. There may also be a case for broader restrictions on deepfakes and similar depictions of candidates, election officials, and election workers with the intent to intimidate, threaten, or defame them. 

Most candidates would be considered public figures who, in running for office, have voluntarily sacrificed some of the reputational protections to which ordinary citizens are entitled. But this sacrifice is not absolute. For instance, the Supreme Court has long held that the dissemination of falsehoods about a public figure with “actual malice” can give rise to liability under libel and defamation laws. Running for office is already difficult; candidates from traditionally disadvantaged backgrounds encounter disparate barriers — including an increased likelihood of facing threats and other forms of online abuse. Sophisticated online deepfakes threaten to compound this problem. 

Unlike candidates, many election workers who have suffered vicious attacks are not public figures — like the Georgia poll workers targeted by Trump lawyer Rudy Giuliani as part of the effort to overturn the 2020 election. Local election officials and workers already face a host of challenges, including mounting fears of threats, intimidation, and even outright violence spurred on by the election denial movement. Here too, AI deepfakes could exacerbate an already serious problem.

Despite cogent reasons to protect candidates, election officials, and ordinary election workers from threats and malign falsehoods, however, blanket restrictions on deepfakes and similar depictions of these individuals risk unjustifiably chilling legitimate criticism or other commentary (at least when the target is a public figure). Moreover, affected individuals may already have avenues for relief, including state laws against libel, defamation, and intentional infliction of emotional distress. (Giuliani was recently held liable for defaming the election workers he targeted.) In many cases, it could make the most sense to ensure that these generally applicable laws adequately account for advancing deepfake technology.

Curbing deceptive political discourse in general. In a similar vein, there may be a case for regulating deepfakes and other deceptive media that implicate issues of public concern even without an explicit tie to an election, although the most aggressive efforts will likely be hard to defend in court.

Deceptive communications focused on contentious social and political issues are the most common type of disinformation and are often intended — like disinformation expressly targeting elections — to distort and destabilize the political process. For instance, many of the Russian ads directed at the U.S. electorate in 2016 did not even mention candidates; rather, they aimed to sow discord on divisive social issues in an already charged environment. Microsoft researchers recently claimed that China was generating images “meant to mimic U.S. voters across the political spectrum and create controversy along racial, economic, and ideological lines.” Other communications might be designed to discredit political leaders without a clear nexus to an immediate election. UK opposition leader Keir Starmer recently suffered such an attack when an apparently false AI-generated audio clip that seemed to capture him swearing at staffers went viral. Within a few days, the clip had 1.5 million views on X (formerly Twitter).

Broad efforts to curb such communications carry their own risks, however. Even staunch proponents of combating disinformation typically concede that the line between disinformation and ordinary (if perhaps misguided) statements of opinion is not always clear. To be sure, deepfakes offer more clarity than other types of misleading communications in that they are by definition false, reducing the risk that a regulator’s subjective perceptions of what is or is not truthful will chill sincere speech.

But deepfakes still have an expressive component — and, in some cases, genuine value as art or political commentary. At least in the United States, it is far from certain that courts would deem a general interest in improving the quality of political discourse sufficient to justify sweeping restrictions on such communications. At a minimum, broader restrictions governing deepfakes and other manipulated media will likely require carve-outs and may need to take a lighter touch (for example, requiring transparency instead of an outright prohibition) than more narrowly targeted rules, as discussed below.

What Conduct Should Be Regulated?

The next question that policymakers must confront is what should be regulated. Existing laws and proposals vary widely. Some aim to cover all materially deceptive media, while others target only synthetic media, or only synthetic depictions of candidates, or even only synthetic video depictions of candidates. Our view is that rules should generally cover any convincing manipulated image, audio clip, or video depiction of an event or statement that did not actually occur in a paid campaign ad, along with certain other types of paid or unpaid communications, with appropriate carve-outs and exceptions.

Doctored photos and manipulated video clips are nothing new in politics. Indeed, editing photos of one’s opponent to make them look more sinister or unappealing has become a hallmark of American campaigning. Opponents sometimes darkened Barack Obama’s skin in attack ads. A 2020 Trump campaign ad manipulated three photos of Joe Biden to show him “alone” and “hiding” during the 2020 Covid-19 lockdown. Biden’s own campaign spliced video of Trump to make it seem like he called the coronavirus pandemic a “hoax.” Also in 2020, a manipulated video of House Speaker Nancy Pelosi in which she appeared to be drunk and slurring her speech went viral on social media. None of these depictions used AI technology.

For this reason, some proposals aim to regulate all “materially deceptive media” used in political or election messaging, regardless of whether AI played a role in the manipulation. A California law passed in 2019 regulates the distribution of candidate depictions within 60 days of an election, covering manipulated candidate images, videos, or audio clips with “the intent to injure the candidate’s reputation or to deceive a voter into voting for or against a candidate.”

More common are proposals and laws that explicitly regulate only synthetic media — like a bill in New York that targets political communications incorporating images, videos, text, or recordings intentionally “created or modified through the use of artificial intelligence.” Several other states seek to regulate synthetic media in new laws and proposals, as do at least eight bills in Congress. Even here, we see differences in which media types are included. Most bills cover images, videos, and audio content; some, like the one in New York, also include text, and others, like one in Texas, limit covered media to video images.

A related question is whether the law should target only deepfakes of candidates or encompass a broader array of manipulated images, video, and audio. Even in candidates’ own campaign ads, it isn’t difficult to imagine methods of deception that do not impersonate or falsely depict a candidate but could still mislead the public. For example, a Toronto mayoral candidate’s use of AI-generated images in his campaign materials came to light only when people noticed that one image supposedly portraying average constituents included a woman with three arms.

Whereas that AI use was relatively innocuous, the same technology can be employed to create more troubling images of events or occurrences that never happened. One recent ad from the Republican National Committee, for instance, shows a variety of politically sensitive scenarios, including a city on lockdown, war between China and Taiwan, and migrants streaming across the border — all of them AI-generated. Another viral fake image circulating online seems to capture an explosion at the Pentagon.

Policymakers must also ask what types of media they hope to regulate. Much of the recent news focus on deepfakes has been on fake video, which may be why some recent proposals have focused only on synthetic video images. But, as discussed above, deepfake technology can also help create fake photos and audio content. Leaving such content outside the scope of rules can create significant loopholes. That appears to be what happened with the viral fake audio clips of Slovakian politician Michal Šimečka, which circulated on Facebook but were not covered by Meta’s manipulated media rules, which apply only to video. (A case currently before Meta’s Oversight Board could prompt a reevaluation of that policy.)

In general, deepfake and other manipulated media regulations should cover images, video, and audio, any of which can effectively fool voters. At a minimum, the law should require disclaimers on professional campaign ads that use manipulated media — including but not limited to AI-generated content — to convey events or statements that did not actually occur. Campaign ads in this context would include paid ads run by candidate campaigns or other PACs (which are defined under federal law as entities whose “major purpose” is to influence elections), as well as ads run by others that either contain explicit electoral messages or depict a candidate shortly before an election.

A good model for manipulated media rules could be federal campaign finance laws, which regulate so-called “electioneering communications” — that is, paid broadcast, cable, or satellite communications run within 30 days of a primary or 60 days of a general election that refer to or depict a candidate and target their electorate. To be effective, this framework would also need to apply to online ads.

Whether to extend regulation to other types of paid or unpaid communications should depend on the context. A case can be made for regulating candidate campaigns and other PACs when they disseminate certain deepfakes across any channel (including encrypted messaging apps like WhatsApp and Signal) even if no money changes hands related to the dissemination — for example, deepfakes intended to impersonate a candidate’s opponent. Federal regulations already prohibit some other types of impersonation, and the FEC is currently weighing whether to explicitly extend those rules to deepfakes. As noted above, the case is also strong for prohibiting anyone from creating or disseminating any type of manipulated media that either spreads disinformation about the voting process with the intent to disenfranchise eligible voters or falsely depicts illegal activities to cast doubt on the legitimacy of an election.

Rules targeting the use of AI or other manipulated media in other types of communications — including unpaid electoral communications by persons other than candidates or PACs and communications of any sort addressing general issues of public concern without any direct tie to an election — could also be defensible but are more likely to impinge on general public discourse. This concern is one reason why, as noted below, we think that broader deepfake rules should mostly focus on disclosure, with outright bans reserved for narrower categories of especially harmful communications.

Another important limiting principle is that restrictions probably should not reach communications that fall short of depicting events or statements that did not actually occur. Other tactics, like altering an image to change someone’s appearance or adding a synthetic background, can range from innocuous to nefarious. Attempting to limit such AI uses would likely be harder to defend in court.

To avoid burdening protected speech, policymakers will also want to create various carve-outs or exceptions. Given the press’s central role in our democracy and the need for news media to have extra leeway to report freely on matters of public interest, the law requires most campaign finance rules to include a press exception that exempts typical press activities, like editorials and news stories, from regulation. Similar exceptions for press activities are probably warranted for rules limiting dissemination of deepfakes (which, among other things, may sometimes be legitimate subjects for news coverage). Parodies and satire represent another important exception for deepfake laws; both are included in the California and Minnesota laws and in the U.S. Senate’s bipartisan Protect Elections from Deceptive AI bill (S. 2770). Other exceptions may also be necessary depending on how broadly a deepfake restriction reaches and how burdensome it is.

How Should Deepfakes and Similar Media Be Regulated? 

How to regulate manipulated media is another important question. Broadly speaking, both in Congress and at the state level, legislators have taken one of two approaches to regulating deepfakes and other manipulated media: requiring disclosure of the deepfake or manipulation or banning it altogether. While disclosure will almost always be easier to defend in court, targeted prohibitions of the most egregious types of manipulated media are also appropriate.

The majority of laws and new proposals that restrict deepfakes and other manipulated media focus on disclosure. For instance, Washington State and California require disclosure of inauthentic content that falsely depicts political candidates with an intent to deceive voters; the Michigan legislature recently passed similar legislation. Similarly, in Congress, the AI Disclosure Act of 2023 (H.R. 3831) would require any AI-created content to include the disclosure “Disclaimer: this output has been generated by artificial intelligence.” The AI Labeling Act of 2023 (S. 2691) would require a similar disclosure and specify that the disclaimer must be legible and difficult to remove. The REAL Political Advertisements Act (H.R. 3044 and S. 1596) would require disclaimers on any political ad using content that is “in whole or in part” generated by AI. The DEEPFAKES Accountability Act (H.R. 5586) would require a disclaimer on a deepfake of any person, political figure or not.

These proposals focus on disclosure for a reason: as the Supreme Court has noted in the campaign finance context, disclosure does not “prevent anyone from speaking.” Disclosure requirements thus are usually subject to a more forgiving constitutional standard. Campaign transparency rules — like requiring disclaimers on ads so that viewers know who paid for them — are also broadly popular across the ideological spectrum.

That being said, lawmakers have also put forth proposals to ban certain categories of deepfakes intended to influence elections. Laws in Texas and Minnesota ban such content (as would the proposed Protect Elections from Deceptive AI legislation). The Texas law bans deepfake videos, which it defines as videos created “to depict a real person performing an action that did not occur in reality” for the purpose of injuring a candidate or influencing an election. The law in Minnesota bans video, audio, and images that are “so realistic that a reasonable person would believe [they] depict[] speech or conduct of an individual” when disseminated to influence an election. Importantly, both state bans are limited to discrete time frames (within 30 days of an election in Texas and 90 days in Minnesota).

Outright prohibitions of this sort have certain advantages over disclosure. Not everyone who views or listens to a deepfake or other form of manipulated media will notice a disclaimer intended to inform them of the content’s artificiality. Additionally, video, images, and audio can be altered to remove disclaimers, notwithstanding efforts to develop watermarking technology for synthetic content. Disclaimer requirements are also challenging to enforce in an era of microtargeted online ads that are subject to little or no human review. Particularly for communications that have little to no redeeming value — like attempts to deceive voters to keep them from casting their ballots or false depictions of illegal activity intended to undermine an election’s legitimacy — a strong case can be made for bright-line rules outlawing them. Broader prohibitions on electoral deepfakes and other manipulated media could also be defensible, but they should probably be restricted to certain preelection windows, as in Texas and Minnesota, and have clear carve-outs for parody and other forms of nondeceptive speech.

Who Should Be Regulated?

Finally, we come to the question of who should be regulated. In general, we think that most regulations should target candidates, PACs, and others creating and disseminating deepfakes and similar content related to elections, but policymakers may also want to consider targeted rules for online and other platforms. 

Typically, the government’s constitutional interest in regulating political expression is highest when that expression originates from a candidate’s campaign or other form of PAC. These entities are in the business of trying to persuade voters. While they have robust First Amendment rights, the government’s well-established interest in fostering an informed electorate has long justified the imposition of detailed transparency requirements that compel disclosure of donors and other operational details. That same interest likely justifies requiring clear disclaimers when a campaign or PAC deploys deepfakes and other deceptive content, and arguably even outright prohibitions on their doing so in certain circumstances, as explained above. Other individuals and groups deploying such content in paid campaign advertising should also be subject to disclaimer requirements at a minimum, as several of the leading proposals before Congress would require.

For other types of content, as noted above, the advisability of regulation is context specific. In general, laws that have the potential to sweep in large numbers of ordinary people posting or sharing content online to express their personal political views are the most apt to be challenged. 

Lastly, some targeted regulation of the platforms that transmit deepfakes and other manipulated media probably makes sense. A good model in this regard is the bipartisan Honest Ads Act (S. 1989), which would require major online platforms like Facebook, Google, and X to maintain public records of all requests to purchase online political ads, ensure that those ads carry appropriate disclaimers about who paid for them, and make reasonable efforts to ensure that the ads they run comply with other laws. Any public file maintained by an online platform should also include information (provided by the ad purchaser) about whether an ad incorporates deepfake technology. Platforms should also be required to take reasonable steps to make sure that such ads include required disclaimers and are not otherwise prohibited. (At least two major platforms, Google and Meta, have voluntarily adopted disclaimer requirements.) Similar requirements may make sense for broadcast, cable, and satellite providers that are subject to Federal Communications Commission political programming rules.

• • •

The advent of sophisticated AI-generated deepfakes and other manipulated media poses a serious challenge to elections around the world. U.S. policymakers should strive to develop appropriately tailored responses that mitigate the harms caused by new technologies without unduly burdening legitimate expression.

When it comes to elections, deepfakes and advanced synthetic media generally, like many other recent leaps in AI technology, are less a freestanding threat than a threat amplifier. Deceptive campaign ads, other types of mis- and disinformation, voter suppression, attacks on election workers, and broader efforts to undermine the legitimacy of the electoral process all predate the emergence of deepfakes and would continue even if all AI-generated content somehow disappeared tomorrow. While addressing this specific problem is imperative, policymakers should not lose sight of the broader challenges that have left our democratic institutions so vulnerable to new avenues for deception.