Generative AI in Political Advertising

Political campaigns should weigh both the risks and the opportunities of using generative AI to engage with voters.

  • Christina LaChapelle
  • Catherine Tucker
Published: November 28, 2023
[Graphic: colorful political ads on screens. Credit: Chris Burnett]

As the 2024 U.S. elections gain momentum, political campaigns and advertising agencies are turning to a new ally, artificial intelligence, to disseminate their messages. This emerging technology presents both risks and potential benefits for the political advertising industry. AI-powered tools can generate new text, images, video, and speech from a single prompt to weave into campaign messages. Political campaigns are already using these tools to create messages for ads and fundraising solicitations. AI software can even compose campaign emails. Generative AI is poised to redefine modern campaigning, although the exact nature of its influence remains uncertain. Yet despite calls for regulation or moratoria from members of Congress, the Federal Election Commission, and even the political consultants’ trade association, national lawmakers have yet to address the new technology.

New AI software products are inexpensive, require almost no training to use, and can generate seemingly limitless content. These tools can support personalized advertising at scale, reducing the need for large digital teams and leveling the playing field for campaigns that lack substantial resources. Yet AI also introduces a novel set of challenges, from a tendency to generate bland and repetitive text to the risk of misleading audiences and amplifying ongoing election misinformation issues. This essay examines ways in which AI — and specifically large language models that generate text — can enhance campaigns’ voter outreach efforts, the cautions that campaigns must exercise when embracing new AI tools, and the market forces affecting both the technology itself and campaigns’ experience as generative AI consumers.

We focus primarily on considerations that could influence the decision-making of campaigns themselves. Other essays in this series delve into broader threats to democracy associated with AI use, like the disturbing trend of AI-generated deepfakes in campaigns and AI as a tool for voter suppression, along with policy responses to these problems. Here, we take a campaign’s-eye view of some of the strengths and weaknesses of generative AI in political advertising and the likely course that the market will take.

How Can AI Make Political Advertising More Powerful?

Throughout history, political campaigns have evolved in tandem with new media platforms, from the rise of radio broadcasting in the 1920s to the proliferation of the internet and social media in the 21st century. In recent years, data analytics and microtargeting tools have become the bedrock of modern campaigning. Campaigns now rely on large data sets that offer detailed insights into citizens’ behaviors, interests, and whereabouts in order to advance their key goals, such as voter mobilization and fundraising. Yet using this wealth of data to deliver the right message to the right person at the right time demands considerable labor and expertise, as well as precision tools. And this data-driven targeting is not perfect: campaigns sometimes end up delivering the wrong message or targeting the wrong person. AI has the potential to make data-driven tools even more powerful and accessible.

Targeting Specific Audiences

From a campaign perspective, AI’s ability to synthesize information about a target audience and generate a persuasive message tailored to that audience’s interests holds great promise for microtargeting efforts. Free AI tools, such as OpenAI’s ChatGPT, Microsoft’s Bing, and Google’s Bard, are each capable of producing relevant, comprehensive, and sophisticated marketing copy for a target audience. What’s more, such tools can accomplish this task on a massive scale. AI can fine-tune messages for a diverse array of voter groups and their subgroups, and it can execute these refinements hundreds if not thousands of times daily.

For a political campaign wanting to address the unique concerns of different voters — such as women worried about reproductive health care costs, young voters newly participating in the democratic process, parents unsure about educational opportunities for their children, or rural communities facing problems with limited infrastructure — AI can feasibly assist in developing and targeting messages that address all these concerns and more. Indeed, this year, major platforms like Meta and Google have begun implementing AI-powered tools for advertisers that aim to make the personalization of ad messages easier and more efficient.
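
To make the scale argument concrete, the sketch below shows how a campaign team might script this kind of per-segment generation in Python against a chat-completion API. The model name, segment labels, and prompt wording are our own illustrative assumptions, not a recommended workflow or any real campaign’s practice.

```python
# A minimal sketch of per-segment ad generation using the OpenAI Python SDK.
# Everything here (model choice, segments, prompts) is illustrative.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

SEGMENTS = [
    "women concerned about reproductive health care costs",
    "young voters participating in their first election",
    "parents weighing educational options for their children",
    "rural communities facing limited infrastructure",
]

def draft_ad(segment: str) -> str:
    """Request one short social media ad aimed at a single voter segment."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat model would do
        messages=[
            {"role": "system",
             "content": "You write 50-word social media ads for a political campaign."},
            {"role": "user",
             "content": f"Draft an ad addressed to {segment}."},
        ],
    )
    return response.choices[0].message.content

for segment in SEGMENTS:
    print(f"--- {segment} ---\n{draft_ad(segment)}\n")
```

Extending the segment list or swapping in issue-specific prompts is a one-line change, which is the crux of the scale argument: the marginal cost of one more tailored variant is close to zero.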

Empowering Less-Resourced Campaigns 

AI can be especially helpful to political campaigns with fewer financial resources. Well-funded, high-profile campaigns can usually afford sizeable digital teams capable of disseminating a large volume of targeted ads; smaller, less moneyed campaigns have historically been unable to compete in this area. The current AI landscape, with its multitude of low-cost and user-friendly tools that require no prior knowledge of coding or machine learning, disrupts that equation. Going forward, low-resource campaign teams can harness this accessibility by outsourcing targeted ad production to these tools. Furthermore, AI-generated content can rival the sophistication of big-budget campaigns, giving smaller campaigns a boost in competing with larger ones.

Improving Ad Effectiveness

AI’s adroitness at tailoring messaging could make ads more effective at engaging audiences. Although recent academic work has cast doubt on how good political campaigns actually are at persuading voters to support particular candidates or initiatives, the bulk of digital campaign effort lies in mobilizing supporters to perform certain actions, such as donating or turning out on Election Day. AI’s ability to generate well-reasoned political arguments tightly aligned with a target audience’s preferences stands to improve the overall effectiveness of such political microtargeting efforts. Indeed, some early evidence suggests that voters find AI-generated microtargeted ads fairly convincing and even favor AI-crafted political arguments over human-crafted ones, primarily due to AI’s ability to generate easy-to-read, fact-driven arguments that are more positive in tone.

What Are the Risks of AI Use in Political Advertising?

The power of generative AI to create content without human involvement comes with significant pitfalls. For one, the very features that enable campaigns to deliver targeted and effective messages at scale and on a budget could, in the hands of bad actors, be used in ways that imperil democracy, such as to suppress voting or incite violence. Even for well-intentioned campaigns, unsupervised AI can produce unoriginal, biased, or inaccurate messages.

Falsehoods and Empty Promises

Much of the media debate surrounding artificial intelligence in politics focuses on AI’s potential to generate false or misleading content. In June 2023, headlines reported on Florida Gov. Ron DeSantis’s presidential campaign sharing fake AI-generated images (sometimes called deepfakes) depicting Donald Trump embracing former chief White House medical adviser Anthony Fauci. Amplifying these concerns is AI’s swiftly advancing ability to replicate reality, which makes it increasingly challenging for voters to distinguish between AI-generated and human-generated material.

Given the current absence of AI regulations, there is growing apprehension that antidemocratic groups or other bad actors could exploit AI-driven advertising technology to unleash torrents of misinformation on the internet. This very real possibility underscores the urgent need for robust detection systems that can rapidly spot manipulated or fabricated content and bring it to voters’ attention.

From the perspective of well-intentioned campaigns, however, an issue that has received less attention in the press lies in AI’s capacity to inadvertently create false content. Text generation tools like ChatGPT have been shown to “hallucinate,” fabricating facts about individuals or events to fill gaps in their knowledge. Image generators err in analogous ways: recently, a mayoral candidate in Toronto used AI-generated images in a platform document, one of which depicted a person with three arms.

Even the most well-meaning campaigns must take care to ensure that AI-generated content avoids such mistakes, remains factually accurate, and aligns with their planned messages. Given AI’s ability to churn out mountains of content, messages containing false information can easily slip through the cracks, particularly for low-resource campaigns lacking the staff to scrutinize every item generated by AI.
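
This review burden is one place where even simple tooling can help. Below is a toy Python sketch, not a production system, of a pre-publication screen that flags web addresses, social media handles, and promise-like phrases in AI-drafted copy for human review; the approved-claims list and patterns are invented for illustration, and a real campaign would need far more robust checks.

```python
# A toy pre-publication screen for AI-drafted ad copy. It flags URLs,
# social handles, and promise-like phrases that fall outside a list of
# claims the campaign has actually approved. Purely illustrative.
import re

# Commitments the campaign has actually made; anything else gets flagged.
APPROVED_CLAIMS = {"expand rural broadband", "support small businesses"}

PROMISE = re.compile(r"\b(i|we) (will|pledge to|promise to)\b[^.!?]*", re.I)
URL_OR_HANDLE = re.compile(r"(https?://\S+|www\.\S+|@\w+)")

def flag_for_review(ad_copy: str) -> list[str]:
    """Return the spans a human should verify before the ad runs."""
    flags = [m.group(0) for m in URL_OR_HANDLE.finditer(ad_copy)]
    for match in PROMISE.finditer(ad_copy):
        promise = match.group(0)
        if not any(claim in promise.lower() for claim in APPROVED_CLAIMS):
            flags.append(promise)
    return flags

draft = "We will lower prescription drug costs. Learn more at www.example-campaign.org!"
print(flag_for_review(draft))
# ['www.example-campaign.org!', 'We will lower prescription drug costs']
```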

Campaigns also face the risk that AI will blur their core messaging. AI crafts ad messages that it judges will best fit a targeted voter’s or group’s interests and preferences. As they undertake this process for various voter groups, AI tools are likely to emphasize different topics or issues and may even put forth different stances on those issues. Quite simply, AI lacks the internal consistency of a human campaign team. This shortcoming proves especially problematic for campaigns striving to reach a diverse audience consisting of multiple voter groups with competing interests. The challenge escalates as the level of human oversight diminishes, potentially leaving a campaign unaware of all the claims and commitments propagated through its AI-generated ads during an election cycle.

Over time, voters may become more disillusioned with the electoral process as campaign promises made in AI-generated ads go unfulfilled. This could dampen turnout in future elections, with voters continually questioning the sincerity of any competing candidate. Even more concerning are cases in which an AI-generated message has directly asked for a donation and invented promises to obtain that donation. Such cases raise questions of fraud and pose the possibility that campaigns may face future legal repercussions for fundraising under false pretenses.

Biases

Relatedly, some political campaigns may struggle with biases inherent in AI systems. Because AI systems are trained on historical, encyclopedic, or public data, they absorb any and all biases present in this human-generated data. In particular, AI trained on data from the internet, which includes copious racist and sexist content (among other problems), is likely to generate racist and sexist messages. Moreover, several studies have illuminated political opinion biases in commonly used large language models. For instance, whereas ChatGPT often echoes left-wing libertarian views, Meta’s LLaMA tends to lean toward a right-wing authoritarian perspective. This divergence poses a dilemma for campaigns aiming to ensure that AI-generated ad content resonates with specific political values, as they may be unaware of bias and unable to supervise AI outputs at scale.

Ignorance of Certain Topics

Another hurdle that campaigns might encounter stems from the limitations of the data on which AI tools are trained. According to one study, AI appeared to struggle to formulate arguments on political issues that are less discussed in public political spaces, such as organ donation, instead producing output consisting of unrelated issue stances or rudimentary sentence fragments. Another study found that nine of the most-used AI tools could not generate content that accurately reflects the political opinions of different U.S. demographic groups, despite additional prompting. This deficiency highlights AI’s struggle with nuance, an important facet in politics: campaigns must be able to glean and respond to subtleties in the electorate’s opinions.

Generic Language

Finally, a downside largely missing from the usual AI alarm bells concerns originality in AI-generated political ads. When prompted to craft political ads, AI synthesizes the style, tone, and content of ads present in the data on which it was trained. This process tends to dilute the distinctive, inventive, or attention-grabbing elements that make ads unique, resulting in ads filled with generic statements and lacking the freshness that campaigns often aspire to convey. This issue dovetails with the ongoing challenge of AI cannibalism, wherein AI is now learning from AI-generated content, perpetuating repetitive, unoriginal language in its output. If these homogenized political ads end up inundating the feeds, websites, and channels that voters view, then campaigns may find that those ads struggle to capture and maintain voters’ attention.

To illustrate these points, consider an example from one of the most widely used AI tools. We asked ChatGPT to produce an ad for social media featuring a fictional political candidate: 

[Screenshot: ad copy generated by ChatGPT]

The AI-generated ad copy exemplifies the following issues:

  • Generic statements. The ad contains generic political slogans like “a brighter future” and “together, we can make a difference” and is largely devoid of a unique voice or personality to resonate with the audience. These flaws demonstrate how personalization at scale might not be as powerful as it initially seems in certain contexts.
  • Made-up promises. The ad commits the candidate to initiatives that were not in the prompt, like creating local jobs and lowering prescription drug costs. Such promises run the risk of alienating voters if the candidate is unprepared or unwilling to fulfill these commitments down the road. In addition, for other prompts we tried, ChatGPT included nonexistent website URLs and social media handles in its ad copy — a big problem for campaigns wanting to direct the ad audience to actual websites or social media feeds.
  • Biased assumptions. Although the prompt’s language is neutral, the generated text assumes a fairly liberal stance, framing a vote for the candidate as a “Vote for Progress” and touting policies like affordable health care and a pathway to citizenship for immigrants. Campaigns across the political spectrum must be wary of biases embedded within AI systems leaking into their messaging.

To be clear, these issues are not specific to ChatGPT or OpenAI. When we gave the same prompts to different AI tools, they produced remarkably similar-sounding political ads with comparable snags — generic statements, unprompted assumptions, and made-up promises. Predicting exactly what kinds of political ads these AI tools will generate in the future is impossible, but if AI continues to learn from itself, then these problems are not going away.

What to Do: Can Market Forces Help?

Other essays in this series address potential governmental responses to election-related challenges posed by AI. To be sure, regulation could ameliorate some of the risks of generative AI. For example, transparency rules (such as a requirement that AI-generated messages include a disclaimer) could warn audiences to watch for false or biased content. But public policy cannot solve some of the issues that threaten to curtail AI’s usefulness for campaigns, especially the problem of generic text in AI-generated ads. Fortunately, certain market forces intrinsic to online advertising may prove an effective remedy. The forces we have in mind stem from advertising platforms’ own need to engage two groups: users and advertisers.

To understand how, we need to look at the economics underlying advertising-supported attention platforms. The term attention platform encompasses traditional ad-supported media (including radio, television, and print outlets) as well as online businesses like search and social media platforms. These platforms compete for users’ attention by pairing engaging content with ads that are as appealing as possible, since unappealing ads repel users. We can see this impetus across many platforms, from YouTube’s innovations in giving users control over video ads to Meta’s transformation of its ads toward visual appeal.

In sum, to compete, attention platforms must present appealing ads that don’t annoy their users. But they’re also incentivized to promote the ads that deliver their advertisers the highest return on investment — typically those ads most effective at engaging the attention of users. In other words, online advertising systems reward ads that are more engaging.

Going forward, we expect this natural market dynamic to reduce incentives for generic, mass-produced ad copy riddled with the biases and false promises that we have pinpointed as key concerns. Because platforms reward ads that users engage with, campaign advertisers will gravitate toward distributing more appealing ads, with or without the aid of generative AI.

The use of AI in political advertising is still in its infancy, and we anticipate market developments that will assist campaigns in producing more effective ads. We expect the foundation models on which AI systems are built to become richer in the breadth and depth of the data they use, hopefully leading to fewer hallucinations and less ignorance of underserved political groups. We also expect campaigns and advertisers to insist on more control over the output of generative AI, leading platforms to introduce companion tools that allow users to set strict boundaries on campaign promises or “fact check” AI-generated ad copy for errors.
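
One plausible shape for such boundary-setting tools is a guardrail layered over generation itself. The sketch below, under the same illustrative assumptions as earlier (the model name and prompt wording are ours), injects a campaign’s approved positions into the system prompt and instructs the model to commit to nothing beyond them. Prompt-level constraints reduce but do not eliminate stray promises, so a post-generation screen like the one sketched above would still be prudent.

```python
# A sketch of a "boundary-setting" companion tool: approved positions are
# injected into the system prompt so the model has an explicit whitelist
# of commitments. Model name and prompt wording are illustrative.
from openai import OpenAI

client = OpenAI()

APPROVED_POSITIONS = [
    "expand rural broadband access",
    "fund job-training partnerships with community colleges",
]

GUARDRAIL = (
    "You may reference ONLY these campaign commitments: "
    + "; ".join(APPROVED_POSITIONS)
    + ". Do not invent policies, statistics, URLs, or social media handles. "
    "If the request requires a claim outside this list, decline and explain why."
)

def bounded_draft(request: str) -> str:
    """Generate ad copy constrained to the campaign's approved positions."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative
        messages=[
            {"role": "system", "content": GUARDRAIL},
            {"role": "user", "content": request},
        ],
    )
    return response.choices[0].message.content

print(bounded_draft("Write a 40-word ad for voters worried about local jobs."))
```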

For now, as the 2024 election looms and political campaigns already fold AI into their messaging plans, they must keep the technology’s limitations in mind when using it extensively to connect with voters.