
The Effect of AI on Elections Around the World and What to Do About It

Artificial intelligence is being used for good and ill, and regulations must account for both.

June 6, 2024

As more than 50 countries prepare for elections this year, artificial intelligence–generated media has begun to play a variety of roles in political campaigns, ranging from nefarious to innocuous to positive. With six months until the U.S. general election, examining the use of AI in this year’s major global elections provides Americans with insights on what to expect in our own election and how election officials, legislators, and civil society should prepare.

AI technology clearly has the potential to exacerbate election-related challenges, including the spread of disinformation and cyber vulnerabilities in election systems. Governments and civil society must work to fortify the electorate against such threats. Tactics range from immediate actions, such as publicizing corrective information and strengthening online safeguards, to legislation, such as stronger curbs on deceptive online political advertising. In doing so, however, policymakers and advocates should keep in mind the various uses of AI in the political process and develop nuanced approaches that focus on the worst impacts without unduly limiting political expression.

AI’s dangers to the political process have become increasingly evident in the United States and many other countries. Earlier this year, for instance, AI-generated robocalls imitated President Biden’s voice, targeting New Hampshire voters and discouraging them from voting in the primary. Around the same time, an AI-generated image falsely depicting former president Trump with convicted sex trafficker Jeffrey Epstein and a young girl began circulating on Twitter.

Meanwhile abroad, deepfakes circulated last year in the Slovakian election, defaming a political party leader and possibly helping swing the election in favor of his pro-Russia opponent. In January, the Chinese government apparently tried to deploy AI deepfakes to meddle in the Taiwanese election. And a wave of malicious AI-generated content is appearing in Britain ahead of its election, scheduled for July 4. One deepfake depicted a BBC newsreader, Sarah Campbell, falsely claiming that British Prime Minister Rishi Sunak promoted a scam investment platform. And as the Indian general election has gotten under way, deepfakes of popular deceased politicians appealing to voters as if they were still alive have become a popular campaign tactic.

Sometimes, however, the use of deepfakes and other AI technology is more complicated.

In Indonesia, for instance, the leading candidate for president, a former general, deployed an AI-generated cartoon to humanize himself and appeal to younger voters. This raised some eyebrows given his role in the country’s military dictatorship, but there was no clear deception involved. In Pakistan, the jailed opposition leader, Imran Khan, used an AI-generated video to address his supporters, blunting efforts by the military and his political rivals to silence him. In Belarus, the country’s embattled opposition even ran an AI-generated “candidate” for parliament. The candidate — actually a chatbot that describes itself as a 35-year-old from Minsk — is part of an advocacy campaign to help the opposition, many of whom have gone into exile, reach Belarusian voters.

In short, while AI-generated deepfakes and other synthetic media pose very real dangers for our elections, they can also further creative political communication. Policymakers need to weigh these competing interests carefully, crafting responses that counter the worst potential impacts of deceptive AI without unduly burdening legitimate political and other forms of expression.

To start, governments and civil society should promote accurate information about the electoral process, including through public education campaigns to empower citizens to discern truth from falsehoods and the establishment of rapid response teams to monitor and counteract false information. Where possible, these efforts should be in collaboration with social media companies and key participants in the electoral process such as candidates and political parties.

For election officials seeking to adopt new AI systems to help run elections, the Brennan Center recommends that they follow a careful, transparent process in deciding whether to do so in any particular context. If and when they choose to go ahead, they should integrate simple, effective systems with necessary human oversight, ensuring transparency and documentation. They should also establish robust training for their staff and contingency plans to address possible malfunctions in the systems being used. This should include periodic reviews and adjustments based on performance data and feedback, ensuring the safe and accountable use of AI tools.

When crafting legislation to address deepfakes and AI-generated media in elections, policymakers should adopt a careful and nuanced approach. It is important to prioritize transparency for manipulated and harmful media, as seen in several state laws and pending bills before the U.S. Congress. This will help ensure voters are informed about the authenticity of the messages they receive and protect the electoral process against the most significant threats. However, in some cases transparency alone will not suffice, as labels can be ignored or removed. Targeted bans may be necessary to address especially harmful content, such as content intended to confuse and deceive voters about when, where, and how to cast their ballots.

As deepfake tools become more sophisticated and accessible, they pose a significant threat to the democratic process around the world. Policymakers must recognize the urgency of the situation and take proactive measures to address this unprecedented challenge while continuing to respect free expression and the desire of political actors of all stripes to use novel methods to reach the voting public.