There is no silver bullet for the additional security risks wrought by AI technology’s rapid advances and increased availability. Broadly, however, government and the private sector must take action in five areas to mitigate the heightened threat: building more secure and resilient election systems; providing election officials with more technical support to safeguard election infrastructure; giving the public ways to authenticate election office communications and detect fakes; offering election workers AI-specific trainings and resources; and effectively countering false narratives generated by or about AI.
Build More Security and Resilience into Election Systems
Although the nation has made substantial progress on election security in the last decade, the likelihood that AI will supercharge security threats makes it even more critical to adopt existing safeguards — some of which themselves leverage AI technology — and to devise new ones to help election officials prevent, detect, and recover from cyberattacks on election infrastructure.
Some jurisdictions still need to expand their use of multifactor authentication, which CISA director Jen Easterly has called the most consequential step that users can take to combat cybersecurity threats. Multifactor authentication requires users to present at least two distinct factors: something they know (e.g., a password or personal identification number), something they have (e.g., an authenticator app on a cell phone that receives a code or request to verify), or something they are (e.g., a fingerprint, palm print, or voice or face recognition).
While an attacker might be able to obtain passwords used by an election worker to access sensitive files like registration databases or ballot programming files, multifactor authentication requires access to that worker’s phone or another separate device as well. Security experts have long recommended multifactor authentication, but its uptake has not been universal. State and local officials must ensure even wider adoption of this essential security measure.
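To make the second factor concrete, the sketch below shows, in simplified form and for illustration only, how a time-based one-time password of the kind generated by an authenticator app is derived under the open TOTP standard (RFC 6238): the server and the app share a secret, and each independently derives the same short-lived code from that secret and the current time. The secret shown is a placeholder, not a real credential.

```python
# Minimal TOTP sketch (RFC 6238): an authenticator app and a server derive
# the same six-digit code from a shared secret and the current time.
# The secret below is a placeholder for illustration, not a real credential.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    """Derive the current time-based one-time password from a base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // step           # which 30-second window we are in
    msg = struct.pack(">Q", counter)             # counter as an 8-byte big-endian int
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                   # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # the server accepts a login only if the
                                 # submitted code matches its own derivation
```

Because the code expires every 30 seconds and is derived on a separate device, a stolen password alone no longer unlocks the account.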
Effective resilience planning is also imperative. Even if an AI-aided cyberattack succeeds, it cannot be allowed to stop voters from casting ballots or officials from accurately tallying votes. States have made remarkable strides in enhancing system resilience over the last decade, including by moving overwhelmingly to voter-marked paper ballots. Paper ballots are independent of vote-counting software and can be used as a check against malware or software glitches. The Brennan Center has estimated that in 2022, 93 percent of voters — including nearly all voters in the major battleground states — cast their votes on paper ballots.
But gaps in election system resilience remain. Paper is only a useful security measure when it is actually used as a check against election software. More states need to embrace postelection audits, which compare a subset of paper ballots with electronic tallies. As of 2022, eight states still do not require postelection audits of any kind. And even among the 42 that do, the audits' efficacy as a tool to detect cyberattacks and technical errors varies widely. Only 33 of these states require election workers to hand-check sample ballots during an audit rather than running them through a separate machine. Just five states require risk-limiting audits, which rely on statistical principles to determine the random sample of ballots that must be checked to confirm that overall election outcomes are accurate.
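To illustrate the statistical principle behind risk-limiting audits, the sketch below implements the core of one published method, the BRAVO ballot-polling test of Lindeman and Stark, in a simplified two-candidate form. The vote share, sample, and risk limit are hypothetical, chosen for demonstration only.

```python
# Simplified two-candidate version of the BRAVO ballot-polling test
# (Lindeman and Stark), the statistical core of one kind of risk-limiting
# audit. The vote share, sample, and risk limit below are hypothetical.
import random

def bravo_confirms(reported_winner_share: float, sampled_ballots: list,
                   risk_limit: float = 0.05) -> bool:
    """True if the sample confirms the reported outcome at the risk limit;
    False means the audit escalates toward a full hand count."""
    assert reported_winner_share > 0.5
    t = 1.0  # likelihood ratio: reported result vs. a tied race
    for ballot in sampled_ballots:
        if ballot == "winner":
            t *= 2 * reported_winner_share        # evidence for the outcome
        elif ballot == "loser":
            t *= 2 * (1 - reported_winner_share)  # evidence against it
        if t >= 1 / risk_limit:
            return True
    return False

random.seed(2024)  # real audits draw the seed in a public ceremony
sample = random.choices(["winner", "loser"], weights=[60, 40], k=400)
print(bravo_confirms(0.60, sample))  # True confirms; False means keep counting
```

The closer the reported race, the more ballots the test demands before confirming the outcome; if the evidence never accumulates, the audit escalates toward a full hand count.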
Election system vendors are targets too. The Brennan Center has previously detailed ways in which the federal government could mandate election security best practices — including stronger security and resilience planning — for vendors in the elections space as it does for vendors in other federal government sectors designated critical infrastructure. These standards should include guidelines for the use and disclosure of AI in election vendors’ work to identify potential security risks.
Provide Local Election Offices with More Technical Support to Protect Election Infrastructure
In the decentralized U.S. election system, target-rich, resource-poor local jurisdictions with limited capacity to address cybersecurity issues present one of the most concerning vulnerabilities. These election offices have little or no dedicated cybersecurity expertise and often depend on other offices in their counties or municipalities for IT support. In fact, nearly half of all election offices operate with at most one full-time employee, and nearly a third operate with no full-time staff at all. Yet the officials who run these offices shoulder the same monumental task as their better-resourced counterparts: serving as frontline national security figures.
This challenge will only grow as the barriers to launching sophisticated cyberattacks diminish. The federal and state governments must do more to protect local jurisdictions from these attacks.
1. Develop State Cyber Navigator Programs
Several states — including Florida, Massachusetts, Michigan, Minnesota, and Ohio — are working to tackle cyber vulnerabilities at the local level by creating cyber navigator programs. These programs employ cybersecurity and election administration professionals who work closely with election officials to assess system security, identify potential vulnerabilities, and devise tailored strategies to mitigate risks. Other states should follow suit.
2. Offer Targeted Assistance from CISA
At the federal level, the most critical agency in the fight against cybersecurity threats is CISA, which provides state and local election officials with risk assessments, information sharing, and security guidance. The agency recently announced its intention to hire 10 regional election security advisers to help communicate various election security best practices, including AI-supported cyberattack countermeasures, which the Brennan Center and other election security experts have urged. These hires are a crucial first step in reaching the jurisdictions that need the most help.
The Brennan Center has also called for more cybersecurity advisers — trained cybersecurity experts who can assist state and local officials — to prioritize outreach to under-resourced local election offices. An April 2023 Brennan Center survey of election officials demonstrated the need for this increased outreach: only 29 percent of local election officials said they were aware of CISA’s cybersecurity vulnerability scans, which are a vital tool for identifying election system weaknesses that generative AI can help attackers pinpoint and exploit.
3. Focus Resources on Defending AI Already Used in Elections
While this article mainly examines how hackers can use AI to hone and intensify their attacks against elections, we have also noted that it can bolster election defenses. Another article in this series will explore how AI is already being used to strengthen cybersecurity and for basic election administration tasks like signature matching and voter registration list maintenance. Undoubtedly, both election offices (especially those short on resources) and vendors will look to AI to improve their services in the coming years.
As election officials expand their use of AI, they must also consider that AI-supported systems will be targets for attack given the widespread damage that hackers can accomplish by corrupting such systems. It isn’t difficult to imagine, for instance, how attackers could manipulate AI to discriminate in its signature matching or list maintenance functions, or how they could corrupt AI functions that filter phishing attacks and other spam to breach the systems that those functions are meant to protect.
Election offices using AI need to implement additional security protocols to protect these systems. Another article in this series will offer more extensive recommendations for how to do so; for now, suffice it to say that CISA and state agencies responsible for protecting election infrastructure must identify jurisdictions that have incorporated AI into their security and administrative systems and proactively help them minimize their risks. One place to start would be for CISA to create a document that lists security and other matters for election officials to consider when incorporating AI into their operations. This guidance could borrow from the National Institute of Standards and Technology’s Plan for Federal Engagement in Developing Technical Standards and Related Tools and the Department of Homeland Security’s Artificial Intelligence Strategy.
4. Invest in AI to Protect Election Infrastructure
Cybersecurity is a race without a finish line. In the coming years, AI is likely to offer attackers more powerful tools for hacking into our election infrastructure. It is just as likely to offer powerful tools to defend it, but only if the government works to ensure that election officials have access to such tools and understand how to use them.
In August 2023, the Office of Management and Budget (OMB) and the Office of Science and Technology Policy (OSTP) published a memorandum outlining multiagency research and development priorities for FY 2025. Among other recommendations, the memo urged federal agencies to prioritize funding initiatives that would support and fulfill multiple critical purposes, including “build[ing] tools . . . for mitigating AI threats to truth, trust, and democracy.”
Election officials urgently need AI-supported tools to address ever more sophisticated and frequent cyberattacks. The federal government should prioritize developing such tools and work to guarantee the infrastructure needed to implement them in election offices around the country.
5. Seek Tech Company Investment in Free and Low-Cost Tools That Increase Election Security, Confidence, and Transparency
Companies developing AI should not wait for government mandates or for AI’s most dangerous threats to democracy to be realized — they should strive not to accelerate these threats even in the absence of regulation. At the same time, they can actively help election officials navigate the risks we’ve outlined above. When foreign antagonists have targeted democracies in the past, companies like Microsoft, Google, and Cloudflare provided election security tools both in the United States and abroad at no cost to help improve resilience against attacks. Microsoft held virtual education and training sessions for U.S. election officials in 2020; Google sister company Jigsaw issued a suite of free security tools for campaigns and voter information websites in 2017; and Cloudflare offered free services, also in 2017, to help keep official election websites functioning in the event of cyberattacks or technical difficulties.
These and other vendors — especially those that stand to profit from AI development — should consider how their work can influence the cybersecurity landscape, including election security, and expand their offerings and proactive contributions to foster a healthy election information environment. Among other things, they can invest resources in nonprofits and programs such as the Election Technology Initiative. The initiative assists election administrators by providing and maintaining technology that builds public confidence in elections.
Technology companies behind AI tools (such as large language model chatbots) that the public may use in 2024 to get election information have a particular responsibility to ensure that these chatbots provide accurate information. AI is not equipped to reliably answer questions about how and where to vote or about whether the latest election conspiracy theory is true. Chatbots should be trained to make their limitations clear and to redirect users to official, authoritative sources of election information, such as election office websites.
Authenticate Election Office Communications and Help the Public Spot Impersonations
Election offices and officials were a critical bulwark against the stolen election lie in 2020. Those seeking to undermine confidence in American democracy thus continue to view them as valuable targets, attacking their credibility with falsehoods, harassing them, and threatening them with criminal prosecution and physical harm.
AI-generated content offers a new way to attack the credibility of these essential information sources with a “firehose of falsehood” that purports to derive from those very sources, making the truth even more difficult for the average citizen to ascertain. Countering impersonation through AI-generated content requires multifaceted solutions. Election officials must act to secure their communication channels to make them harder to spoof, and all levels of government must reinforce these steps. Entities that seek to provide public information about elections must also verify content through official sources. The following urgent measures would help stem the flow of false information.
1. Move All Election Websites to .gov Domains
American election websites are far too vulnerable. In the lead-up to the 2020 election, the FBI identified dozens of websites mimicking federal and state election sources using easily accessible .com or .org domains. To guard against spoofing and interference, federal and state governments should work together to ensure that election offices adopt .gov domains — which only verified U.S.-based government entities can use — for their websites. When a website's domain ends in .gov, users can be sure that the site is a genuine government source rather than a fake election website. Only one in four election office websites currently uses a .gov domain.
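One caveat worth teaching voters and webmasters alike: the .gov guarantee attaches to the domain itself, not to the string ".gov" appearing anywhere in an address. The sketch below shows why a checker must parse the hostname rather than search the raw URL; all of the URLs are invented for illustration.

```python
# Why ".gov somewhere in the URL" is not enough: the guarantee attaches to
# the domain itself, so a checker must parse the hostname rather than
# search the raw string. All URLs below are invented for illustration.
from urllib.parse import urlparse

def is_dotgov(url: str) -> bool:
    host = urlparse(url).hostname or ""
    return host.endswith(".gov")

print(is_dotgov("https://vote.ohio.gov/registration"))     # True
print(is_dotgov("https://ohio.gov.election-update.com"))   # False: lookalike domain
print(is_dotgov("https://example.com/ohio.gov"))           # False: .gov only in path
```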
Leading up to the 2024 elections, CISA should double down on communicating the national security importance of .gov domains, in part through the Elections Infrastructure Information Sharing and Analysis Center (EI-ISAC). States should mandate the use of .gov domains for local election offices, as Ohio's secretary of state did in 2019. Such mandates would facilitate the transition for election officials who do not control their own websites and depend on their counties or municipalities for IT support. Registration for .gov domains is now free for election offices verified by CISA, and states and localities can use federal funds from DHS's newly launched State and Local Cybersecurity Grant Program for other costs associated with transitioning to new domains.
2. Verify Accounts and Amplify Truthful Information
No .gov equivalent currently exists for social media, and generative AI makes it alarmingly easy to create and populate fake social media accounts that, among other problems, provide voters with inaccurate information about how and where to vote. Leading social media companies can intervene by identifying, verifying, and amplifying authentic election official content, as the Brennan Center and the Bipartisan Policy Center have previously recommended. So far, companies like X (formerly Twitter) have at best taken a more passive approach, allowing election officials to apply for verification (albeit with lags between applying for and receiving it).
One solution is for the Election Assistance Commission (EAC) or CISA to create a dedicated server in the so-called Fediverse (a network of independently operated, interoperable social media servers) to act as a platform-neutral federal clearinghouse for verified official accounts and a central distribution and syndication hub for social media communiqués. Posts from verified .gov handles could then be batched and syndicated, precluding the need for local election officials to publish content simultaneously across multiple platforms. Users, too, could then be sure of the information's veracity.
In 2022, the German government's Federal Commissioner for Data Protection and Freedom of Information (BfDI) took a similar approach through Mastodon, currently the best-known platform in the Fediverse. More than 100 official accounts use the BfDI server, including the German weather service, the foreign office, and the federal court, allowing users to see posts from all these organizations in a single verified news feed. As one commentator put it, “When you read posts made by one of the https://social.bund.de accounts you inherently know from the domain name (web identity) that you’re not following an impostor.” A similar EAC/CISA feed would go a long way toward dispelling election misinformation that ends up on social media.
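Because Fediverse servers like Mastodon expose a documented public API, syndicating posts from such a hub is straightforward. The sketch below reads a server's local public timeline using Mastodon's standard endpoint; whether a particular server (social.bund.de is used here only because it appears above) permits this unauthenticated read is an assumption, and a future EAC/CISA hub could be polled the same way.

```python
# Reading a Mastodon server's local public timeline through the platform's
# documented API. Whether a given server allows this unauthenticated read
# is an assumption; an EAC/CISA hub could be polled the same way.
import json
import urllib.parse
import urllib.request

def local_timeline(server: str, limit: int = 5) -> list:
    params = urllib.parse.urlencode({"local": "true", "limit": limit})
    url = f"https://{server}/api/v1/timelines/public?{params}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

for post in local_timeline("social.bund.de"):
    # every account in this feed inherits the server's verified identity
    print(post["account"]["acct"], "-", post["url"])
```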
3. Implement Methods to Ensure That Open Records Requests Are Authentic
As discussed above, AI makes it easier to bury election offices under mountains of deceptive open records requests that seem to come from different constituents. This challenge has no easy solution, but one important step would be for election offices to require that every open records request be filed by an actual person and to use a version of CAPTCHA (which stands for “completely automated public Turing test to tell computers and humans apart”) to prevent bots from overwhelming an office with AI-generated requests, as sketched below. Although generative AI itself is increasingly able to defeat CAPTCHA tests, anything that might curb a potential flood of faux requests would be a step in the right direction. In instituting any such policy, offices should ensure that accommodations are available for people with disabilities and those without access to high-speed internet or other technology.
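As a sketch of how that check works on the server side, the snippet below validates a CAPTCHA token before accepting a request, using Google reCAPTCHA's documented "siteverify" endpoint as one example; other providers follow a similar token-verification flow. The secret key and the form handler are hypothetical.

```python
# Server-side CAPTCHA verification before an open records request is
# accepted, using Google reCAPTCHA's documented "siteverify" endpoint as
# one example; other providers follow a similar token-check flow. The
# secret key and the form handler are hypothetical.
import json
import urllib.parse
import urllib.request

VERIFY_URL = "https://www.google.com/recaptcha/api/siteverify"
SECRET_KEY = "placeholder-secret-issued-by-the-provider"

def captcha_passed(token: str) -> bool:
    """True only if the CAPTCHA provider vouches for this token."""
    body = urllib.parse.urlencode({"secret": SECRET_KEY,
                                   "response": token}).encode()
    with urllib.request.urlopen(VERIFY_URL, body, timeout=10) as resp:
        return json.load(resp).get("success", False)

def handle_records_request(form: dict) -> str:
    if not captcha_passed(form.get("captcha_token", "")):
        return "CAPTCHA failed: request not accepted"
    return "Request queued for review"  # a human filed it; process normally
```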
Forthcoming articles in this series will discuss this threat of deceptive, AI-generated open records requests, as well as inauthentic public comments to election and other government officials more generally, in greater detail.
4. Explore How to Authenticate Sensitive Election Materials
To counter the risk that videos or documents supplied in response to FOIA requests will be manipulated to promote disinformation, election officials should consider posting on their websites the unaltered versions of every document produced for such requests. Unfortunately, this mitigation strategy would require resources that some already overburdened offices lack. A related option is for state election offices to create central repositories for all local FOIA responses, allowing election officials to point to original, unaltered documents if hackers try to pass off distorted documents as true copies. Hawaii already does so.
Digital signatures and cryptographic hash functions also hold promise for authenticating digital records produced by election offices, but they may be less easily understood by the public — or even by the election officials who need to implement them. CISA could help here by educating election offices and the public about these tools. Cybersecurity advisers could also teach election officials how these tools can prove whether materials posted online (like cast vote records or ballot images) are real or altered.
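The underlying idea is simple enough to demonstrate: the office publishes a cryptographic fingerprint (hash) of each record, and anyone holding a copy can recompute the fingerprint and compare. The sketch below uses SHA-256 from Python's standard library on stand-in data; real deployments would hash the released files themselves and could layer digital signatures on top.

```python
# The core of hash-based authentication: the office publishes a SHA-256
# fingerprint of each record, and anyone can recompute and compare. The
# record contents here are stand-ins for real released files.
import hashlib

def fingerprint(data: bytes) -> str:
    """Hex digest that changes if even one byte of the record changes."""
    return hashlib.sha256(data).hexdigest()

original = b"Precinct 12 cast vote record ..."  # stand-in for a real file
published = fingerprint(original)               # posted on the .gov site

copy = b"Precinct 12 cast vote record ..."
tampered = b"Precinct 12 cast vote record [altered]"
print(fingerprint(copy) == published)      # True: faithful copy
print(fingerprint(tampered) == published)  # False: altered document
```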
5. Take Extra Steps to Verify Election-Related Content
Journalists should cultivate relationships with election officials and other authoritative sources on elections processes. Content that purports to be from these sources should be verified against content that can be authenticated — for example, videos and other information provided on a secure website with a current security certificate or, ideally, on a .gov site. In the case of breaking news, media sources should include nonpartisan experts who can assess the plausibility of content that may have been manipulated. Sources should also represent non-English-speaking and historically marginalized communities so that journalists can address the specific points of confusion or misinformation circulating there.
Give Election Workers AI-Specific Training and Resources
Election office employees and election system vendors are appealing targets for anyone seeking to damage faith in American elections. This threat could come from foreign or domestic antagonists seeking to gain access to election infrastructure through phishing emails or other methods. CISA and other experts and organizations should do everything possible to help these offices and their staffs remove from public websites information that hackers might use to personalize AI-generated messages, and to help election workers identify such messages.
1. Remove Data That Could Be Used to Personalize AI-Generated Communications
Criminals already use data found on the web to dupe unsuspecting Americans out of money and to glean information they should not have. Election offices and vendors should review their websites for personal and organizational information (e.g., chain of command, employee email addresses, names of employees’ relatives) that hackers seeking to obtain sensitive or confidential information could use to create personalized phishing emails or cloned voice messages.
CISA already makes recommendations for controlling personal information shared online to reduce the likelihood of doxing. The agency is well-positioned to create similar resources and trainings to help election offices and their workers analyze their websites to minimize the risk of personal information being used to facilitate attacks.
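A first pass at the kind of website review described above can even be automated. The sketch below fetches a page and flags exposed email addresses of the sort phishers use to personalize attacks; the URL is hypothetical, and a real review would also cover staff names, org charts, and direct phone lines.

```python
# First-pass scan of an office's own webpage for exposed email addresses
# of the kind phishers use to personalize attacks. The URL is hypothetical;
# a real review would also cover staff names, org charts, and phone lines.
import re
import urllib.request

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def exposed_emails(url: str) -> set:
    with urllib.request.urlopen(url, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    return set(EMAIL_RE.findall(html))

for address in sorted(exposed_emails("https://elections.example.gov")):
    print(address)  # each hit is a candidate for removal or aliasing
```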
2. Help Election Workers Identify AI-Generated Content
For the moment, even as AI allows for more sophisticated phishing attacks and impersonations, there are often ways to spot such content. Among other clues, AI-generated text often includes very short sentences and repeated words and phrases; voices and images in AI-generated videos may not fully align.
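As a crude illustration of the textual clues just mentioned, the sketch below computes average sentence length and counts repeated three-word phrases. These heuristics are unreliable on their own and will age quickly as the underlying models improve; they illustrate the kind of signal a screening aid might surface, not a dependable detector.

```python
# Crude illustration of the textual clues named above: average sentence
# length and repeated short phrases. Unreliable on their own, and likely
# to age quickly as the underlying models improve.
import re
from collections import Counter

def text_clues(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s]
    words = re.findall(r"[a-z']+", text.lower())
    trigrams = Counter(zip(words, words[1:], words[2:]))
    return {
        "avg_words_per_sentence":
            sum(len(s.split()) for s in sentences) / max(len(sentences), 1),
        "repeated_trigrams": sum(1 for c in trigrams.values() if c > 1),
    }

print(text_clues("Verify your login now. Verify your login today. Act now."))
```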
The challenge in devising guidelines that election workers can use to spot AI-generated content is that generative AI's capabilities advance so quickly. Here again, CISA has the expertise to help. By keeping up with how AI is evolving and regularly communicating how to identify AI-generated content, the agency can help blunt attacks that use such content to deceive election offices and vendors.
Push Back on False Narratives
As with other election security challenges, the problem of false narratives existed before the recent advances in generative AI’s abilities. Yet rapidly developing AI technology threatens to exacerbate the problem, not only because it offers more tools to undermine confidence and spread lies, but because the use of those tools for crimes — and the inevitable publicity around those incidents — further undermines public confidence in the security of critical systems like those that support our elections.
All of us must push back on false narratives around election security. We must also recognize that AI’s power to disrupt security will make this work even more challenging. Other articles in this series will discuss in more detail how to build resilience to election disinformation fueled by developments in AI, but here we note a few ways that government, the mainstream media, and social media platforms can help as we head into the 2024 election season.
Disinformation relies on core falsehoods that evolve and recirculate, and generative AI greatly increases the opportunities for these myths to persist and spread. Federal agencies like CISA and the EAC should collaborate with election officials to raise public awareness of and confidence in election system security. Preemptively debunking the sticky false narratives that we already know will underpin election disinformation in the weeks before and after Election Day is essential.
CISA and the EAC must vigorously promote the dissemination of accurate information from election officials and share best practices for strengthening societal resilience to the spread of false information, including falsehoods generated and enhanced by AI. One important step would be to follow the recommendations of CISA’s Cybersecurity Advisory Committee (CSAC), which in a 2022 report encouraged the agency to strengthen and expand its information-sharing networks and to use those networks to create public awareness campaigns focused on digital literacy, civics education, and bolstering information from authoritative sources. These networks could also be used to educate the public to spot AI-generated content.
Traditional and social media platforms alike must work to refute core misinformation themes and amplify accurate and authenticated election information, especially from election officials. Social media platforms should invest in detecting and removing coordinated bots to help prevent false information from influencing elections. And they should collaborate with AI developers to continually improve methods to detect AI-generated content.