Recommendation for Election Offices: AI CPR (Choose, Plan, and Review)
In deciding whether to employ AI, election officials should implement and follow a transparent selection process to choose the specific AI tool for any given election administration task. If and when they do choose a particular AI system, officials need to carefully plan that system’s integration into their workflows and processes. Part of that planning must include identifying and preparing for problems that may surface as the system is incorporated. They must also be able to shift resources as needed. Finally, they must establish thorough review procedures to ensure that the output of any AI tool deployed in an election office is assessed by staff for accuracy, quality, transparency, and consistency. Below, we describe important considerations at each of these three stages.
Choose AI Systems with Caution
Opt for the Simplest Choice
In choosing any system (AI-based or not) for use in election administration, all else being equal, we recommend that election officials choose the simplest tool possible. When it comes to AI, though simpler AI algorithms may be less refined than more complex ones, they are also easier to understand and explain, and they allow for greater transparency. Should questions or anomalies arise, determining answers and solutions will be easier with a simple AI model than an elaborate one. The most complicated AI systems currently available belong to the latest class of generative AI, followed by non-generative neural networks and other deep learning algorithms. Basic machine learning models like clustering algorithms or decision trees are among the simplest AI tools available today.
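To illustrate the difference, the hypothetical sketch below shows how a shallow decision tree's logic can be printed as plain if/then rules that election staff can read and audit, a kind of transparency that deep learning and generative models cannot offer. The feature names imagine a duplicate-registration screening task, and the data is synthetic.

```python
# A minimal, hypothetical sketch: a shallow decision tree trained on
# synthetic data whose logic can be printed as plain if/then rules.
# The feature names imagine a duplicate-registration screening task.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=200, n_features=4, random_state=0)
feature_names = ["name_similarity", "dob_match", "address_match", "id_digits_match"]

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Every prediction can be traced to explicit, human-readable rules,
# an audit trail that neural networks and generative models cannot provide.
print(export_text(tree, feature_names=feature_names))
```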
A useful practice to facilitate choosing the simplest possible system is for election officials to narrowly define the tasks that the AI will perform and identify a list of requirements. Requirements can range from price considerations or necessary IT and data infrastructure to the need for additional functionalities or minimum performance levels reflecting the risk level that election officials are willing to accept for a given task. Establishing these parameters ahead of the selection process will help both to ensure transparency around the criteria used for assessing proposals and to prevent “scope creep” when vendors demonstrate capabilities of more advanced systems.
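As a rough illustration only, requirements of this kind can be written down in a structured, checkable form before vendors are invited to demonstrate their products. In the sketch below, every field name and threshold is a hypothetical placeholder, not a recommendation.

```python
# Hypothetical illustration: requirements captured as a structured record
# that proposals are scored against. Field names and values are placeholders.
from dataclasses import dataclass

@dataclass
class ProcurementRequirements:
    task: str                           # the narrowly defined task the AI will perform
    max_annual_cost: float              # budget ceiling
    required_infrastructure: list[str]  # IT and data infrastructure constraints
    min_accuracy: float                 # minimum performance for the accepted risk level
    must_allow_human_review: bool       # human sign-off on consequential decisions

reqs = ProcurementRequirements(
    task="flag likely duplicate voter registration records for staff review",
    max_annual_cost=25_000.0,
    required_infrastructure=["on-premises deployment", "state voter file export"],
    min_accuracy=0.98,
    must_allow_human_review=True,
)

def proposal_meets_requirements(claimed_accuracy: float, supports_review: bool) -> bool:
    """Score one vendor claim against the fixed criteria to resist scope creep."""
    if reqs.must_allow_human_review and not supports_review:
        return False
    return claimed_accuracy >= reqs.min_accuracy
```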
Plan for Human Involvement
If an AI tool could result in someone being removed from the voter rolls, being denied the ability to cast a ballot, or not having their vote counted, then election officials should choose a system that requires human involvement in making final decisions. Human involvement helps to safeguard against AI performance irregularities and bias. Most jurisdictions have processes that require additional review before rejecting vote-by-mail or absentee ballots because of a signature mismatch. Generally, this review involves bipartisan teams that must reach a consensus before rejecting a ballot. Twenty-four states currently have processes in place that require election offices to notify voters should questions arise about their signature and to provide them the opportunity to respond and cure the issue. Such processes are vital to ensure that AI systems do not inadvertently prevent voters from having their votes counted. The planning and review stages outlined below will need to factor in this human involvement.
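One way to make this principle concrete is to design the system so that the AI can accept but never reject. The sketch below illustrates such a routing rule; the threshold and names are assumptions for illustration, not a prescribed implementation.

```python
# Hypothetical routing rule: the AI may accept a signature match outright,
# but it is never allowed to reject one. The threshold is a placeholder
# that officials, not the vendor, would set.
ACCEPT_THRESHOLD = 0.95

def route_ballot(similarity_score: float) -> str:
    if similarity_score >= ACCEPT_THRESHOLD:
        return "accept"
    # Below the threshold the system must not decide: a bipartisan team
    # reviews the signature, and the voter is notified and given the
    # opportunity to respond and cure the issue.
    return "refer_to_bipartisan_review_and_notify_voter"
```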
Anticipate Performance Disparities, Reliability Issues, and Output Variability
When selecting an AI tool, election officials should assume that the system will not perform as effectively as vendor metrics claim. In developing and training AI models, vendors inevitably use a training data set that differs from the data the AI is fed during actual use. This mismatch frequently leads to degraded performance in real-world applications. Additionally, because of idiosyncrasies in the data, differences in data collection processes, and differences between populations, the same AI tool’s performance can vary substantially between districts and between population groups within the same district. As a result, AI tools are likely to perform less effectively on actual constituents’ data than benchmarks or vendor-reported results suggest.
In particular, name-matching algorithm performance has been shown to vary across racial groups, with the lowest accuracy found among Asian names. A study of voter list maintenance errors in Wisconsin also revealed that members of minority groups, especially Hispanic and Black people, were more than twice as likely as white people to be inaccurately flagged as potentially ineligible to vote. Similarly, AI-powered signature matching achieves between 74 and 96 percent accuracy in controlled conditions, whereas in practice, ballots from young and first-time mail-in voters, elderly voters, voters with disabilities, and nonwhite voters are more likely to be rejected. Unrepresentative training data, coupled with low-quality reference signatures to match against (often captured on DMV signature pads), lowers the effectiveness of signature-matching software.
Implementing this technology for voter roll management thus raises major concerns. One mitigation strategy that election officials can utilize in choosing AI systems is to require vendors to use a data set provided by the election office for any demonstrations during the request for proposal and contracting process. This approach can provide further insight into system performance. Importantly, election officials should ensure that only publicly available data is used or that potential vendors are required to destroy the data after the selection process has concluded and not retain or share the data for other purposes.
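As a sketch of what such a demonstration-stage evaluation might look like, the code below computes accuracy separately for each demographic group in an office-supplied sample. The function `run_vendor_system` is a hypothetical stand-in for whatever demo interface a vendor provides.

```python
# Sketch of a demonstration-stage evaluation on office-supplied data.
# `run_vendor_system` is a hypothetical stand-in for the vendor's demo
# interface; records carry a demographic tag so disparities are visible.
from collections import defaultdict

def accuracy_by_group(records, run_vendor_system):
    """records: iterable of (input_data, true_label, group_tag) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for input_data, true_label, group in records:
        prediction = run_vendor_system(input_data)
        total[group] += 1
        correct[group] += int(prediction == true_label)
    # A large gap between groups, or between these numbers and the vendor's
    # advertised benchmark, is a warning sign before any contract is signed.
    return {group: correct[group] / total[group] for group in total}
```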
Although a general strength of generative AI is its ability to respond to unanticipated or unusual requests or questions, election officials must bear in mind that current generative AI tools often suffer from reliability issues. Generative AI chatbots may produce different responses to the same request, and they regularly produce incorrect or hallucinated replies. In addition, the underlying language models are frequently fine-tuned and updated, which in turn affects the behavior of systems built on them.
Finally, when deciding whether to use a generative tool, election officials must consider whether variations in content and quality are acceptable. For most election-related tasks, variability that could result in an office propagating misinformation is not a tolerable outcome. As such, election offices should not adopt generative AI systems for critical functions without national or state standards in place to guide appropriate uses and provide baseline assurances of system reliability and safety.
Plan for AI Use — and for Potential Problems
Election offices should devise both internally and externally focused implementation plans for any AI system they seek to incorporate. Internally, election officials should consider staffing and training needs, prepare process and workflow reorganizations, and assign oversight responsibilities. Externally, they should inform constituents about the AI’s purpose and functionality and connect with other offices employing the same tool. Most importantly, officials should develop contingency plans to handle potential failures in deployed systems.
Develop Staff Training
Before deploying an AI tool, election officials must consider the training needs of their staff. While the following list is not all-inclusive, training should impart a high-level grasp of the AI system. Staff must understand the exact tasks the AI performs, its step-by-step processes, the underlying data utilized, and its expected performance. For instance, rather than thinking of a signature verification system simplistically as a time-saving bot that can verify mail-in ballots, staff should see it as a software tool that uses a computer vision algorithm to attempt to match the signature on a ballot to an image on record, and that does so with an average accuracy rate of 85 percent.
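For training purposes, even a toy sketch of the underlying mechanics can help demystify the tool. The example below assumes a hypothetical `extract_features` function standing in for the vendor's computer vision model; real systems are far more elaborate, but the shape of the process is the same.

```python
# Toy sketch for staff training: signature verification reduces to turning
# two images into feature vectors and comparing them with a similarity
# score. `extract_features` is a hypothetical stand-in for the vendor's
# computer vision model; real systems are far more elaborate.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norms if norms else 0.0

def signatures_match(ballot_image, reference_image, extract_features, threshold=0.9):
    """Compare two signature images; the threshold here is illustrative only."""
    score = cosine_similarity(extract_features(ballot_image),
                              extract_features(reference_image))
    return score >= threshold  # a roughly 85-percent-accurate judgment, not a guarantee
```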
At a minimum, staff training should cover
- familiarization with the user interface;
- common risks and issues associated with data and AI (such as those described above), how they could occur in the context of the office’s constituency and its election administration work, the system’s limitations, and how to address problems;
- internal processes for flagging issues with the AI and accountability guidelines in case of failure or errors; and
- requirements for — and the importance of — human involvement in decisions that directly implicate voter rolls, vote casting, and vote counting, including techniques for mitigating bias.
Prioritize Transparency
Constituents have a right to know about AI systems involved in election administration. Election officials must be transparent about when, for what, and how AI tools will be used. Before deployment, election offices should work with the AI developers to prepare and publish documentation in nontechnical language. These documents should describe the system’s functionality and how it will be used, what is known about its performance, limitations, and issues, and any measures taken to mitigate risk for the particular election administration task for which it will be deployed. Constituents should have opportunities to discuss questions and concerns with officials to build trust in the technology and in election administrators’ oversight capabilities. The need for transparency and documentation should be outlined in the request for proposal process and included in vendor contracts so that relevant information cannot be hidden from public view under the guise of proprietary information.
Prepare Contingency Plans
Election officials must have contingency plans in place before incorporating AI technology. AI contingency plans must include appropriate preparations to manage any potential failures in a deployed AI system. First and foremost, election offices must be able to disable an AI tool without impairing any election process — a fundamental best practice for using AI in a safe and trustworthy manner. AI tools should not be integrated into election processes in a way that makes it impossible to remove them if necessary.
Contingency plans must identify the conditions under which an AI tool will be turned off along with which staff members are authorized to make such a determination. Election offices must ensure that staff are aware of these conditions and are trained to identify them and to report issues, flaws, and problems to the responsible officials. Offices must also have a strategy in place for how to proceed if the use of AI is halted. This strategy should include identifying additional personnel or other resources that can be redirected to carry out certain tasks to ensure their timely completion.
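The "removable AI" principle can be illustrated with a simple pattern: the tool sits behind a switch, and turning the switch off routes work to the documented manual process. All function names in the sketch below are placeholders.

```python
# Sketch of the "removable AI" principle: the tool sits behind a switch,
# and turning it off routes work to the documented manual process without
# touching anything else. All function names are placeholders.
AI_ENABLED = True  # flipped off by an authorized official per the contingency plan

def process_ballot_signature(ballot):
    if AI_ENABLED:
        return ai_prescreen(ballot)     # AI suggests accept or refer
    return manual_review_queue(ballot)  # staff handle the ballot directly

def ai_prescreen(ballot):
    ...  # hypothetical vendor integration

def manual_review_queue(ballot):
    ...  # the pre-existing, fully staffed manual workflow
```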
Seek Other Users’ Input
The experiences of other users can help inform election offices newly adopting AI tools. Election officials should ask potential vendors for lists of other offices currently using their systems during the request for proposal process and should reach out to those offices when evaluating bids. Many AI tools are relatively new, so users are often the ones who discover their strengths and weaknesses. Learning from other users’ experiences in the elections space will be valuable for shaping effective training and implementation and for identifying resource needs and contingencies.
Review AI Processes and Performance
System reviews are an essential best practice when using AI tools. The extent and frequency of reviews will vary depending on the gravity of the election administration task at hand and the risk associated with it. Low-risk or low-impact applications (for example, an AI system used to check whether ballots comply with best design practices) may only need a process for getting user or voter feedback and a periodic review of the AI’s performance. However, systems that help decide if someone gets to vote or if a vote is counted need more frequent and direct human oversight.
Institute Straightforward Review Processes
Election officials should establish clear processes for collecting, assessing, and resolving issues identified by both internal and external stakeholders and for reviewing AI system performance. These processes should include soliciting staff and constituent feedback, monitoring use and output logs, tracking issues, and surveying help desk tickets.
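Monitoring is far easier if every AI-assisted decision is logged with enough context to audit it later. The sketch below shows one possible log format; the field names are illustrative, not a standard.

```python
# Sketch of a use-and-output log: every AI-assisted decision is recorded
# with enough context to audit it later. Field names are illustrative.
import datetime
import json

def log_ai_decision(logfile, record_id, ai_output, confidence, human_action, group_tag):
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "record_id": record_id,        # which ballot or registration record
        "ai_output": ai_output,        # what the system recommended
        "confidence": confidence,      # the system's reported confidence
        "human_action": human_action,  # what staff ultimately decided
        "group_tag": group_tag,        # optional demographic tag for bias audits
    }
    logfile.write(json.dumps(entry) + "\n")
```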
Audits of issues and performance should occur before and after elections. Pre-election reviews are paramount to safeguard voting rights and to identify whether an AI’s contingency plan needs to be implemented. Postelection reviews will help improve future use and should assess all processes that AI touched, including evaluations of performance across demographic groups to reveal any potential biases. These reviews present an opportunity for election officials to work with federal partners on meaningful assessment tools for deployed AI systems, much as federal agency assessment tools already exist for reviewing polling place accessibility and election office cybersecurity.
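Building on the hypothetical log format sketched above, a postelection bias audit could be as simple as comparing, group by group, how often the AI referred ballots for rejection review. What counts as a worrying gap is a policy question and is deliberately not encoded here.

```python
# Sketch of a postelection bias audit over logs in the format sketched
# above: compare, group by group, how often the AI referred ballots for
# rejection review. What counts as a worrying gap is a policy question.
import json
from collections import defaultdict

def referral_rates_by_group(log_path):
    referred = defaultdict(int)
    total = defaultdict(int)
    with open(log_path) as f:
        for line in f:
            entry = json.loads(line)
            group = entry.get("group_tag") or "untagged"
            total[group] += 1
            if entry["ai_output"] != "accept":
                referred[group] += 1
    return {group: referred[group] / total[group] for group in total}
```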
Ensure Human Involvement in Final Decisions That Affect Voters
People are the most critical factor in the successful deployment of AI systems in election offices. Decisions that directly affect an individual’s right to vote and ability to cast a ballot cannot be left solely to AI: trained individuals must be involved in reviewing consequential decisions based on AI analysis and AI-produced information. In the case of AI-assisted translations of election materials, if staff are not fluent in all relevant languages, officials should consider partnering with trusted local community groups to verify translation accuracy. When incorporating AI technology into election administration processes, officials should also anticipate that these additional trainings and reviews may add costs or shift them to different points in the election calendar.
Establish Challenge and Redress Procedures
Election officials must provide a process for challenging and reviewing AI-assisted decisions. Voters harmed by decisions made based on AI should be able to appeal and request reviews of those decisions. How these processes should work will vary from jurisdiction to jurisdiction; existing state and local procedures for review and remedy should be assessed for appropriateness in light of AI-assisted decision-making and amended where necessary. For instance, what if a voter is directed to the wrong polling place by an agency chatbot and forced to cast a provisional ballot as a result? That voter needs a way to make sure that their ballot is counted nonetheless, especially because the action was prompted by inaccurate information provided by the election office. The same holds for ballots wrongly flagged by AI-based signature-matching software and for any number of other conceivable AI errors.
Enacting clear and accessible processes for constituents to challenge AI-driven decisions — processes that initiate a swift human review and an appropriate resolution — is imperative both to provide an added layer of protection to voting rights and to continually evaluate the performance of AI systems employed in election administration.