As artificial intelligence tools become cheaper and more widely available, government agencies and private companies are rapidly deploying them to perform basic functions and increase productivity. Indeed, by one estimate, global spending on artificial intelligence, including software, hardware, and services, will reach $154 billion this year and more than double that by 2026. Like their counterparts in other government and private-sector offices, election officials around the country already use AI to perform important but limited functions effectively. Most election offices, facing budget and staff constraints, will undoubtedly face substantial pressure to expand their use of AI to improve efficiency and service to voters, particularly as the rest of the world adopts this technology more widely.
In the course of writing this resource, we spoke with several election officials who are currently using or considering how to integrate AI into their work. While a number of election officials were excited about the ways in which new AI capabilities could improve the functioning of their offices, most expressed concern that they didn’t have the proper tools to determine whether and how to incorporate these new technologies safely. They have good reason to worry. Countless examples of faulty AI deployment in recent years illustrate how AI systems can exacerbate bias, “hallucinate” false information, and otherwise make mistakes that human supervisors fail to notice.
Any office that works with AI should ensure that it does so with appropriate attention to quality, transparency, and consistency. These standards are especially vital for election offices, where accuracy and public trust are essential to preserving the health of our democracy and protecting the right to vote. In this resource, we examine how AI is already being used in election offices and how that use could evolve as the technology advances and becomes more widely available. We also offer election officials a set of preliminary recommendations for implementing safeguards for any deployed or planned AI systems ahead of the 2024 vote. A checklist summarizing these recommendations appears at the end of this resource.
As AI adoption expands across the election administration space, federal and state governments must develop certification standards and monitoring regimes for its use both in election offices and by vendors. President Joe Biden’s October 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence marks a pivotal first step, as it requires federal regulators to develop guidelines for AI use by critical infrastructure owners and operators (a designation that has included owners of election infrastructure since 2017) by late spring 2024.
Under its recently announced artificial intelligence roadmap, the Cybersecurity and Infrastructure Security Agency (CISA) will provide guidance for secure and resilient AI development and deployment, alongside recommendations for mitigating AI-enabled threats to critical infrastructure. But this is only a start. It remains unclear how far the development of these guidelines will go and which election systems they will cover. The recommendations in this resource are meant to assist election officials as they determine whether and how to integrate and use AI in election administration, whether before or after new federal guidelines are published next year.