Amend deceptive practices laws.
The Deceptive Practices and Voter Intimidation Prevention Act is a well-tailored federal bill designed to curb vote-suppression efforts that involve false claims about when, how, and where to vote, knowingly spread with the intent to prevent or deter voting within 60 days before a federal election. Similar state-level legislation addressing false claims about voter registration information and the time, place, and manner of elections exists in Kansas, Minnesota, and Virginia, for example, and other states are considering moving ahead with such efforts.
While such deceptive practices bills and laws could be read to cover some vote-suppression activity propelled by generative AI, a modest addition to the bills and legislative text could help ensure accountability for those who purposely develop AI systems designed to deceive voters about the voting process — and whose AI tools subsequently communicate false information about when, how, and where to vote.
As a preliminary matter, legislation should expressly cover the development and intentional dissemination of AI tools. Additionally, under several existing laws and bills that address deceptive election practices, liability for deceiving voters about the voting process requires that a bad actor know that the claim they communicated was false. But because ill-intentioned AI creators may not have contemporaneous knowledge of the falsity of each piece of content their algorithms produce, legislators should amend deceptive practices laws and bills so that generative AI developers need not know the precise false claims their tools render in order to be held liable; knowledge that a tool is designed to produce false claims should be the minimum legal standard.
Limit the spread of additional risky AI-generated content that endangers voting rights.
Federal and state laws should also bar or limit the dissemination of synthetic visual and audio content — including AI-generated content that falsely depicts damage to or impediments to the use of voting machines, voting equipment, or ballot drop boxes; manufactures disasters or emergencies at polling places; or falsely portrays election workers preventing or hindering voting — where such content is created or spread with the purpose of deterring voters or preventing votes within 60 days before Election Day.
While many deepfake bills and laws have zeroed in on AI deepfakes that damage candidates’ reputations and electoral prospects, deepfakes that threaten the right to vote merit similar attention from lawmakers and would likely enjoy a lower level of constitutional free speech protection. Similar to the Deceptive Practices and Voter Intimidation Prevention Act, such laws should compel state attorneys general, upon receipt of a credible report that vote-suppressing content is being disseminated, to share accurate corrective information if election officials fail to act adequately to educate voters.
In addition, such laws should not be limited to AI-generated content but should extend to all visual and audio content created with the substantial assistance of technical means, including Photoshop, computer-generated imagery (CGI), and other computational tools.
Require labeling and other regulation for some content generated by chatbot technology or distributed by bots.
Much attention in Congress and the states has focused on labeling or otherwise regulating visual and audio AI-generated content (including deepfakes) in the election context. But the technology behind generative AI chatbots poses different dangers, particularly when it comes to vote suppression through exploitative microtargeting and manipulative interactive AI conversations. Campaigns’ and political committees’ use of LLM technology, too, raises potential democratic concerns — for instance, a generative AI chatbot making promises to voters that are untethered to the campaign’s actual platform.
Federal and state laws should compel campaigns, political committees, and paid individuals to label a subset of LLM-created content. Such labeling requirements and other regulatory efforts should focus on AI-generated content distributed by bots that deceptively impersonate or masquerade as humans; interactive LLM-driven conversations between campaigns or PACs and voters, online and through robocalls; campaign and PAC use of LLMs to communicate with voters with minimal human supervision and oversight; and AI-generated communications from campaigns and PACs that microtarget voters based on certain demographic characteristics or behavioral data.
Regulate the use of AI to challenge voters and purge voter rolls.
The National Voter Registration Act (NVRA) sets important limits on voter purges, but more guardrails are needed to protect voters from frivolous voter challenges and from the misuse of AI in voter challenges and removals. Congress and state legislatures should set baseline requirements governing the official use of AI systems to remove voters from rolls. Lawmakers should direct agencies to flesh out these requirements through regulation and to update rules as AI technologies evolve. To guard against improper voter disenfranchisement, legislatures and agencies should set thresholds for the accuracy, reliability, and quality of training data for AI systems used to assist officials in conducting voter purges. And they should require human staff to review all AI-assisted decisions to remove a voter from the rolls.
State legislatures and officials should also reform voter challenge procedures and requirements. In states that allow private citizens to file eligibility challenges, policymakers should shield voters from frivolous challenges, set requirements for the documentation and evidence needed to substantiate a challenge, and limit the kinds of evidence that may be accepted. As a preliminary matter, federal and state policymakers should bar the use of bots to transmit automated challenges to voters’ registrations to election offices. States should also require that private challenges be based on firsthand knowledge of a voter’s potential ineligibility, a standard that excludes AI-assisted and other forms of automated database matching.
Limit certain kinds of AI systems that infringe on autonomy and privacy and permit sophisticated forms of voter manipulation.
Congress and state lawmakers could regulate the creation and deployment of certain high-risk AI systems where such systems are used to influence elections and votes, including those designed to employ subliminal techniques, recognize emotions, conduct biometric monitoring, and use biometrics to assign people to categories such as racial groups. The European Union is seeking to impose a ban on real-time biometric surveillance and emotion recognition AI in certain contexts (including employment), as well as to prohibit AI systems that utilize subliminal techniques to harm people or distort their behavior. American lawmakers should limit similar AI tools used to manipulate voters, distort their behavior, or infringe on their personal autonomy or privacy interests. One approach would be to create a certification regime for the use of AI tools with these manipulation capabilities in sensitive contexts.
Strengthen regulation of political robocalls.
Policymakers should strengthen regulation of political robocalls to better protect voters from AI-boosted deception efforts. As mentioned above, the FCC recently confirmed that existing law reaches robocalls containing voice-generation AI, but substantial loopholes remain. Federal and state lawmakers should close the loophole that allows political robocalls — including those made through automated dialer systems and those that use AI-generated voices — to be made to landlines without prior consent. Policymakers should also clarify that consent to receive a political robocall containing voice-generation AI must be preceded by clear notice of generative AI use and must reflect an understanding that generative AI will be deployed (rather than more general consent to receive political outreach from an organization). This requirement would give voters greater protection from AI-generated robocalls from political campaigns and PACs with which they have interacted in the past.
Additionally, lawmakers should compel phone carriers and mobile device manufacturers to integrate high-quality tools that can screen calls for voice-generation AI and alert customers to a high likelihood of an AI-generated robocall.
Support election offices’ efforts to educate voters and defend against spoofing and hacking.
Election offices can take several steps to reduce the risks of AI-enhanced attempts to deceive people about the voting process. Most election office websites do not operate on a .gov domain — a web domain only for use by verified U.S. government entities — even though fraudsters sometimes spoof election websites to trick voters, a strategy that could become more common with the proliferation of generative AI tools. Migrating to .gov domains and educating constituents about the domain’s significance are straightforward ways to bolster the credibility of official election websites, and federal funds and support can facilitate this important step. Election officials can also preempt and “prebunk” recurring false narratives about the election process through accessible materials and resources.
Election offices should promote messages about the robustness of existing election security safeguards via their websites — a method that evidence suggests increases confidence in the election process across the political spectrum. They should also maintain up-to-date rumor control pages on their websites, develop crisis communications plans to rebut viral rumors that threaten to deter voters on or ahead of Election Day, and establish networks among hard-to-reach communities to share accurate and timely information about how to vote.
Pass the Freedom to Vote Act.
AI amplifies myriad long-standing election concerns, and vote suppression is no exception. Although it does not directly address AI, the proposed Freedom to Vote Act would afford voters a wide range of critical protections against vote suppression, including protections against improper voter purges. By incorporating the Deceptive Practices and Voter Intimidation Prevention Act, the proposed law would also guard against certain deceptive practices that risk disenfranchising voters.