Expert Brief

Congress Must Keep Pace with AI

Lawmakers need to both take advantage of AI advances and create needed safeguards.

Published: February 8, 2024

A recent ad touted “a software platform that enables lobbyists to complete the most time consuming, tedious parts of their daily workflow in just a few clicks,” adding:

Imagine using an AI application to read the entire 2,500+ page National Defense Authorization Act, and extract all the components of the bill relevant to each of your clients in mere seconds, with just a few clicks. Or sending a dozen emails to congressional staffers, along with a one-sheeter on your client’s particular issue, without ever touching your keyboard.

Lawmakers and congressional staffers, meanwhile, contend with an ever-growing volume of correspondence, increasingly complex legislative issues, and limited access to the modern tools that lobbyists use. The routine task of parsing legislation like the National Defense Authorization Act (NDAA) is a gargantuan undertaking. The asymmetry of lobbyists using new generative AI tools to digest the NDAA’s substance with “just a few clicks” while legislators and staffers are still reading and CTRL+F-ing their way through is striking.

Congressional capacity has been an issue for years. AI could exacerbate the problem by empowering outside interests while Congress stagnates — or it could help level the playing field. Of course, any organization adopting AI into its operations must beware the documented risks associated with its use, including everything from commercial chatbots’ potential inaccuracies and “hallucinations” to security and confidentiality concerns to the perpetuation of biases based on data that software is trained on. Accordingly, legislatures looking to incorporate AI tools must consider not just how those tools could increase efficiencies but also what guardrails are needed to evaluate performance and mitigate risks.

This essay assesses recent steps by Congress to establish policies governing the use of generative AI and to encourage the legislative branch’s responsible experimentation with these new technologies. It emphasizes the importance of a proactive approach in the context of the “pacing problem” — a term coined by legal scholar Gary Marchant to describe the ever-expanding gap between technological advancement (which is often exponential) and the ability of governing institutions to keep up with these changes (at their default linear pace). It also explores the advantages of using AI in the legislative process, including its potential to strengthen institutional knowledge, policy research, oversight, and public engagement. It then reviews some of the known risks associated with recent innovations in AI technology and presents recommendations that address these risks while capitalizing on the benefits. These recommendations apply to Congress and to other legislative bodies seeking to develop their own AI strategies.

The Pacing Problem and Congress’s Proactive Steps to Fix It

The advent of AI-assisted lobbying was not unforeseen. Over the course of 2023, scholars such as John Nay, Nathan Sanders, and Bruce Schneier raised concerns that AI could influence public policy through sophisticated parsing of legislative language. And in April 2023, Brookings Institution scholars Bridget Dooling and Mark Febrizio predicted that a proliferation of comments to government agencies generated by large language models (LLMs) could lead to an “arms race” of “robotic rulemaking”:

Generative AI can be viewed as part of an ongoing tit-for-tat for public participation, with commenters deploying more sophisticated commenting methods and agencies attempting to respond with their own technology. Such an arms race is a waste of resources, though, if the end result is a large body of comments that neither represent the views of the general public nor offer novel and reliable information to the agency. More comments do not necessarily lead to better regulatory choices.

Less than a year later, many of these hypotheticals are becoming reality. The era of generative AI–powered lobbying and public engagement is here. 

Notwithstanding the asymmetry concern mentioned above, the rise of generative AI could also spur improvements to the legislative process. In February 2023, congressional innovation experts Zach Graves, Marci Harris (a coauthor of this essay), and Daniel Schuman observed that “massive increases in AI-fueled advocacy that are met with AI-assisted responses from congressional offices and agencies might ultimately prompt a healthy rethinking of the actual goals of these practices.” This healthy rethinking is essential to any congressional effort to address the pacing problem. 

In 2018, Adam Thierer, now a resident senior fellow at the R Street Institute, explained that three factors exacerbate the growing gap between technological innovation and government’s ability to keep up: the accelerating rate of technology’s evolution, the public’s increasing appetite for and adoption of new technologies, and political inertia — or what Brookings senior fellow Jonathan Rauch has called demosclerosis (“government’s progressive loss of the ability to adapt”). And in 2019, POPVOX Foundation cofounder Marci Harris explained that Congress actually faces the pacing problem on three different levels: external (its failure to “keep pace with emerging innovations that are changing industries and society”); interbranch (its compromised capacity to act coequally as it falls behind the executive branch); and internal (its delayed adoption of modern practices and technologies in its own operations).

A central element of the pacing problem concept is that it progressively worsens: as time passes, the gulf between technology-driven changes and the government’s lag grows wider. The only way to begin to close this gap is by taking proactive steps to increase government capacity. Acting early is imperative.

In early 2023, the House of Representatives began responding to advances in generative AI, demonstrating its willingness to rise to meet the internal pacing problem challenge. These steps can be viewed as part of a yearslong House modernization effort that began in 2019. (In January 2019, the House established the Select Committee on the Modernization of Congress to investigate and develop recommendations, focusing on improving congressional efficiency and effectiveness. This modernization work — which provides an ongoing forum for exploring new approaches and overseeing institutional efforts — was transferred to a permanent Modernization Subcommittee under the Committee on House Administration in January 2023.) As one AI expert remarked in a June 2023 meeting with congressional staff, when it comes to addressing LLM and generative AI use, “Congress is not late yet.”

On April 6, 2023, the House chief administrative officer (CAO) announced an AI working group, distributing 40 ChatGPT Plus licenses and establishing a forum for staff to anonymously share their experiences using the tool. On June 26, the CAO issued guidance for staff use of generative AI tools and announced the provisional authorization of OpenAI’s ChatGPT Plus by the Committee on House Administration (CHA), with stipulations that the tool be used for research and evaluation only, that it be used only with nonsensitive data, and that it be used only with privacy settings enabled. Notably, the privacy settings requirement (which was still in place as of this writing) means that some features are not available to congressional staff, including plug-ins that can parse PDFs; language processing tools that help with translation, text generation, modeling, and a wide range of other tasks; and the recently announced ability for users to build no-code GPTs (essentially, custom versions of ChatGPT created for a specific purpose).

Also in June, CHA brought on a nonpartisan detailee from the Government Accountability Office (GAO) to advise committee members and lead the effort to draft policies for generative AI use by the legislative branch. In the months since, the committee has held multiple listening sessions with internal and external stakeholders and developed a road map for a phased rollout of new tools that prioritizes information-sharing and transparency.

On September 14, 2023, CHA and its bipartisan Subcommittee on Modernization issued their first “flash report” on AI strategy and implementation in the House. The report instructed congressional support agencies (like the Library of Congress, the Office of the Clerk, and the Government Publishing Office) to outline efforts toward governance frameworks for AI use, pilot programs, upskilling, or advisory board development. It also requested details on measures “to establish or maintain AI use case inventories” and on any such information shared on public websites. Subsequent reports released on October 20, November 14, and December 18 provided updates on those requests and refined guidance to support agencies. The committee plans to focus heavily on AI throughout early 2024, and it held a full committee hearing titled Artificial Intelligence (AI): Innovations Within the Legislative Branch on January 30, 2024.

These forward-looking steps around generative AI, building on the foundation of modernization efforts over the past five years, are cause for optimism that the House can rise to meet the moment. Another notable example is the Comparative Print Suite (CPS), which the House Office of the Clerk deployed in 2022. CPS employs natural language processing (NLP) for bill comparison; it can visually depict changes made through line edits or amendments and can even show how a bill would modify current law. CPS’s release shows that Congress can move beyond cloud services and off-the-shelf technology to develop bespoke solutions tailored to the legislative workflow. It demonstrates that, even with generative AI still in its early stages, legislatures are already leveraging automated technologies for operational and administrative applications while implementing safeguards to protect the public’s interest.

Although the Senate was slower to announce guidance on internal generative AI use, the chief information officer for the Senate sergeant at arms released a policy in December 2023 authorizing controlled use of the three leading chat-based AI services — ChatGPT, Microsoft’s Bing Chat (now Microsoft Copilot), and Google’s Bard — for research and evaluation purposes. As with the House guidance, the guidelines allow staff to experiment and become familiar with the technology while maintaining a cautious approach to wider deployment, and they make clear that human involvement remains a cornerstone of responsible use. Additionally, the Senate Committee on Rules and Administration held a hearing titled The Use of Artificial Intelligence at the Library of Congress, Government Publishing Office, and Smithsonian Institution on January 24, 2024.


AI Integration and Potential Improvements to Legislative Workflows

While Congress is taking tangible steps to keep pace with the changes that AI will bring, these measures are just a start. Continued AI integration into congressional workflows, particularly on a bicameral basis, could yield important benefits. In a September 2023 Harvard Business School study, researchers found that consultants using the generative AI tool GPT-4 for tasks within its capability range were significantly more productive — completing 12.2 percent more tasks, doing so 25.1 percent more quickly, and producing 40 percent higher-quality results — than a control group. The skills tested included “conceptualiz[ing] and develop[ing] new product ideas, focusing on aspects such as creativity, analytical skills, persuasiveness and writing skills,” and “problem-solving tasks using quantitative data, customer and company interviews, and including a persuasive writing component.”

The work of these consultants maps easily to much of the work in a typical congressional office: developing ideas for new bills and issue analysis; gleaning information from data and meetings with stakeholders and constituents; assisting in the preparation of memos, press releases, and letters; and fielding questions. If new AI tools — used appropriately and with adequate guardrails — can increase the quantity and timeliness of legislative work without sacrificing accuracy or quality, then exploring them further is a no-brainer.

Researchers Fotios Fitsilis and Jörn von Lucke recently published the results of their work surveying people working in parliaments worldwide about possible use cases for generative AI in the legislative ecosystem. Notably, most of their work took place before widespread understanding and adoption of generative AI, but the findings remain relevant for gauging how those working in legislatures think automated technologies could make their work more efficient. The paper identified a range of potential applications — from knowledge management and curating public input to operational efficiencies and oversight — that build on innovations already deployed in legislatures around the world using other AI technologies, such as machine learning, NLP, voice recognition, and computer vision. Used with proper safeguards, such technologies can help Congress and other legislatures process and preserve institutional knowledge, automate staff tasks, improve oversight, and strengthen constituent engagement.

Processing and Preserving Institutional Knowledge 

Plagued by complex structures and frequent turnover, Congress struggles to retain institutional knowledge. Staff and legislators (including newly elected ones) join and leave the House and Senate regularly. As of December 2023, for example, 33 representatives and 8 senators had announced departures that will create seat vacancies heading into the 2024 U.S. elections. Staff turnover happens in even greater numbers. One example — the upcoming retirement of the long-serving counsel of a key House committee — illustrates the challenge: as with any senior staff turnover, this departure represents a substantial loss of knowledge and expertise. Compounding that loss, more seasoned lobbyists often outmatch younger, less experienced congressional staff.

A range of existing automated technologies can help preserve institutional memory, enabling new lawmakers and congressional staff to easily access information that might otherwise be difficult and time-consuming to locate. These technologies include:

  • Tagging: Italy’s parliament uses AI to organize legislative amendments by clustering similar texts.
  • Summarizing: In Iceland, citizens can offer suggestions about urban problems on the Better Reykjavik website, which incorporates an AI tool to summarize submissions for policymakers, classify ideas, and flag abusive content.
  • Translating: For legislatures required to work in multiple languages, NLP-assisted translation can speed up processes. In June 2023, the European Union announced a new system to facilitate immediate access to machine-translated commission press releases in all 24 official EU languages. (A commission spokesperson did acknowledge, however, that “humans . . . remain essential to spotting any mistakes and adapting machine-translated texts to EU lingo.”)
  • Transcribing: Estonia’s parliament uses voice recognition to help transcribe committee proceedings and legislative sessions.
  • Advising on internal rules and procedures: South Africa is developing a specialized non-generative AI chatbot for accessing parliamentary information.
  • Processing large volumes of information: Just as the “AI Lobbyist” promises the ability to “read the entire 2,500+ page National Defense Authorization Act” and “extract [relevant] components . . . with just a few clicks,” several third-party generative AI tools offer similar functionality, at least for somewhat smaller chunks of large documents. With recently announced expanded token windows, GPT-4 and Anthropic’s Claude make it possible to drop in over 250 pages of text and extract summaries or detailed information about various sections. ChatGPT plug-ins make it easy to parse PDF documents. While that may not yet mean digesting 2,500+ pages at once, if used prudently, a section-by-section copy/paste with generative AI tools (see the sketch following this list) would be an improvement on the PDF word search process of the Hill’s current workflow.
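
To make that workflow concrete, a section-by-section pass over a long bill might look like the following minimal Python sketch. It assumes the OpenAI Python SDK and an OPENAI_API_KEY environment variable; the model name, file name, chunk size, and issue query are all illustrative, and staff would still need to verify every summary against the bill text itself.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def summarize_section(section_text: str, issue: str) -> str:
        """Ask the model to flag material in one chunk of a bill relevant to an issue."""
        response = client.chat.completions.create(
            model="gpt-4-turbo",  # illustrative; any large-context model
            messages=[
                {"role": "system",
                 "content": "You summarize legislative text. Do not invent provisions; "
                            "cite section numbers when referring to the bill."},
                {"role": "user",
                 "content": f"Issue of interest: {issue}\n\nBill text:\n{section_text}\n\n"
                            "Summarize any provisions relevant to the issue, or reply "
                            "'No relevant provisions.'"},
            ],
        )
        return response.choices[0].message.content

    def chunk(text: str, max_chars: int = 40_000) -> list[str]:
        """Naive fixed-size chunking; a real tool would split on section headings."""
        return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

    bill_text = open("ndaa_fy2024.txt").read()  # hypothetical local copy of the bill
    for i, section in enumerate(chunk(bill_text), 1):
        print(f"--- Chunk {i} ---")
        print(summarize_section(section, "military housing allowances"))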

Looking forward, the GAO’s Innovation Lab has already begun experimenting with using LLMs to query the agency’s reports. As a recent FedScoop interview with the GAO’s chief data scientist explained, “one particular use case the watchdog is thinking about for a generative AI tool that’s trained on the GAO’s reports is using an application programming interface (API) to ‘plug into information within Congress.gov’ on committee hearings and summarize what the agency has already reported on the topic.” And in October 2023, the GAO deployed its own internal LLM, Project Galileo, which uses a commercial API to access GAO reports.
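
The pattern described here (retrieving structured legislative data through an API, then summarizing it with an LLM) might look something like this minimal Python sketch. It assumes the public Congress.gov API, which requires a free API key, along with the OpenAI Python SDK; the endpoint path follows the API’s v3 documentation but may change, and the model name is illustrative.

    import os
    import requests
    from openai import OpenAI

    API_KEY = os.environ["CONGRESS_GOV_API_KEY"]

    def fetch_bill(congress: int, bill_type: str, number: int) -> dict:
        """Retrieve bill metadata from the Congress.gov v3 API."""
        url = f"https://api.congress.gov/v3/bill/{congress}/{bill_type}/{number}"
        resp = requests.get(url, params={"api_key": API_KEY, "format": "json"})
        resp.raise_for_status()
        return resp.json()["bill"]  # per the documented response shape

    bill = fetch_bill(118, "hr", 2670)  # H.R. 2670, the FY2024 NDAA, for example
    client = OpenAI()
    summary = client.chat.completions.create(
        model="gpt-4-turbo",  # illustrative
        messages=[{
            "role": "user",
            "content": "In three sentences, summarize this bill metadata for a "
                       f"staffer preparing for a hearing:\n{bill}",
        }],
    )
    print(summary.choices[0].message.content)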

Automating Tasks and Augmenting Human Researchers

When appropriately used, certain LLMs can process vast amounts of information and assist in synthesizing that information into comprehensible formats like memos and presentations. As Graves, Harris, and Schuman noted, “the clear opportunity for Congress is to use AI to help free up staffing hours from communications and lower-level office tasks, in a similar way to the productivity boost achieved from typewriters and computers.”

One former legislative aide recounted her experience of being overwhelmed by information when preparing for votes, often resorting to hurried online searches:

We were inundated . . . with too much information on a given topic to be able to parse it and take it all into consideration prior to prepping the boss for a vote. We did the absolute best we could on the tight timelines we were given once a vote was announced, but oftentimes, we only had enough time to Google the topic of the bill [and] try to find credible sources on how it was going to affect our state.

Some congressional offices are already using commercial generative AI tools to assist with these tasks. On November 20, 2023, POPVOX Foundation and the R Street Institute hosted webinars for communications and legislative staffers to help them better leverage these tools.

Improving Oversight 

AI could act as a pivotal bridge between the authorizers and implementers of federally directed programs. Traditionally, Congress and the executive branch have operated in separate silos, with limited visibility into each other’s processes and regulations. Automation can offer efficient and effective ways for staff to gain insights into agency operations, bypassing the need for extensive networking and physical visits while potentially improving oversight, increasing understanding of program success, and opening avenues for improvement. As described above, AI-driven summarizing and clustering tools can also assist a legislature in parsing lengthy agency reports and other information. Thanks to the Open Government Data Act, sponsored by Rep. Derek Kilmer (D-WA) and former Rep. Blake Farenthold (R-TX), government data — including data from federal agencies — is now more accessible and digestible. (This bill was included in the Foundations for Evidence-Based Policymaking Act of 2018, which built on the 2014 Digital Accountability and Transparency Act. For more information on evidence-based policymaking, see the GAO’s 2023 report Evidence-Based Policymaking: Practices to Help Manage and Assess the Results of Federal Efforts.) This improved data availability, combined with AI summarizing and clustering tools, could empower congressional staff and lawmakers to be more informed, better prepared for hearings, and more effective in their oversight roles. Such AI-facilitated interbranch communication could lead to more efficient and informed legislative processes, ultimately benefiting the entire governmental ecosystem.
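
As a concrete illustration of the clustering idea, the following minimal Python sketch groups the paragraphs of a lengthy agency report by topic. It assumes the sentence-transformers and scikit-learn packages; the model name, file name, and cluster count are illustrative choices that a real deployment would tune and evaluate.

    from sentence_transformers import SentenceTransformer
    from sklearn.cluster import KMeans

    # Hypothetical local file; split the report into rough paragraphs.
    paragraphs = open("agency_report.txt").read().split("\n\n")

    model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose model
    embeddings = model.encode(paragraphs)

    n_topics = 8  # tune to the report's length and diversity
    labels = KMeans(n_clusters=n_topics, n_init=10).fit_predict(embeddings)

    # Print one representative paragraph per cluster for a quick human skim.
    for topic in range(n_topics):
        first = next(p for p, l in zip(paragraphs, labels) if l == topic)
        print(f"Topic {topic}: {first[:120]}...")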

Strengthening Public Engagement 

Generative AI’s ability to explore different facets of an issue could also help staff and legislators gain a broader and more nuanced understanding of how policies might affect or be viewed by various groups. When deployed in custom applications or with targeted data, generative AI can help synthesize and summarize public opinion. For example, the Taiwanese parliament uses the digital platform vTaiwan to conduct public opinion polls about certain issues and produce opinion maps that help lawmakers and participants better understand the electorate’s viewpoints. In Brazil, the AI tool Ulysses helps members of parliament sort through public comments on bills. The platform lets users consider a wider array of factors than time-pressed staff could glean from more cursory constituent interactions. Properly used, such a tool could foster more thoughtful and effective policymaking.

Mekela Panditharatne, Dan Weiner, and Douglas Kriner write in a recent essay in this series that, with adequate and enforceable safeguards and appropriate tools, federal agency regulators could use LLMs “trained on the corpus of materials and comments relevant to rulemakings to assist in processing, synthesizing, and summarizing information provided by the public.” Similarly, Congress-wide LLMs could help legislators and staff sort through public input. Though Congress does not have a public comment system analogous to those of federal agencies, different congressional committees have experimented with public input on bills. AI could make that model easier to institute on a larger scale. 

Legislative staff around the world are beginning to adopt generative AI tools to help inform and write first drafts of press releases and other public-facing documents. As in the international examples above, these tools can augment existing systems by helping staffers manage large volumes of constituent requests in a variety of modalities, especially as the technology improves in accuracy and sophistication. For instance, offices inundated with voicemails from constituents could use voice recognition and tagging software to transcribe and organize those messages, allowing staff to spend more time addressing the substantive issues raised. In 2018, the OpenGov Foundation (which ceased operations in 2019) explored the acute need for this functionality in its report From Voicemails to Votes and launched a prototype called Article One.
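
A minimal sketch of that voicemail-triage idea follows, assuming the OpenAI Python SDK and its hosted “whisper-1” speech-to-text model; the file name, issue labels, and chat model are hypothetical, and a staffer would still review each transcript before any response goes out.

    from openai import OpenAI

    client = OpenAI()
    ISSUES = ["veterans", "social security", "immigration", "casework", "other"]

    # Hypothetical audio file exported from the office phone system.
    with open("voicemail_001.mp3", "rb") as audio:
        transcript = client.audio.transcriptions.create(
            model="whisper-1", file=audio
        ).text

    # Tag the transcript so staff can route and prioritize it.
    tag = client.chat.completions.create(
        model="gpt-4-turbo",  # illustrative
        messages=[{
            "role": "user",
            "content": f"Choose exactly one label from {ISSUES} for this "
                       f"constituent voicemail:\n{transcript}",
        }],
    ).choices[0].message.content

    print(tag, "|", transcript[:100])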

More recently, one congressional staffer described being hired as a legislative correspondent for a representative from a state where she had never lived. Her job was to answer anywhere from 500 to 2,000-plus constituent letters a week, drafting a response to each one. Given the sheer volume and her lack of local knowledge, she often ended up writing a simple, generic response — a disservice both to the lawmaker, who missed an opportunity to meaningfully connect with and inform a constituent, and to the constituent, who deserved a proper response on the issue in question. AI could improve this experience and speed response times, both by helping staffers sift through and grasp constituent letters and by crafting replies that are more responsive to constituents’ concerns and reflective of the legislator’s voice and message.


Guardrails to Address the Risks of AI

AI is not a silver bullet. Strategic adoption of AI systems must address their intrinsic risks and incorporate the necessary guardrails. Major concerns include the potential for inaccuracy and bias, as well as security and confidentiality risks. To mitigate these, legislatures should institute appropriate standards and staff training, data privacy and security measures, and processes that preserve human monitoring, evaluation, and oversight. A recent report by the POPVOX Foundation offers numerous proposals for legislatures to consider, setting achievable goals for often slow-moving institutions.

Another essay in this series offers election officials a framework for considering how and when to employ AI in their work. Although the context here is different, we recommend a similar approach for legislatures exploring whether and how to adopt new AI tools: making such choices conscientiously and transparently; carefully planning new systems’ integration into existing workflows and processes; and establishing thorough review processes to monitor and assess the output of deployed AI tools, bearing in mind the concerns outlined below. 

Quality and Accountability 

AI can help streamline tasks, summarize data, and improve access to information, but human involvement remains essential: people must review and adjust outputs for accuracy, coordinate with and cross-check against authoritative sources, and remain in charge of strategic thinking and oversight. For speeches, constituent correspondence, media outreach, and some other legislative and oversight tasks, congressional staffers must produce content in the lawmaker’s voice. Without specialized training on past statements and materials, generative AI tools could introduce a tone, message, or inference that runs counter to a legislator’s intent. Like new staffers or interns, these tools should be refined with examples of the policies, stylistic preferences, and substantive conventions of a particular representative and their office before they are fully integrated into a congressional workflow.

Generative AI tools should never be considered a “source” of information but rather a tool for extracting, summarizing, or outlining documents. For example, OpenAI’s ChatGPT has a training cutoff of April 2023 (though with plug-ins and features such as “browse with Bing,” ChatGPT can access current information), and Anthropic’s Claude has a cutoff of December 2022. (These dates are accurate as of January 24, 2024, but companies update cutoff dates regularly.) These tools remain susceptible to “hallucinating” false information beyond their existing scope.

As a recent article explains, generative AI tools continue to have fundamental limitations for many analytical tasks: in a study of several hundred white-collar workers, “ChatGPT greatly improved the speed and quality of work on a brainstorming task, but it led many consultants astray when doing more analytical work.” Tasks involving “reasoning based on evidence” — which characterizes many legislative tasks — proved particularly challenging. Furthermore, AI’s potential for bias, resulting from its creators’ implicit assumptions or those within data sets on which it is trained, is well documented. For instance, AI systems used to evaluate tenants have been criticized for utilizing data sets with built-in racial prejudices, and AI facial recognition tools may discriminate against certain ethnicities. 

To address these risks, there is no substitute for human review. Generative AI can save overburdened congressional staff time and tedious work on first passes, but ultimately, humans must carefully review output to ensure accuracy, account for biases, and perform the core “thinking work” of government. There should be clear guidelines and institution-hosted training to this effect, clarifying when in the legislative process it is appropriate for staff to use generative AI tools, for which tasks, and at what point human review becomes essential. Any specialized AI tools developed for legislatures to carry out aspects of their work (such as addressing constituent requests for information) must also account for these issues.

Security and Confidentiality 

Without the proper guidance, internal regulations, and oversight, staffers using generative AI tools could inadvertently jeopardize security and information confidentiality by sharing sensitive data via commercial tools.

The House CAO’s June 2023 guidelines remind staff never to share constituents’ personally identifiable information or other sensitive data when using generative AI tools. For internal work involving confidential or sensitive information, clear rules governing which data may be shared with generative AI are critical, as are the necessary data privacy protections.

In addition to risks associated with the use of generative AI tools, legislatures are vulnerable to the same cybersecurity risks that other institutions face as a result of advances in AI technology, such as increasingly sophisticated phishing scams that can be used to gain access to computer systems. Policies and training practices for lawmakers and congressional staff must be continuously updated to address the current threat environment. Cybersecurity training for congressional users should extend to LLM tools and include guidance on protecting sensitive data. 

Authenticity 

Verification processes are paramount when using generative AI to interact with the public. As Beth Noveck of the Governance Lab testified at a House Financial Services Committee hearing on astroturfing (i.e., when individuals or groups deceptively manufacture an information campaign that claims to represent public sentiment), “automating the comment process might make it easier for interest groups to participate by using bots — small software ‘robots’ — to generate instantly thousands of responses from stored membership lists.” Indeed, generative AI can be maliciously employed to accelerate astroturfing. While this possibility has been explored primarily in the context of administrative agencies’ notice-and-comment processes, congressional use of AI tools to gauge public sentiment and solicit input could face similar risks.

To safeguard the authenticity of public engagement, legislatures could verify human activity through CAPTCHA systems or other tools in forums that solicit digital public comments, while preserving maximum accessibility and protecting data privacy, as Panditharatne, Weiner, and Kriner explain. It should be noted, however, that AI tools are already circumventing many CAPTCHAs and that this approach may limit legitimate constituent communication. Also, crucially, using new technologies to facilitate public interactions should be an addition to, rather than a replacement for, more traditional methods. Digital interactions are no substitute for meeting ordinary constituents in town halls, committee hearings, and elsewhere and hearing in person about their lived experiences.
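
For illustration, server-side verification of a CAPTCHA token might look like the following minimal Python sketch, assuming Google’s reCAPTCHA siteverify endpoint; the secret-key handling and form integration are placeholders, and, per the caveats above, any deployment should offer accessible alternatives.

    import os
    import requests

    def is_human(captcha_token: str, remote_ip: str | None = None) -> bool:
        """Verify a reCAPTCHA token submitted with a public comment form."""
        resp = requests.post(
            "https://www.google.com/recaptcha/api/siteverify",
            data={
                "secret": os.environ["RECAPTCHA_SECRET"],  # placeholder key
                "response": captcha_token,
                "remoteip": remote_ip,  # optional; omitted if None
            },
            timeout=10,
        )
        return resp.json().get("success", False)

    # A comment handler would call is_human() before accepting a submission
    # and fall back to alternative verification for users who cannot
    # complete a CAPTCHA.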

• • •

AI is evolving at breakneck speed, and legislatures must be agile and proactive in its uptake. Ongoing monitoring, evaluation, and information-sharing are imperative to ensure that new technologies and processes are deployed safely and appropriately. These measures are also a vital way to identify new risks and devise mitigation strategies. As the 2022 Global Parliamentary Report emphasizes, “robust evaluation processes will help parliaments ensure that their investment is well-placed and is contributing to the expected outcomes.”

These realities underscore why the steps already undertaken in the U.S. House of Representatives to establish communications channels and public reporting expectations are so important. As Ludovic Delepine, the head of the European Parliament’s archives unit, recently explained at a convening hosted by the Inter-Parliamentary Union:

AI is already in parliaments. Almost everyone is talking about it. . . . The pace of AI developments is very fast; every two months, new solutions surpass anything seen before. People will have many questions, both at the business and technical level. Hence maintaining an exchange on use cases — how AI gets applied in parliaments — is very much needed.

As the executive branch and private industry adjust their practices to harness these new tools, it is essential that Congress keep up in order to conduct effective oversight and remain relevant in a changing world. Circumspection may seem like the safest course, but with technological innovation racing forward at a historic pace, the legislative branch must keep pace with both AI advances and the safeguards they require. If it fails to do so, it risks falling so far behind that catching up becomes impossible. As Delepine made clear, Congress should remain in conversation with other parliaments as well as other sectors to exchange ideas and stay abreast of new developments.


The Path Forward

Congress has already started to take a forward-looking and innovative approach to AI. These are significant first steps, but much more is needed. Both chambers can explore using AI to access and organize congressional records. They can also set up systems to allow staff and lawmakers to responsibly employ AI to aid in policy research and in sifting through information from the executive branch and from constituents. Congress can start training staff on AI use and establishing data and security protocols and evaluation and adaptation procedures. Necessary guardrails should be instituted alongside these changes. 

AI can make legislatures more effective, more representative, more efficient, and more transparent. The time to act is now, before Congress falls too far behind the executive branch and the private sector. The following recommendations should guide the legislative branch as it takes further steps to integrate AI into its work.

Recommendations for Congress’s Continued AI Adoption

  • Update guidance for congressional offices to give them broader options for using existing tools — with appropriate security and privacy guidelines, including:
    • Providing explicit guidance to staff on appropriate use of AI tools that includes rules on safeguarding sensitive or confidential data. Once such safeguards are implemented, consider whether it is appropriate to remove the requirement that commercial tools be used with “no chat history” enabled, allowing users to access plug-ins and additional functionality.
    • Providing explicit guidance for congressional offices or committees that want to create their own no-code GPTs in the OpenAI GPT store.
    • Evaluating new generative AI tools as they emerge, always with a mind to necessary guardrails.
    • Emphasizing that congressional offices can use traditional AI tools (such as natural language processing and machine learning APIs) for a variety of tasks with appropriate safeguards.
    • Installing necessary safety and verification measures (such as CAPTCHA systems, where appropriate) if AI tools are used to gather public input.
    • Institutionalizing monitoring and evaluation processes to assess the utility and security of new tools as they are introduced and adapting those processes as AI technology evolves.
  • Encourage staff-focused professional development programs, such as the House’s CAO Staff Academy and the Senate’s Office of Education and Training, to create courses and trainings regarding the safe use of generative AI in congressional offices, including updated cybersecurity training.

Recommendations for Other Legislative Bodies Newly Incorporating Generative AI

  • Start now — further delay will exacerbate the pacing problem. Allowing staff to experiment with existing tools (with adequate safeguards to protect private or sensitive information) will increase familiarity with new technologies for lawmakers and staff alike and will foster better and more responsive guidance.
  • Create initial use policies and guidance and update them regularly.
  • Designate one staffer or group of staffers to coordinate chamber-wide policies and provide a single point of contact.
  • Hold regular public meetings to encourage staff members to share their own uses, concerns, and questions. Consider hosting speakers and outside experts.
  • Establish a communications channel for related updates (such as the Committee on House Administration’s monthly AI flash reports). 

Maya Kornberg, PhD, is a research fellow in the Brennan Center’s Elections and Government Program, where she leads research on Congress, information and misinformation in politics, and civic engagement. 

Marci Harris, JD, LLM, is the cofounder and CEO of POPVOX, Inc., and cofounder and executive director of the nonprofit POPVOX Foundation.

Aubrey Wilson is director of government innovation at the POPVOX Foundation and previously served as deputy staff director and director of oversight and modernization for the Committee on House Administration in the U.S. House of Representatives.