
House Hearing on White House AI Overreach Highlights Congressional Inaction

Congress must act to mitigate potential harms of this rapidly evolving technology.

April 15, 2024

On March 28, the White House released its final guidance for federal agency use of artificial intelligence (AI). Just a week prior, the House Oversight Committee held a hearing on “White House Overreach on AI,” during which lawmakers and witnesses bemoaned executive action to regulate AI. Leaving aside partisan rhetoric, all agreed on one thing: Congress has failed to regulate this technology, spurring the executive branch and state legislatures to act to promote safe, fair, and effective AI systems.

The hearing focused on well-trodden topics, like the importance of supporting American innovation to compete with China’s massive investments in AI and criticisms of President Joe Biden’s controversial use of the Defense Production Act in the October AI executive order. Republican legislators and three witnesses framed the president’s executive order as “an authoritative move” that embodies “executive overreach.” Democrats contested these characterizations, arguing, as Representative Gerry Connolly (D-VA) put it, that “when one [branch] fails to act, that creates a vacuum that almost demands the other [branches] act.”

Congress has held countless hearings and gatherings with experts, including Senate Majority Leader Chuck Schumer’s (D-NY) bipartisan AI Insight Forums, and legislators have proposed dozens of AI bills. Yet none of these efforts has produced enacted legislation, an unfortunate reality whose costs will be borne by the public.

Many of the AI bills Congress has introduced would help bolster civil rights, civil liberties, and privacy protections. Some, like the Federal AI Risk Management Act of 2024 and the AI Research, Innovation, and Accountability Act of 2023, would direct federal agencies (often the National Institute of Standards and Technology) to develop technical guidelines and standards for AI. Others, such as the DEEPFAKES Accountability Act and the AI Labeling Act of 2023, would establish transparency mechanisms and legal remedies to target the specific harm of unlabeled and deceptive AI content. A number of bills, including the Federal AI Governance and Transparency Act and the AI LEAD Act, would create governance requirements for federal agency deployment of AI, strengthening oversight and transparency of government AI systems. But as Connolly pointed out, “given [Congress’s] pace,” it is hard to be optimistic that any of this legislation will pass soon. As a result, AI systems lack the oversight and redress mechanisms that are critical for fostering equitable and safe AI deployment.

Lack of meaningful congressional action on AI has placed greater pressure not only on executive action but also on the states, many of which seem determined to forge ahead even if federal lawmakers fail to do so. Representatives at the hearing disagreed about whether state and local governments should lead AI regulatory efforts or whether, as R Street Institute’s Adam Thierer put it, such efforts would create a confusing and muddled “patchwork of conflicting mandates.” Conversely, Representative Eric Burlison (R-MO) suggested that state action could serve as a “microcosm of experiments especially in a field that we know so little about at this point.” A similar dynamic has played out with privacy legislation. Several states—including California, Indiana, and Iowa—have enacted laws protecting their residents’ data, while Congress has not yet managed to do so. These state laws have created precedents for Congress to build on, but they have also introduced obstacles to federal action, because states want to preserve the standards they have created and their ability to enforce them.

Several states have already started regulating AI. Connecticut, Louisiana, and Texas have passed laws establishing commissions or councils to study AI and its potential effects. Some states have gone further, passing laws that criminalize AI-altered sexual imagery of minors (Louisiana S. 175 and Texas H. 2700) and track AI systems developed and deployed by state agencies (California A. 302). Similar bills are still pending in several states, alongside proposals that target other AI-related priorities, such as establishing disclosure requirements for AI-generated content, with a major focus on election materials (New York A. 9103); requiring bias audits or impact assessments (New York S. 7623); and prohibiting certain use cases altogether, such as a Maine bill that would bar health care facilities from substituting an AI system for the direct care of a registered nurse, or a New York bill that would, among other things, prohibit state agencies from using AI systems for decisions related to the delivery of public benefits.

If these efforts get too far ahead of federal action, Congress could face preemption questions, as it has with privacy legislation. Moreover, a patchwork of state laws leaves millions of people across the country more vulnerable to the harms of unregulated AI technologies, which can entrench biases, produce discriminatory results, and disproportionately impact communities of color (a point that, unfortunately, was raised by just one representative at the hearing). Patchwork regulation also creates problems for the companies being regulated, which may face varying standards for their technology.

In criticizing the White House’s actions on AI and discussing the problems that may result from a patchwork of state AI laws, the House Oversight Committee perhaps unintentionally highlighted its own branch’s inaction. Congress should heed the lesson of its own hearing and act to mitigate the potential harms of this rapidly evolving technology.