Analysis

How AI Threatens Civil Rights and Economic Opportunities

The government and private companies use AI systems plagued by errors and biases that especially affect minority communities.

November 16, 2023

In recent months, Congress has displayed a growing interest in tackling artificial intelligence. Lawmakers have been engaging in discussions, hosting forums, and holding hearings on how to harness the benefits of this rapidly evolving technology and prevent its harms.

But these conversations ignore the obvious: government agencies and companies are already employing AI systems that churn out results riddled with inaccuracies and biases that threaten civil rights, civil liberties, and economic opportunities. The impacts of AI are here and now, and Congress must confront them without further delay.

Across the nation, law enforcement agencies use AI to make critical decisions that affect individuals’ civil rights and civil liberties, especially those of immigrant and minority communities. A recent news story revealed that ICE has been using an algorithm to analyze whether social media posts are “derogatory” to the United States and then using that data for immigration enforcement.

And facial recognition technologies widely deployed by law enforcement are less accurate at identifying nonwhite faces, putting Latinos and other people of color at risk of erroneous matches. Such mistakes have led to the wrongful arrest and incarceration of Black Americans, as in a recent Detroit case in which a pregnant woman was arrested, detained, and charged with robbery and carjacking based on a faulty match.

Inaccuracies or flawed designs within AI systems can also create barriers to accessing essential public benefits. In Michigan, an algorithm deployed by the state’s unemployment insurance agency wrongly flagged around 40,000 people as having committed unemployment fraud, resulting in fines, denied benefits, and bankruptcies.

And an Idaho agency reduced Medicaid benefits for individuals with intellectual and developmental disabilities based on an algorithm that relied on flawed data and incorrectly predicted how much assistance beneficiaries needed. The impact was devastating: recipients were forced to forgo necessary medical care and lost access to employment or specialized housing.

Biased algorithms used to screen job candidates can limit access to employment for marginalized communities, including women, people of color, and people with disabilities. An AI tool developed by Amazon to evaluate resumes for software developer positions systematically downgraded women’s applications because it was trained on a dataset consisting primarily of male resumes.
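As a purely illustrative sketch (not Amazon’s actual system, whose details are not public), the short program below shows how a model trained to replicate skewed historical hiring decisions can learn to penalize a feature that merely correlates with gender. Every feature name, number, and outcome here is hypothetical.

```python
# Minimal, hypothetical sketch of how biased training labels produce a
# biased screening model. Not any vendor's real system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hypothetical features: one job-relevant, one merely gender-correlated.
years_experience = rng.normal(5, 2, n)
is_woman = rng.random(n) < 0.3                      # skewed applicant pool
womens_college = (is_woman & (rng.random(n) < 0.5)).astype(float)

# Historical "hired" labels encode past bias: equally qualified women
# were hired at half the rate of men.
p_hire = 1 / (1 + np.exp(-(years_experience - 5)))
p_hire = np.where(is_woman, 0.5 * p_hire, p_hire)
hired = rng.random(n) < p_hire

X = np.column_stack([years_experience, womens_college])
model = LogisticRegression().fit(X, hired)

# The learned weight on the gender-correlated feature comes out negative:
# the model reproduces the historical bias in its ranking scores.
print(dict(zip(["experience", "womens_college"], model.coef_[0].round(2))))
```

The point is not the specific numbers but the mechanism: a model trained to mimic biased past decisions will find and reuse proxies for the protected trait, even when that trait is never given to it directly.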

Employers also use AI to predict the performance of job applicants. One tool, HireVue, claims that it can assess the employability of job candidates based on data points collected during virtual interviews, such as facial expressions, word choice, and intonation. But face and voice recognition technologies are less accurate at assessing the faces of people with darker skin and the voices of Black applicants or those with accents. HireVue can also disadvantage applicants with disabilities who may have atypical facial expressions or speech patterns.

Meanwhile, the unregulated use of personal information to train AI models threatens privacy and freedoms of speech and association. Large language models are often trained on data — including personal information — scraped indiscriminately from wide swaths of the internet, undermining an individual’s control over their information.

Manipulated video and audio, known as deepfakes, can be used as part of campaigns to discredit political opponents. Rana Ayyub, a critic of the Indian government, claims that she was the target of a campaign meant to silence her, including through a deepfake depicting her in a pornographic video.

More issues abound. AI-generated media is driving advanced consumer fraud and extortion schemes, with older people particularly susceptible. Manipulated video and audio are distorting election information and public discourse. And the massive amount of energy needed to train and sustain large language models threatens our environment and climate.

If Congress wants to meaningfully tackle AI, its existing impacts can no longer be ignored.

A letter sent this month to Congress by the Brennan Center and more than 85 other public interest organizations suggests a place to start: draw on the expertise of civil society and the communities most affected by these technologies to craft regulation that addresses the harms AI is already causing while also preparing for its future effects.