
National Security Carve-Outs Undermine AI Regulations 

The government’s two-tiered approach puts Americans’ rights at risk.


This piece first appeared at Just Security.

Earlier this month, the U.S. Senate wrapped up its “AI Insight Forums” with a 21-person panel focusing on national security. The vast majority of tech executives and former government officials at the forum—in which we were both pleased to participate—called for national security agencies to rapidly incorporate AI into their operations. But it is crucial that agencies safeguard people from the risks of these technologies, which President Joe Biden’s recent executive order rightly recognizes “can lead to and deepen discrimination, bias, and other abuses” if used irresponsibly. Unfortunately, both the executive order itself and the administration’s recent draft policy on AI have a major flaw: while they seek to ensure that the government’s use of AI systems is fair, effective, and transparent, they essentially exempt the national security apparatus, including significant parts of the FBI and the Department of Homeland Security (DHS).

This two-tiered approach is a mistake. It allows for the development of a separate—and likely less protective—set of rules for AI systems such as facial recognition, social media monitoring, and algorithmic risk scoring of travelers, all of which directly affect people in the United States. It excuses national security agencies from following sensible baseline rules: assessing the impact of AI, ensuring that systems are independently evaluated and tested in the real world, taking steps to mitigate harms such as discrimination, training staff, consulting with stakeholders, and offering recourse for harms.

These agencies are already rapidly integrating AI into a host of consequential operations and are accelerating their development of these technologies. The full scope of these activities, including what safeguards (if any) are in place to prevent discrimination and protect privacy and other rights, is hard to gauge because little public information is available. What we do know demonstrates the acute risks of carving out national security systems from basic AI rules.

Already, DHS’s Automated Targeting System relies on secret algorithms to predict “threats” and to identify travelers for heightened scrutiny. DHS also entered into a $3.4 million contract with researchers to develop algorithms to undertake a “risk assessment” of social media accounts for purported “pro-terrorist” sympathies for use by immigration authorities.

Through its domestic intelligence arm, DHS runs several programs that comb through Americans’ social media posts in search of “derogatory information” and dangerous “narratives and grievances,” scooping up information about individuals’ personal and political views. During the racial justice protests of 2020, for example, DHS used social media monitoring tools to assemble dossiers on protestors and journalists to share with law enforcement. Each year, DHS sends thousands of unverified summaries of what it finds on social media to police departments around the country, providing justification for surveillance and, in some cases, prosecution.

None of these programs is supported by empirical evidence demonstrating the validity of its approach. In 2017, the department’s inspector general called out five of DHS’s pilot social media monitoring programs for failing to even measure efficacy. Other internal reviews have repeatedly shown that these types of programs are of questionable or “no value.” Nor have these systems—as far as we know—ever been tested to see if they entrench bias or reproduce stereotypes. The policy framework for regulating these programs is weak, with loopholes riddling the department’s general policies on racial profiling and First Amendment-protected activities.

The FBI has deployed AI-powered facial recognition tools to identify suspects for investigation and arrest, and has worked with the Defense Department to develop tools that could identify people in video footage from street cameras and flying drones. It has pursued such efforts without adequate testing, training, or safeguards for civil rights and civil liberties. The FBI has even contracted with Clearview AI, a notorious company that scrapes photos from the internet to create faceprints without consent, claiming access to 30 billion facial photos. It has embraced these systems despite evidence that facial recognition technology disproportionately misidentifies and misclassifies people of color, trans people, women, and members of other marginalized groups. Already, the use of facial recognition by police has resulted in six reported cases of false arrest and wrongful incarceration of Black people.

These types of clear and present risks will only grow as agencies incorporate more AI into their systems and explore other unreliable technologies, such as “sentiment analysis” and gait recognition. National security, intelligence, and law enforcement agencies must be subject to the same baseline standards that apply throughout the federal government, with only narrow—and transparent—modifications where strictly justified by specific national security needs. Congress should strengthen independent oversight by creating a body like the Privacy and Civil Liberties Oversight Board, which was established after 9/11 to review counterterrorism programs.

The time to act is now, before irresponsible and damaging AI systems become entrenched under the banner of “national security.”

Faiza Patel is senior director of the Liberty and National Security Program at the Brennan Center for Justice at NYU Law. Patrick C. Toomey is the deputy director of the ACLU’s National Security Project.