
Border Agents’ Secret Facebook Group Highlights Social Media Vetting Risks for Immigrants

The secret Facebook group of Border Patrol agents disparaging migrants highlights the urgent need for Congress to exercise robust oversight and safeguard against discrimination in the government’s use of social media information.

August 1, 2019

The following originally appeared in Just Security and Slate.

With the recent news of a secret Facebook group of current and former Border Patrol agents whose members mock the suffering and deaths of migrants, the American public got a glimpse of the xenophobic culture in the nation’s largest federal law enforcement agency, U.S. Customs and Border Protection (CBP). CBP has vast power at U.S. borders, and its reach now extends into screening travelers’ and migrants’ social media accounts, which often reveal political and religious beliefs as well as personal information about an individual’s family, sexuality, and immigration status. The combination of that kind of power and the sorts of overt bias exhibited in the Facebook group makes it even more urgent for Congress to exercise robust oversight and safeguard against discrimination in the government’s use of social media information.

While shocking, the behavior in the Facebook group, which counted among its members Border Patrol Chief Carla Provost, is hardly unexpected. The biases many border agents hold have long been clear in CBP’s interactions with Muslims, who are disproportionately targeted with invasive questioning and searches when entering the United States. Muslims crossing the border have been detained and interrogated about their religious beliefs by CBP and asked to hand over their social media information. This practice, which prompted an ongoing lawsuit by the Council on American-Islamic Relations, is part of the playbook for border agents, even though a senior Department of Homeland Security (DHS) lawyer admitted in the suit that such questioning has never led agents to discover any criminal wrongdoing on the part of a traveler.

The federal government argues that social media checks keep the nation safe and ensure that those entering the country do not pose security risks. Such probing is routine for those applying for visas or visa-free travel, seeking asylum or immigration benefits, and crossing borders. But social media content is notoriously context-dependent and difficult to interpret. Interpreting the social media activity of people from all over the world is hard enough, but trying to use social media to determine whether someone would be a threat to national security or a positively contributing member of society is impossible, especially for an agency plagued by prejudice.

As detailed in a recent report from the Brennan Center for Justice at NYU School of Law, even DHS’s own evaluations of its social media pilot programs discovered that it is difficult to determine “with any level of certainty” whether social media content indicated a security concern. DHS agents don’t seem to know what types of social media posts count as indications of a security risk, and officials have even asked for additional guidance from the department on how to identify such threats.

Accordingly, the results of screenings are more likely to reflect misinterpretations and biases than actual security threats. For instance, analysts for the Oregon Department of Justice mistook a tweet featuring the logo of Public Enemy, a hip-hop group popular in the late 1980s and ’90s, for evidence of an imminent threat to law enforcement. It is not hard to imagine how posts criticizing the current administration could get flagged as threatening by an agent lacking clear guidelines.

Computers do no better. To complement its manual vetting of social media, DHS has expanded its use of automated tools and algorithms to analyze online accounts. Though algorithms may sound more objective than humans, they retain the biases of the humans who create them. Just as CBP agents have had difficulty defining what security threats look like on social media, the algorithms have no reliable definition to work from; instead, they rely on proxies that tend to reflect stereotypes and assumptions about the groups being vetted.

And now Immigration and Customs Enforcement (ICE) is using an automated social media monitoring tool to generate prioritized rankings of individuals for deportation based on perceived threat level. But there is no public information about the validity of those assessments or whether they are discriminatory.

DHS is investigating 70 members of the 9,500-member Facebook group of border agents, though it is unclear what consequences, if any, those agents will face. Previous instances of egregious conduct within CBP have not led to any meaningful repercussions, and CBP has reportedly known about the Facebook group since 2016 but took no serious action. Indeed, many ranking CBP officials were members of the group.

By contrast, the migrants, travelers, Americans, and others undergoing social media screenings can face major consequences from tweets taken out of context, mistranslated Facebook posts, and biased vetting by humans or algorithms. Families can be separated through deportation or denial of visas, refugee applications can be denied, and individuals can be placed indefinitely on bloated, secretive no-fly lists.

Government social media monitoring is ripe for abuse in any context, but CBP’s culture of racism and lack of oversight make action especially urgent. Congress should require transparency and accountability in DHS’s use of social media and safeguard against further discrimination. Policymakers and the public need to know how DHS is using, evaluating, and analyzing social media data in order to understand the full impact of these programs. DHS must work in the national interest, and it can do so only if its programs protect rather than infringe on fundamental freedoms and are based on evidence of effectiveness, not xenophobia.