On March 18, 2022, the Brennan Center submitted a request under the Freedom of Information Act to the Department of Homeland Security (DHS) and the DHS Science and Technology Directorate for information on Project Night Fury, which was conducted by the Data Analytics Technology Center overseen by the directorate’s Office of Science and Engineering. Through our request, we sought documents including contracts, communications, research products, and training and compliance materials relating to Project Night Fury.
What little information is publicly available about Night Fury was revealed in a DHS Inspector General report released on March 7, 2022. The inspector general’s investigation began after the office received a tip regarding potential privacy violations pertaining to Night Fury, which began in September 2018 and cost almost $790,000. According to the inspector general’s report, the Office of Science and Engineering aimed to use Night Fury to develop capabilities to “identify potential terrorism risks” on social media and other open-source platforms. As part of the project, the office contracted with a university (now revealed to be the University of Alabama at Birmingham) to collect social media data. The complaint to the inspector general alleged that the project “specifically included data collection of millions of social media records, including posts, videos, and photos.” Though the report does not provide additional information about Night Fury, it found that the Science and Technology Directorate did not consistently comply with guidelines and policies governing privacy and sensitive information requirements in its research and development projects.
Read the FOIA request here.
In response to our request, DHS produced two contracts with the University of Alabama at Birmingham: an initial contract dated September 21, 2018, and an extension from August 27, 2019, through December 23, 2021. Though the cost is redacted, the government’s public contracting website reveals that they totaled almost $790,000. DHS also produced three versions of the privacy threshold analysis for the project, though none is dated, and it is unclear which is the final version. DHS staff conduct privacy threshold analyses to determine how a new or expanded program or system could impact privacy; the DHS Privacy Office then reviews the analysis to determine if further compliance documentation is required. None of the analyses that were produced included comments from the Privacy Office, though the inspector general’s report indicates that the office received the analysis.
According to the contracts we obtained, the primary purpose of Night Fury was to develop new methods to automatically analyze information from social media to expand DHS’s existing capabilities. Though the project primarily focused on detecting networks of “pro-terrorist” accounts and groups online, DHS also contemplated expanding the scope of the project to include drug trafficking, human smuggling, and other topics. According to one of the privacy analyses, DHS envisioned that the capabilities developed through Night Fury would mature over three incremental phases from September 2018 to September 2021, ultimately aiming to deploy them within the department or commercialize them. However, it appears DHS shuttered the project in 2020 without producing a final research product.
Night Fury’s tasks fall into three main categories. The first involved identifying and tracking “terrorist propaganda” and “pro-terrorist” accounts on Facebook, Twitter, online forums, and “lesser social media communities” like Telegram and VK (Russia’s main social media platform) based on undefined criteria. The University of Alabama at Birmingham and DHS planned to work together to “identify relevant attributes” that would mark Facebook accounts and groups as “pro-terrorist” by ranking them or assigning them a “Risk Score.” The university also agreed to develop methods to decide whether a Twitter account that was “linked to a confirmed pro-terrorist social media account” should itself be considered pro-terrorist, using criteria such as “keyword set comparisons.” These tasks would be completely automated, with the evident goal of minimizing any human intervention — despite research demonstrating the limitations of algorithmic analysis of social media content. Over the course of the project, the university would compile a list of accounts it identified on Facebook and Twitter, along with their postings and other related data, and provide the data to DHS. The contracts also directed the university to build models to “identify key influencers of pro-terrorist thought” whose messages spread across different forums and social media platforms.
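The contracts do not disclose how accounts would be ranked or how a “Risk Score” would be computed. As a purely illustrative sketch of the kind of keyword-based scoring that the “keyword set comparisons” language suggests (the keywords, weights, and scoring formula here are hypothetical, not drawn from the contracts), such an approach could be as simple as:

```python
# Hypothetical illustration only: the contracts do not reveal the actual
# criteria, keywords, or scoring method contemplated for Project Night Fury.
def risk_score(posts, keyword_weights):
    """Score an account by weighted keyword frequency across its posts."""
    score = 0.0
    for post in posts:
        for word in post.lower().split():
            # Words absent from the (analyst-chosen) keyword list add nothing.
            score += keyword_weights.get(word, 0.0)
    return score / max(len(posts), 1)  # normalize by number of posts
```

Even this toy version shows the core problem: the score depends entirely on who chooses the keyword list and the weights, precisely the discretion the contracts leave undefined.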
These directives from DHS raise a host of concerns. While one of the University of Alabama’s main tasks was to identify “pro-terrorist” accounts, the contract does not define the term, vesting both the university and DHS with significant discretion in identifying the attributes that would justify the designation. There are also no apparent parameters for what would “link” someone to such a pro-terrorist account. Linked accounts could potentially include people who are far removed from the account in question or those whose interactions were limited to a single instance, such as “liking” a Facebook post or retweeting a tweet. The contract also fails to explain or define what a “risk score” would measure — whether the risk of being a terrorist, being pro-terrorist, committing a violent act, or something else — nor does it explain how automated social media analysis would be an appropriate tool to make that determination. The focus on automation is reminiscent of a discarded Trump administration proposal, the Extreme Vetting Initiative, which would have used automated means to scan the open internet, including social media, to determine which applicants for immigrant visas posed a risk of committing a criminal or terrorist act or would contribute to the national interest. As a number of experts in machine learning and other automated decision-making techniques advised, any such system would be both “inaccurate and biased.”
Under the system contemplated by Project Night Fury, it appears that social media users could be labeled as threats in the absence of any evidence of criminal activity or planning, or based on a misinterpretation of innocuous behavior. Given the decades-long history of discriminatory surveillance by DHS and other government agencies, including on social media, there is an acute risk that the characteristics deemed relevant would disproportionately sweep in members of minority communities, especially Muslims, as civil society organizations also warned in the case of the Extreme Vetting Initiative. Indeed, though the contract generally refers vaguely to “pro-terrorist” threats, one of the deliverables is “reporting of pro-Jihad Twitter accounts and related data and statistics,” highlighting the likely focus on Muslim users (as well as the widespread misunderstanding of the term “jihad”). Such unfettered monitoring could also be used to surveil activists and others engaging in protest. DHS’s history of labeling dissent as a threat to justify its surveillance of constitutionally protected activities illustrates how easily the capabilities DHS strived to develop through Night Fury could be abused.
The second category of tasks involved developing ways to infer users’ location without relying on available location metadata. The potential methods for doing this included using keywords and hashtags — presumably to look for other clues to location — as well as “proven location influencer accounts,” which could refer to social media accounts whose followers tend to be confined to a particular location, such as a local mayor. As part of this process, the university was to develop techniques to classify the account holder’s language and, by extension, their “likely region.” DHS also indicated that it might identify “regions of particular interest” for the university to focus on in developing the location inference capability, raising questions about what regions it wanted to target and what proxy characteristics might be used to determine who was in those regions.
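The contracts do not describe how language classification would map to a “likely region.” A purely illustrative sketch (the stopword lists and region labels below are invented for this example, not taken from any DHS document) shows how crude such an inference can be:

```python
# Hypothetical sketch: guess a "likely region" from which language's common
# words dominate a user's posts. Real language identification is far more
# sophisticated; these word lists and region labels are illustrative only.
STOPWORDS = {
    "en": {"the", "and", "is", "of"},
    "es": {"el", "la", "y", "de"},
}
REGION = {"en": "anglophone", "es": "hispanophone"}

def likely_region(posts):
    """Return a region label based on simple stopword counts, or None."""
    counts = {lang: 0 for lang in STOPWORDS}
    for post in posts:
        for word in post.lower().split():
            for lang, stops in STOPWORDS.items():
                if word in stops:
                    counts[lang] += 1
    best = max(counts, key=counts.get)
    return REGION[best] if counts[best] > 0 else None
```

The example illustrates why language is a weak proxy for location, and why relying on proxy characteristics to place users in “regions of particular interest” risks sweeping in entire linguistic communities regardless of where their members actually live.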
Last, the University of Alabama agreed to work on creating automated tools to discern whether a particular social media account is a bot “programmatically generated to exert influence” to spread “terrorist propaganda.” The tools were also meant to detect foreign influence campaigns. Though the contract does not indicate what kinds of techniques it envisioned the university would develop, DHS listed “intermediary techniques” such as evaluating ratios of “Tweets:Friends:Followers:Likes” that are “statistically unlikely.”
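The contract does not explain what would make a ratio “statistically unlikely.” One common way to operationalize that phrase, offered here only as an illustration (the z-score method and threshold are assumptions, not DHS’s documented approach), is to flag accounts whose ratios are outliers relative to a sample population:

```python
# Illustrative sketch of the "statistically unlikely ratio" idea the contract
# mentions; the z-score test and threshold are assumptions, not DHS's method.
import statistics

def unlikely_ratio(account_ratio, population_ratios, z_threshold=3.0):
    """Flag an account whose ratio (e.g., tweets to followers) is an outlier."""
    mean = statistics.mean(population_ratios)
    stdev = statistics.stdev(population_ratios)
    if stdev == 0:
        return False  # no variation in the population; nothing is an outlier
    z = abs(account_ratio - mean) / stdev
    return z > z_threshold
```

A fixed cutoff like this inevitably trades false positives against false negatives, and nothing in the contract indicates how such errors would be identified or corrected.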
For all these activities, the contract makes only passing references to protections for privacy, civil rights, and civil liberties and does not erect safeguards to ensure that the project’s activities do not infringe upon Americans’ constitutional rights. The privacy threshold analyses also do not describe any safeguards, but they do state that DHS would collect a wide range of sensitive information, including account names, emails, phone numbers, pictures, and posts from members of the public. As demonstrated by the complaint that led to the inspector general’s investigation, the project evidently raised eyebrows for collecting reams of sensitive information, potentially without implementing effective oversight or measures to secure the information collected.
While Night Fury was evidently shuttered, it is unclear whether any of its contemplated capabilities were developed or shared with DHS. The project illustrates our ongoing concerns about the use of social media by DHS and other federal agencies to make high-stakes judgments in the absence of any concrete evidence of efficacy or robust guardrails to protect against discrimination and overreach. We will release any additional information we receive about the Night Fury project.