
Tightening the National Security Ratchet

In national security policy, U.S. intelligence agencies often implement new measures following an emergency without evaluating whether those measures actually work or improve our national security.

May 14, 2015

Cross-posted on Just Security.

A ratchet is a device that employs mechanical impediments to allow movement in only one direction. As such, it is a useful metaphor for national security policy, where restrictive measures taken in response to emergencies quickly become normalized, preventing a return to the pre-emergency state. As new threats inevitably emerge, new restrictions are employed, creating ever-tightening security practices, regardless of their necessity or efficacy. One impediment preventing a release of the national security ratchet is the failure to properly evaluate the effectiveness of new powers granted. Another is excessive secrecy that agencies manipulate to avoid accountability. Two recent evaluations of the FBI’s counterterrorism activities demonstrate these phenomena.

The first is a report by the 9/11 Review Commission, published last March, which examines the FBI’s progress in implementing the 2004 recommendations of the 9/11 Commission (a different group) and transitioning into a “threat-based, intelligence driven organization.” [Full disclosure: the 9/11 Review Commission interviewed me during its evaluation.] Despite lauding the FBI’s efforts to reform itself after the 9/11 terrorist attacks, the Review Commission’s report “highlights a significant gap between the articulated principles of the Bureau’s intelligence programs and their effectiveness in practice.” One might think that such a disappointing finding after 11 years of concerted effort toward reform would lead the Review Commission to call for a serious re-evaluation of the FBI’s post-9/11 counterterrorism tactics and authorities. But that’s not how the national security ratchet works. The national security ratchet always demands that the agencies be given more authority, with the hope they’ll use it better.

In fact, the government rarely employs standard social science research methodologies to evaluate the effectiveness of its security methods. Dr. Cynthia Lum of George Mason University’s Center for Evidence-Based Crime Policy has documented this gap in a systematic review of terrorism studies she conducted for the Campbell Collaboration and in her book, Evidence-Based Counterterrorism Policy.

Only seven of the 20,000 terrorism studies Lum identified evaluated the effectiveness of counterterrorism measures using moderately rigorous methods, leaving the government and the public with little evidence about what works and what doesn’t. The reasons agencies fail to test their methods are complex, but Lum suggests the reluctance may arise from the fear of discovering that invested efforts and resources have been wasted.

The 9/11 Review Commission’s investigation similarly failed to utilize rigorous social science research methods, examining only five FBI counterterrorism investigations, a statistically insignificant sample: David Headley’s participation in the 2008 attacks in Mumbai, India; Nidal Hasan’s attack at Ft. Hood, Texas; Faisal Shahzad’s failed attempt to bomb Times Square; Najibullah Zazi’s interdicted plan to attack the New York City subway; and Tamerlan Tsarnaev’s Boston Marathon attack. Remarkably, though the FBI successfully prevented only one of these attacks, the Review Commission declared that the existing legal authorities provided in the Patriot Act and the Foreign Intelligence Surveillance Act, among others, were “essential to the investigations in each case.” The Review Commission then argued the FBI “should ensure Congress is aware of the critical value of these programs as it considers retaining, refining, and expanding the Bureau’s authorities as the threat evolves” (my emphasis). In other words, despite finding that the FBI failed to satisfy the 9/11 Commission’s recommendations, the Review Commission concluded Congress should continue giving the FBI more authority, in the hope that the Bureau will become better.

Other evidence presented in the Review Commission’s report tends to argue against the effectiveness of these expanded surveillance authorities. The Review Commission quoted the Webster Commission’s finding that the FBI’s investigation of Nidal Hasan was hampered by “a ‘crushing’ volume of data.” Similarly, the Review Commission’s discussion of the FBI’s failure to heed repeated tips that David Headley was a terrorist raised “the important question faced by all intelligence agencies … [of] how to scan and assess voluminous amounts of collected information strategically … .” And despite the FBI’s expanded investigative authorities and electronic dragnets, Headley, Zazi, and Shahzad were able to travel overseas to terrorist training camps and back to the U.S. without detection. Yet questions about whether the FBI’s expanded intelligence collection was overwhelming agents with useless data and false leads went largely unexamined.

Likewise unexamined were the negative consequences of the FBI’s expanded authorities for the privacy and civil liberties of millions of ordinary Americans not suspected of any wrongdoing. Continuing to do more of the wrong things won’t make the FBI better. The Review Commission did not examine whether cheaper, more focused, and less intrusive methods might have been equally or even more effective.

Lum argues that requiring evidence-based counterterrorism research is essential to effective policy: it provides an objective means to enable fiscal responsibility and democratic accountability, essential elements of effective governance.

The second recent report examining FBI counterterrorism methods reveals how government agencies use secrecy to maintain or expand new authorities despite evidence of ineffectiveness. The New York Times recently obtained a less-redacted copy of a 2009 combined inspectors general report on what that report called the “President’s Surveillance Program” (PSP), essentially the extra-legal electronic surveillance programs later authorized by the FISA Court under new interpretations of certain Patriot Act provisions, and by Congress through the FISA Amendments Act.

The report made public in 2009 included a section assessing the effectiveness of the PSP. The Justice Department Inspector General mentioned an internal FBI survey evaluating the program, quoting FBI leadership as saying that, based on the survey results, they determined the program was “of value.” The report said other agents were “generally supportive” of the programs “as one tool among many” that could “help move cases forward.” The IG, however, expressed a healthy skepticism of these views, documenting how the excessive secrecy surrounding the programs prompted complaints from some FBI agents that PSP-derived tips did not provide enough information to prioritize the leads appropriately. But a reader of the 2009 report would likely conclude that FBI officials had reason to believe the PSP was valuable to their counterterrorism mission.

It turns out the FBI did something extremely unusual: it scientifically evaluated how helpful the leads actually were to counterterrorism investigations. The newly declassified version of the IG report describes a 2006 FBI assessment that involved 30 FBI analysts and an FBI statistician who tracked a statistically significant number of PSP leads disseminated through 2005 to evaluate their worth. They found only 1.2 percent made a “significant contribution” to FBI counterterrorism efforts. Moreover, several of the significant tips related to subjects of ongoing FBI investigations, raising the question of whether a more focused approach would have proven a more efficient use of resources. A second study focusing on leads coming from the PSP email metadata dragnet found no significant contribution to any FBI terrorism investigation. The results of these surveys were redacted from the 2009 public report, not to protect national security, but apparently to shield these programs from accountability. FBI leadership’s determination that the PSP leads were “of value” seems particularly specious and self-serving given this hard data, but this claim was actually modest compared to what the Justice Department later told the FISA Court about the value of just one aspect of the PSP, the domestic telephone metadata collection.

In 2006, the FISA Court authorized the domestic telephone metadata portion of the PSP under a broad new interpretation of Section 215 of the Patriot Act, but imposed specific minimization procedures limiting the government’s use of the data. In 2009, however, the government advised the FISA Court that it had routinely violated these procedures. Judge Reggie Walton demanded more information and stricter compliance with the FISA Court-mandated procedures, but ultimately allowed the program to continue based on “the government’s explanation, under oath, of how the collection of and access to such data are necessary to analytical methods that are vital to the national security of the United States.” Later analysis would determine that the telephone metadata program never identified a single terrorist or stopped a terrorist plot.

So even in the rare instances where agencies properly evaluate their programs, secrecy allows them to mislead their overseers and avoid the public accountability that might release the national security ratchet.

Dr. Lum and her colleagues at the Center for Evidence-Based Crime Policy are developing tools to objectively evaluate law enforcement policies and practices. If Congress is serious about making the FBI an intelligence-led organization, it should compel the Bureau to embrace evidence-based research methods for evaluating its counterterrorism and law enforcement policies. Security resources should go to programs that work, not to protecting programs that don’t.