
Social Media Monitoring of Students Isn’t Making Schools Safer. The Risks Outweigh the Rewards.

New tools to monitor and report on “suspicious activity” in schools treat children like potential suspects without making them any safer.

Cross-posted from the Sun Sentinel

In response to last year’s Parkland school shooting, lawmakers in Florida this month, over the objections of some of their colleagues, passed a bill that would open the door to arming teachers. But the same bill, which Gov. Ron DeSantis signed into law last week, contained another dangerous idea: new ways to monitor and report on “suspicious activity” in schools, an initiative that builds on a failed model from the war on terror. This approach would turn our children into potential suspects without making them any safer.

It’s no surprise that states and schools are turning to surveillance tools — federal agencies are promoting them as the solution to school shootings and other ills. The Department of Homeland Security has issued a “Guide for Preventing and Protecting Against Gun Violence” for K-12 schools that says little about guns. It instead encourages schools to have teachers and students report suspicious behavior through a post-9/11 program called the Nationwide Suspicious Activity Reporting Initiative.

The Department of Education has advanced a similar approach, which is also found in new bills on threat assessments recently introduced in Congress.

Suspicious activity reporting was developed in the wake of the September 11 attacks. It encourages state and local police to act as the eyes and ears of federal counterterrorism officials, reporting undefined “suspicious activity” they spot in the course of their duties. These reports are sent to fusion centers — shared spaces for federal, state, local, and tribal law enforcement officials, along with private sector partners — where they ostensibly facilitate the flow of information and feed the centers’ own analyses of suspicious activity.

But in more than 15 years of operation, this system hasn’t made us safer, and it isn’t likely to make schools safer either. A 2012 report from a two-year, bipartisan Senate investigation of fusion centers concluded that the system had “yielded little, if any, benefit to federal counterterrorism intelligence efforts.” The reports the centers produced were “shoddy,” “rarely timely,” and consisted of “predominantly useless information.”

The Senate investigation also found that the “suspicious” activities reported through fusion centers frequently had no connection to violence or criminality. Muslims, in particular, were singled out for suspicion: a DHS officer flagged a marriage seminar held at a mosque as suspicious, while a North Texas fusion center advised keeping an eye out for Muslim civil liberties groups and the individuals and organizations sympathetic to them.

Nevertheless, school districts across the country are turning to online surveillance tools to suss out suspicious activity. The Brennan Center’s review of a government contracting database showed that as of 2018, at least 63 school districts had purchased social media monitoring software (the true number is likely far higher, since inclusion in the database is voluntary), and at least seven school districts in Florida have purchased such tools since 2015. No vendor selling this software has been able to point to empirical evidence that its product can reliably predict violence.

In fact, it is difficult for computers to interpret posts on Facebook and Instagram reliably. Programs that rely on keywords will inevitably capture reams of irrelevant information: the police in Jacksonville, Florida, discovered that flagging the word “bomb” turned up not early signs of threats but posts describing pizza or beer as “the bomb” (meaning excellent). Natural language processing programs, which attempt to discern the meaning of social media posts, are supposed to differentiate between “bomb pizza” and a bomb threat, but these tools don’t work well in practice, especially when it comes to posts from members of minority groups or non-English speakers.
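
To make the false-positive problem concrete, here is a minimal sketch of the kind of naive keyword filter described above. It is a hypothetical illustration in Python, not any vendor’s actual product; the flagged terms and sample posts are invented for the example.

# A minimal, hypothetical keyword filter. The flagged terms and sample
# posts are invented for illustration only.
FLAGGED_KEYWORDS = {"bomb", "gun", "shoot"}

def flag_post(text: str) -> bool:
    """Return True if the post contains any flagged keyword."""
    words = {word.strip(".,!?").lower() for word in text.split()}
    return bool(words & FLAGGED_KEYWORDS)

posts = [
    "That pizza last night was the bomb!",          # harmless slang, but flagged
    "Meet at the gym to shoot hoops after school",  # basketball, but flagged
    "Excited for the chemistry quiz tomorrow",      # not flagged
]

for post in posts:
    print(flag_post(post), "-", post)

A filter this blunt flags the first two harmless posts, which is exactly the kind of noise the Jacksonville police ran into; telling slang from a genuine threat requires context that keyword matching cannot see.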

These difficulties will surely be magnified when it comes to teens, who are known to use coded language to keep grownups from catching on. And in the current environment of fear, schools can easily overreact to posts brought to their attention: this month, two students sued a New Jersey school that had suspended them for posting Snapchat pictures of legally owned guns, with no suggestion of a threat.

Moreover, just as Muslims are tagged with the terrorism label, children of color will too easily be tarred as criminal. School discipline already falls disproportionately on African American and Latinx youth, regardless of the severity of the offense, so there is a clear risk that their online activities will be deemed suspicious and reported under the system federal officials recommend. With fusion centers in the mix, children could become the subjects of law enforcement investigations on the basis of prejudice rather than proof, with an inartful post potentially stored in FBI and police databases.

Schools have a responsibility to explore new ways to keep children safe, and they have always watched students in hallways and classrooms. But these new monitoring tools cast a far wider net, sweeping in a vast range of data and acclimating children to a surveillance state, with little evidence of effectiveness and particularly high stakes for children of color. We should put the brakes on programs that treat children as potential suspects and instead invest in initiatives that might actually make schools safer: more resources for mental health and counseling, ensuring that adults keep guns locked away, and, of course, the elephant in the room: gun control. Continuing down the road we’re on would simply repeat past mistakes, this time with kids.