
Monitoring Kids’ Social Media Accounts Won’t Prevent the Next School Shooting

Invading students’ privacy isn’t the solution.

Cross-posted on the Washington Post

The Parkland, Fla., school shooting has reignited the national conversation on what can be done to prevent such tragedies, which seem to occur with frightening regularity. One option, which is already used by many schools and will probably be adopted by more, is to employ companies that monitor students' social media feeds to flag threats of violence, as well as behavior such as bullying and self-harm.

Miami-Dade County's school system has asked for $30 million in upgrades that include "advanced monitoring of social media," while schools in California, Ohio, Tennessee and Virginia have indicated that social media monitoring, including by third-party companies, is a key security feature.

But schools should think long and hard before they go down this path. There is little evidence that such monitoring works, and these practices raise plenty of questions about privacy and discrimination.

Nikolas Cruz, the suspected perpetrator of the Parkland shooting, hardly presents a case for schools to proactively check social media. If anything, his case shows that people already alert law enforcement when they see genuinely threatening material online. Cruz was reported to the FBI and local police at least three times for disturbing posts; one call to the FBI warned that he might become a school shooter, while a separate call flagged a YouTube post saying that the user wanted to become a "professional school shooter" (although the poster wasn't identified as Cruz until after the shooting).

And Cruz's explicit declaration of intent is the exception, not the rule, which means monitoring the Internet wouldn't usually turn up such warnings. Our informal survey of major school shootings since the 2012 Sandy Hook killings in Newtown, Conn., shows that only one other perpetrator's social media accounts indicated an interest in school violence: Adam Lanza, the Newtown shooter, posted in discussion forums about the Columbine high school shooting and operated Tumblr accounts named after school shooters. These postings were not a secret, and while viewers at the time may not have known whether to take the threats seriously, it is hard to imagine in the current climate that his posts would not be reported to the authorities — as they should be.

Generally, school shooters' online profiles — which wind up being extensively analyzed in the wake of attacks — reveal little that sets them apart from other teenagers. The Facebook page for the perpetrator of a 2014 shooting in Troutdale, Ore., is typical. It showed that he liked first-person shooter and military-themed games like "Call of Duty," in addition to various knife and gun pages. Meanwhile, the official "Call of Duty WWII" Facebook page boasts nearly 24 million followers, while over 1.3 million people have "liked" the Remington Arms Facebook page.

An algorithm trawling the Web for people who like violent video games or firearms would be swamped with far more hits than any law enforcement agency or school administrator could conceivably review. The same would be true of any program that looked for words like "gun," "bomb" or "shoot," as the Jacksonville, Fla., police department discovered the hard way when its social media monitoring tool — while producing zero evidence of criminal activity — flagged comments about crab burgers, pizza or beer being described as "bomb," or excellent. (It also caught two uses of the phrase "photo bomb.")

Social media monitoring tools can also result in discrimination against minority students. While there is little publicly available information on what such tools look for, it is likely that — much like the equivalent tools used by law enforcement agencies — they will incorporate biases. A recent ACLU report showed that the Boston Police Department's social media monitoring efforts contributed nothing to public safety while searching for terms like "Ferguson" and "#blacklivesmatter," as well as terms likely to be used by Muslim users, like "#muslimlivesmatter" and "ummah," the Arabic word for community.

There is also substantial evidence to suggest that children of color, especially those who are Muslim, would be treated as dangerous and perhaps subject to extra monitoring, despite the fact that the majority of school shooters have been white. Take the case of Ahmed Mohamed, the Muslim teenager who brought a homemade clock to his Dallas-area high school and was promptly arrested on the suspicion that it concealed a bomb.

Children of color appear likely to be treated more harshly in general, in light of research showing that black children experience more punitive school discipline from preschool through high school — even when their white peers break the same rules. This appears to play out online as well: When an Alabama school hired an ex-FBI agent to scour students' social media accounts, 86 percent of the students expelled as a result were black, in a school district that was only 40 percent African American.

As many Americans cheer the Parkland shooting survivors for their political activism, it is important to recognize the chilling effect of ongoing surveillance. While students' privacy and free speech rights may be diminished when using school WiFi networks and school-issued devices, social media monitoring extends into their out-of-school social and recreational lives. Given that 92 percent of American teens go online daily and 24 percent are online almost constantly, monitoring programs can operate like listening devices that record every utterance and pass it on to school administrators. Yes, this scrutiny may on occasion reveal risky behavior that requires intervention. But far more often, it will squelch young people's ability to express themselves — and probably drive conversations to communications channels that cannot be easily monitored.

This is not to say that schools should never look at students' Facebook posts. But they should generally do so only when there is a reason — for example, when a student or parent has flagged concerning behavior or when the school is investigating online harassment or bullying. Every school must have in place policies available to parents, teachers and students specifying when it will look at social media postings. Such policies should be narrowly tailored to avoid impinging on the privacy and free speech rights of students, and they should limit the sharing of data with third parties and include procedures for deleting information when a child graduates or leaves the school, as well as safeguards to ensure that children of color are not unfairly targeted.

In the wake of yet another school shooting, Americans are understandably looking for ways to keep students safe. We should focus our attention on measures that have been proved to work, such as sensible gun controls and ensuring that parents and peers know whom to contact to report threats and to receive help, rather than expensive tools that are unlikely to make us secure but carry substantial costs for the very children we are trying to protect.
