This piece was originally published by Tech Policy Press.
In the wake of the horrific shootings in Buffalo, New York, and Uvalde, Texas, much of the national conversation has focused, appropriately, on gun control measures. Some commentators and politicians have also, however, proposed monitoring social media to detect threats of violence and intervene before the next deadly attack.
New York Gov. Kathy Hochul, for example, issued an executive order days after the Buffalo shooting that directs the state police to establish a unit within the New York State Intelligence Center that will be tasked with surveilling social media. Its mission will be to identify “online locations and activities that facilitate radicalization and promote violent extremism,” an extremely vague mandate. This follows the governor’s promise in her State of the State speech earlier this year to hire additional analysts to monitor social media every day and to keep tabs on “the chatter” online, for which over half a million dollars was allocated in the latest state budget. Some pundits and candidates have echoed this call, urging an increase in social media surveillance and even the creation of a department specifically focused on observing the platforms.
Many people who commit violence undoubtedly use social media. When nearly three-quarters of American adults and over four-fifths of teens are on at least one social media platform, it is no surprise that those numbers would include individuals contemplating or even planning violence. Indeed, recent reporting has revealed that the Uvalde shooter made threats about sexual assaults and school shootings on the social media platform Yubo — and that the users who saw those threats reported them to the platform not once but many times, to no avail. Had the platform passed those warnings on to law enforcement, officers could have scrutinized them in more detail and developed an intervention plan that might have averted the catastrophe.
That is a far different scenario, however, than engaging in the kind of unfettered social media monitoring that Hochul appears to be setting in motion. There is simply no proof that widespread social media monitoring reliably works to avert threats. In fact, the Uvalde school district’s experience suggests exactly the opposite: the district had contracted with the monitoring company Social Sentinel (though the precise timing is unclear), and it still failed to foresee and prevent the deadliest school shooting since the Sandy Hook slayings in 2012.
If this were the worst one could say about these technologies — that they’re not particularly effective — then perhaps there would be little harm in adding them to the school safety arsenal. That seems to be the understandable attitude of many of the school officials using them: with money tight, school counselors in short supply, and an exploding crisis in youth mental health, why not pay a relatively small sum to monitor online activity, just in case?
As a Human Rights Watch researcher has pointed out, however, much as schools would reject the use of “toxic materials to build classrooms,” they should pause before rolling out “unproven, untested surveillance technologies on children.” In fact, not only are these tools untested, but their potential harms are well known.
Research from the Center for Democracy and Technology shows that students who know they are being monitored are reluctant to express themselves openly, chilling creative speech and expression and fostering young people’s expectation that surveillance is the norm. When students’ posts are flagged for potential action by school officials, history suggests that youth of color, disabled students, and other marginalized youth are disciplined at disproportionate rates and with disproportionate severity.
Social media is also notoriously contextual, meaning that tools looking for keywords relating to harm are likely to be swamped with messages that have little relation to legitimate threats. Youth are especially adept at concealing the meanings of their communications. Automated tools directed at videos, images, and audio have substantial weaknesses as well, raising further questions about the usefulness of these technologies at scale. Youth who do not speak English as their first language, or who are members of marginalized or insular social groups, are particularly likely to find their posts flagged or simply misunderstood by these tools. Additionally, surveillance tools are always vulnerable to mission creep: police have deployed social media monitoring technologies against activists and protesters, and it would be easy for districts to use similar tools to track and punish students protesting discriminatory policies.
Many have noted that some of the Uvalde shooter’s most disturbing messages were sent privately via various social media platforms, from Yubo to Facebook to Instagram, with observers rightly pointing out that those messages would not have been visible to monitoring tools in any event. While this is certainly another reason that these tools are not the silver bullet some may hope for, focusing on this element risks validating the use of widespread monitoring technologies the next time yet another retrospective analysis of a violent act turns up evidence that the perpetrator posted publicly about guns, misogynistic threats, or any of the number of other things that belatedly appear to be clues. It could also justify the use of undercover social media accounts to connect with individuals online — a tactic the New York State Intelligence Center was recently revealed to be using, which raises its own substantial concerns.
Instead, we should focus on shoring up supports for our youth, fostering connectivity and community, ensuring that police follow time-tested standards requiring reasonable suspicion of criminal activity to pursue an investigation, and doing everything we can to keep weapons of mass murder out of the hands of the American public.