Analysis

LAPD Documents Show What One Social Media Surveillance Firm Promises Police

Voyager Labs claims that social media posts and friend lists can predict “extremism.”

Illustration: a Los Angeles police car surrounded by social media notification icons. Credit: Brennan Center for Justice/epantha/werbeantrieb/Anastasiia Konko/Getty

The Brennan Center released documents Wednesday from the Los Angeles Police Department shedding light on the services being marketed by social media monitoring firm Voyager Labs to law enforcement. The records, obtained through a freedom of information lawsuit, illuminate Voyager’s own services and offer a broader window into the typically secretive industry of social media monitoring.

The documents raise serious concerns about how police use of such products threatens First Amendment rights and disproportionately impacts Muslims and other marginalized groups. They also raise questions about whether companies like Facebook and Twitter are living up to their promises to keep surveillance companies from misusing data in ways that violate the platforms’ terms of service.

In its sales pitches, Voyager declares that it can assess the strength of people’s ideological beliefs and the level of “passion” they feel by looking at their social media posts, their online friends, and even people with whom they’re not directly connected. The company also claims that it can use social media and artificial intelligence tools to accurately assess the risk to public safety posed by a particular individual.

Excerpt from Voyager sales pitch. Source: Los Angeles Police Department

To the contrary, however, the posts and activities of those who commit violent acts resemble the activity of countless other users who never engage in violence. Using law enforcement resources to target all social media users who share interests with individuals who have engaged in violence would produce a crushing volume of information and yield thousands or even millions of false positives. And investigations and arrests based on AI-driven conclusions are likely to be deployed disproportionately against activists and communities of color.
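
The arithmetic behind that claim is worth spelling out. Below is a minimal sketch of the base-rate problem in Python, using entirely hypothetical numbers: the user count, prevalence, and accuracy figures are illustrative assumptions, not figures from Voyager or the LAPD.

```python
# Base-rate arithmetic: why flagging "risky" social media users
# overwhelms investigators with false positives.
# All numbers below are hypothetical assumptions for illustration.

users = 100_000_000   # monitored accounts (assumption)
prevalence = 1e-6     # fraction who will actually commit violence (assumption)
sensitivity = 0.99    # tool's true positive rate (generous assumption)
specificity = 0.99    # tool's true negative rate (generous assumption)

actual_threats = users * prevalence                       # 100 people
true_pos = sensitivity * actual_threats                   # ~99 flagged correctly
false_pos = (1 - specificity) * (users - actual_threats)  # ~1,000,000 innocent users flagged

precision = true_pos / (true_pos + false_pos)
print(f"Total users flagged: {true_pos + false_pos:,.0f}")
print(f"Share of flags that are real threats: {precision:.4%}")  # about 0.01%
```

Even granting the tool 99 percent accuracy on both measures, roughly 10,000 innocent users would be flagged for every genuine threat.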

Two case studies that Voyager sent the LAPD illustrate the company’s troubling promises.

The first examines the social media activity of Adam Alsahli, who attacked the naval air station in Corpus Christi, Texas, in May 2020. After the attack, Voyager analyzed Alsahli’s “activities and interactions” and highlighted social media activity it claimed could aid in an investigation of the incident. The company also suggested that Alsahli’s behavior on social media showcased the types of activity that would invite “proactive vetting and risk assessment.”

Excerpt from Voyager sales pitch. Source: Los Angeles Police Department

According to Voyager’s own descriptions, however, its analysis seems to rely heavily on ordinary religious themes and references to Alsahli’s Arab heritage and language:

Excerpts from Voyager sales pitch. Source: Los Angeles Police Department

While Alsahli himself may have been motivated by Islamic extremism, flagging this type of content to identify purported threats would do little more than target millions of social media users who are Muslim or speak Arabic, subjecting them to discrimination online that mirrors their treatment offline. Voyager’s claims that its tools offer “immediate and complete translation” of Arabic and “100 other languages” are suspect as well: natural language processing tools have widely varying accuracy rates across languages, and Arabic has proven particularly challenging for automated tools. And even literal translation of social media content often misses key cultural context.

Voyager’s claims about its AI tools are similarly tenuous. The service appears to focus on events like terrorism or mass violence, but there is no evidence that the language and images used on social media are predictive of such rare events, whether analyzed by human reviewers or automated tools. Of even more concern, Voyager says it uses AI tools to produce a color-coded risk score signifying the user’s “ties to or affinity for Islamic fundamentalism or extremism,” with no human review involved.
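
To make concrete why an unreviewed, color-coded score is so fragile, consider a purely hypothetical sketch of such a pipeline; nothing here reflects Voyager’s actual system, and the score, thresholds, and colors are invented for illustration.

```python
# Purely hypothetical sketch of a color-coded risk label; this is
# NOT Voyager's actual method. The point: the "risk" color is just
# an arbitrary threshold applied to a noisy model score.

def color_code(score: float) -> str:
    """Map a model's 0-1 'affinity' score to a color band."""
    if score >= 0.7:
        return "red"     # high risk: flagged with no human review
    if score >= 0.4:
        return "orange"  # medium risk
    return "green"       # low risk

# Scores of 0.69 and 0.71 are statistically indistinguishable for a
# noisy text classifier, yet they put users in different categories.
print(color_code(0.69), color_code(0.71))  # orange red
```

Where the thresholds sit, and what the underlying score actually measures, are design choices invisible to the officer reading the color.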

While human involvement would hardly solve the myriad underlying issues, Voyager offers no proof for its assertion that an automated system could, with any accuracy, gauge “within minutes” whether someone subscribes to a particular ideology. Indeed, this approach is reminiscent of the repeatedly debunked assertion that there is a common or identifiable “path” to radicalization.

In fact, Voyager’s own language exposes a fatal weakness in its approach: fundamentalism and extremism are not illegal, and an “affinity” for them is not evidence of planning for violence. Even an accurate categorization of individuals with “ties” to “extreme” ideologies, whether underpinned by Islamic or any other beliefs, would provide no actionable information to law enforcement.

Excerpt from Voyager sales pitch. Source: Los Angeles Police Department

A second case study, on Muslim Brotherhood activist Bahgat Saber, who urged his social media followers to infect officials at Egyptian consulates and embassies with Covid-19, reveals both the astonishing amounts of information available through social media and the tenuous connections that Voyager considers evidence of potential extremism. For this study, Voyager began by extracting information from the public profiles of Saber’s nearly 4,000 Facebook friends. Without any suspicion that these individuals had done anything illegal, Voyager pulled their information into a searchable, monitorable dataset:

Excerpt from Voyager sales pitch. Source: Los Angeles Police Department

When Voyager concluded that none of Saber’s Facebook friends could be tagged as “extremist threat[s],” it went a step further, analyzing the connections of Saber’s friends — people to whom he had no direct connection — and suggesting that those individuals’ ideologies were indicative of Saber’s.

Excerpts from Voyager sales pitch. Source: Los Angeles Police Department

The problems with this logical leap are obvious: social media contacts range from distant relatives to classmates and acquaintances from years past to people encountered at a single event. These contacts may include individuals with whom a person has not spoken in years, or has spoken only a handful of times. That someone even knows these contacts’ ideologies, much less shares them, is far from certain.
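
The scale of this second-degree sweep is also easy to underestimate. Here is a back-of-the-envelope sketch: only the 4,000-friend figure comes from the documents; the average friend count and the overlap rate are assumptions.

```python
# How many people a friends-of-friends analysis can sweep in.
# Saber's ~4,000 Facebook friends come from the Voyager case study;
# the average friend count and overlap fraction are assumptions.

direct_friends = 4_000  # from the LAPD documents
avg_friends = 300       # hypothetical average contacts per account
overlap = 0.5           # hypothetical share of duplicate/shared contacts

second_degree = direct_friends * avg_friends * (1 - overlap)
print(f"People two hops from Saber: ~{second_degree:,.0f}")  # ~600,000
```

Under these assumptions, a single target’s network analysis puts roughly 600,000 people, none suspected of anything, into the pool whose “ideologies” are treated as evidence about him.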

The emphasis in Voyager’s materials on Muslim users’ activity also suggests serious bias and a disregard for data indicating that the far right presents the greatest threat of extremism in the United States. For example, while some Voyager materials describe its ability to assess ideological strength in general terms, others, like the Alsahli case study, suggest these tools are targeted at Muslim content. And it is telling that both of Voyager’s test cases analyze Muslim men.

Finally, the fact that Voyager — like other companies such as Media Sonar and Dataminr, which we’ve previously reported on — is able to harvest social media information years after the major platforms barred the use of their sites for surveillance raises the question of whether the platforms are doing enough to identify and block these companies.

We obtained these documents through our public records litigation against the LAPD, which we pursued (with the assistance of law firm Davis Wright Tremaine) as part of our ongoing effort to increase transparency of and accountability for law enforcement’s monitoring of individuals and groups on social media. It’s time for change throughout the ecosystem of social media surveillance, from the companies developing these tools to the platforms failing to aggressively police them to the law enforcement agencies seeking them out and purchasing them.