Across the country, police departments have adopted automated software platforms driven by artificial intelligence (AI) to compile and analyze data. These data fusion tools are poised to change the face of American policing; they promise to help departments forecast crimes, flag suspicious patterns of activity, identify threats, and resolve cases faster. However, many nascent data fusion systems have yet to prove their worth. Without robust safeguards, they risk generating inaccurate results, perpetuating bias, and undermining individual rights.
Police departments have ready access to crime-related data like arrest records and crime trends, commercially available information purchased from data brokers, and data collected through surveillance technologies such as social media monitoring software and video surveillance networks. Police officers analyze this and other data with the aim of responding to crime in real time, expeditiously solving cases, and even predicting where crimes are likely to occur. Data fusion software vendors make lofty claims that their technologies use AI to supercharge this process. One company describes its tool as “AI providing steroids or creating superhuman capabilities” for crime analysts.
The growing use of these tools raises serious concerns. Data fusion software allows users to extract volumes of information about people not suspected of criminal activity. It also relies on data from systems that are susceptible to bias and inaccuracy, including social media monitoring tools that cannot parse the complexities of online lingo, gunshot detection systems that wrongly flag innocuous sounds, and facial recognition software whose determinations are often flawed or inconsistent — particularly when applied to people of color.
Police use of technology can be beneficial as well. Body-worn cameras help shed light on police interactions with civilians. AI-enhanced license plate readers (LPRs) reduce errors, helping police more reliably match license plates associated with stolen vehicles and other crimes. Yet police departments are adopting ever more powerful data fusion technology with little testing, proof of efficacy, or understanding of the dangers it poses to civil rights and civil liberties.
Public information about newer data fusion tools is scant, but two police tools — and the concerns frequently raised about them — offer insight into this technology’s capabilities and implications: predictive policing programs, which purport to predict where and when a crime is likely to occur and even who is likely to commit it; and social media analysis software, which allows police to mine people’s online presence and identify connections, locations, and potential threats. Both technologies pose substantial risks, from perpetuating bias to chilling, and even enabling the targeting of, constitutionally protected speech and activity; independent studies assessing how much they contribute to public safety are scarce.
AI-enabled data fusion capabilities amplify these concerns. Without oversight and transparency measures in place, the use of these tools risks infringing on civil rights and civil liberties by magnifying bias and inaccuracy and by encouraging bulk data collection.