Connected devices add eyes and ears around our homes, inside our cars, and on our bodies. Through sensors, cameras, microphones, and other technology, they are constantly collecting data and sending it to company servers for analysis and storage. Police have already come calling for this information generated by the so-called “internet of things,” with significant privacy implications for Americans.
Google’s Nest Hello doorbell uses facial recognition to tag and store a roster of “familiar faces” that frequently pass within the camera’s range. Amazon Alexa’s microphones can capture private conversations inside homes and cars. Wearables like Fitbit can track a person’s movements and vital signs.
The deep well of data these devices create allows law enforcement to analyze a person’s proximity to a crime, assess relationships between victims and suspects, and even review recordings of incriminating statements. With the decreasing cost of data storage, information collected by the internet of things can often be retained indefinitely, a practice encouraged by a business model that relies on user data to improve and develop new products.
For example, a California man was charged with the murder of his stepdaughter after her Fitbit revealed that her heart rate spiked significantly, then quickly dropped before stopping, around the same time that a neighbor’s Ring surveillance camera showed the man’s car parked at her house.
At the same time, the Constitution places limits on law enforcement access and overreach that must be balanced against the government’s convenience. In a world where people are subjected to ongoing surveillance by public and private actors, there is an urgent need to update regulatory frameworks and rethink privacy protections to account for the inescapable role of technology companies in everyday life.
Privacy and civil rights concerns
Connected devices raise serious privacy concerns, as they can reveal sensitive information about people’s homes, movements, and interactions with others. As of 2019, nearly 70 percent of American households had at least one such gadget. From connected thermostats to digital assistants, these devices collect information from inside the home — a space explicitly safeguarded by the Fourth Amendment and protected by courts against both technological and physical intrusions.
Cameras inside a home can capture people’s images and movements, digital assistants can capture private conversations, and connected thermostats can track when people enter and leave various rooms. Other devices, such as activity trackers and in-car navigation systems, create detailed records of people’s movements over long periods of time. These records can reveal not only the location of a person’s home or workplace, but also their associations and their participation in constitutionally protected activities such as prayer and protest.
Even devices that monitor public spaces can raise privacy concerns. For example, companies that sell doorbell and outdoor cameras offer users the ability to create “activity zones” to monitor pre-set areas for “familiar faces” or for certain movements. These cameras often monitor outdoor spaces such as driveways or sidewalks. Whenever these cameras detect a person or movement, they capture audio and video footage and send an alert to the device owner. Recordings of the detected activity (sometimes referred to as “events”) can be downloaded and shared with police and to social media at the push of a button.
The proliferation of connected devices provides expansive opportunities for the government to assemble detailed portraits of people’s lives. Many companies offer entire suites of connected devices: Google sells everything from connected cameras to thermostats and activity trackers to digital assistants, and even offers private security monitoring through its partnership with Brinks. Police can further augment data from connected devices with data collected by their own substantial arsenals of surveillance tools. This type of comprehensive tracking would have been unimaginable before the digital age and eliminates practical limits on surveillance, such as the expense of allocating personnel to engage in 24/7 monitoring.
Finally, it is not always possible to opt out of this type of surveillance. A landlord may unilaterally install facial recognition cameras or thermostats, or a homeowners’ association may install license plate readers with little opportunity for domestic or service workers to object to the practice or request that their data be deleted. At least seven states have passed laws requiring utility companies to allow consumers to opt out of smart-meter installation, highlighting the importance of — and barriers to — enabling individual choice.
The proliferation of connected devices particularly threatens the civil rights of communities of color. Tools such as facial recognition, speech recognition, and emotion detection have documented racial biases that limit these technologies’ ability to accurately identify and understand people of color. These biases are driving nationwide conversations about whether such tools should be used by police. But even in jurisdictions that ban police use, prohibitions may not reach private actors, and private entities like homeowners’ associations have a long history of racial discrimination.
Police departments across the country are under scrutiny not just for over-policing but for using surveillance technology to target communities of color. Connected devices offer new ways to obscure those practices: by obtaining data from cooperative landlords or employers, police can sidestep the transparency and accountability controls beginning to take root around the country.
Legal frameworks for access to data from connected devices
The Fourth Amendment protects “persons, houses, papers, and effects” from unreasonable searches and seizures. Over time, the Supreme Court’s application of the Fourth Amendment has evolved in response to “innovations in surveillance tools.” For example, the Court has ruled that law enforcement must obtain a warrant before searching a suspect’s cell phone during an arrest, installing a GPS tracker on an automobile for long-term monitoring, or obtaining historical cellphone location information.
But there are limitations. Under current doctrine, the Fourth Amendment does not protect some types of information that people knowingly or unknowingly expose to the public. Thus, the Supreme Court has ruled that — contrary to empirical research on individuals’ expectations of privacy — the government can pick out sensitive materials from an individual’s trash and conduct aerial surveillance over a back yard without a warrant.
Similarly, the Court takes the view that in some situations, people do not have a reasonable expectation of privacy in information they provide to others, including companies with which they do business. Commonly referred to as the third-party doctrine, this rule has historically allowed police to obtain data such as bank records or a log of dialed telephone numbers without a warrant. However, the Court recently signaled in Carpenter v. United States that this doctrine loses its force where the revealing nature of the information sought bolsters the individual’s privacy interests in the data shared with third parties, and where disclosure is not truly voluntary because the use of a technology such as a cellphone is “indispensable to participation in modern society.”
The Supreme Court has not yet ruled on how the Fourth Amendment applies to data from every connected device, but its guidance in Carpenter will be instructive. The Fourth Amendment’s applicability to a particular device may ultimately depend on a variety of factors, such as where a device is located, the intimacy and comprehensiveness of data that is retained and retroactively searchable by law enforcement, and a person’s ability to avoid having the device collect their data.
While fitness trackers or a car’s GPS system may enable data collection that is “detailed, encyclopedic, and effortlessly compiled” — one of the factors the Court considered — connected cameras or a digital assistant might collect more limited records, depending on their use. On the other hand, a digital assistant or indoor camera’s presence inside a home means that it is at the heart of the Fourth Amendment’s protections. Similarly, a network of license plate readers or doorbell cameras may make it functionally difficult for a person to avoid having their movements comprehensively tracked and retroactively searchable by law enforcement.
Legal protections do not start and end with the Fourth Amendment. Government access to information collected by tech companies is also limited by statute.
Specifically, the Stored Communications Act establishes a process that the government must follow when it seeks certain types of electronically stored data. When it comes to the content of communications, the law requires differing levels of judicial oversight, obligating police to obtain a warrant, court order, or subpoena. Where the data sought does not relate to the content of a communication — for example, information about when and to whom a message was sent — there are generally fewer obligations on the government. The law also allows companies to voluntarily disclose information to law enforcement in an emergency or when the service provider’s own property is in danger.
However, the Stored Communications Act regulates only a subset of service providers, and the data collected by connected devices may not always be covered by the law. While connected devices may collect sensitive information, the way in which that information is collected and the reasons why it is stored may place it outside the scope of the law.
Similarly, the Stored Communications Act may not regulate companies that store electronic communications for their own purposes. This likely includes many makers of connected devices, which collect and store data for a number of reasons, from improving products to user customization.
If police want to obtain real-time communications from connected devices, they will typically need to obtain a special wiretap warrant. In addition to establishing probable cause, this warrant requires a number of more stringent procedures to ensure that individuals’ communications are collected only where strictly necessary. In theory, police could obtain a wiretap order to intercept communications collected by a digital assistant inside a car or a home. There may be practical limitations, however, as there is no obligation for technology companies to build the infrastructure that permits real-time wiretapping.
Finally, state laws may place additional restrictions on the ability to collect data from connected devices in the first place. For example, many states require both parties to a conversation to consent to recording, which may obligate homeowners and companies alike to take steps to obtain the explicit consent of guests whose conversations may be captured by tools like a digital assistant. Similarly, Illinois requires companies to obtain explicit consent before capturing and storing biometric identifiers such as a photo of a person’s face. Google Nest’s privacy statement nods to this requirement without explicitly telling customers about the law. Instead, it tells them that “[d]epending on where you live . . . you may need to get explicit consent to scan the faces of people visiting your home.”
Short of situations where the law clearly requires the government to obtain a warrant or follow another legal process, corporate decisions to disclose data to law enforcement will in most circumstances be governed by their privacy policies.
Privacy policies give companies considerable leeway to disclose user data to the government without following any sort of legal process. In those situations, the decisions to voluntarily share data with law enforcement, to notify users, and to disclose the data sharing in a transparency report largely remain matters of company discretion.
Many companies also maintain separate law enforcement policies in which they provide more detailed instructions for law enforcement seeking access to user data. For example, Ring requires a search warrant or user consent to disclose content information, but may disclose non-content data such as subscriber information, purchase history, and service usage with a “subpoena, search warrant, or other court order.”
Law enforcement access to connected devices in practice
It is increasingly common for law enforcement officers to collect data from connected devices as part of an investigation, and not always under judicial supervision. Asking a person to voluntarily provide data eliminates the need to follow a legal process. Companies like Amazon further simplify the process by building a portal through which law enforcement can request Ring data from Amazon’s customers. Users can disable the request feature, but they are opted in by default.
In New Hampshire, for instance, a man was accused of shooting his brother in the arm in a dispute in a driveway. Footage of the altercation was captured by several Ring cameras owned by neighbors, who provided their footage to police. A judge admitted audio of the incident into evidence, ruling that the defendant should have expected the driveway conversation between himself and his brother to be publicly exposed.
Connected devices have also been used to contradict an account of events. A Pennsylvania woman who alleged she was raped was later charged with making false statements and tampering with evidence after Fitbit data she voluntarily provided to police suggested she had been moving around her home during the time she claimed to be asleep.
In other instances, police have obtained warrants to access data from connected devices. In Arkansas, unusual water usage tracked by a smart meter was used to substantiate claims that a defendant attempted to clean up a murder scene. Prosecutors also sought to obtain a warrant for recordings of the defendant’s Amazon Echo, but the man eventually voluntarily disclosed the recordings.
And in the case of a Connecticut man currently facing trial for the murder of his wife, police have obtained warrants for the victim’s Fitbit and several other connected devices throughout the house that revealed movements and other information contradicting his account of events.
Lack of transparency from tech companies
Despite the proliferation of connected devices and law enforcement’s appetite for the data, many companies still do not publish public transparency reports. While companies like Amazon and Google publish reports revealing disclosures of user data across their entire suite of products, the reports do not break down the number of law enforcement requests by product or specify the type of data provided. This makes it difficult to understand how often police are requesting data from, for instance, a connected doorbell rather than an email account. Companies like eufy, Pioneer, SimpliSafe, BMW, Subaru, and OnStar do not publish transparency reports at all.
Although the numbers are aggregated, transparency reports uniformly show a significant increase in law enforcement requests for data. Amazon’s reports reveal a 264 percent increase in U.S. law enforcement requests from its first transparency report in 2015 through June of 2020. Google’s reports reveal a 109 percent increase in requests from the second half of 2015 to the second half of 2019.
Efforts to address police surveillance must seek transparency and oversight regarding law enforcement’s ability to leverage connected devices and private surveillance systems. Across the country, cities and municipalities are already reining in unaccountable police surveillance, with some even banning the use of certain technologies like facial recognition.
At the same time, the rise in private data collection introduces new complexities. While the majority of connected devices are not part of a central network, homeowners could be asked to stream their footage in real time to a police monitoring center.
This eventuality is already underway in Jackson, Mississippi, where the city plans to launch a pilot program allowing homeowners and businesses to pipe their camera streams directly into the city’s Real Time Crime Center. This pilot program was announced less than three months after Jackson became the first southern city to ban police use of facial recognition technology. And in San Francisco, the police department obtained real-time access to a private surveillance network operated by a business district that it then used to monitor racial justice protesters this past summer.
We are living through a time of significant technological shift, magnified by a proliferation of connected devices that collect and store snapshots of people’s lives, often without genuine consent or buy-in from those being surveilled. In a world where so many aspects of our lives are mediated through third-party service providers, there is an increasing need to reassess the adequacy of constitutional and statutory protections.
By relying on private connected devices, police are able to achieve increasingly comprehensive views into where people go and what they say. Not only is this incompatible with the right to privacy, it can have a chilling effect on other constitutional rights like the right to free expression, as omnipresent government eyes and ears can make people less comfortable with expressing controversial thoughts and beliefs — even within the privacy of a home. These expansive new abilities are also ripe for abuse in light of our nation’s long history of illegal domestic spying, from J. Edgar Hoover’s FBI to the post-9/11 NSA. Moreover, for communities of color, the dangers of discriminatory targeting are present whether they are participating in racial justice protests or simply living in a neighborhood targeted by anti-gang policing.
While transparency and oversight laws for police surveillance are important and commonsense first steps, there is an ongoing need for new regulations that meaningfully guard individual rights and freedoms in the digital age. We will be exploring what those protections might look like in our coming work.