
Closing Argument

What San Francisco’s Killer Robots Debate Tells Us About Policing

Among unanswered questions: How will the courts treat cases that involve police robots?

A San Francisco police officer uses a robot to investigate a bomb threat, in 2008.

This is The Marshall Project’s Closing Argument newsletter, a weekly deep dive into a key criminal justice issue from reporter Jamiles Lartey.

It sounds like something out of a dystopian science fiction thriller — faceless, headless police robots, armed and authorized to kill with the push of a button.

“We all saw that movie…no killer robots,” San Francisco protesters wrote on a picket sign last week, opposing a measure, approved by the city’s Board of Supervisors, that authorized police use of lethal force with department-issued robots.

The backlash was so severe that board members quickly reversed the decision Tuesday, sending parts of the proposed law back to a committee for further review.

But across the country, the debate is just beginning over how much power, if any, police departments should be giving to “robocops.”

Honolulu police last year used Spot, a $150,000 robodog purchased with federal COVID relief money, to take the temperatures of people at a homeless camp. The American Civil Liberties Union criticized the practice as dehumanizing to people looking for shelter.

The New York Police Department similarly leased a futuristic-looking robotic dog from manufacturer Boston Dynamics as part of a test program in December 2020. The department used it in several instances, including a hostage situation in the Bronx. By April 2021, after months of criticism, the department ended its lease and returned the robot, which officials had nicknamed Digidog.

Boston Dynamics and several other robot manufacturers in October condemned attaching weapons to their robots, but other companies have equipped robots with sniper rifles and other weapons.

Despite those failed rollouts, other cities, like St. Petersburg, Florida, seem determined to keep their police robot dogs. And the U.S. Department of Homeland Security is pursuing plans to deploy robot dogs at the border.

Proponents of the equipment highlight the usefulness of robots for tasks like defusing bombs, providing surveillance in hostage situations and, in their most lethal capacity, injuring or killing people to stop them from attacking other human beings.

Lethal police robots are not new. In 2016, Dallas police used a robot armed with a pound of C-4 explosives to kill a man who had fatally shot several police officers.

“We believe that we saved lives by making this decision,” Dallas Police Chief David Brown told CNN shortly afterwards. “And you know, again, I appreciate critics, but they’re not on the ground, and their lives are not being put at risk by debating what tactics to take. And I’ll leave that to them for a later discussion.”

What remains largely untested, however, is how America’s courts will treat police robots in comparison with their human counterparts.

Many of the court-established standards on use of force and qualified immunity — a legal doctrine that generally shields officers from lawsuits for what they do on the job — rely on contact between human police officers and the people they encounter. The qualified immunity test, for example, hinges on what “a reasonable officer” would perceive to be a deadly threat. How will the courts apply that standard to robots, even ones operated by human beings?

And what will happen when the robot malfunctions or otherwise compromises a criminal case? In a Florida murder case last year, police and prosecutors blamed a police-deployed robot for possibly moving a bullet shell casing found at the alleged crime scene.

It’s also unclear how the courts would assess a case in which a robot is involved in a crime, like killing someone unlawfully. Robots cannot form criminal intent, and legal scholars, drawing on doctrines like the command responsibility standard developed at the international tribunal for the former Yugoslavia, have suggested that in cases like these, the robot’s operator is at fault. But what if the operator claims the machine malfunctioned, or that he or she never received proper training on how to use it? And none of these questions account for civil lawsuits in use-of-force cases, or for robots that rely on artificial intelligence instead of direct human control.

Controversy also surrounds the secrecy that law enforcement and prosecutors maintain around sophisticated electronic equipment originally developed for the military. As San Francisco decides what to do about police robots, the California Supreme Court could soon consider whether the public should have access to search warrants and other court records related to police use of cell phone tracking devices. Prosecutors in other states have dropped criminal cases rather than risk exposing proprietary information about cell phone tracking equipment purchased from private companies.

Meanwhile, details of police robot technology are fiercely guarded. Boston Dynamics recently sued rival robodog manufacturer Ghost Robotics, claiming that Ghost used robot response methods too similar to ones covered by Boston Dynamics patents.

With robot manufacturers battling one another, and police facing criticism from city leaders and activists alike, the debates are unfolding in ways academics predicted years ago. “While we cannot anticipate every issue that this technology raises,” a 2016 article in the UCLA Law Review concluded, “we can address many of them now, well before these hypotheticals find their way to our streets.”

Daphne Duret is a staff writer for The Marshall Project. She reports on policing issues across the country and is based in south Florida.