Can a robot kill a human? The question is no longer just science fiction. In San Francisco, the police now want to establish facts on the ground.
It’s a fear that long predates “Terminator”: for as long as there have been robots, people have worried that the machines could one day take a life. The San Francisco police now want to create a legal basis for exactly that. Although the aim is not an automatic killing machine, the project raises questions.
The push is part of a new draft policy from the San Francisco Police Department (SFPD). Until now, there has been no specific rule on whether the department’s existing robots may use deadly force. An earlier draft would have banned this outright, but the police have since deleted the corresponding sentence. Instead of a general prohibition on killing, only one restriction remains: “Robots will only be used to apply deadly force when there is a specific danger to the lives of citizens or officers and it is the best available option,” the new draft states. In plain language, that means: under certain circumstances, robots would be allowed to take a human life.
No decision by the machine
Unsurprisingly, the criticism was fierce. “We live in a dystopian future where we’re debating whether police can use robots to execute citizens without a trial, jury or judge,” civil rights attorney Tifanei Moyer told Mission Local. “It’s not normal. And no legal expert or ordinary citizen should pretend it’s normal.”
The proposal does not even concern the most extreme step: the SFPD’s robots are not autonomous machines; they are still controlled by a human. Unlike the New York City police, for example, which already deploys robots that act autonomously, there is always a human behind every action of the SFPD’s machines, and that human would also have to fire the fatal shot.
The new draft passed the city’s Rules Committee unanimously. Committee member Aaron Peskin justified his vote by saying that the legal situation had simply been unclear before. In addition, the police had explained “that there are scenarios in which the use of deadly force is the only option.”
Unanswered questions remain
How exactly the San Francisco police would have their current 17 robots apply this deadly force is unclear. The manually controlled machines are primarily used to handle dangerous situations, such as defusing explosives. According to police spokesman Robert Rueca, there are “no concrete plans” for how they could be used to exert force.
However, there are already precedents. In Dallas in 2016, police used a robot carrying an explosive charge to kill a barricaded sniper who had previously shot five officers. In Oakland, the police had floated loading live ammunition into a device designed to detonate explosive charges, only to back away from the idea later. Attachments with fully automatic weapons have also been presented that would turn robots into a kind of self-propelled gun.
Ultimately, however, the most difficult question is that of accountability. As with the use of drones in war zones, in a police operation a human would have to make the decision and pull the trigger. But what if a shot goes off accidentally? Or a situation is misjudged? This is exactly what the civil rights organization ACLU warns about: the robots’ limited sensor technology could make misjudgments far more likely, increasing the risk that force will be used “inappropriately or against the wrong targets.” In view of the debate about police violence, not only in the USA, these questions are all the more urgent.
Sources: SFPD draft policy, Mission Local