New River Valley Times

Sunday, December 22, 2024

KILLER ROBOTS HAVE BEEN APPROVED TO FIGHT CRIME. WHAT ARE THE LEGAL, ETHICAL CONCERNS?

Last week, the San Francisco Board of Supervisors voted to allow its police department to use robots to kill suspected criminals. The decision was met with both praise and incredulity.

The decision came six years after Dallas police outfitted a robot with explosives to end a standoff with a sniper who had killed five officers. The Dallas incident is believed to be the first time someone was intentionally killed in the U.S. by a robot. But, judging by the vote from San Francisco, it might not be the last.

What are the legal concerns when governments turn to machines to end human life? UVA Today asked University of Virginia law professor Ashley Deeks, who has studied this intersection, to weigh in. Deeks recently returned to the faculty from serving as White House associate counsel and deputy legal adviser to the National Security Council.

Q. First, what is your reaction to San Francisco’s decision?

A. I’m surprised to see this coming out of San Francisco, which generally is a very liberal jurisdiction, rather than out of a city that is known as being “tough on crime.” But it’s also important to understand what capabilities San Francisco’s robots have and don’t have. These are not autonomous systems that could independently select who to use force against. Police officers will be operating them, even if remotely and from some distance. So calling them “killer robots” could be a little misleading.

Q. When a police officer uses deadly force, he or she is ultimately responsible for the decision. What complications might arise, legally, if a robot does the actual killing?

A. According to the Washington Post, the San Francisco Police Department doesn’t plan to equip its robots with firearms in the future. Instead, this policy seems to envision a situation in which the police could equip a robot with something like an explosive, a taser or a smoke grenade. San Francisco’s policy would still involve a human “in the loop,” since a human would be remotely piloting the robot, controlling where it goes, and deciding whether and when the robot should ignite explosives or incapacitate a suspect. So the link between a human decision-maker and the use of deadly force would still be easy to identify.

Where it could get more complicated is if the robot stops working as intended and accidentally harms someone through no fault of the operator. If the victim or her family sues, there could be issues about whether to hold the manufacturer, the police department or both responsible. But that isn’t such a different question from what happens when a police officer’s gun accidentally fires and injures someone due to a manufacturing flaw.

Q. Aside from the legal questions, what are the ethical questions society will have to face when robots take life? Or are the legal and ethical questions intertwined?

A. The legal and ethical questions are related. Ideally, the legal rules that states and localities enact will reflect careful thinking about ethics, as well as the Constitution, federal and state laws, and smart policy choices. On one side of the balance are the considerable benefits that come from tools that help protect police officers and innocent citizens from harm. Since many uses of deadly force happen because officers fear for their lives, properly regulated and carefully used robots could reduce the use of deadly force because they could reduce the number of situations in which officers are at risk. 

On the other side are concerns about rendering police departments more willing to use force, even when it’s not a last resort; about accidents that could arise if the robotic systems aren’t carefully tested or the users aren’t well trained; and about whether the use of robots in this way somehow cracks open the door to the future use of systems that have more autonomy in law enforcement decision-making.

One novel question that could arise is whether police departments should establish more cautious use-of-force policies when it’s a robot delivering that force, because the robot itself can’t be killed or harmed by the suspect. In other words, we may not want to allow robots to use force to defend themselves. 

Q. You’ve studied how police are more often using artificial intelligence as a crime-fighting tool. And some governments may be developing autonomous weapons systems that can select targets on their own in armed conflicts. Do you see a time when police start considering AI-powered robots to make lethal decisions?

A. A significant part of what militaries do during wartime is identify and kill enemy forces and destroy the enemy’s military equipment. AI tools are well-suited to help militaries make predictions about where particular targets will be located and which strikes will help win the war. There is a heated debate about whether states should ever deploy lethal autonomous systems that can decide on their own who or what to target, but again, the idea is that those systems would be deployed during wartime, not peacetime.

All of this is really different from what police do. Police officers can only use force to gain control of a situation where there’s no reasonable alternative. An officer can constitutionally use deadly force only against a person who might escape arrest for a serious crime or who poses a threat of serious bodily injury or death to the officer or a third person. It’s really hard to imagine that police departments in the United States would or legally could use autonomous robots that would make independent decisions, based on their algorithms, about when to use force.

Q. Do you see the San Francisco decision as an anomaly, or do you expect other cities and police departments to explore this in the future?

A. It’s worth noting that San Francisco’s policy still must pass a second vote and be approved by the mayor, so it’s not a done deal yet. In terms of past examples, as you noted, the Dallas Police Department did something similar in 2016, when it used a robot with an extendable arm to place a pound of C4 explosive near a shooter who had holed up after killing five officers and wounding seven others. The Dallas PD then detonated the C4, killing the shooter.

A lot of police departments around the country own explosive ordnance disposal robots, which they obtained as excess military equipment from the Pentagon. I wouldn’t be surprised if other states and localities decide that now is a good time to clarify their policies on whether or how to use these robots in ways that may kill or injure suspects. Cities may decide whether to adopt policies like San Francisco’s, or they might conclude that they want to see these uses of robots prohibited.

It’s going to be important to have a robust debate about the details of these policies. In which specific circumstances could robots be used to deliver force? How senior must the officials be who approve the use in a given case? How confident must the operators be that the systems are reliable? In addition, it’s going to be important for a range of people to weigh in – not just police departments and civil liberties advocates, but also lawyers, ethicists and citizens.
