Circumventing “Dirty Hands” with Lethally Autonomous Robots.
Explaining Dirty Hands
The concept of dirty hands was brought to the forefront of popular philosophical thought by Michael Walzer in his essay "Political Action: The Problem of Dirty Hands." In it, Walzer explains the dilemma of a political leader confronted with a decision to violate a moral principle (or principles) in order to prevent some disaster. Put differently, it is the moral conflict inherent in doing the wrong thing for the right reasons: committing a moral wrong in order to do "good."
Examples of dirty hands situations typically involve a political leader caught in a moral quandary, between a "rock and a hard place." For example, a terrorist in the political leader's custody is known to have planted a bomb. Setting aside legitimate criticisms of torture and its ability to actually secure reliable information, and assuming that in this instance it can be used successfully, should the political leader torture the terrorist to acquire the information?
Should the political leader torture the terrorist's family if it means the terrorist would divulge vital information?
Dirty hands theory, of course, does not insist that the results be the best possible outcome before action is taken. Torturing terrorists may or may not actually yield useful information, and this is part of the dilemma. To understand dirty hands on a less violent level, think of a person with "scruples" who runs for office and is resisting the temptation to make a deal: government contracts promised to a particular person in exchange for that person's support. If the candidate does not make the deal, the election is lost; if the candidate does make the deal, he or she can pursue the "greater" objectives the supporters desire.
In order for either of these examples to carry the weight that Walzer wishes to impress on the reader, both of these leaders have to be, in so many words, good people. They have to struggle with the decision. There has to be some recognition or understanding that, in order to accomplish some higher goal or to avoid some greater disaster, they will have to violate a moral principle. They have to know that what they are doing is wrong and, this is important, that they are directly responsible for the decision.
Walzer argues that a political leader's ability to feel and/or show guilt is exactly what is needed for the right type of leader to make these kinds of decisions. "Personal anguish sometimes seems the only acceptable excuse for political crimes and we don't want to be ruled by men that have lost their souls" (Walzer 176). That is to say, we would rather see leaders hate the evil they do than glory in it... but either way they will have to do evil.
Explaining Lethally Autonomous Robots
The term robot comes from the Czech word robota, which means forced labor. A lethally autonomous robot (LAR) is one that can choose its target and decide to use deadly force independently of human review or approval (Krishnan). The technology for this type of action is not in the distant future; it is readily available now, although it is not yet explicitly used this way. The Department of Defense spends approximately six billion dollars every year on robotic weapons systems, and some military and robotics experts expect fully autonomous weapons to be deployable in the next 20 to 30 years (Docherty).
As it stands now, humans are "in the loop," or involved with decisions to use force deployed by robotic technology such as unmanned aerial vehicles, otherwise known as drones. The 2012 report Losing Humanity: The Case Against Killer Robots, released by Human Rights Watch and the International Human Rights Clinic (IHRC) at Harvard Law School, defines three levels of human involvement with robotic weaponry:
1. Human-in-the-Loop Weapons: Robots can choose their targets and deliver force only under the instruction of a human commander.
2. Human-on-the-Loop Weapons: Robots are able to select their targets and strike under the oversight of a human operator who can override the robots' decisions.
3. Human-out-of-the-Loop Weapons: Robots that are fully autonomous and are able to select targets and deliver force without any human involvement.
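To make these distinctions concrete, here is a minimal sketch in Python of where a human commander's authorization or veto would sit in each of the three cases. It is a hypothetical illustration only; the ControlLevel names and the engage function are invented for this post and are not drawn from the report or from any real weapons system.

```python
# Hypothetical sketch of the three levels of human involvement
# described in Losing Humanity. Names and interfaces are invented
# for illustration; no real system is being modeled here.
from enum import Enum, auto


class ControlLevel(Enum):
    HUMAN_IN_THE_LOOP = auto()      # human must authorize each strike
    HUMAN_ON_THE_LOOP = auto()      # robot acts unless a human vetoes
    HUMAN_OUT_OF_THE_LOOP = auto()  # robot acts with no human involvement


def engage(target, level, human_approves=None, human_vetoes=None):
    """Return True if force is delivered against `target`.

    `human_approves` and `human_vetoes` are placeholders for whatever
    interface a commander would actually use.
    """
    if level is ControlLevel.HUMAN_IN_THE_LOOP:
        # Force is delivered only with explicit human instruction.
        return bool(human_approves and human_approves(target))
    if level is ControlLevel.HUMAN_ON_THE_LOOP:
        # Robot selects and strikes unless a human overrides it in time.
        return not (human_vetoes and human_vetoes(target))
    # HUMAN_OUT_OF_THE_LOOP: fully autonomous, no human input at all.
    return True


if __name__ == "__main__":
    print(engage("contact-07", ControlLevel.HUMAN_IN_THE_LOOP,
                 human_approves=lambda t: False))  # False: no authorization
    print(engage("contact-07", ControlLevel.HUMAN_ON_THE_LOOP,
                 human_vetoes=lambda t: True))     # False: human overrode
    print(engage("contact-07", ControlLevel.HUMAN_OUT_OF_THE_LOOP))  # True
```

The last branch is the one at issue in the rest of this post: in a human-out-of-the-loop weapon, no human judgment appears anywhere in the decision to kill.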
Drones like the Predator and Reaper are only the first generation of more robust robotic weaponry, weapons that will literally be enabled to kill independently of human decision-making (United States. Cong. 10). What is coming, and what should be discussed, are Human-on-the-Loop weapons and, to a greater extent, Human-out-of-the-Loop weapons.
The US Department of Defense envisions "…unmanned systems seamlessly operating with manned systems while gradually reducing the degree of human control and decision making required for the unmanned portion of the force structure."
The US Air Force has said, "increasingly humans will no longer be ‘in the loop’ but rather ‘on the loop’—monitoring the execution of certain decisions. Simultaneously, advances in AI will enable systems to make combat decisions and act within legal and policy constraints without necessarily requiring human input."
A 2004 US Navy report on underwater vehicles stated, “While admittedly futuristic in vision, one can conceive of scenarios where UUVs sense, track, identify, target, and destroy an enemy—all autonomously” (Docherty).
Enabling a robot to decide, via computer programming, whether or not to use deadly force on a human opens up a whole new discussion of dirty hands. It potentially eliminates the very act of political and military leaders struggling with moral decisions, because those decisions will be ceded to LARs. In a sense, the robot becomes the ethicist and the strategist. If the robot deems a dirty hands decision to be "right," then why should a human think any differently?