Circumventing “Dirty Hands” with Lethally Autonomous Robots
Circumventing Dirty Hands
Even the cheapest of calculators is trusted to provide simple mathematical answers. The algorithms inside a calculator are never tested by the person using it to balance a bank account; it is simply trusted to do the arithmetic, and even when errors do surface, it is still surprising when a calculator goes wrong. Certainly, as machines become more complicated, more care goes into ensuring they operate according to predetermined standards: commercial airplanes undergo frequent, rigorous testing to verify that their hardware and software components are working properly.
What happens when machines are no longer just the embodiment of concrete engineering principles? What happens when machines are programmed to make ethical and moral decisions in the same manner that a principled human would? Will they eventually be trusted in the same way one trusts a cheap calculator... or an airplane?
Ronald C. Arkin, a roboticist at the Georgia Institute of Technology, is working toward developing autonomous robots that are discriminating and ethical. He has articulated “the most comprehensive architecture for a compliance mechanism” (Ed Baret, qtd. in United States Cong. 14). Arkin proposes that lethally autonomous robots (LARS) be equipped with an “ethical governor” and “strong artificial intelligence.”
An ethical governor restricts LARS to acting in accordance with the Laws of War and the Rules of Engagement, while strong artificial intelligence is intended to match and eventually exceed human intelligence. A consequence of this is viewing and developing LARS not as cold, calculating killers but as civilizing forces with the ability to be more humane than humans: “It is my contention that robots can be built that do not exhibit fear, anger, frustration or revenge and ultimately…behave in a more humane way than even human beings” (Arkin 1).
Programmed with humanity’s ethical and moral reasoning but stripped of traits like fear and self-preservation, which can provoke unjust acts of force, LARS may become the eventual calculator of ethical conundrums in battle and in strategy development while still supporting the overall plan. Arkin proposes equipping robots with lethal autonomy while requiring “ethical autonomy” as part of their makeup, potentially eliminating the very need for humans to think through moral quandaries and essentially evading dirty hands, since the decisions are left to LARS and other AI systems.
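Arkin’s actual architecture is considerably more elaborate, but the core idea of an ethical governor, a constraint check interposed between target selection and weapon release that can only withhold force, can be sketched in a few lines of Python. Everything in the sketch below (the Target fields, the discrimination and proportionality rules, the numeric scores) is an illustrative assumption, not Arkin’s implementation.

```python
from dataclasses import dataclass

@dataclass
class Target:
    is_combatant: bool          # discrimination: only combatants are lawful targets
    expected_collateral: float  # assumed 0..1 estimate of harm to non-combatants
    military_value: float       # assumed 0..1 estimate of military advantage

class EthicalGovernor:
    """Hypothetical stand-in for Arkin's ethical governor: it vetoes any
    lethal action that violates simplified proxies for the Laws of War
    and the Rules of Engagement."""

    def permits(self, target: Target) -> bool:
        # Discrimination: never engage a non-combatant.
        if not target.is_combatant:
            return False
        # Proportionality: expected collateral harm must not outweigh
        # the expected military advantage.
        if target.expected_collateral > target.military_value:
            return False
        return True

# The weapon system consults the governor before every engagement;
# by construction the governor can only suppress force, never initiate it.
governor = EthicalGovernor()
strike = Target(is_combatant=True, expected_collateral=0.1, military_value=0.8)
print("engage" if governor.permits(strike) else "veto")
```

The structural point matters more than the toy rules themselves: the governor sits between decision and action purely as a veto, so adding it can only make the system more restrained, never more aggressive.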
Is it possible that leaders may one day ask themselves “WWAID?” As in, “What would artificial intelligence do?” If Arkin is successful, then artificial intelligence, and LARS specifically, may eventually embody the ethical ideal of humanhood: a sort of modern-day technological idol that directs mankind toward ethical behavior.
Developing LARS with strong AI and an ethical governor puts an interesting twist on Walzer’s observation that politicians are not typically held responsible for their actions because they act as officials of the state. He writes, “there is rarely a Czarist executioner waiting in the wings for politicians with dirty hands, even the most deserving among them” (Walzer).
Except that LARS may become the “executioner waiting in the wings.”
Responsibility Gap
Even if Arkin is correct that LARS may one day affect war in ways that engender more humanity than violence, current technology cannot yet attain those ethical ideals. It can, however, already target and deliver force autonomously. A key aspect of dirty hands is that leadership acknowledges (and regrets) immoral actions undertaken to attain some ultimate end.
The use of LARS, however, removes that sense of responsibility from leadership, since LARS will be programmed to accomplish specific tasks on their own. If the robots are truly autonomous, human leaders may be less inclined to assume responsibility for their behavior because those leaders are out of the “loop.” That may even be a deliberate choice, made precisely to avoid engaging with such moral crossroads. Who has dirty hands when an immoral decision is made, whether the result is a failure or a success? The commander, the programmer, the manufacturer, the politician… or the robot?
We would do well to ask ourselves whether we are better off with the dirty hands of political leadership or with leadership that has circumvented even the possibility of dirty hands by assigning moral decision-making to lethally autonomous robots.
Written by Arash Kamiar
Arash@MetroJacksonville.com
@ArashWaiting
Sources:
Docherty, Bonnie Lynn. Losing Humanity: The Case against Killer Robots. [New York, N.Y.]: Human Rights Watch, 2012. Print.
Krishnan, Armin. Killer Robots: Legality and Ethicality of Autonomous Weapons. Farnham, UK: Ashgate, 2009. Print.
United States. Cong. House of Representatives. Subcommittee on National Security and Foreign Affairs of the Committee on Oversight and Government Reform. Rise of the Drones: Unmanned Systems and the Future of War. Hearings, 111th Cong., 2nd sess. Washington: GPO, 2011. Print.
United States Army Research Office. Ethical Robots in Warfare. By Ronald C. Arkin. 2009. Print.
Walzer, Michael. “Political Action: The Problem of Dirty Hands.” Philosophy and Public Affairs, Vol. 2, No. 2 (Winter, 1973): 160-180. Print.