This “Ethical Trap” is for Roboticists, Not Robots: On the Issue of Artificial Agent Ethical Decision Making
Document Type
Peer-Reviewed Article
Publication Date
4-2017
Abstract
In this paper we address the question of when a researcher is justified in describing his or her artificial agent as demonstrating ethical decision-making. The paper is motivated by the growing body of research that attempts to imbue artificial agents with expertise in ethical decision-making. It seems clear that computing systems make decisions, in the sense that they select among different options; and there is scholarship in philosophy that addresses the distinction between ethical decision-making and decision-making in general. Essentially, the qualitative difference between ethical decisions and general decisions is that ethical decisions must be part of the process of developing ethical expertise within an agent. We use this distinction to examine the publicity surrounding a particular experiment in which a simulated robot attempted to safeguard simulated humans from falling into a hole. We conclude that any suggestion that this simulated robot was making ethical decisions was misleading.
DOI
10.1007/s11948-016-9785-y
Recommended Citation
Miller, K. W., Wolf, M. J., & Grodzinsky, F. (2017). This “ethical trap” is for roboticists, not robots: on the issue of artificial agent ethical decision making. Science and Engineering Ethics, 23, 389–401. https://doi.org/10.1007/s11948-016-9785-y
Comments
First online: 26 April 2016.