Developing Automated Deceptions and the Impact on Trust
Document Type
Peer-Reviewed Article
Publication Date
March 2015
Abstract
As software developers design artificial agents (AAs), they often must wrestle with complex issues that carry philosophical and ethical importance. This paper addresses two key questions at the intersection of philosophy and technology: What is deception? And when is it permissible for the developer of a computer artifact to be deceptive in the artifact’s development? While exploring these questions from the perspective of a software developer, we examine the relationship between deception and trust. Are developers using deception to gain our trust? Is trust generated through technological “enchantment” warranted? Next, we investigate the more complex question of how deception involving AAs differs from deception involving only humans. Finally, we analyze the role and responsibility of developers in trust situations that involve both humans and AAs.
DOI
10.1007/s13347-014-0158-7
Recommended Citation
Grodzinsky, F., Miller, K. W., & Wolf, M. J. (2015). Developing automated deceptions and the impact on trust. Philosophy & Technology, 28(1), 91–105. doi:10.1007/s13347-014-0158-7
Comments
First published online: February 18, 2014.