Editorial for the Special Section on Ethics and Affective Computing

The sudden escalation in informational and computational technologies is quickly making things possible that were impossible just a few years ago. As these new possibilities become realities, very real ethical dilemmas arise that challenge the very foundations of ethics as traditionally conceived. One need only consider the 3D printers about to hit the market, which will allow individuals to print working firearms at will. Such a possibility will, no doubt, leave policy makers wondering how to handle a situation for which no existing laws were written. Challenges are mounting on other fronts as well, issues with Predator drones and autonomous weaponry among them. Such issues may well make the topic of this special section seem trivial. It is not.

For instance, one of the ethical issues attached to affective computing reaches to the foundations of ethics by challenging our common sense belief that truth-telling is a value and that deception is simply wrong, at least in most contexts. In brief, the problem can be stated this way: if robots are to be widely adopted in society, they need to be like us. Thus, giving them simulated emotions seems essential. When it comes to the use of robotic pets in eldercare, for instance, lifeless, affectless robots would be poorly suited to the task for which they are designed. At the same time, giving such robotic pets the ability to act in ways that make us feel good seems simply deceptive. If deception is wrong simpliciter, then so are simulated emotions; but if the use of simulated emotions is wrong, then implementing the affective qualities needed to make some machines function as required would also seem wrong. Either something is amiss with our common understanding of the ethics of deception, or research in affective computing, which often amounts to designing machines precisely in order to deceive us, is misguided. Nor is the situation limited to such innocuous creatures as mere pets: once we realize that a robotic pet may simultaneously be a weapon or a spy, the issues begin to compound.

In the first paper of this section, "Are Emotional Robots Deceptive?", Mark Coeckelbergh confronts the central issue just mentioned head-on. Taking a common sense approach, Coeckelbergh notes that, to facilitate open cross-entity communication, robots must be designed to respond appropriately, in a way that lets humans understand what is genuinely being communicated. This must be done carefully, however, so that humans do not dismiss robot communication with what he calls a "deception response."

In "Red-Pill Robots Only, Please," Bringsjord and Clark challenge approaches like Coeckelbergh's. Playing off The Matrix of movie fame, they argue that blue-pill robots are engineered to deceive, and that embracing them will lead to a cascade of moral issues by pushing our society further away from values associated with truth and toward those associated with pleasure. Our love for "digital illusions" is consonant with their argument and may indicate that there is already cause for concern, even prior to the prevalence of affective, blue-pill machines.

Sullins keeps us on the pleasure track with "Robots, Love and Sex: The Ethics of Building a Love Machine." Admittedly, something always sounds a little goofy and unimportant, if not slightly embarrassing, about raising the topic of sex robots, though few doubt that they will soon be among us in record numbers.
Sullins invites us to take the issue seriously by putting forth the notion of "erotic wisdom," while simultaneously arguing that we must lay down some constraints when it comes to designing machines that can manipulate human psychology at such a deep level.

Steering a sensible course between the issues, Cowie argues in "The Good Our Field Can Hope to Do, the Harm It Should Avoid" that, while most affective applications are morally neutral, simulated affects might well amount to a kind of deception. The situation, however, is not a simple matter of good versus bad, since several moral positives can also come from research in this area. The paper enumerates some of the moral positives and negatives at stake, underscoring the balancing act that researchers must perform when approaching the design of affective machinery.

In "The Affect Dilemma for Artificial Agents: Should We Develop Affective Artificial Agents?", Scheutz takes a somewhat different angle, noting that robots without affect and affective sensibilities may well cause more harm than those with them, even though endowing robots with affect also transforms them into patients of our moral regard. Scheutz argues that we must nonetheless build them, offering five reasons to do so before closing with a brief enumeration of the challenges ahead.

Finally, Guarini offers a critique of my own work in ethical theory with his paper "Conative Dimensions of Machine Ethics: A Defense of Duty." I have argued ...