
Ethics Reflection Paper

Is it Ethical to Create Artificial Moral Agents?


Increasingly, robotics is producing machines with greater levels of autonomy. As a result, questions of responsibility in robotics are more complex than ever. Among the most challenging is this: should we design artificial moral agents (Asaro, 2006)? The question is further complicated by the need to pursue moral goods. While the powers of robotics are tempting to use as tools for accomplishing moral goods, many of the same capabilities can likewise be used for evil. Along with moral agency, itself a great power, these robots may develop capabilities with unintended and unfortunate consequences. 


Despite these concerns, one major reason for creating artificial moral agents is that they might become better than humans at certain important tasks. A robotic moral agent might, for example, be better at diplomacy, surgery, or humanitarian work. Developers might have the opportunity to program machines free of the human weaknesses that hinder our ability to fulfill these positive moral responsibilities. Robotics might also enhance aspects of human nature through integration with human biological systems. If such implanted enhancements are also autonomous, they might allow parts of the body to make better reactive decisions than they otherwise would. Integration of human nature with robotic moral agents could thus improve the moral functioning of humanity, and interaction between human and machine moral systems could open new horizons of moral thought, feeling, and intention. 


The possibility for humans and machines to engage in complex moral dialogue and debate is another opportunity for moral progress. Moral agency in robots need not concern only moral action; it can also involve moral thought, reflection, and consultation. Importantly, emotion plays a major role in many ethical theories because of its significant influence on human behavior. Some emotions, such as anger, despair, and envy, tend to cause humans to act badly when felt in the extreme. We ought to explore how robots might help us control these emotions and prevent them from reaching the dangerous extremes that so often lead to wicked behavior. 


However, to some extent, robots will always reflect human nature because they are created by human beings. For this reason, it might be impossible to remove the negative aspects of human nature from robots. Another cause for concern is the potential of autonomous robots to act against human interests. If very powerful autonomous robots feel disrespected by humans, the result might be a supremacist society dominated by the cruel wishes of robotic masters. 


Furthermore, autonomous robotic enhancements might lead humans to avoid responsibility by blaming their mistakes on the robotics implanted into their bodies. The robotic ethics consultant might also lead human actors to bad decisions if they falsely assume that robots possess superior skills of moral reasoning. With every ethical enhancement, we must ask ourselves whether it will diminish human effort in ways that ultimately harm our ethical capabilities. Whenever a task is made easier, humans lose, to some extent, the skills developed by performing it the harder way; technologies have frequently made certain tasks, and hence certain skills, obsolete. In the case of morality, we are dealing with a skill of supreme importance. For this reason, we need to ask whether the “easy path” of robotic morality tools might diminish the skills of natural human moral development, and we need to consider the potential outcome of such diminished capabilities. If human moral reasoning declines, the moral quality of machine agents could decline as well, because they would be programmed according to a diminished human concept of morality. 


In most ethical theories, empathy is viewed as an essential tool for accomplishing our moral responsibilities. One starting point for considering the issue is to ask whether humans would be able to empathize with artificial moral agents well enough to collaborate with them peacefully. Being responsible for one’s actions involves being aware of their value, and the ability to pass value judgments on one’s own behavior requires an awareness of oneself as a metaphysical unit. In other words, self-consciousness must exist for an entity to be a moral agent. Morality, in turn, requires empathy for the consciousness of others. Because robots are so different from humans, we would be unable to understand robotic consciousness; we would therefore be unable to empathize with it, and hence it would be immoral to build robotic consciousness. Since robotic moral agency requires robotic self-consciousness, we ought not to create artificial moral agents. However, it might still be possible to create robotic ethics consultants and emotion controllers without giving them full and complete moral agency. The possibilities of non-conscious and semi-autonomous robots as moral assistants should be pursued. 




Bibliography 


Asaro, P. (2006). What Should We Want From a Robot Ethic? International Review of Information Ethics, 6, 9–16. 


