A deflationary view of morally competent robots.
Long before there are robots that are true, morally responsible agents,
many of them (“m-robots”) will have strong behavioural and functional
similarities to human moral agents. The design and evaluation of
m-robots should (both in the interests of producing the best designs,
and of doing what is right) eschew conceptualisations which view the
m-robot as a moral agent. Rather, I argue, those engaging in such
activities should adopt the deflationary view of m-robot morality: the
ethical questions around an m-robot’s actions concern not the purported
moral standing of the m-robot itself, but rather and solely the moral
standing of the relevant humans and human organisations involved in the
design, manufacture, and deployment of m-robots. An extreme version of
the deflationary view, which I will not defend, maintains that there is
no difference in kind between the ethical questions raised by robot
action and those raised by any other technology. Instead, I will
acknowledge the novelty of the ethical questions raised by m-robots, but
claim that they are best solved by re-conceptualising them in a
deflationary manner. Consequently, I offer some specific recommendations
concerning what our goals should be in designing m-robots, and what
kinds of architecture might best achieve those goals.
Caring robots – more dangerous than killer robots?
It might seem, at first glance, that military robotics raises many more
ethical worries than does the use of robots in caring roles. However,
this superficial impression deserves revision for a number of reasons.
Firstly, there is overwhelming evidence that robots are a very effective
tool with which to manipulate human emotional responses. It might
theoretically be possible to do this only in ethical ways of benefit to
individuals and society. Unfortunately, there has been little or no
discussion of exactly what these ways might be. For the caring robots
now being developed by the private sector, there is no guidance
whatsoever on these issues. We can therefore expect, at best, the
manipulation of emotions in order to maximize profits; at worst,
dangerous mistakes and disreputable deceit.
There has also been very little discussion outside the specialist field
of robot ethics of just which caring roles are suitable for robots and
which roles we might wish, on well-reasoned grounds, to reserve for
humans. This is surely a matter that deserves widespread public debate.
Finally, there is now a substantial body of international conventions,
legislation, and rules of engagement that directly bears on the
development and deployment of military robots. In complete contrast, the
field of social, domestic, and caring robots is without any significant
legislation or ethical oversight. Caring, not killing, is now the wild
lawless frontier of robotics.