A deflationary view of morally competent robots.
Long before there are robots that are true, morally responsible agents,
many of them (“m-robots”) will have strong behavioural and functional
similarities to human moral agents. The design and evaluation of
m-robots should (both in the interests of producing the best designs,
and of doing what is right) eschew conceptualisations which view the
m-robot as a moral agent. Rather, I argue, those engaging in such
activities should adopt the deflationary view of m-robot morality: the
ethical questions around an m-robot’s actions concern not the purported
moral standing of the m-robot itself, but solely the moral standing of
the relevant humans and human organisations involved in the design,
manufacture, and deployment of m-robots. An extreme version of
the deflationary view, which I will not defend, maintains that there is
no difference in kind between the ethical questions raised by robot
action and those raised by any other technology. Instead, I will
acknowledge the novelty of the ethical questions raised by m-robots, but
claim that they are best solved by re-conceptualising them in a
deflationary manner. Consequently, some specific recommendations are
offered concerning what our goals should be in designing m-robots, and
what kind of architectures might best achieve those goals.
Caring robots – more dangerous than killer robots?
It might seem, at first glance, that military robotics raises many more
ethical worries than does the use of robots in caring roles. However,
this superficial impression deserves revision for a number of reasons.
Firstly, there is overwhelming evidence that robots are a very effective
tool with which to manipulate human emotional responses. It might
theoretically be possible to do this only in ethical ways of benefit to
individuals and society. Unfortunately there has been little or no
discussion of exactly what these ways might be. For the caring robots
now being developed by the private sector there is no guidance
whatsoever on these issues. We can therefore expect, at best, the
manipulation of emotions in order to maximize profits. At worst, we can
expect dangerous mistakes and disreputable deceit.
There has also been very little discussion outside the specialist field
of robot ethics of just which caring roles are suitable for robots and
which roles we might wish, on good reasoned grounds, to reserve for
humans. This is surely a matter that deserves widespread public debate.
Finally, there is now a substantial body of international conventions,
legislation, and rules of engagement that directly bears on the
development and deployment of military robots. In complete contrast, the
field of social, domestic, and caring robots is without any significant
legislation or ethical oversight. Caring, not killing, is now the wild
lawless frontier of robotics.
It’s easy to be unaware that notions similar to, if not identical with, the concept of the “extended mind” were in circulation before, say, 1998. Yet there were writers advocating active (as opposed to philosophical) externalism before that date. I have noted before that Tuomela 1989 is one such source:
“The main arguments in [this] paper are directed against the latter thesis, according to which internal (or autonomous or narrow) psychological states as opposed to noninternal ones suffice for explanation in psychology. Especially, feedback-based actions are argued to require indispensable reference to noninternal explanantia, often to explanatory common causes.” — Raimo Tuomela, “Methodological Solipsism and Explanation in Psychology”, Philosophy of Science, Vol. 56, No. 1 (Mar. 1989), pp. 23–47.
But there is an even clearer statement of the thesis dating back a decade before that, in Aaron Sloman’s The Computer Revolution in Philosophy (available for free online):
“Because these ideas have been made precise and implemented in the design of computing systems, we can now, without being guilty of woolly and unpackable metaphors, say things like: the environment is part of the mechanism (or its mind), and the mechanism is simultaneously part of (i.e. ‘in’) the environment!” — Aaron Sloman, The Computer Revolution in Philosophy: Philosophy, science and models of mind, Harvester Press, 1978, Section 6.5.
Here we have not only the extended mind, but situatedness as well!
Admittedly, not everything Sloman says in that book is friendly to an externalist perspective on mind, but I doubt he would take that to be a criticism.
David Leavens reminded me of Gregory Bateson saying similar things in 1972:
“… we may say that ‘mind’ is immanent in those circuits of the brain which are complete within the brain. Or that mind is immanent in circuits that are complete within the system, brain plus body. Or, finally, that mind is immanent in the larger system — man plus environment.”
In “Intelligence as a Way of Life” (2000), I note, in precisely this context (the precursors of active externalism), that Bateson’s 1971 “The Cybernetics of ‘Self’: A Theory of Alcoholism” says “the mental characteristics of the system are immanent not in some part, but in the system as a whole”, and also:
“The computer is only an arc of a larger circuit which always includes a man and an environment from which information is received and upon which efferent messages from the computer have effect. This total system, or ensemble, may legitimately be said to show mental characteristics”.
I then explicitly link his remarks to Tuomela 1989 and Clark and Chalmers 1998. Thanks again, David.
This morning, Tad Zawidzki drew my attention to the publication on Tuesday of this paper: Multisensory Integration in Complete Unawareness. What Faivre et al report there is exactly the kind of phenomenon that Ryan Scott, Jason Samaha, Zoltan Dienes and I have been investigating. In fact, we have been aware of Faivre et al’s study and cite it in our paper (which is currently under review).
Their work is good, but ours goes further. Specifically, we show that:
- a) Cross-modal associations can be learned when neither of the stimuli in the two modalities is consciously perceived (whereas the Faivre et al study relies on previously learned associations between consciously perceived stimuli).
- b) Such learning can occur with non-linguistic stimuli.
Together, a) and b) considerably strengthen the case against accounts which assert that consciousness is required for multi-sensory integration (e.g., Global Workspace Theory). Some defenders of such theories might try to brush aside results like those of Faivre et al by revising their theories to say that consciousness is only required for higher-level cognition, such as learning; and/or by setting aside linguistic stimuli as a special case of (consciously) pre-learned cross-modal associations which can be exploited by unconscious processes to achieve the appearance of multi-sensory integration. Our results block both of these attempts to save (what we refer to as) integration theories.
The principle of embodiment in cognitive science emphasises that the main object of cognition is to reason about systems of which the agent itself is a part and which it can affect through its actions. I propose that particular real-world circumstances can undermine the assumption that the process of reasoning does not affect the systems being reasoned about, and explore why this is a problem for typical conceptions of rationality. I will also discuss how Sorensen’s concept of epistemic blind spots could affect mathematical reasoning, in light of the Lucas-Penrose argument about human transcendence of mechanism. But it will come as a surprise.
Working on thesis.
Working on Joint Session talk. Thought my subject – panpsychism and the composition problem – would be a welcome change from natural kinds and downward causation, but it turns out that deproblematising composition and adding the idea of the mind being composed of multiple virtual machines is a good way of arguing for non-reductive, downwardly causal mental properties.
Working on talk for E-int and Joint Session.
Went to 1st person approach conference in Berkeley – changed plan and gave a response to Susan Stewart’s criticism of synthetic phenomenology work.
Gave talk last week to philosophy faculty research progress meeting.
Going to Sweden on Monday till August.
Supervising an MSc student, who is implementing a web-browsing advisor built on an architecture inspired by Bernard Baars’s global workspace theory.
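For readers unfamiliar with the idea, a global-workspace architecture has many unconscious specialist processes competing for access to a shared workspace, with the winning content broadcast back to all specialists. The sketch below is a minimal, hypothetical illustration of that broadcast-competition cycle (the class names, the toy salience measure, and the browsing example are all my own inventions, not the student's implementation):

```python
# Minimal, illustrative sketch of a global-workspace control loop,
# loosely in the spirit of Baars's theory. All names and the salience
# heuristic are hypothetical, for exposition only.

class Specialist:
    """An unconscious specialist process that bids for workspace access."""

    def __init__(self, name):
        self.name = name
        self.broadcasts_seen = []  # record of globally broadcast contents

    def propose(self, stimulus):
        # Toy salience: how strongly the stimulus matches this specialist's
        # domain. A real system would use domain-specific evaluation.
        salience = 1.0 if self.name in stimulus else 0.1
        return salience, f"{self.name} noticed: {stimulus}"

    def receive(self, content):
        # Every specialist hears whatever wins the competition.
        self.broadcasts_seen.append(content)


def workspace_cycle(specialists, stimulus):
    """One cycle: specialists compete; the winner is broadcast globally."""
    bids = [s.propose(stimulus) for s in specialists]
    _, winning_content = max(bids, key=lambda bid: bid[0])
    for s in specialists:
        s.receive(winning_content)
    return winning_content


specialists = [Specialist("link"), Specialist("image"), Specialist("text")]
broadcast = workspace_cycle(specialists, "user hovered over a link")
```

Here the "link" specialist wins the competition, and its content is then available to every other specialist — the global broadcast that, on Baars's account, distinguishes conscious from merely local processing.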
Preparing for presentation & working on thesis.
1 – The philosophy of mind reading group (see http://www.ifl.pt/index.php?id1=3&id2=8) had a meeting on a draft chapter of my book: Cognitive Technologies in Everyday Life: Tools for Thinking and Feeling. It generated some interesting discussion, which was very gratifying after all the time I’ve put into it.
2 – I’ve started organizing a research-in-progress group modelled on – you’ve guessed it – E-I, which will hopefully meet for the first time next week.
3 – Trying to finish a review for JCS of The Crucible of Consciousness by Zoltan Torey, which is due in on Friday.
Working on Joint Session talk.
- Wrote paper with Blay for “What Makes Us Moral?” conference in Amsterdam at month’s end and submitted it to the conference website. Presented the paper at a seminar here for last-minute feedback before submission.
- Re-wrote Chapter 5 of my thesis “The Limits of Concepts and Conceptual Abilities” into a standalone paper for a course I’m attending of the SweCog National Research School in Cognitive Science. Planning to submit it somewhere by month’s end.
- My doctoral thesis got lost in the (registered) post. So far neither Sweden nor the UK wants to claim responsibility. This is annoying, as it may complicate my pay-grade change to postdoc status (seriously). Did I mention that the first time the bookbinders bound my thesis, they got my name wrong? :-P
Apologies for the delay.
- Collected my bound thesis from the bookbinders on Monday and posted it to Sussex: pretty much the last thing I have to do before I officially have earned my degree!
- Finished a first re-write of the paper I presented at Toward a Science of Consciousness – Stockholm, hoping to submit in the next few weeks (on the limits of concepts and conceptual abilities).
- Engaging in some email discussions with Blay and a philosopher here in Lund about compatibilism.
- Assisting Göran Sonesson with comments on a paper he is submitting, on the ability of chimpanzees to interpret different semiotic resources.
- Making painfully slow progress on the paper I need to write (so I can present!) at the What Makes Us Moral? conference in Amsterdam later this month.
- Trying to write a paper on Kirsh & Maglio’s epistemic action/pragmatic action distinction.
- Also Bob Chad sent his apologies for non-attendance today.
- Didn’t get the job in Norway.