Human Responsibility, Robot Mind: Conceptual Design Constraints for Social Robots

Next month I’m giving a keynote address at the 2nd Joint UAE Symposium on Social Robotics, entitled “Human Responsibility, Robot Mind: Conceptual Design Constraints for Social Robots”.

Abstract: Advances in social robot design will be achieved hand-in-hand with increased clarity in our concepts of responsibility, folk psychology, and (machine) consciousness. 1) Since robots will not, in the near future, be responsible agents, avoiding some moral hazards (e.g., that of abdication of responsibility) will require designs that assist in tracing complex lines of responsibility backwards from outcomes, through the robot, and back to the appropriate humans and/or social institutions. 2) An intuitive understanding by human users of the (possibly quite alien) perceptual and cognitive predicament of robots will be essential to improving cooperation with them, as well as assisting diagnosis, robot training, and the design process itself. Synthetic phenomenology is the attempt to combine robot designs with assistive technologies such as virtual reality to make the experience-like states of cognitive robots understandable to users. 3) Making robot minds more like our own would be facilitated by finding designs that make robots susceptible to the same (mis-)conceptions concerning perception, experience and consciousness that humans have. Making a conscious-like robot will thus involve making robots that find it natural to believe that their inner states are private and non-material. In all three cases, improving robot-human interaction will be as much about an increased understanding of human responsibility, folk psychology and consciousness as it will be about technological breakthroughs in robot hardware and architecture.


5 thoughts on “Human Responsibility, Robot Mind: Conceptual Design Constraints for Social Robots”

  1. Ron, you and Steve Torrance seem to be assuming that a robot has to be conscious in order to be morally responsible; I doubt that, for reasons briefly stated below. Torrance contrasts ‘social-relational’ and ‘objective’ accounts, and I can’t tell from his abstract whether he will allow the option of an objective, social-relational account, which is my position on both consciousness and every other aspect of personhood, including all aspects of mentation, conscious or unconscious.
    Your warning about the need to track responsibility for the consequences of using a tool back to the human users is highly apropos. Nevertheless, it is now becoming recognised legally that a human being who is serving as a tool cannot be absolved of moral responsibility for the consequences of personal actions: at least for predictably fatal effects, but recently also for serious physical and even emotional harm, e.g. torturers under military command, supporters of a murder, extreme controlling behaviour. Undertrained and poorly managed engineers are not necessarily prosecuted, however; maybe that is close to what you are saying about current robots being far from ethical responsibility, although I doubt that the Alton Towers engineer(s) absolve themselves.
    From that position, the key issue is whether the robot can have reasons for her/his/its actions, i.e. whether the robot truly acts – with intention!
    I believe a human individual’s intentional activities without external constraints (‘free will’) are psychologically determinate, and that we bring up children to work the same way. So I don’t see why we can’t allow that a physically and socially intelligent robot could be built which operates under values, means-end reasoning and prioritisation algorithms. I can accept that the best ‘social robots’ are still morally infantile, or not even at the level of a dog. With Torrance, we do need to avoid drawing simple borderlines or even sketching grey areas. Would it be safest to start now identifying candidate reasons that actual robots might have for their actions, with a view to taking the next step of diagnosing whether a reason is the robot’s, not entirely a non-rational performance as enforced by the user? Let your tracking back of responsibility be ‘waylaid’ already by cases for blame or praise, as we might assign them to a 3- or 4-year-old or even a cultured pet dog, without waiting for resolution of other issues, especially what I view as the very secondary and highly confused issues of consciousness.

  2. David, thanks for your comments.

    Could you indicate where in my abstract I assume that a robot has to be conscious in order to be morally responsible? What I do believe is that robots at present are not responsible, but it’s agency that they lack, and it is not clear to me that consciousness is required for agency.

    It might be that you and Steve mean something different by “objective” – he might just mean “not socially relative” or even “not social-relational”.

    I’m not sure exactly what social-relational accounts are like; do you allow that an individual X might be a person, have a mind, be conscious, etc. even though no other agents exist (or no other agents have any causal interaction of any kind with X)?

    Your point about the possible failure of the “just following orders” defence is well-taken. However, I think it would be a fallacy to infer from this that “serving as a tool” is sufficient for responsibility. Certain conditions must be met to be a responsible agent, and meeting those conditions in general might have the effect that one’s responsibility persists through periods of soft (reason-involving) coercion. But I maintain that current artificial agents are not in this situation at all, as they do not meet those initial, general conditions for responsibility; as you put it, they do not truly act for their own reasons, with intention. This apparently puts me in the company of philosophical AI naysayers, but I disagree with them too. Since they usually rely on first principles rather than contingent, empirical facts, they tend to overstate the case, arguing that it is *impossible* for a robot to be responsible. I do not agree – no one, to my mind, has shown that a robot cannot be conscious, or responsible. But I think there are good reasons for believing that such robots are not on the horizon.

    You ask:

    “Would it be safest to start now identifying candidate reasons that actual robots might have for their actions, with a view to taking the next step of diagnosing whether a reason is the robot’s, not entirely a non-rational performance as enforced by the user?”

    I suppose that’s one way to go, but I would word it this way: tracing the actual lines of responsibility in part by seeing how attributing responsibility to the robot fails. But I agree that we should do this without waiting for a resolution of the issues surrounding consciousness.

    Ron

    • Very sorry, Ron, about this greatly delayed reply to your kind response. Unfortunately I don’t routinely browse to PAICS.

      My reference to consciousness of the robot was elliptical at best. We agree (I think) that only entities capable of intentional activities can be held responsible but there’s a category of law that holds the agent responsible even if s/he is not aware that s/he was carrying out the offence. If the law there is ‘beyond morality’, there is still the classic responsibility for unintended consequences that could have been foreseen. As I understand it, the focal content of an intention or a belief is usually (even normally?) conscious but such an achievement is not necessarily in awareness in every potentially relevant aspect at the time of action.
      (Awareness is necessarily of some delimited content: there is no ‘abstract’ state of awareness.)

      Regarding the objectivity of social processes, I do not believe that awareness or even moral responsibility depend on the existence of any other agents or communication with them – *present* existence! I do believe that awareness, intentions and moral responsibility depend on *past* existence of and interaction with a societal culture of the sort we are familiar with among groups of human beings.

      (I hope to ask some questions about your abstract with Aaron on qualia, which is why I’ve come into PAICS today.)

      I hope that’s a bit clearer. Fair enough to look for cases where a robot escapes responsibility, but I think the “tool” get-out is a pretty likely case – especially with currently envisaged uses (sic) of intelligent machines. Even though a robot socially intelligent enough to be worth studying in that way is a long way off, aren’t we agreed it’s worth considering now how to study such a being? – not just ethically (or legally) but also psychologically across the board!

      – David B at US

  3. David,

    You said:

    “Very sorry, Ron, about this greatly delayed reply to your kind response. Unfortunately I don’t routinely browse to PAICS.”

    You can sign up on the main page so that you get email notifications whenever there is a new post.

    “My reference to consciousness of the robot was elliptical at best. We agree (I think) that only entities capable of intentional activities can be held responsible but there’s a category of law that holds the agent responsible even if s/he is not aware that s/he was carrying out the offence.”

    Yes, but only if the agent really is an agent – that is, only if s/he is aware of *something*, either at the time, or, at a minimum, at some time.

    “If the law there is ‘beyond morality’, there is still the classic responsibility for unintended consequences that could have been foreseen.”

    Again, only for an agent that can at least intend something.

    “As I understand it, the focal content of an intention or a belief is usually (even normally?) conscious but such an achievement is not necessarily in awareness in every potentially relevant aspect at the time of action. (Awareness is necessarily of some delimited content: there is no ‘abstract’ state of awareness.)”

    That all seems right, but I don’t think it applies to the kinds of robots we are considering.

    “Regarding the objectivity of social processes, I do not believe that awareness or even moral responsibility depend on the existence of any other agents or communication with them – *present* existence! I do believe that awareness, intentions and moral responsibility depend on *past* existence of and interaction with a societal culture of the sort we are familiar with among groups of human beings.”

    Ok, thanks, that clarifies things, but I still can’t say whether Steve would drop his dichotomy of objective vs social-relational. Perhaps ask him?

    “(I hope to ask some questions about your abstract with Aaron on qualia, which is why I’ve come into PAICS today.)”

    Yes, I see you have done so – I’ll try to get to those too.

    “I hope that’s a bit clearer. Fair enough to look for cases where a robot escapes responsibility, but I think the “tool” get-out is a pretty likely case – especially with currently envisaged uses (sic) of intelligent machines. Even though a robot socially intelligent enough to be worth studying in that way is a long way off, aren’t we agreed it’s worth considering now how to study such a being? – not just ethically (or legally) but also psychologically across the board!”

    Yes, that is worthy of study, but I also think getting clear on the lines of human responsibility in complex technological systems, and not allowing misplaced talk of robot responsibility to cloud the issue, is of great importance right now, at the threshold of driverless cars, automated weapon systems, etc.

    Ron

  4. Pingback: Ethically designing robots without designing ethical robots | PAICS
