The Ethics of AI and Healthcare

I was interviewed by Verdict.co.uk recently about the ethics of AI in healthcare. One or two remarks of mine from that interview are included near the end of this piece, which appeared last week:

http://www.verdict.co.uk/the-ai-impact-healthcare-industry-is-changing/

My views on this are considerably more nuanced than these quotes suggest, so I am thinking of turning my extensive prep notes for the interview into a piece to be posted here and/or on a site like TheConversation.com. These thoughts are completely distinct from the ones included in the paper Steve Torrance and I wrote a few years back, “Modelling consciousness-dependent expertise in machine medical moral agents”.

Ethically designing robots without designing ethical robots

Next Thursday, November 17th, at 13:00 I’ll be leading the E-Intentionality seminar in Freeman G22. I’ll be using this seminar as a dry run for the first part of my keynote lecture at the UAE Social Robotics meeting next week. It builds on work that I first presented at Tufts in 2014.

Abstract:

Since robots will not, in the near future, be responsible agents, avoiding some moral hazards (e.g., that of abdication of responsibility) will require designs that assist in tracing complex lines of responsibility backwards from outcomes, through the robot, and back to the appropriate humans and/or social institutions. I look at one approach to ethically designing robots, that of designing ethical robots – robots that are given a set of rules that are intended to encode an ethical system, and which are to be applied by the robot in the generation of its behaviour. I argue that this approach will in many cases obfuscate, rather than clarify, the lines of responsibility involved (resulting in “moral murk”), and can lead to ethically adverse situations. After giving an example of such a case, I offer an alternative approach to the ethical design of robots, one that does not presuppose that notions of obligation and permission apply to the robot in question, thereby avoiding the problems of moral murk and ethical adversity.
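To make the target of the argument concrete, here is a minimal Python sketch of the “ethical robot” approach: the robot is handed rules meant to encode an ethical system and applies them itself when generating behaviour. This is purely illustrative – the rules, fields, and names are my own invention, not part of the talk.

```python
# Illustrative sketch only: a robot given rules that are meant to encode
# an ethical system, which it applies itself to select its behaviour.
from dataclasses import dataclass
from typing import Callable, List, Optional


@dataclass
class Action:
    name: str
    harms_human: bool = False      # invented feature for illustration
    ignores_order: bool = False    # invented feature for illustration


# Each "ethical rule" maps an action to permitted (True) / forbidden (False).
Rule = Callable[[Action], bool]

rules: List[Rule] = [
    lambda a: not a.harms_human,    # meant to encode "do no harm"
    lambda a: not a.ignores_order,  # meant to encode "obey instructions"
]


def choose(candidates: List[Action]) -> Optional[Action]:
    """Return the first candidate every rule permits, or None."""
    for action in candidates:
        if all(rule(action) for rule in rules):
            return action
    # Moral murk: if nothing is permitted, or a rule misfires, is the rule
    # author, the integrator, the deployer, or "the robot" answerable?
    return None


print(choose([Action("push_bystander", harms_human=True),
              Action("sound_alarm")]))
```

Once behaviour is generated this way, a bad outcome is easily blamed on “the robot’s ethics” rather than on any identifiable human – exactly the obfuscation of responsibility the talk targets.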

Artificial social agents in a world of conscious beings

I forgot to mention in the update posted earlier today that fellow PAICSer, Steve Torrance, will also be a keynote speaker at the 2nd Joint UAE Symposium on Social Robotics. Here are his title and abstract.

Artificial social agents in a world of conscious beings.

Steve Torrance

Abstract

It is an important fact about each of us that we are conscious beings, and that the others we interact with in our social world are also conscious beings. Yet we appear to be on the edge of a revolution in new social relationships – interactions and intimacies with a variety of non-conscious artificial social agents (ASAs) – both virtual and physical. Granted, we often behave, in the company of such ASAs, as though they are conscious, and as though they are social beings. But in essence we still think of them, at least in our more reflective moments, as “tools” or “systems” – smart, and getting smarter, but lacking phenomenal awareness or real emotion.

In my talk I will discuss ways in which reflection on consciousness – both natural and (would-be) artificial – impacts on our intimate social relationships with robots. And I will propose some implications for human responsibilities in developing these technologies.

I will focus on two questions: (1) What would it take for an ASA to be conscious in a way that “matters”? (2) Can we talk of genuine social relationships or interactions with agents that have no consciousness?

On question (1), I will look at debates in the fields of machine consciousness and machine ethics, in order to examine the range of possible positions that may be taken. I will suggest that there is a close relation between thinking of a being as having a conscious phenomenology, and adopting a range of ethical attitudes towards that being. I will also discuss an important debate between those who take a “social-relational” approach to phenomenological and ethical attributions, and those who take an “objectivist” approach. I will offer ways to resolve that debate. This will help provide guidance, I hope, to those who are developing the technologies for smarter ASAs, which may have stronger claims to be taken as conscious. On (2), I will suggest that, even for ASAs that are acknowledged not to be conscious, it is possible that there could be a range of ethical roles that they could come to occupy, in a way that would justify our talking of “artificial social agents” in a rich sense, one that would imply that they had both genuine ethico-social responsibilities and ethico-social entitlements.

The spread of ASAs – whether or not genuinely conscious, social or ethical – will impose heavy responsibilities upon technologists, and those they work with, to guide the social impacts of such agents in acceptable directions as they increasingly interoperate with us and with our lives. I will thus conclude by pointing to some issues of urgent social concern that are raised by the likely proliferation of ASAs in the coming years and decades.

Human Responsibility, Robot Mind: Conceptual Design Constraints for Social Robots

Next month I’m giving a keynote address at the 2nd Joint UAE Symposium on Social Robotics, entitled “Human Responsibility, Robot Mind: Conceptual Design Constraints for Social Robots”.

Abstract: Advances in social robot design will be achieved hand-in-hand with increased clarity in our concepts of responsibility, folk psychology, and (machine) consciousness. 1) Since robots will not, in the near future, be responsible agents, avoiding some moral hazards (e.g., that of abdication of responsibility) will require designs that assist in tracing complex lines of responsibility backwards from outcomes, through the robot, and back to the appropriate humans and/or social institutions. 2) An intuitive understanding by human users of the (possibly quite alien) perceptual and cognitive predicament of robots will be essential to improving cooperation with them, as well as assisting diagnosis, robot training, and the design process itself. Synthetic phenomenology is the attempt to combine robot designs with assistive technologies such as virtual reality to make the experience-like states of cognitive robots understandable to users. 3) Making robot minds more like our own would be facilitated by finding designs that make robots susceptible to the same (mis-)conceptions concerning perception, experience and consciousness that humans have. Making a conscious-like robot will thus involve making robots that find it natural to believe that their inner states are private and non-material. In all three cases, improving robot-human interaction will be as much about an increased understanding of human responsibility, folk psychology and consciousness as it will be about technological breakthroughs in robot hardware and architecture.
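As a concrete contrast for point 1), here is a small, hypothetical sketch of what tracing responsibility backwards might look like in code: no obligations are attributed to the robot itself; instead, every behaviour carries provenance linking it to the humans and institutions who stand behind it. All names and fields are invented for illustration.

```python
# Hypothetical sketch: record who stands behind each robot behaviour so
# that responsibility can be traced backwards from outcomes to people
# and institutions, rather than attributed to the robot.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List


@dataclass
class Provenance:
    designer: str    # who implemented the behaviour
    approver: str    # which institution signed it off
    rationale: str   # why it was deemed acceptable


@dataclass
class DecisionRecord:
    action: str
    provenance: Provenance
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))


log: List[DecisionRecord] = []


def perform(action: str, provenance: Provenance) -> None:
    """Execute an action while logging who is answerable for it."""
    log.append(DecisionRecord(action, provenance))
    # ... actuate the robot here ...


perform("approach patient",
        Provenance(designer="J. Bloggs",                # invented name
                   approver="Hospital Robotics Board",  # invented body
                   rationale="cleared under care protocol"))
print(log[0].provenance.approver)
```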

Robot crime?

Yesterday I was interviewed by Radio Sputnik to comment on some recent claims about robot/AI crime. They have made a transcription and recording of the interview available here.

Some highlights:

“We need to be worried about criminals using AI in three different ways. One is to evade detection: if one has some artificial intelligence technology, one might be able, for instance, to engage in certain kinds of financial crimes in a way that can be randomized in a particular way that avoids standard methods of crime detection. Or criminals could use computer programs to notice patterns in security systems that a human couldn’t notice, and find weaknesses that a human would find very hard to identify… And then finally a more common use might be of AI to just crack passwords and codes, and access accounts and data that people previously could leave secure. So these are just three examples of how AI would be a serious threat to security of people in general if it were in the hands of the wrong people.”

“I think it would be a tragedy if we let fear of remote possibilities of AI systems committing crimes, if that fear stopped us from investigating artificial intelligence as a positive technology that might help us solve some of the problems our world is facing now. I’m an optimist in that I think that AI as a technology can very well be used for good, and if we’re careful, can be of much more benefit than disadvantage.”

“I think that as long as legislators and law enforcement agencies understand what the possibilities are, and understand that the threat is humans committing crimes with AI rather than robots committing crimes, then I think we can head off any potential worries with the appropriate kinds of regulations and updating of our laws.”
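The first highlight above turns on a simple mechanism: a detector tuned to a fixed pattern can be defeated once the same activity is randomized. Here is a toy Python illustration of that point – the threshold, band, and amounts are invented, and no real detection method is being described.

```python
# Toy illustration: a naive detector flags transfers sitting in a fixed
# band just under a reporting threshold; randomized amounts slip through.
import random

THRESHOLD = 10_000   # invented reporting threshold
BAND = 500           # flag transfers within this band below the threshold


def flag(amount: float) -> bool:
    """Naive rule: flag amounts just under the threshold."""
    return THRESHOLD - BAND <= amount < THRESHOLD


fixed_pattern = [9_900.0] * 10              # all fall in the band: flagged
randomized = [random.uniform(1_000, 9_400)  # all below the band: missed
              for _ in range(10)]

print(sum(map(flag, fixed_pattern)), "of 10 fixed-pattern transfers flagged")
print(sum(map(flag, randomized)), "of 10 randomized transfers flagged")
```

This is the sense in which randomization “avoids standard methods of crime detection” in the quote above.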