I was interviewed by Verdict.co.uk recently about the ethics of AI in healthcare. One or two remarks of mine from that interview are included near the end of this piece that appeared last week:
My views on this are considerably more nuanced than these quotes suggest, so I am thinking of turning my extensive prep notes for the interview into a piece to be posted here and/or on a site like TheConversation.com. These thoughts are completely distinct from the ones included in the paper Steve Torrance and I wrote a few years back, “Modelling consciousness-dependent expertise in machine medical moral agents”.
A paper by myself and Aaron Sloman, “Functionalism, Revisionism, and Qualia” has just been published in the APA Newsletter on Philosophy and Computing. (The whole issue looks fantastic – I’m looking forward to reading all of it, especially the other papers in the “Mind Robotics” section, and most especially the papers by Jun Tani and Riccardo Manzotti). Our contribution is a kind of follow-up to our 2003 paper “Virtual Machines and Consciousness”. There’s no abstract, so let me just list here a few of the more controversial things we claim (and in some cases, even argue for!):
- Even if our concept of qualia is true of nothing, qualia might still exist (we’re looking at you, Dan Dennett!)
- If qualia exist, they are physical – or at least their existence alone would not imply the falsity of physicalism (lots of people we’re looking at here)
- We might not have qualia: The existence of qualia is an empirical matter.
- Even if we don’t have qualia, it might be possible to build a robot that does!
- The question of whether inverted qualia spectra are possible is, in a sense, incoherent.
If you get a chance to read it, I’d love to hear what you think.
Our next E-Intentionality seminar is this Thursday, December 1st, at 13:00 in Freeman G22. This will be a dry run of a talk I’ll be giving as part of EUCognition2016, entitled “Architectural Requirements for Consciousness”. You can read the abstract here, along with an extended clarificatory discussion prompted by David Booth’s comments.
Next Thursday, November 17th, at 13:00 I’ll be leading the E-Intentionality seminar in Freeman G22. I’ll be using this seminar as a dry run for the first part of my keynote lecture at the UAE Social Robotics meeting next week. It builds on work that I first presented at Tufts in 2014.
Since robots will not, in the near future, be responsible agents, avoiding some moral hazards (e.g., the abdication of responsibility) will require designs that assist in tracing complex lines of responsibility backwards from outcomes, through the robot, to the appropriate humans and/or social institutions. I look at one approach to ethically designing robots, that of designing ethical robots – robots that are given a set of rules intended to encode an ethical system, which the robot is to apply in generating its behaviour. I argue that this approach will in many cases obfuscate, rather than clarify, the lines of responsibility involved (resulting in “moral murk”), and can lead to ethically adverse situations. After giving an example of such a case, I offer an alternative approach to the ethical design of robots, one that does not presuppose that notions of obligation and permission apply to the robot in question, thereby avoiding the problems of moral murk and ethical adversity.