Functionalism, Revisionism, and Qualia

A paper I co-wrote with Aaron Sloman, “Functionalism, Revisionism, and Qualia”, has just been published in the APA Newsletter on Philosophy and Computing. (The whole issue looks fantastic – I’m looking forward to reading all of it, especially the other papers in the “Mind Robotics” section, and most especially the papers by Jun Tani and Riccardo Manzotti.) Our contribution is a kind of follow-up to our 2003 paper “Virtual Machines and Consciousness”. There’s no abstract, so let me just list here a few of the more controversial things we claim (and in some cases, even argue for!):

  • Even if our concept of qualia is true of nothing, qualia might still exist (we’re looking at you, Dan Dennett!)
  • If qualia exist, they are physical – or at least their existence alone would not imply the falsity of physicalism (lots of people we’re looking at here)
  • We might not have qualia: the existence of qualia is an empirical matter.
  • Even if we don’t have qualia, it might be possible to build a robot that does!
  • The question of whether inverted qualia spectra are possible is, in a sense, incoherent.

If you get a chance to read it, I’d love to hear what you think.

Ron

Ethically designing robots without designing ethical robots

Next Thursday, November 17th, at 13:00 I’ll be leading the E-Intentionality seminar in Freeman G22. I’ll be using this seminar as a dry run for the first part of my keynote lecture at the UAE Social Robotics meeting next week. It builds on work that I first presented at Tufts in 2014.

Abstract:

Since robots will not, in the near future, be responsible agents, avoiding some moral hazards (e.g., that of abdication of responsibility) will require designs that assist in tracing complex lines of responsibility backwards from outcomes, through the robot, and back to the appropriate humans and/or social institutions. I look at one approach to ethically designing robots, that of designing ethical robots – robots that are given a set of rules intended to encode an ethical system, which the robot is to apply in generating its behaviour. I argue that this approach will in many cases obfuscate, rather than clarify, the lines of responsibility involved (resulting in “moral murk”), and can lead to ethically adverse situations. After giving an example of such a case, I offer an alternative approach to the ethical design of robots, one that does not presuppose that notions of obligation and permission apply to the robot in question, thereby avoiding the problems of moral murk and ethical adversity.
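
To make the contrast concrete, here is a minimal sketch (my illustration only, not from the talk; the names and the logging scheme are all hypothetical) of a design that supports responsibility tracing rather than giving the robot an ethical rule set to apply: every behaviour-generating decision must cite the humans and/or institutions that authorised it, so the robot is never the terminus of the responsibility chain.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

@dataclass
class Authorisation:
    source: str      # e.g. "operator:nurse-on-duty" or "policy:care-home-guidelines"
    constraint: str  # the instruction or constraint that source supplied

@dataclass
class DecisionRecord:
    action: str
    authorisations: List[Authorisation]
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class ResponsibilityLog:
    """Audit trail linking each robot action back to responsible parties."""

    def __init__(self) -> None:
        self._records: List[DecisionRecord] = []

    def record(self, action: str, authorisations: List[Authorisation]) -> None:
        # Refuse to log (and so, in a fuller design, to perform) any action
        # that no human or institution stands behind.
        if not authorisations:
            raise ValueError(f"no responsible party recorded for {action!r}")
        self._records.append(DecisionRecord(action, authorisations))

    def trace(self, action: str) -> List[Authorisation]:
        """Trace an outcome backwards to the humans/institutions behind it."""
        return [a for r in self._records if r.action == action
                for a in r.authorisations]

log = ResponsibilityLog()
log.record("fetch-medication", [
    Authorisation("operator:nurse-on-duty", "approved fetch at 14:02"),
    Authorisation("policy:care-home-guidelines", "robots may fetch but not administer"),
])
print(log.trace("fetch-medication"))
```

The point of the sketch is architectural, not ethical: nothing here asks the robot to weigh obligations or permissions; it only keeps the lines of responsibility legible, so they can be traced back past the robot when an outcome needs accounting for.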

A Role for Introspection in Anthropic AI

Congratulations to Sam Freed, who yesterday passed his Ph.D. viva with minor corrections! The examiners were Mike Wheeler and Blay Whitby. Sam was co-supervised by Chris Thornton and me, and Steve Torrance was on his research committee.

A Role for Introspection in Anthropic AI

SUMMARY

The main thesis is that introspection is recommended for the development of anthropic AI.

Human-like AI, as distinct from rational AI, would suit robots for elderly care and other tasks that require interaction with naïve humans. “Anthropic AI” is a sub-type of human-like AI, aiming for the pre-cultured, universal intelligence that is available to healthy humans regardless of time and civilisation. This contrasts with the Western, modern, well-trained, adult intelligence that is often the focus of AI. Anthropic AI would pick up local cultures and habits, ignoring optimality. Introspection is recommended for the AI developer as a source of ideas for designing an artificial mind, in the context of technology rather than science. Existing notions of introspection are analysed, and the aspiration for “clean” or “good” introspection is exposed as a mirage. Nonetheless, introspection is shown to be a legitimate source of ideas for AI, using the distinction between the contexts of discovery and justification. Moreover, introspection is shown to be a positively plausible basis for ideas for AI: just as a teacher can use introspection to extract mental skills from themselves in order to transmit them to a student, an AI developer can use introspection to uncover the human skills that they want to transfer to a computer. Methods and pitfalls of this approach are detailed, including the common error of polluting one’s introspection with highly educated notions such as mathematical methods.

Examples are coded and run, showing promising learning behaviour. This is interpreted as a compromise between Classic AI and Dreyfus’s tradition. So far AI practitioners have largely ignored the subjective, while the phenomenologists have not written code – this thesis bridges that gap. One of the examples is shown to have Gadamerian characteristics, as recommended by Winograd and Flores (1986). This also serves as a response to Dreyfus’s more recent publications critiquing AI (Dreyfus, 2007, 2012).

Human Responsibility, Robot Mind: Conceptual Design Constraints for Social Robots

Next month I’m giving a keynote address at the 2nd Joint UAE Symposium on Social Robotics, entitled “Human Responsibility, Robot Mind: Conceptual Design Constraints for Social Robots”.

Abstract: Advances in social robot design will be achieved hand-in-hand with increased clarity in our concepts of responsibility, folk psychology, and (machine) consciousness.

1) Since robots will not, in the near future, be responsible agents, avoiding some moral hazards (e.g., that of abdication of responsibility) will require designs that assist in tracing complex lines of responsibility backwards from outcomes, through the robot, and back to the appropriate humans and/or social institutions.

2) An intuitive understanding by human users of the (possibly quite alien) perceptual and cognitive predicament of robots will be essential to improving cooperation with them, as well as assisting diagnosis, robot training, and the design process itself. Synthetic phenomenology is the attempt to combine robot designs with assistive technologies such as virtual reality to make the experience-like states of cognitive robots understandable to users.

3) Making robot minds more like our own would be facilitated by finding designs that make robots susceptible to the same (mis-)conceptions concerning perception, experience and consciousness that humans have. Making a conscious-like robot will thus involve making robots that find it natural to believe that their inner states are private and non-material.

In all three cases, improving robot-human interaction will be as much about an increased understanding of human responsibility, folk psychology and consciousness as it will be about technological breakthroughs in robot hardware and architecture.
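
As a purely illustrative gloss on point 2 (this sketch is mine, not from the talk; the representation and all names are hypothetical), synthetic phenomenology might start from something as simple as exporting the robot’s perceptual state in a structured form that a VR display or dashboard could then render for the user:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Percept:
    label: str         # what the robot currently takes itself to be detecting
    confidence: float  # strength of evidence, 0.0-1.0
    bearing_deg: float # direction relative to the robot's heading

def describe_predicament(percepts: List[Percept]) -> str:
    """Render the robot's perceptual state in human-readable terms,
    most confident percepts first."""
    lines = []
    for p in sorted(percepts, key=lambda p: -p.confidence):
        hedge = "clearly" if p.confidence > 0.8 else "possibly"
        lines.append(f"{hedge} a {p.label} at {p.bearing_deg:.0f} degrees "
                     f"(confidence {p.confidence:.2f})")
    return "; ".join(lines)

print(describe_predicament([
    Percept("person", 0.92, 10.0),
    Percept("doorway", 0.55, -40.0),
]))
# -> clearly a person at 10 degrees (confidence 0.92); possibly a doorway
#    at -40 degrees (confidence 0.55)
```

A real system would of course expose far richer, and stranger, state than labels and bearings; the point is only that making such state inspectable by users is an engineering problem distinct from making the robot behave well.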

Robot crime?

Yesterday I was interviewed by Radio Sputnik to comment on some recent claims about robot/AI crime. They have made a transcription and recording of the interview available here.

Some highlights:

“We need to be worried about criminals using AI in three different ways. One is to evade detection: if one has some artificial intelligence technology, one might be able, for instance, to engage in certain kinds of financial crimes in a way that is randomized so as to avoid standard methods of crime detection. Or criminals could use computer programs to notice patterns in security systems that a human couldn’t notice, and find weaknesses that a human would find very hard to identify… And then finally a more common use might be of AI to just crack passwords and codes, and access accounts and data that people could previously keep secure. So these are just three examples of how AI would be a serious threat to the security of people in general if it were in the hands of the wrong people.”

“I think it would be a tragedy if we let fear of the remote possibility of AI systems committing crimes stop us from investigating artificial intelligence as a positive technology that might help us solve some of the problems our world is facing now. I’m an optimist in that I think that AI as a technology can very well be used for good, and if we’re careful, can be of much more benefit than disadvantage.”

“I think that as long as legislators and law enforcement agencies understand what the possibilities are, and understand that the threat is humans committing crimes with AI rather than robots committing crimes, we can head off any potential worries with the appropriate kinds of regulations and updating of our laws.”