(Another) joint paper with Aaron Sloman published

The proceedings of EUCognition 2016 in Vienna, co-edited by myself, Vincent Müller, Yulia Sandamirskaya and Markus Vincze, have just been published online (free access):

In it is a joint paper by Aaron Sloman and myself, entitled “Architectural Requirements for Consciousness”.  Here is the abstract:

This paper develops, in sections I-III, the virtual machine architecture approach to explaining certain features of consciousness first proposed in [1] and elaborated in [2], in which particular qualitative aspects of experiences (qualia) are proposed to be particular kinds of properties of components of virtual machine states of a cognitive architecture. Specifically, they are those properties of components of virtual machine states of an agent that make that agent prone to believe the kinds of things that are typically believed to be true of qualia (e.g., that they are ineffable, immediate, intrinsic, and private). Section IV aims to make it intelligible how the requirements identified in sections II and III could be realised in a grounded, sensorimotor, cognitive robotic architecture.

AI: The Future of Us — a fireside chat with Ron Chrisley and Stephen Upstone

As mentioned in a previous post, I was invited to speak at “AI: The Future of Us” at the British Museum earlier this month.  Rather than give a lecture, it was decided that I should have a “fireside chat” with Stephen Upstone, the CEO and founder of LoopMe, the AI company hosting the event.  We had fun, and got some good feedback, so we’re looking into doing something similar this Autumn — watch this space.

Our discussion was structured around the following questions/topics being posed to me:

  • My background (what I do, what is Cognitive Science, how did I start working in AI, etc.)
  • What is the definition of consciousness and at what point can we say an AI machine is conscious?
  • What are the ethical implications for AI? Will we ever reach the point at which we will need to treat AI like a human? And how do we define AI’s responsibility?
  • Where do you see AI 30 years from now? How do you think AI will revolutionise our lives? (looking at things like smart homes, healthcare, finance, saving the environment, etc.)
  • So on your view, how far away are we from creating a super intelligence that will be better than humans in every aspect from mental to physical and emotional abilities? (Will we reach a point when the line between human and machine becomes blurred?)
  • So is AI not a threat? As Stephen Hawking recently said in the Guardian “AI will be either the best or worst thing for humanity”. What do you think? Is AI something we don’t need to be worried about?

You can listen to our fireside chat here.

What philosophy can offer AI


My piece on “What philosophy can offer AI” is now up at AI firm LoopMe’s blog. This is part of the run-up to my speaking at their event, “Artificial Intelligence: The Future of Us”, to be held at the British Museum next month.  Here’s what I wrote (the final gag is shamelessly stolen from Peter Sagal of NPR’s “Wait Wait… Don’t Tell Me!”):

Despite what you may have heard, philosophy at its best consists in rigorous thinking about important issues, and careful examination of the concepts we use to think about those issues.  Sometimes this analysis proceeds by considering possible but exotic instances of an otherwise everyday concept, and asking whether the concept does indeed apply to such novel cases — and if so, how.

In this respect, artificial intelligence (AI), of the actual or sci-fi/thought experiment variety, has given philosophers a lot to chew on, providing a wide range of detailed, fascinating instances to challenge some of our most dearly-held concepts:  not just “intelligence”, “mind”, and “knowledge”, but also “responsibility”, “emotion”, “consciousness”, and, ultimately, “human”.

But it’s a two-way street: Philosophy has a lot to offer AI too.

Examining these concepts allows the philosopher to notice inconsistency, inadequacy or incoherence in our thinking about mind, and the undesirable effects this can have on AI design.  Once the conceptual malady is diagnosed, the philosopher and AI designer can work together (they are sometimes the same person) to recommend revisions to our thinking and designs that remove the conceptual roadblocks to better performance.

This symbiosis is most clearly observed in the case of artificial general intelligence (AGI), the attempt to produce an artificial agent that is, like humans, capable of behaving intelligently in an unbounded number of domains and contexts.

The clearest example of the need for philosophical expertise in AGI work concerns machine consciousness and machine ethics: at what point does an AGI’s claim to mentality become real enough that we incur moral obligations toward it?  Is it at the same time as, or before, it reaches the point at which we would say it is conscious?  Or when it has moral obligations of its own?  And is it moral for us to get to the point where we have moral obligations to machines?  Should that even be AI’s goal?

These are important questions, and it is good that they are being discussed more even though the possibilities they consider aren’t really on the horizon.  

Less well-known is that philosophical sub-disciplines other than ethics have been, and will continue to be, crucial to progress in AGI.  

It’s not just philosophers who say so; quantum computation pioneer and Oxford physicist David Deutsch agrees: “The whole problem of developing AGIs is a matter of philosophy, not computer science or neurophysiology”.  That “not” might overstate things a bit (I would soften it to “not only”), but it’s clear that Deutsch’s vision of philosophy’s role in AI is not limited to being a kind of ethics panel that assesses the “real work” done by others.

What’s more, philosophy’s relevance doesn’t just kick in once one starts working on AGI — which substantially increases its market share.  It’s an understatement to say that AGI is a subset of AI in general: it is a small fraction of it.  Nearly all of the AI that is at work now providing relevant search results, classifying images, driving cars, and so on is not domain-independent AGI – it is technological, practical AI that exploits the particularities of its domain and relies on human support to make up for its lack of autonomy in producing a working system. But philosophical expertise can be of use even to this more practical, less Hollywood, kind of AI design.

The clearest point of connection is machine ethics.  

But here the questions are not the hypothetical ones about whether a (far-future) AI has moral obligations to us, or we to it.  Rather, the questions will be more like these:

– How should we trace our ethical obligations to each other when the causal link between us and some undesirable outcome for another is mediated by a highly complex information process that involves machine learning and apparently autonomous decision-making?

– Do our previous ethical intuitions about, e.g., product liability apply without modification, or do we need some new concepts to handle these novel levels of complexity and (at least apparent) technological autonomy?

As with AGI, the connection between philosophy and technological, practical AI is not limited to ethics.  For example, different philosophical conceptions of what it is to be intelligent suggest different kinds of designs for driverless cars.  Is intelligence a disembodied ability to process symbols?  Is it merely an ability to behave appropriately?  Or is it, at least in part, a skill or capacity to anticipate how one’s embodied sensations will be transformed by the actions one takes?  
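
To make the contrast concrete, here is a minimal, purely illustrative sketch (my own, not from the original post; all class and method names are hypothetical) of how two of these conceptions might shape a controller’s interface: one treats intelligence as rule application over symbolic facts, the other as anticipating how sensory input will change under candidate actions.

```python
# Illustrative sketch only: two hypothetical controller designs reflecting
# different conceptions of intelligence. Names are invented for this example
# and are not taken from any real driverless-car system.

from dataclasses import dataclass
from typing import Callable, List, Optional, Set


@dataclass
class Action:
    steering: float  # radians, positive = left
    braking: float   # 0.0 (none) to 1.0 (full)


class SymbolicController:
    """Intelligence as disembodied symbol processing: facts in, rules applied."""

    def __init__(self, rules: List[Callable[[Set[str]], Optional[Action]]]):
        self.rules = rules

    def decide(self, facts: Set[str]) -> Action:
        # Apply the first rule whose conditions match the current symbolic facts.
        for rule in self.rules:
            action = rule(facts)
            if action is not None:
                return action
        return Action(steering=0.0, braking=0.0)


class SensorimotorController:
    """Intelligence as anticipating how one's sensations will be transformed by action."""

    def __init__(self,
                 forward_model: Callable[[List[float], Action], List[float]],
                 cost: Callable[[List[float]], float]):
        self.forward_model = forward_model  # predicts the next sensory state
        self.cost = cost                    # how undesirable a predicted state is

    def decide(self, sensed: List[float], candidates: List[Action]) -> Action:
        # Choose the action whose predicted sensory consequences score best.
        return min(candidates,
                   key=lambda a: self.cost(self.forward_model(sensed, a)))
```

The point of the sketch is only that the two conceptions pull the design in different directions: the first needs a pipeline that turns sensor data into symbolic facts, while the second needs a forward model of how the car’s own actions transform what it senses.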

Contemporary, sometimes technical, philosophical theories of cognition are a good place to start when considering which way of conceptualising the problem and solution will be best for a given AI system, especially in the case of a design that has to be truly ground-breaking to be competitive.

Of course, it’s not all sweetness and light. It is true that there has been some philosophical work that has obfuscated the issues around AI, thereby unnecessarily hindering progress. So, to my recommendation that philosophy play a key role in artificial intelligence, terms and conditions apply.  But don’t they always?

The Ethics of AI and Healthcare

I was interviewed by Verdict.co.uk recently about the ethics of AI in healthcare.  One or two remarks of mine from that interview are included near the end of this piece that appeared last week:

http://www.verdict.co.uk/the-ai-impact-healthcare-industry-is-changing/

My views on this are considerably more nuanced than these quotes suggest, so I am thinking of turning my extensive prep notes for the interview into a piece to be posted here and/or on a site like TheConversation.com.  These thoughts are completely distinct from the ones included in the paper Steve Torrance and I wrote a few years back, “Modelling consciousness-dependent expertise in machine medical moral agents”.

Functionalism, Revisionism, and Qualia

A paper by myself and Aaron Sloman, “Functionalism, Revisionism, and Qualia”, has just been published in the APA Newsletter on Philosophy and Computing. (The whole issue looks fantastic – I’m looking forward to reading all of it, especially the other papers in the “Mind Robotics” section, and most especially the papers by Jun Tani and Riccardo Manzotti.) Our contribution is a kind of follow-up to our 2003 paper “Virtual Machines and Consciousness”. There’s no abstract, so let me just list here a few of the more controversial things we claim (and in some cases, even argue for!):

  • Even if our concept of qualia is true of nothing, qualia might still exist (we’re looking at you, Dan Dennett!)
  • If qualia exist, they are physical – or at least their existence alone would not imply the falsity of physicalism (lots of people we’re looking at here)
  • We might not have qualia: The existence of qualia is an empirical matter.
  • Even if we don’t have qualia, it might be possible to build a robot that does!
  • The question of whether inverted qualia spectra are possible is, in a sense, incoherent.

If you get a chance to read it, I’d love to hear what you think.

Ron

Ethically designing robots without designing ethical robots

Next Thursday, November 17th, at 13:00 I’ll be leading the E-Intentionality seminar in Freeman G22. I’ll be using this seminar as a dry run for the first part of my keynote lecture at the UAE Social Robotics meeting next week. It builds on work that I first presented at Tufts in 2014.

Abstract:

Since robots will not, in the near future, be responsible agents, avoiding some moral hazards (e.g., that of abdication of responsibility) will require designs that assist in tracing complex lines of responsibility backwards from outcomes, through the robot, and back to the appropriate humans and/or social institutions. I look at one approach to ethically designing robots, that of designing ethical robots – robots that are given a set of rules that are intended to encode an ethical system, and which are to be applied by the robot in the generation of its behaviour. I argue that this approach will in many cases obfuscate, rather than clarify, the lines of responsibility involved (resulting in “moral murk”), and can lead to ethically adverse situations. After giving an example of such cases, I offer an alternative approach to ethical design of robots, one that does not presuppose that notions of obligation and permission apply to the robot in question, thereby avoiding the problems of moral murk and ethical adversity.
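
For what it’s worth, the contrast between the two design approaches in the abstract can be gestured at in code. The following is a minimal, hypothetical sketch of my own (nothing here is taken from the talk, and all names are invented): the first class builds an encoded rule set into the robot’s own action selection, while the second keeps ethical judgement with humans and instead records the provenance of each decision so that responsibility can be traced backwards from outcomes to the relevant people and institutions.

```python
# Purely illustrative sketch of two design stances; all names are invented.

from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class EthicalRobot:
    """'Ethical robot' approach: an encoded rule set is applied by the robot itself."""
    rules: List[Callable[[str], bool]]  # each rule returns True if the action is permitted

    def permitted(self, candidate_action: str) -> bool:
        # The robot applies the ethical rules in generating its behaviour --
        # the approach the abstract argues tends to produce "moral murk".
        return all(rule(candidate_action) for rule in self.rules)


@dataclass
class ResponsibilityTracingRobot:
    """Alternative approach: no ethical rules 'in' the robot; each decision
    carries a record linking it back to the responsible humans and institutions."""
    audit_log: List[Dict[str, str]] = field(default_factory=list)

    def perform(self, candidate_action: str, authorised_by: str, policy_ref: str) -> None:
        # Record who authorised what, under which human-approved policy, so that
        # lines of responsibility can later be traced backwards from outcomes.
        self.audit_log.append({
            "action": candidate_action,
            "authorised_by": authorised_by,
            "policy_ref": policy_ref,
        })
```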

A Role for Introspection in Anthropic AI

Congratulations to Sam Freed, who yesterday passed his Ph.D. viva with minor corrections!  The examiners were Mike Wheeler and Blay Whitby.  Sam was co-supervised by myself and Chris Thornton, and Steve Torrance was on his research committee.

A Role for Introspection in Anthropic AI

SUMMARY

The main thesis is that introspection is recommended for the development of anthropic AI.

Human-like AI, distinct from rational AI, would suit robots for care of the elderly and for other tasks that require interaction with naïve humans. “Anthropic AI” is a sub-type of human-like AI, aiming for the pre-cultured, universal intelligence that is available to healthy humans regardless of time and civilisation. This is contrasted with the western, modern, well-trained and adult intelligence that is often the focus of AI. Anthropic AI would pick up local cultures and habits, ignoring optimality. Introspection is recommended for the AI developer, as a source of ideas for designing an artificial mind, in the context of technology rather than science. Existing notions of introspection are analysed, and the aspiration for “clean” or “good” introspection is exposed as a mirage. Nonetheless, introspection is shown to be a legitimate source of ideas for AI, using considerations of the contexts of discovery vs. justification. Moreover, introspection is shown to be a positively plausible basis for ideas in AI: just as a teacher uses introspection to extract mental skills from themselves in order to transmit them to a student, an AI developer can use introspection to uncover the human skills that they want to transfer to a computer. Methods and pitfalls of this approach are detailed, including the common error of polluting one’s introspection with highly-educated notions such as mathematical methods.

Examples are coded and run, showing promising learning behaviour. This is interpreted as a compromise between Classic AI and Dreyfus’s tradition. So far, AI practitioners have largely ignored the subjective, while the phenomenologists have not written code – this thesis bridges that gap. One of the examples is shown to have Gadamerian characteristics, as recommended by Winograd and Flores (1986). This also serves as a response to Dreyfus’s more recent publications critiquing AI (Dreyfus, 2007, 2012).