What philosophy can offer AI


My piece on “What philosophy can offer AI” is now up at AI firm LoopMe’s blog. This is part of the run-up to my speaking at their event, “Artificial Intelligence: The Future of Us”, to be held at the British Museum next month.  Here’s what I wrote (the final gag is shamelessly stolen from Peter Sagal of NPR’s “Wait Wait… Don’t Tell Me!”):

Despite what you may have heard, philosophy at its best consists in rigorous thinking about important issues, and careful examination of the concepts we use to think about those issues.  Sometimes this analysis proceeds by considering potential exotic instances of an otherwise everyday concept, and asking whether the concept does indeed apply to that novel case and, if so, how.

In this respect, artificial intelligence (AI), of the actual or sci-fi/thought experiment variety, has given philosophers a lot to chew on, providing a wide range of detailed, fascinating instances to challenge some of our most dearly-held concepts:  not just “intelligence”, “mind”, and “knowledge”, but also “responsibility”, “emotion”, “consciousness”, and, ultimately, “human”.

But it’s a two-way street: Philosophy has a lot to offer AI too.

Examining these concepts allows the philosopher to notice inconsistency, inadequacy or incoherence in our thinking about mind, and the undesirable effects this can have on AI design.  Once the conceptual malady is diagnosed, the philosopher and AI designer can work together (they are sometimes the same person) to recommend revisions to our thinking and designs that remove the conceptual roadblocks to better performance.

This symbiosis is most clearly observed in the case of artificial general intelligence (AGI), the attempt to produce an artificial agent that is, like humans, capable of behaving intelligently in an unbounded number of domains and contexts.

The clearest example of the need for philosophical expertise when doing AGI concerns machine consciousness and machine ethics: at what point does an AGI’s claim to mentality become real enough that we incur moral obligations toward it?  Is it at the same time as, or before, it reaches the point at which we would say it is conscious?  Is it when it has moral obligations of its own? And is it moral for us to get to the point where we have moral obligations to machines?  Should that even be AI’s goal?

These are important questions, and it is good that they are being discussed more, even though the possibilities they consider aren’t really on the horizon.  

Less well-known is that philosophical sub-disciplines other than ethics have been, and will continue to be, crucial to progress in AGI.  

It’s not just the philosophers who say so; quantum computation pioneer and Oxford physicist David Deutsch agrees: “The whole problem of developing AGIs is a matter of philosophy, not computer science or neurophysiology”.  That “not” might overstate things a bit (I would soften it to “not only”), but it’s clear that Deutsch’s vision of philosophy’s role in AI will not be limited to being a kind of ethics panel that assesses the “real work” done by others.

What’s more, philosophy’s relevance doesn’t just kick in once one starts working on AGI — which substantially increases its market share.  It’s an understatement to say that AGI is a subset of AI in general.  Nearly all of the AI at work now, providing relevant search results, classifying images, driving cars, and so on, is not domain-independent AGI – it is technological, practical AI that exploits the particularities of its domain, and relies on human support to augment its non-autonomy to produce a working system. But philosophical expertise can be of use even to this more practical, less Hollywood, kind of AI design.

The clearest point of connection is machine ethics.  

But here the questions are not the hypothetical ones about whether a (far-future) AI has moral obligations to us, or we to it.  Rather, the questions will be more like these: 

– How should we trace our ethical obligations to each other when the causal link between us and some undesirable outcome for another is mediated by a highly complex information process that involves machine learning and apparently autonomous decision-making?  

– Do our previous ethical intuitions about, e.g., product liability apply without modification, or do we need some new concepts to handle these novel levels of complexity and (at least apparent) technological autonomy?

As with AGI, the connection between philosophy and technological, practical AI is not limited to ethics.  For example, different philosophical conceptions of what it is to be intelligent suggest different kinds of designs for driverless cars.  Is intelligence a disembodied ability to process symbols?  Is it merely an ability to behave appropriately?  Or is it, at least in part, a skill or capacity to anticipate how one’s embodied sensations will be transformed by the actions one takes?  

Contemporary, sometimes technical, philosophical theories of cognition are a good place to start when considering which way of conceptualising the problem and solution will be best for a given AI system, especially in the case of design that has to be truly groundbreaking to be competitive.

Of course, it’s not all sweetness and light. It is true that there has been some philosophical work that has obfuscated the issues around AI, thereby unnecessarily hindering progress. So, to my recommendation that philosophy play a key role in artificial intelligence, terms and conditions apply.  But don’t they always?

Russell, Russell: A Metaphysics emerges from the undergrowth

The final E-Intentionality seminar of 2016 will be led by Simon Bowes this Thursday, December 15th at 13:00 in Freeman G22.
Russell, Russell:  A Metaphysics emerges from the undergrowth.
I will be examining recent arguments reviving Russellian monism, so-called neo-Russellian physicalism.  I will be asking whether it is viable both as a kind of physicalism and as a way of accounting for experiential properties in a material world.

The existence of qualia does not entail dualism

Our next E-Intentionality seminar is this Thursday, December 1st, at 13:00 in Freeman G22.  This will be a dry run of a talk I’ll be giving as part of EUCognition2016, entitled “Architectural Requirements for Consciousness”.  You can read the abstract here, along with an extended clarificatory discussion prompted by David Booth’s comments.

Move Over, Truth: An Instrumental Metaphysics

The next E-Intentionality seminar will be 13:00-13:50 Thursday, November 10th 2016 in room Freeman G22 (not G31 like all the EI/CogPhi meetings so far this term).  Simon McGregor will present his research:

Move Over, Truth: An Instrumental Metaphysics
Most analytic philosophers are wedded to a realist metaphysics in which what matters is the truth or otherwise of philosophical assertions. I will argue for an utterly different metaphysical mode of thought, which focuses on reflective cognitive practice in the context of one’s lived concerns. This perspective understands rationality in terms of experienced instrumental justification, even for cognitive practices such as forming truth judgements.

Prediction Machines

This Thursday, November 3rd, from 13:00-13:50 in Freeman G31, Simon McGregor will lead the CogPhi discussion of Chapter 1 (“Prediction Machines”) of Andy Clark’s Surfing Uncertainty: Prediction, Action and the Embodied Mind.  Have your comments and questions ready beforehand.  In fact, feel free to post them in advance, here, as comments on this post.

How we represent emotion in the face: processing the content of information from and to the environment

The next E-Intentionality meeting will be Thursday, October 27th  in Freeman G31. Please note that David has offered to take preliminary comments in advance via email (D.A.Booth@sussex.ac.uk).

[Image: ASCII emoticon faces, credit: fonzu.deviantart.com]

David Booth – ‘How we represent emotion in the face: processing the content of information from and to the environment’


This talk briefly presents an experiment which illustrates the scientific theory that embodied and acculturated systems (such as you and me) represent information in the environment by causally processing its content in mathematically determinate ways. Three colleagues stated the strengths of emotions they saw in sets of keyboard characters that (badly) mimicked mobile parts of the human face. The mechanisms by which they rated the emoticons are given by formulae constructed deductively from discrimination distances between the presented diagrams and the memory of their features on occasions when a face has signalled the named emotional reaction to a situation. Five of the basic formulae of this theory of a mind have structures corresponding to classic conscious psychological subfunctions such as perceiving, describing, reasoning, intending and ’emoting’, and one to unconscious mental processing. Each formula specifies the interactions among mental events which, on the evidence, generated my colleagues’ answers to my questions. The calculations are totally dependent on prior and current material and societal affordances but say nothing about the development or ongoing execution of the neural or linguistic mechanisms involved, any more than do attractors, connectionist statistics or list programs. Functional accounts calculate merely amounts of information or other probabilistic quantities. Distinguishing among contents is equivalent to causal processing. Hence the plurality of mental, cultural and material systems in persons may accommodate a causation monism.

The Two Dimensions of Representation: Function vs. Content

Dear all,

The first E-Int seminar of the term will be this Thursday, October 13th, 13:00-13:50 in the Freeman Centre, room FRE-G22.  Jonny Lee, our E-Int seminar organiser, will speak.

Jonny Lee: The Two Dimensions of Representation: Function vs. Content 

The concept of mental representation features heavily in scientific explanations of cognition. At the same time, there is no consensus amongst philosophers about which things (if any) are mental representations, and in particular how we can account (if we can) for the semantic properties paradigmatic of ordinary representation. In this paper I will discuss a recent development in the literature which distinguishes between the ‘function’ and ‘content’ dimensions of mental representation, in an attempt to cast light on what a complete account of mental representation must achieve. I will argue that though the distinction is useful, chiefly because it shows where past philosophical projects have erred, there remain three “worries” about prising apart function and content. In elucidating these worries, I point to the possibility of an alternative to a traditional, essentialist theory of content, one which says that content is part and parcel of how we treat mechanisms as functioning as representations.