What philosophy can offer AI


My piece on “What philosophy can offer AI” is now up at AI firm LoopMe’s blog. This is part of the run-up to my speaking at their event, “Artificial Intelligence: The Future of Us”, to be held at the British Museum next month.  Here’s what I wrote (the final gag is shamelessly stolen from Peter Sagal of NPR’s “Wait Wait… Don’t Tell Me!”):

Despite what you may have heard, philosophy at its best consists in rigorous thinking about important issues, and careful examination of the concepts we use to think about those issues.  Sometimes this analysis proceeds by considering possible exotic instances of an otherwise everyday concept, and asking whether the concept does indeed apply to that novel case — and if so, how.

In this respect, artificial intelligence (AI), of the actual or sci-fi/thought experiment variety, has given philosophers a lot to chew on, providing a wide range of detailed, fascinating instances to challenge some of our most dearly-held concepts:  not just “intelligence”, “mind”, and “knowledge”, but also “responsibility”, “emotion”, “consciousness”, and, ultimately, “human”.

But it’s a two-way street: Philosophy has a lot to offer AI too.

Examining these concepts allows the philosopher to notice inconsistency, inadequacy or incoherence in our thinking about mind, and the undesirable effects this can have on AI design.  Once the conceptual malady is diagnosed, the philosopher and AI designer can work together (they are sometimes the same person) to recommend revisions to our thinking and designs that remove the conceptual roadblocks to better performance.

This symbiosis is most clearly observed in the case of artificial general intelligence (AGI), the attempt to produce an artificial agent that is, like humans, capable of behaving intelligently in an unbounded number of domains and contexts.

The clearest example of the need for philosophical expertise in AGI work concerns machine consciousness and machine ethics: at what point does an AGI’s claim to mentality become real enough that we incur moral obligations toward it?  Is it at the same time as, or before, it reaches the point at which we would say it is conscious?  Is it when it has moral obligations of its own?  And is it moral for us to get to the point where we have moral obligations to machines?  Should that even be AI’s goal?

These are important questions, and it is good that they are being discussed more even though the possibilities they consider aren’t really on the horizon.  

Less well-known is that philosophical sub-disciplines other than ethics have been, and will continue to be, crucial to progress in AGI.  

It’s not just the philosophers who say so; quantum computation pioneer and Oxford physicist David Deutsch agrees: “The whole problem of developing AGIs is a matter of philosophy, not computer science or neurophysiology”.  That “not” might overstate things a bit (I would soften it to “not only”), but it’s clear that Deutsch’s vision of philosophy’s role in AI is not limited to being a kind of ethics panel that assesses the “real work” done by others.

What’s more, philosophy’s relevance doesn’t just kick in once one starts working on AGI — which substantially increases its market share.  To say that AGI is only a small subset of AI in general would be an understatement.  Nearly all of the AI that is at work now providing relevant search results, classifying images, driving cars, and so on is not domain-independent AGI – it is technological, practical AI that exploits the particularities of its domain, and relies on human support to compensate for its lack of autonomy in producing a working system. But philosophical expertise can be of use even to this more practical, less Hollywood, kind of AI design.

The clearest point of connection is machine ethics.  

But here the questions are not the hypothetical ones about whether a (far-future) AI has moral obligations to us, or we to it.  Rather the questions will be more like this: 

– How should we trace our ethical obligations to each other when the causal link between us and some undesirable outcome for another is mediated by a highly complex information process that involves machine learning and apparently autonomous decision-making?

– Do our previous ethical intuitions about, e.g., product liability apply without modification, or do we need some new concepts to handle these novel levels of complexity and (at least apparent) technological autonomy?

As with AGI, the connection between philosophy and technological, practical AI is not limited to ethics.  For example, different philosophical conceptions of what it is to be intelligent suggest different kinds of designs for driverless cars.  Is intelligence a disembodied ability to process symbols?  Is it merely an ability to behave appropriately?  Or is it, at least in part, a skill or capacity to anticipate how one’s embodied sensations will be transformed by the actions one takes?  

Contemporary, sometimes technical, philosophical theories of cognition are a good place to start when considering which way of conceptualising the problem and solution will be best for a given AI system, especially in the case of a design that has to be truly groundbreaking to be competitive.

Of course, it’s not all sweetness and light. It is true that there has been some philosophical work that has obfuscated the issues around AI, thereby unnecessarily hindering progress. So, to my recommendation that philosophy play a key role in artificial intelligence, terms and conditions apply.  But don’t they always?

Guessing Games and The Power of Prediction

The CogPhi reading group resumes next week.  CogPhi offers the chance to read through and discuss recent literature in the Philosophy of Artificial Intelligence and Cognitive Science.  Each week a different member of the group leads the others through the chosen reading for that week. This term we’ll be working through Andy Clark’s new book on predictive processing, Surfing Uncertainty: Prediction, Action and the Embodied Mind.

CogPhi meets fortnightly, sharing the same time slot and room as E-Intentionality, which meets fortnightly in the alternate weeks. Although CogPhi announcements will be made on the E-Int mailing list, attendance at one seminar series is not required for attendance at the other.  CogPhi announcements will also be made here.

Next week, October 20th, from 13:00-13:50 in Freeman G31, Jonny Lee will lead the discussion of the Introduction (“Guessing Games”) and Chapter 1 (“Prediction Machines”).  Have your comments and questions ready beforehand.  In fact, feel free to post them in advance, here, as comments on this post.

EDIT:  Jonny sent out the following message yesterday, the 19th:

It’s been brought to my attention that covering both the introduction and chapter 1 might be too much material for one meeting. As such, let’s say we’ll just stick to the introduction. If you’ve already read chapter 1, apologies, but you’ll be ahead of the game. On the other hand, if the amount of reading was putting you off, you’ve now only got 10 pages to get through!

 

The Mereological Constraint

 

Image credit: brainworldmagazine.com

E-Intentionality, February 26th 2016, Pevensey 2A11, 12:00-12:50

Ron Chrisley: The Mereological Constraint

I will discuss what I call the mereological constraint, which can be traced back at least as far as Putnam’s writings in the 1960s, and is the idea, roughly, that a mind cannot have another mind as a proper constituent.  I show that the implications (benefits?) of such a constraint, if true, would be far-ranging, allowing one to finesse the Chinese room and Chinese nation arguments against computationalism, reject certain notions of extended mind, reject most group minds, make a ruling on the modality of sensory substitution, etc.  But is the mereological conjecture true?  I will look at some possible arguments for the conjecture, including one that appeals to the fact that rationality must be grounded in the non-rational, and one that attempts to derive the constraint from a comparable one concerning the individuation of computational states.  I will also consider an objection to the conjecture which argues that it would confer on us a priori knowledge of facts that are, intuitively, empirical.

Audio (28.5 MB, .mp3)