The limits of the limits of computation

I’m very pleased to have been invited to participate in an exciting international workshop being held at Sussex later this month.


My very brief contribution has the title: “The limits of the limits of computation”.  Here’s the abstract:

The limits of the limits of computation

The most salient results concerning the limits of computation have proceeded by establishing the limits of formal systems. These findings are less damning than they may appear.  First, they only tell against real-world (or “physical”) computation (i.e., what it is that my laptop does that makes it so useful) to the extent to which real-world computation is best viewed as being formal, yet (at least some forms of) real-world computation are as much embodied, world-involving, dynamics-exploiting phenomena as recent cognitive science takes mind to be.  Second, the incomputability results state that formal methods are in some sense outstripped by extra-formal reality, while themselves being formal methods attempting to capture extra-formal reality (real-world computation) — an ironic pronouncement that would make Epimenides blush.    One ignores these limits on the incomputability results at one’s peril; a good example is the diagonal argument against artificial intelligence.

 

 


Sussex Robot Opera on Sky News

On Monday Kit Bradshaw of Sky News Swipe interviewed some of us involved with the recent robot operas at the University of Sussex (see http://www.sussex.ac.uk/broadcast/read/40568).

The report is due to air this weekend on Sky News at the following times (UK):

  • Friday: 2130
  • Saturday: 1030, 1430 & 1630
  • Sunday: 1130, 1430 & 1630

(Subject to cancellation if there are breaking news stories or other big events).

You will also be able to view it from Friday evening on the Swipe YouTube channel: http://www.youtube.com/playlist?list=PLG8IrydigQfckEQNNdxoPiQ0GtAJLP5_5

I hope one particular bit didn’t get left on the cutting room floor.  When Kit asked the robot “How did it feel to sing in a robot opera?”, the robot replied “Hmm.  Well I’m sure it was wonderful for the other performers and the audience, but not for me.  I’m not conscious, so nothing feels like anything for me.  In fact, I don’t even understand the words I am saying right now!”

Descartes, Conceivability, and Mirroring Arguments

Yesterday, as part of a panel on machine consciousness, I saw Jack Copeland deliver a razor-sharp talk based on a paper that he published last year with Douglas Campbell and Zhuo-Ran Deng, entitled “The Inconceivable Popularity of Conceivability Arguments”.  To give you an idea of what the paper is about, I reproduce the abstract here:

Famous examples of conceivability arguments include: (i) Descartes’ argument for mind-body dualism; (ii) Kripke’s ‘modal argument’ against psychophysical identity theory; (iii) Chalmers’ ‘zombie argument’ against materialism; and (iv) modal versions of the ontological argument for theism. In this paper we show that for any such conceivability argument, C, there is a corresponding ‘mirror argument’, M. M is deductively valid and has a conclusion that contradicts C’s conclusion. Hence a proponent of C—henceforth, a ‘conceivabilist’—can be warranted in holding that C’s premises are conjointly true only if she can find fault with one of M’s premises. But M’s premises—of which there are just two—are modeled on a pair of C’s premises. The same reasoning that supports the latter supports the former. For this reason a conceivabilist can repudiate M’s premises only on pain of severely undermining C’s premises. We conclude on this basis that all conceivability arguments, including each of (i)—(iv), are fallacious.

It’s a great paper, but I’m not sure the mirroring move against Descartes works, at least not as it is expressed in the paper.  Although the text I quote below is from the paper, I composed this objection while listening to (and recalling) the talk.  I apologise if the paper itself, which I have not read carefully to the end, blocks or anticipates the move I make here (please let me know if it does).

First, the paper defines CEP as the claim that conceivability entails possibility:
(CEP) ⬦cψ → ⬦ψ
Descartes then is quoted:
“I know that everything which I clearly and distinctly understand is capable of being created by God so as to correspond exactly with my understanding of it. Hence the fact that I can clearly and distinctly understand one thing apart from another is enough to make me certain that the two things are distinct, since they are capable of being separated, at least by God… [O]n the one hand I have a clear and distinct idea of myself, in so far as I am simply a thinking, non-extended thing; and on the other hand I have a distinct idea of a body, in so far as this is simply an extended, non-thinking thing. And accordingly, it is certain that I am really distinct from my body, and can exist without it.”(Cottingham, Stoothoff, & Murdoch, 1629, p. 54)
Then it is claimed that Descartes uses CEP:
“Setting φ, ψ, and μ as follows:
φ: Mind=Body
ψ: Mind≠Body
μ: □(Mind≠Body),
we get:
D1. ⬦c(Mind≠Body)
D2. ⬦c(Mind≠Body) → ⬦(Mind≠Body)
D3. ⬦(Mind≠Body) → □(Mind≠Body)
D4. ⬦(Mind=Body) → ¬□(Mind≠Body)
____________________
D5. ¬(Mind=Body)
Here Descartes uses a (theistic) version of CEP to infer that it is possible for mind and body to be distinct. From this he infers they are actually distinct. Why does he think he can make this move from mere possibility to actuality? Presumably because he is assuming D3, or something like it, as a tacit premise (Robinson, 2012).”
But that reasoning is questionable.  Surely none of D1-4 are equivalent to CEP.  So what the authors must mean is that one of D1-4 (i.e., D2) relies on CEP.  But in the quoted passage, Descartes does not appeal to (or argue for) CEP.  Rather, he argues for a more restricted claim, one that more closely resembles D2 in structure, in that it infers specifically the possibility of distinctness from the conceivability of distinctness.  That is, rather than CEP, it seems to me that Descartes argues for, and uses, CDEPD (Conceivability of Distinctness Entails Possibility of Distinctness):
(CDEPD) ⬦c(φ≠ψ) → ⬦(φ≠ψ)
It is prima facie possible to hold CDEPD without holding CEP.  Further, there are arguments (such as the one Descartes puts forward in the quoted passage) that support CDEPD but do not prima facie support CEP.  That is, one can accept what Descartes says in the quoted passage, but, it seems, reject any attempt at an analogous (dare I say “mirroring”?) argument:
Hence the fact that I can clearly and distinctly understand one thing as being the same as another is enough to make me certain that the two things are the same, since they are capable of being ?, at least by God.
What could we put in place of the question mark to yield a proposition that is true?  To yield a proposition that is implied by what Descartes says in Meditations or elsewhere?  To yield a proposition that is required for Descartes’s conceivability argument to proceed?
There seems to be an asymmetry here between the conceivability of difference and the conceivability of sameness that allows Descartes to get by with CDEPD, rather than having to employ CEP.
Why does this matter?  It matters because the mirroring argument the authors make against Descartes effectively says:  “Descartes helped himself to CEP, so we can do the same.  Only instead of applying CEP to a proposition about the conceivability of difference, we will apply it to a proposition about the conceivability of sameness.”  If what I have said above is right, then it is possible that Descartes was not helping himself to CEP in general, but to a more restricted claim about propositions involving distinctness, CDEPD.  Thus a mirroring argument would not be able to help itself to CEP, and thus would not be able to derive the required, contrary conclusion that way.  Further, there is no way to derive such a contrary conclusion using CDEPD instead.  So the mirroring argument against Descartes’s conceivability argument fails.
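To make the structural point explicit, here is the contrast as I see it (my own schematic restatement in the paper's notation; the "Mirror premise" label is mine, standing for the CEP instance that, as I read it, the mirror argument needs):

% My schematic restatement; \Diamond_c abbreviates "it is conceivable that",
% \Diamond abbreviates "it is possible that".
\begin{align*}
\text{(CEP)}\quad    & \Diamond_c\,\psi \rightarrow \Diamond\psi
    && \text{for arbitrary } \psi \\
\text{(CDEPD)}\quad  & \Diamond_c(x \neq y) \rightarrow \Diamond(x \neq y)
    && \text{restricted to claims of distinctness} \\[4pt]
\text{(D2)}\quad     & \Diamond_c(\mathit{Mind} \neq \mathit{Body}) \rightarrow \Diamond(\mathit{Mind} \neq \mathit{Body})
    && \text{an instance of CEP and of CDEPD} \\
\text{(Mirror premise)}\quad & \Diamond_c(\mathit{Mind} = \mathit{Body}) \rightarrow \Diamond(\mathit{Mind} = \mathit{Body})
    && \text{an instance of CEP but not of CDEPD}
\end{align*}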
I have not yet checked to see if there are similar moves that Kripke and Chalmers can make to “break the mirror” in their respective cases.

(Another) joint paper with Aaron Sloman published

The proceedings of EUCognition 2016 in Vienna, co-edited by myself, Vincent Müller, Yulia Sandamirskaya and Markus Vincze, have just been published online (free access).

In it is a joint paper by Aaron Sloman and myself, entitled “Architectural Requirements for Consciousness”.  Here is the abstract:

This paper develops, in sections I-III, the virtual machine architecture approach to explaining certain features of consciousness first proposed in [1] and elaborated in [2], in which particular qualitative aspects of experiences (qualia) are proposed to be particular kinds of properties of components of virtual machine states of a cognitive architecture. Specifically, they are those properties of components of virtual machine states of an agent that make that agent prone to believe the kinds of things that are typically believed to be true of qualia (e.g., that they are ineffable, immediate, intrinsic, and private). Section IV aims to make it intelligible how the requirements identified in sections II and III could be realised in a grounded, sensorimotor, cognitive robotic architecture.
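Purely to fix ideas, here is a toy sketch of my own (far simpler than, and not to be confused with, the architecture in the paper): an agent whose introspective subsystem can reidentify and compare components of its virtual machine state, but cannot articulate their content, will sincerely report those components in the way qualia are typically reported: as immediate, ineffable, and private.

# Toy illustration only (my construction, not the architecture in the paper):
# the introspective/reporting subsystem can compare components of the agent's
# virtual machine state, but has no access to their content, so the agent is
# disposed to describe them as immediate, ineffable, and private.

class Agent:
    def __init__(self):
        # Components of virtual machine state produced by (simulated) perception.
        # Their values are available to perception and action, but not to the
        # reporting subsystem below.
        self._state = {"seeing_red": 0.83, "seeing_green": 0.12}

    def introspect(self, a, b):
        # Introspection can reidentify and compare components...
        return "same" if self._state[a] == self._state[b] else "different"

    def report(self, component):
        # ...but the reporting subsystem cannot articulate their content,
        # so the agent's sincere report has the familiar qualia-like form.
        return (f"My experience of {component} is immediately present to me, "
                "but I cannot say what it is like, and only I have access to it.")

agent = Agent()
print(agent.introspect("seeing_red", "seeing_green"))  # different
print(agent.report("seeing_red"))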

Roles for Morphology in Computation

 

[Figure: from Pfeifer, Iida and Lungarella (2014)]

Tomorrow I’m giving an invited talk in Gothenburg at the Symposium on Morphological Computing and Cognitive Agency, as part of the International Society for Information Studies Summit 2017 (entitled — deep breath — “DIGITALISATION FOR A SUSTAINABLE SOCIETY: Embodied, Embedded, Networked, Empowered through Information, Computation & Cognition!”).  Here’s my title and abstract:

Roles for Morphology in Computation

The morphological aspects of a system are the shape, geometry, placement and compliance properties of that system. On the rather permissive construal of computation as transformations of information, a correspondingly permissive notion of morphological computation can be defined: cases of information transformation performed by the morphological aspects of a system. This raises the question of what morphological computation might look like under different, less inclusive accounts of computation, such as the view that computation is essentially semantic. I investigate the possibilities for morphological computation under a particular version of the semantic view. First, I make a distinction between two kinds of role a given aspect might play in computations that a system performs: foreground role and background role. The foreground role of a computational system includes such things as rules, state, algorithm, program, bits, data, etc. But these can only function as foreground by virtue of other, background aspects of the same system: the aspects that enable the foreground to be brought forth, made stable/reidentifiable, and to have semantically coherent causal effect. I propose that this foreground/background distinction cross-cuts the morphological/non-morphological distinction. Specifically, morphological aspects of a system may play either role.
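To make the permissive construal concrete, here is a toy, hypothetical example of my own (not taken from the talk): a passive spring-damper whose behaviour is fixed entirely by its morphological parameters (mass, stiffness, damping) transforms a noisy input force into a smoothed displacement. The low-pass filtering is performed by the compliance of the body itself, not by an explicit algorithm.

import numpy as np

# Hypothetical sketch (mine, not from the talk): the morphology of a passive
# spring-damper -- its mass m, stiffness k, and damping c -- smooths a noisy
# input force into a low-pass-filtered displacement, an information
# transformation that would otherwise need an explicit filtering algorithm.

def spring_damper_response(force, m=1.0, k=20.0, c=8.0, dt=0.01):
    # Integrate m*x'' + c*x' + k*x = f(t) with semi-implicit Euler.
    x, v = 0.0, 0.0
    xs = []
    for f in force:
        a = (f - c * v - k * x) / m  # acceleration from the current state
        v += a * dt
        x += v * dt
        xs.append(x)
    return np.array(xs)

rng = np.random.default_rng(0)
t = np.arange(0.0, 5.0, 0.01)
noisy_step = 10.0 * (t > 1.0) + rng.normal(0.0, 2.0, size=t.shape)  # noisy step input
smoothed = spring_damper_response(noisy_step)
print(round(float(smoothed[-1]), 2))  # settles near 10/k = 0.5; the noise is largely filtered out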

The Symposium will be chaired by Rob Lowe and Gordana Dodig Crnkovic, and the other speakers include Christian Balkenius, Lorenzo Magnani, Yulia Sandamirskaya, Jordi Vallverdú, and John Spencer (and maybe Tom Ziemke and Marcin Schroeder?).

I’m also giving an invited talk the next day (Tuesday) as part of a plenary panel entitled: “What Would It Take For A Machine To Have Non-Reductive Consciousness?”  My talk is entitled “Computation and the Fate of Qualia”.  The other speakers are Piotr Bołtuć (moderator), Jack Copeland, Igor Aleksander, and Keith W. Miller.

Should be a fantastic few days — a shame I can’t stay for the full meeting, but I have to be back at Sussex in time for the Robot Opera Mini-Symposium on Thursday!

 

AI: The Future of Us — a fireside chat with Ron Chrisley and Stephen Upstone

As mentioned in a previous post, I was invited to speak at “AI: The Future of Us” at the British Museum earlier this month.  Instead of a lecture, it was decided that I should have a “fireside chat” with Stephen Upstone, the CEO and founder of LoopMe, the AI company hosting the event.  We had fun, and got some good feedback, so we’re looking into doing something similar this Autumn — watch this space.

Our discussion was structured around the following questions/topics being posed to me:

  • My background (what I do, what is Cognitive Science, how did I start working in AI, etc.)
  • What is the definition of consciousness and at what point can we say an AI machine is conscious?
  • What are the ethical implications for AI? Will we ever reach the point at which we will need to treat AI like a human? And how do we define AI’s responsibility?
  • Where do you see AI 30 years from now? How do you think AI will revolutionise our lives? (looking at things like smart homes, healthcare, finance, saving the environment, etc.)
  • So on your view, how far away are we from creating a super intelligence that will be better than humans in every aspect from mental to physical and emotional abilities? (Will we reach a point when the line between human and machine becomes blurred?)
  • So is AI not a threat? As Stephen Hawking recently said in the Guardian “AI will be either the best or worst thing for humanity”. What do you think? Is AI something we don’t need to be worried about?

You can listen to our fireside chat here.

What philosophy can offer AI


My piece on “What philosophy can offer AI” is now up at AI firm LoopMe’s blog. This is part of the run-up to my speaking at their event, “Artificial Intelligence: The Future of Us”, to be held at the British Museum next month.  Here’s what I wrote (the final gag is shamelessly stolen from Peter Sagal of NPR’s “Wait Wait… Don’t Tell Me!”):

Despite what you may have heard, philosophy at its best consists in rigorous thinking about important issues, and careful examination of the concepts we use to think about those issues.  Sometimes this analysis proceeds by considering potentially exotic instances of an otherwise everyday concept, and asking whether the concept does indeed apply to that novel case — and if so, how.

In this respect, artificial intelligence (AI), of the actual or sci-fi/thought experiment variety, has given philosophers a lot to chew on, providing a wide range of detailed, fascinating instances to challenge some of our most dearly-held concepts:  not just “intelligence”, “mind”, and “knowledge”, but also “responsibility”, “emotion”, “consciousness”, and, ultimately, “human”.

But it’s a two-way street: Philosophy has a lot to offer AI too.

Examining these concepts allows the philosopher to notice inconsistency, inadequacy or incoherence in our thinking about mind, and the undesirable effects this can have on AI design.  Once the conceptual malady is diagnosed, the philosopher and AI designer can work together (they are sometimes the same person) to recommend revisions to our thinking and designs that remove the conceptual roadblocks to better performance.

This symbiosis is most clearly observed in the case of artificial general intelligence (AGI), the attempt to produce an artificial agent that is, like humans, capable of behaving intelligently in an unbounded number of domains and contexts.

The clearest example of the need for philosophical expertise in AGI concerns machine consciousness and machine ethics: at what point does an AGI’s claim to mentality become real enough that we incur moral obligations toward it?  Is it at the same time as, or before, it reaches the point at which we would say it is conscious?  Or when it has moral obligations of its own? And is it moral for us to get to the point where we have moral obligations to machines?  Should that even be AI’s goal?

These are important questions, and it is good that they are being discussed more even though the possibilities they consider aren’t really on the horizon.  

Less well-known is that philosophical sub-disciplines other than ethics have been, and will continue to be, crucial to progress in AGI.  

It’s not just the philosophers that say so; quantum computation pioneer and Oxford physicist David Deutsch agrees: “The whole problem of developing AGIs is a matter of philosophy, not computer science or neurophysiology”.  That “not” might overstate things a bit (I would soften it to “not only”), but it’s clear that Deutsch’s vision of philosophy’s role in AI will not be limited to being a kind of ethics panel that assesses the “real work” done by others.

What’s more, philosophy’s relevance doesn’t just kick in once one starts working on AGI — which substantially increases its market share.  It’s an understatement to say that AGI is a subset of AI in general.  Nearly all of the AI that is at work now providing relevant search results, classifying images, driving cars, and so on is not domain-independent AGI – it is technological, practical AI that exploits the particularities of its domain, and relies on human support to compensate for its lack of autonomy, so as to produce a working system. But philosophical expertise can be of use even to this more practical, less Hollywood, kind of AI design.

The clearest point of connection is machine ethics.  

But here the questions are not the hypothetical ones about whether a (far-future) AI has moral obligations to us, or we to it.  Rather the questions will be more like this: 

– How should we trace our ethical obligations to each other when the causal link between us and some undesirable outcome for another is mediated by a highly complex information process that involves machine learning and apparently autonomous decision-making?

– Do our previous ethical intuitions about, e.g., product liability apply without modification, or do we need some new concepts to handle these novel levels of complexity and (at least apparent) technological autonomy?

As with AGI, the connection between philosophy and technological, practical AI is not limited to ethics.  For example, different philosophical conceptions of what it is to be intelligent suggest different kinds of designs for driverless cars.  Is intelligence a disembodied ability to process symbols?  Is it merely an ability to behave appropriately?  Or is it, at least in part, a skill or capacity to anticipate how one’s embodied sensations will be transformed by the actions one takes?  

Contemporary, sometimes technical, philosophical theories of cognition are a good place to start when considering which way of conceptualising the problem and solution will be best for a given AI system, especially in the case of design that has to be truly ground-breaking to be competitive.

Of course, it’s not all sweetness and light. It is true that there has been some philosophical work that has obfuscated the issues around AI, thereby unnecessarily hindering progress. So, to my recommendation that philosophy play a key role in artificial intelligence, terms and conditions apply.  But don’t they always?