Negotiating Computation & Enaction: Rules of Engagement

In July, PAICSers Adrian Downey and Jonny Lee (with Joe Dewhurst) organised an international conference at Sussex entitled “Computation & Representation in Cognitive Science: Enactivism, Ecological Psychology & Cybernetics”.  It was an excellent meeting, with boundary-pushing talks from:

  • Anthony Chemero (University of Cincinnati)
  • Ron Chrisley (University of Sussex)
  • Sabrina Golonka (Leeds Beckett University)
  • Alistair Isaac (University of Edinburgh)
  • Adam Linson (University of Dundee)
  • Marcin Miłkowski (Polish Academy of Sciences)
  • Nico Orlandi (UC Santa Cruz)
  • Mario Villalobos (Universidad de Tarapacá)

I never wrote an abstract for my talk, so below I include the handout instead.  But it probably only makes sense for those who heard the talk (and even then…).

Negotiating Computation & Enaction: Rules of Engagement

Ron Chrisley, University of Sussex, July 10th 2017

Disputes concerning computation and enaction typically centre on (rejection of) computationalism with respect to cognition:  the claim that cognition is computation.

I: Rules of engagement

I propose the following “rules of engagement” (numbered items, below) as a way to prevent talking past one another, arguing against straw men, etc. and instead make progress on the issues that matter. No doubt they fail to be theory neutral, strictly speaking.  And there are of course other principles that are at least as important to adhere to.  But it’s a start.

The proposals can be seen as specific instances of the general admonition: clarify the computationalist claim at issue (cf. the end of Grush’s review of Ramsey’s Representation Reconsidered).

But even more so: there are many different versions of the claim that “cognition is computation”, depending on the choices of the following variables:

Computationalism schema: {Some|All} cognition {in humans|in others} is {actually|necessarily|possibly} {wholly|partly} {computation|best explained in terms of computation}

So we need to clarify A) the relation between them, as well as B) cognition and C) computation themselves.
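Just to make the combinatorics vivid, here is a throwaway sketch (Python; the labels are mine, purely illustrative) that enumerates the schema’s instantiations:

    from itertools import product

    # Choice points of the computationalism schema above.
    quantifiers = ["Some", "All"]
    subjects    = ["in humans", "in others"]
    modalities  = ["actually", "necessarily", "possibly"]
    extents     = ["wholly", "partly"]
    relations   = ["", "best explained in terms of "]  # plain "is" vs the epistemic reading

    claims = [
        f"{q} cognition {s} is {m} {e} {r}computation"
        for q, s, m, e, r in product(quantifiers, subjects, modalities, extents, relations)
    ]

    print(len(claims))  # 48 distinct theses already
    print(claims[-1])   # "All cognition in others is possibly partly best explained in terms of computation"

Even this crude count shows why disputants who have silently fixed different values of these variables will talk past one another.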

A: Relation?

The instantiation of this that I consider most interesting/likely to be true is given below, but for now, start with this:

  1. Computationalism is best understood as an epistemological claim

That is, I plump for the “is best explained in terms of” rather than the simple “is” version of the schema above, as it is the version with the most direct methodological, experimental, even theoretical import (we’ll see the upshot of this later).

B: Cognition?

Given my background in AI and Philosophy (rather than Psychology or Neuroscience), I am interested in cognition in its most general sense: cognition as it could be, not (just) cognition as it is.  Thus:

  2. Avoid inferences such as: “X is involved in (all cases of) human cognition, therefore X is essential to cognition”

Compare flight and feathers.

An interesting case is when computation is not the best account of some particular kind of cognition taken on its own, yet only computation can account for both that kind and some other kind of cognition together.

C: Computation?

  3. We should demand no higher rigour/objectivity for computational concepts than we do for other, accepted scientific concepts
  4. Avoid the reductionism Catch-22 (mainly for representation)

That is, some authors seem to insist both that:

  • A notion of representation must be reducible to something else (preferably non-intentional) to be naturalistically acceptable
  • Any notion of representation that is reducible to something else can be replaced by that something else, and therefore is surplus to requirements.
  5. Be aware that there are distinct construals (inter-theoretic) and varieties (intra-theoretic) of computation
  • Construals: computation as running some program, operation of a Turing machine, mechanistic account (Miłkowski, Piccinini), semantic account, etc.
  • Varieties: digital, analogue, quantum, connectionist, etc.

E.g., if you are arguing against a view in which either everything is computational, or there is only one kind of computation, you are unlikely to persuade a nuanced computationalist.

  6. There are computers, and computers are best explained computationally

Does a given account have the implication that computers are not computational? Or that there are no computers?  Prima facie, these should count as points against that account.

And even so, what is to stop someone from claiming that whatever concepts provide the best account of the things we intuitively (but incorrectly?) called computers also play a role in the best account of mind? Cf. transparent computationalism (Chrisley).

On the other hand, do not be so inclusive as to trivialise the notion of computation: pan-computationalism?

  • Actually, that’s not the problem with pan-computationalism
  • The real problem is that it has difficulty explaining what is specifically computational about computers (beyond universality)

Computationalism (C): the best explanation (of at least some cases) of cognition will involve (among others) computational concepts (that is, concepts that play a key role in our best explanation of (some) computers, qua computers).

  7. So even if computation is only necessary for the explanation of some kinds of cognition, C is still vindicated.

II: Examples

Consider two kinds of arguments against computationalism: those that rely on enactivism, and those that do not (but on which some enactivists rely).

These summaries are probably unfair, and likely violate corresponding “rules of engagement” concerning enactivism, etc.

Enactivist arguments

  • Enactivism 1: Self-maintenance
    • E: The operation of a computer/robot, no matter how behaviourally/functionally similar to a human, would not be sufficient for cognition, because it is not alive/self-maintenant
      • E has the same skeptical problems as Zombie/Swampman positions
      • Note: similarity to a human is misleading – it may be that, given its materiality, a computer would have to be very behaviourally/functionally different from a human in order to cognise (2)
    • Why believe E? Because:
      a) meaning for computers is not intrinsic; and
      b) possession of intrinsic meaning is necessary for (explaining?) cognition
    • Why believe b)?
      • Much of our meaning is imposed from outside/others?
      • Even if one accepts b), it only follows that computation can’t explain *all* cognition? (7)
      • Even if human cognition has an “intrinsic meaning” core, does that rule out the possibility of cognition that does not? (2)
    • Why believe a)?
      • Reason 1: because any meaning in computers is imposed from the outside
        • But why should that preclude the system’s having, partly by virtue of its computational properties, (either distinct or coinciding) intrinsic meaning in addition? (5)
        • Might living systems be examples of such?
      • Reason 2: because:
        c) intrinsic meaning (only) comes from being alive; and
        d) computers are not alive
    • Why believe d)? (given behavioural/functional identity with living systems)
      • Because computers are made of non-living material: they don’t, e.g., metabolise
        • By definition? Could they? (5)
        • But so are cells: the parts of cells don’t metabolise
      • Because computers are not hierarchies of living systems
        • So they have meaning, just not human meaning? (2, 7)
        • What if we hierarchically arrange them? Why would their computational nature cease to be explanatorily relevant?
    • Why believe c)?
      • Enactivist meaning is imposed by the theorist’s valuing of self-maintenance (3)
    • In any event: E is not enough to defeat C — need to show computation is not necessary
  • Enactivism 2a: Basic Minds (Hutto and Myin)
    • Computation involves representation
      • Contentious (e.g., mechanistic account) (5)
    • Representation requires content
    • There is a level of basic mind that does not involve content
    • Therefore computationalism is false
      • At most, this only shows that there must be a non-computational explanation of the most basic forms of cognition (7)
      • But actually one can have a non-contentful, yet intentional, notion of computation:  robust correspondence (5)
  • Enactivism 2b: Content (Hutto and Myin)
    • Computation involves representation
      • As above
    • Representation requires content
    • There is a “hard problem of content” (HPC): no naturalistically acceptable theory of sub-personal content
    • Therefore computationalism is false
      • Impatience: even if true, lack of a theory is not decisive (3)
      • Some argue the “hard” problem has already been solved, long ago (Miłkowski)
  • Enactivism(?) 3: Fictionalism (after Downey, with apologies)
    • Computation involves representation
    • Although representation is useful for explaining cognition, utility doesn’t imply metaphysical truth
    • Further, considerations like the HPC argue against representation, and therefore against computation
    • So computationalism is (metaphysically) false
      • Relies on the Enactivism 2b argument – see rejoinders above
      • Only tells against computationalism construed as a metaphysical claim — not a problem for C
      • Yet C, being epistemic/pragmatic, is the one that matters (1)

Non-enactivist arguments against computationalism

(to which enactivists sometimes appeal):

  • Chinese room (Searle)
    • Many objections
    • Against Strong AI (metaphysics), not against C (1)
    • Against sufficiency, not against C (7)
    • Enactivist irony: emphasising the fundamental differences between living/conscious systems and those that are not (such as Macs and PCs) allows one to question the idea that a human (Searle) can perform the same computations as such machines
      • Proposal: Human has different, intentionality-sensitive counterfactuals that “dead” silicon does not (5)
      • Upshot: Non-living nature of machines is a feature, not a bug — it immunises machine computation against the Chinese room critique
  • Diagonalisation (e.g., Penrose; the diagonal construction at issue is sketched after this list)
    • G: Human mathematicians are not computing a (knowably sound) Turing-computable function when ascertaining the truth of certain mathematical propositions
    • Many objections
    • But even if the argument works, it does not impact on C, since X does not need to compute the same functions as Y for X to explain Y
    • That is, C is epistemic, not metaphysical (1)
  • Non-objectivity of computation (Putnam, Searle)
    • Anything can be seen as implementing any Turing machine (a toy version of the mapping trick is sketched after this list)
    • On some accounts, not all TM instantiations are computers (need intentionality) (5)
    • But the argument fails, even for TMs: counterfactuals
    • More recently, some (Maudlin, Bishop) have argued that to be explanatory, computational states can only supervene on occurrent physical states, not counterfactual ones.
      • But some occurrent states are individuated by their counterfactual properties
      • Counterfactual properties supervene on occurrent states
      • Also: seems to imply computational concepts are not suitable for explaining computers (6)
  • Phenomenology (Dreyfus)
    • E.g., experts don’t engage in search algorithms when, e.g., playing chess – they just see the right moves directly.
    • Makes unfounded assumptions about what it feels like to be this or that kind of physical (computational) system
    • E.g., a (sub-personal) computation that involves millions of steps may realise an experience with no such complex structure, even skilled coping
    • But even if Dreyfus is right, this does not refute C (7)
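Two of the arguments above lean on constructions that are easy to exhibit concretely. First, the diagonal construction behind G: a minimal sketch (mine, not Penrose’s formulation), in runnable Python:

    # Suppose `enum` were an effective enumeration of ALL total computable
    # functions from int to int (the claim diagonalisation refutes).
    def diagonalise(enum):
        """Return a function guaranteed to differ from enum(n) on input n."""
        def d(n):
            return enum(n)(n) + 1  # differ from the n-th function at its own index
        return d

    # If d itself were in the enumeration, say d == enum(k), then
    # d(k) == enum(k)(k) + 1 == d(k) + 1: contradiction.  So no effective
    # enumeration captures all total computable functions.

    # Toy demo: pretend the enumeration only contains constant functions.
    fake_enum = lambda n: (lambda x: n)
    d = diagonalise(fake_enum)
    print(d(5))  # 6: differs from the 5th "enumerated" function at input 5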
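Second, a toy version of the Putnam/Searle mapping trick (illustrative only; the state names are hypothetical), which makes it easy to see why the counterfactual reply bites:

    # Given ANY run of pairwise-distinct physical states and ANY desired
    # automaton run, an "implementation" mapping can be defined post hoc.
    physical_run  = ["rock-state-1", "rock-state-2", "rock-state-3"]  # observed
    automaton_run = ["q0", "q1", "q0"]                                # desired

    # Read each physical state as the automaton state at the same position:
    interpretation = dict(zip(physical_run, automaton_run))

    for p in physical_run:
        print(p, "->", interpretation[p])

The mapping makes this one run come out “correct”, but it supports no counterfactuals: had the rock been in some other state, nothing fixes which automaton state that would have counted as.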

The limits of the limits of computation

I’m very pleased to have been invited to participate in an exciting international workshop being held at Sussex later this month.


My very brief contribution has the title: “The limits of the limits of computation”.  Here’s the abstract:

The limits of the limits of computation

The most salient results concerning the limits of computation have proceeded by establishing the limits of formal systems. These findings are less damning than they may appear.  First, they only tell against real-world (or “physical”) computation (i.e., what it is that my laptop does that makes it so useful) to the extent to which real-world computation is best viewed as being formal, yet (at least some forms of) real-world computation are as much embodied, world-involving, dynamics-exploiting phenomena as recent cognitive science takes mind to be.  Second, the incomputability results state that formal methods are in some sense outstripped by extra-formal reality, while themselves being formal methods attempting to capture extra-formal reality (real-world computation) — an ironic pronouncement that would make Epimenides blush.  One ignores these limits on the incomputability results at one’s peril; a good example is the diagonal argument against artificial intelligence.


Sussex Robot Opera on Sky News

On Monday Kit Bradshaw of Sky News Swipe interviewed some of us involved with the recent robot operas at the University of Sussex (see http://www.sussex.ac.uk/broadcast/read/40568).

The report is due to air this weekend on Sky News at the following times (UK):

  • Friday: 2130
  • Saturday: 1030, 1430 & 1630
  • Sunday: 1130, 1430 & 1630

(Subject to cancellation if there are breaking news stories or other big events).

You will also be able to view it from Friday evening on the Swipe YouTube channel: http://www.youtube.com/playlist?list=PLG8IrydigQfckEQNNdxoPiQ0GtAJLP5_5

I hope one particular bit didn’t get left on the cutting room floor.  When Kit asked the robot “How did it feel to sing in a robot opera?”, the robot replied “Hmm.  Well I’m sure it was wonderful for the other performers and the audience, but not for me.  I’m not conscious, so nothing feels like anything for me.  In fact, I don’t even understand the words I am saying right now!”

Descartes, Conceivability, and Mirroring Arguments

Yesterday, as part of a panel on machine consciousness, I saw Jack Copeland deliver a razor-sharp talk based on a paper that he published last year with Douglas Campbell and Zhuo-Ran Deng, entitled “The Inconceivable Popularity of Conceivability Arguments”.  To give you an idea of what the paper is about, I reproduce the abstract here:

Famous examples of conceivability arguments include: (i) Descartes’ argument for mind-body dualism; (ii) Kripke’s ‘modal argument’ against psychophysical identity theory; (iii) Chalmers’ ‘zombie argument’ against materialism; and (iv) modal versions of the ontological argument for theism. In this paper we show that for any such conceivability argument, C, there is a corresponding ‘mirror argument’, M. M is deductively valid and has a conclusion that contradicts C’s conclusion. Hence a proponent of C—henceforth, a ‘conceivabilist’—can be warranted in holding that C’s premises are conjointly true only if she can find fault with one of M’s premises. But M’s premises—of which there are just two—are modeled on a pair of C’s premises. The same reasoning that supports the latter supports the former. For this reason a conceivabilist can repudiate M’s premises only on pain of severely undermining C’s premises. We conclude on this basis that all conceivability arguments, including each of (i)—(iv), are fallacious.

It’s a great paper, but I’m not sure the mirroring move against Descartes works, at least not as it is expressed in the paper.  Although the text I quote below is from the paper, I composed this objection while listening to (and recalling) the talk.  I apologise if the paper itself, which I have not read carefully to the end, blocks or anticipates the move I make here (please let me know if it does).

First, the paper defines CEP as the claim that conceivability entails possibility:
(CEP) ⬦cψ→⬦ψ
Descartes then is quoted:
“I know that everything which I clearly and distinctly understand is capable of being created by God so as to correspond exactly with my understanding of it. Hence the fact that I can clearly and distinctly understand one thing apart from another is enough to make me certain that the two things are distinct, since they are capable of being separated, at least by God… [O]n the one hand I have a clear and distinct idea of myself, in so far as I am simply a thinking, non-extended thing; and on the other hand I have a distinct idea of a body, in so far as this is simply an extended, non-thinking thing. And accordingly, it is certain that I am really distinct from my body, and can exist without it.”(Cottingham, Stoothoff, & Murdoch, 1629, p. 54)
Then it is claimed that Descartes uses CEP:
“Setting φ, ψ, and μ as follows:
φ: Mind=Body
ψ: Mind≠Body
μ: □(Mind≠Body),
we get:
D1. ⬦c(Mind≠Body)
D2. ⬦c(Mind≠Body) → ⬦(Mind≠Body)
D3. ⬦(Mind≠Body) → □(Mind≠Body)
D4. ⬦(Mind=Body) → ¬□(Mind≠Body)
____________________
D5. ¬(Mind=Body)
Here Descartes uses a (theistic) version of CEP to infer that it is possible for mind and body to be distinct. From this he infers they are actually distinct. Why does he think he can make this move from mere possibility to actuality? Presumably because he is assuming D3, or something like it, as a tacit premise (Robinson, 2012).”
But that reasoning is questionable.  Surely none of D1-4 are equivalent to CEP.  So what the authors must mean is that one of D1-4 (i.e., D2) relies on CEP.  But in the quoted passage, Descartes does not appeal to (or argue for) CEP.  Rather, he argues for a more restricted claim, one that more closely resembles D2 in structure, in that it infers specifically the possibility of distinctness from the conceivability of distinctness.  That is, rather than CEP, it seems to me that Descartes argues for, and uses, CDEPD (Conceivability of Distinctness Entails Possibility of Distinctness):
(CDEPD) ⬦c(φ≠ψ) → ⬦(φ≠ψ)
It is prima facie possible to hold CDEPD without holding CEP.  Further, there are arguments (such as the one Descartes puts forward in the quoted passage) that support CDEPD but do not prima facie support CEP.  That is, one can accept what Descartes says in the quoted passage, but, it seems, reject any attempt at an analogous (dare I say “mirroring”?) argument:
Hence the fact that I can clearly and distinctly understand one thing as being the same as another is enough to make me certain that the two things are the same, since they are capable of being ?, at least by God.
What could we put in place of the question mark to yield a proposition that is true?  To yield a proposition that is implied by what Descartes says in Meditations or elsewhere?  To yield a proposition that is required for Descartes’s conceivability argument to proceed?
There seems to be an asymmetry here between the conceivability of difference and the conceivability of sameness that allows Descartes to get by with CDEPD, rather than having to employ CEP.
Why does this matter?  It matters because the mirroring argument the authors make against Descartes effectively says:  “Descartes helped himself to CEP, so we can do the same.  Only instead of applying the CEP to a proposition about the conceivability of differences, we will apply it to a proposition about the conceivability of sameness.”  If what I have said above is right, then it is possible that Descartes was not helping himself to CEP in general, but to a more restricted claim about propositions involving distinctness, CDEPD.  Thus a mirroring argument would not be able to help itself to CEP, and thus would not be able to derive the required, contrary conclusion that way.  Further, there is no way to derive such a contrary conclusion using CDEPD instead.  So the mirroring argument against Descartes’s conceivability argument fails.
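To put the asymmetry in one view, here is a schematic summary (my own reconstruction, not the paper’s formulation; I label the mirror argument’s key premise “M2” for convenience):

    CEP:    ⬦cψ → ⬦ψ                          (for any proposition ψ)
    CDEPD:  ⬦c(φ≠ψ) → ⬦(φ≠ψ)                  (distinctness claims only)
    M2:     ⬦c(Mind=Body) → ⬦(Mind=Body)      (what the mirror argument needs)

M2 is an instance of CEP, but it is not an instance of CDEPD, since an identity claim is not a distinctness claim.  So if Descartes is committed only to CDEPD, the mirror argument’s key premise inherits no support from his text.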
I have not yet checked to see if there are similar moves that Kripke and Chalmers can make to “break the mirror” in their respective cases.

(Another) joint paper with Aaron Sloman published

The proceedings of EUCognition 2016 in Vienna, co-edited by myself, Vincent Müller, Yulia Sandamirskaya and Markus Vincze, have just been published online (free access):

In it is a joint paper by Aaron Sloman and myself, entitled “Architectural Requirements for Consciousness”.  Here is the abstract:

This paper develops, in sections I-III, the virtual machine architecture approach to explaining certain features of consciousness first proposed in [1] and elaborated in [2], in which particular qualitative aspects of experiences (qualia) are proposed to be particular kinds of properties of components of virtual machine states of a cognitive architecture. Specifically, they are those properties of components of virtual machine states of an agent that make that agent prone to believe the kinds of things that are typically believed to be true of qualia (e.g., that they are ineffable, immediate, intrinsic, and private). Section IV aims to make it intelligible how the requirements identified in sections II and III could be realised in a grounded, sensorimotor, cognitive robotic architecture.

Roles for Morphology in Computation


[Figure from Pfeifer, Iida and Lungarella (2014)]

Tomorrow I’m giving an invited talk in Gothenburg at the Symposium on Morphological Computing and Cognitive Agency, as part of the International Society for Information Studies Summit 2017 (entitled — deep breath — “DIGITALISATION FOR A SUSTAINABLE SOCIETY: Embodied, Embedded, Networked, Empowered through Information, Computation & Cognition!”).  Here’s my title and abstract:

Roles for Morphology in Computation

The morphological aspects of a system are the shape, geometry, placement and compliance properties of that system. On the rather permissive construal of computation as transformations of information, a correspondingly permissive notion of morphological computation can be defined: cases of information transformation performed by the morphological aspects of a system. This raises the question of what morphological computation might look like under different, less inclusive accounts of computation, such as the view that computation is essentially semantic. I investigate the possibilities for morphological computation under a particular version of the semantic view. First, I make a distinction between two kinds of role a given aspect might play in computations that a system performs: foreground role and background role. The foreground role of a computational system includes such things as rules, state, algorithm, program, bits, data, etc. But these can only function as foreground by virtue of other, background aspects of the same system: the aspects that enable the foreground to be brought forth, made stable/reidentifiable, and to have semantically coherent causal effect. I propose that this foreground/background distinction cross-cuts the morphological/non-morphological distinction. Specifically, morphological aspects of a system may play either role.

The Symposium will be chaired by Rob Lowe and Gordana Dodig Crnkovic, and the other speakers include Christian Balkenius, Lorenzo Magnani, Yulia Sandamirskaya, Jordi Vallverdú, and John Spencer (and maybe Tom Ziemke and Marcin Schroeder?).

I’m also giving an invited talk the next day (Tuesday) as part of a plenary panel entitled: “What Would It Take For A Machine To Have Non-Reductive Consciousness?”  My talk is entitled “Computation and the Fate of Qualia”.  The other speakers are Piotr Bołtuć (moderator), Jack Copeland, Igor Aleksander, and Keith W. Miller.

Should be a fantastic few days — a shame I can’t stay for the full meeting, but I have to be back at Sussex in time for the Robot Opera Mini-Symposium on Thursday!


AI: The Future of Us — a fireside chat with Ron Chrisley and Stephen Upstone

As mentioned in a previous post, I was invited to speak at “AI: The Future of Us” at the British Museum earlier this month.  Rather than give a lecture, it was decided that I should have a “fireside chat” with Stephen Upstone, the CEO and founder of LoopMe, the AI company hosting the event.  We had fun, and got some good feedback, so we’re looking into doing something similar this Autumn — watch this space.

Our discussion was structured around the following questions/topics being posed to me:

  • My background (what I do, what is Cognitive Science, how did I start working in AI, etc.)
  • What is the definition of consciousness and at what point can we say an AI machine is conscious?
  • What are the ethical implications for AI? Will we ever reach the point at which we will need to treat AI like a human? And how do we define AI’s responsibility?
  • Where do you see AI 30 years from now? How do you think AI will revolutionise our lives? (looking at things like smart homes, healthcare, finance, saving the environment, etc.)
  • So on your view, how far away are we from creating a super intelligence that will be better than humans in every aspect from mental to physical and emotional abilities? (Will we reach a point when the line between human and machine becomes blurred?)
  • So is AI not a threat? As Stephen Hawking recently said in the Guardian “AI will be either the best or worst thing for humanity”. What do you think? Is AI something we don’t need to be worried about?

You can listen to our fireside chat here.