Negotiating Computation & Enaction: Rules of Engagement

In July, PAICSers Adrian Downey and Jonny Lee (with Joe Dewhurst) organised an international conference at Sussex entitled “Computation & Representation in Cognitive Science: Enactivism, Ecological Psychology & Cybernetics”.  It was an excellent meeting, with boundary-pushing talks from:

  • Anthony Chemero (University of Cincinnati)
  • Ron Chrisley (University of Sussex)
  • Sabrina Golonka (Leeds Beckett University)
  • Alistair Isaac (University of Edinburgh)
  • Adam Linson (University of Dundee)
  • Marcin Miłkowski (Polish Academy of Sciences)
  • Nico Orlandi (UC Santa Cruz)
  • Mario Villalobos (Universidad de Tarapacá)

I never wrote an abstract for my talk, so below I include the handout instead.  But it probably only makes sense for those who heard the talk (and even then…).

Negotiating Computation & Enaction: Rules of Engagement

Ron Chrisley, University of Sussex, July 10th 2017

Disputes concerning computation and enaction typically centre on (rejection of) computationalism with respect to cognition:  the claim that cognition is computation.

I: Rules of engagement

I propose the following “rules of engagement” (numbered items, below) as a way to prevent talking past one another, arguing against straw men, etc., and instead to make progress on the issues that matter. No doubt they fail to be theory-neutral, strictly speaking.  And there are of course other principles that are at least as important to adhere to.  But it’s a start.

The proposals can be seen as specific instances of the general admonition: clarify the computationalist claim at issue (cf. the end of Grush’s review of Ramsey’s Representation Reconsidered).

But even more so: there are many different versions of the claim that “cognition is computation”, depending on the choice made for each of the following variables:

Computationalism schema: {Some|All} cognition {in humans|in others} is {actually|necessarily|possibly} {wholly|partly} {computation|best explained in terms of computation}

So we need to clarify A) the relation between them, as well as B) cognition and C) computation themselves.
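To make the size of this space of claims vivid, here is a minimal sketch (in Python; the slot values are just the schema’s own words) that enumerates every instantiation of the schema:

```python
from itertools import product

# The five slots of the computationalism schema given above.
quantifier = ["Some", "All"]
domain     = ["in humans", "in others"]
modality   = ["actually", "necessarily", "possibly"]
extent     = ["wholly", "partly"]
relation   = ["", "best explained in terms of "]  # bare "is" vs epistemic version

variants = [
    f"{q} cognition {d} is {m} {e} {r}computation"
    for q, d, m, e, r in product(quantifier, domain, modality, extent, relation)
]

print(len(variants))  # 2 * 2 * 3 * 2 * 2 = 48 distinct claims
print(variants[-1])   # "All cognition in others is possibly partly best
                      #  explained in terms of computation"
```

Arguing against one of the 48 variants leaves the other 47 untouched — hence the need to say which one is at issue.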

A: Relation?

The instantiation of this that I consider most interesting/likely to be true is given below, but for now, start with this:

  1. Computationalism is best understood as an epistemological claim

That is, I plump for the “is best explained in terms of” rather than the simple “is” version of the schema above, as it is the version with the most direct methodological, experimental, even theoretical import (we’ll see the upshot of this later).

B: Cognition?

Given my background in AI and Philosophy (rather than Psychology or Neuroscience), I am interested in cognition in its most general sense: cognition as it could be, not (just) cognition as it is.  Thus:

  2. Avoid inferences such as: “X is involved in (all cases of) human cognition, therefore X is essential to cognition”

Compare flight and feathers.

An interesting case is when computation is not the best account of some particular kind of cognition taken on its own, yet only computation can account for both that kind and some other kind of cognition.

C: Computation?

  3. We should demand no higher rigour/objectivity for computational concepts than we do for other, accepted scientific concepts
  4. Avoid the reductionism Catch-22 (mainly for representation)

That is, some authors seem to insist both that:

  • A notion of representation must be reducible to something else (preferably non-intentional) to be naturalistically acceptable
  • Any notion of representation that is reducible to something else can be replaced by that something else, and is therefore surplus to requirements.
  5. Be aware that there are distinct construals (inter-theoretic) and varieties (intra-theoretic) of computation
  • Construals: computation as running some program, operation of a Turing machine, mechanistic account (Miłkowski, Piccinini), semantic account, etc.
  • Varieties: digital, analogue, quantum, connectionist, etc.

E.g., if you are arguing against a view in which either everything is computational, or there is only one kind of computation, you are unlikely to persuade a nuanced computationalist.

  6. There are computers, and computers are best explained computationally

Does a given account have the implication that computers are not computational? Or that there are no computers?  Prima facie, these should count as points against that account.

And even so, what is to stop someone from claiming that whatever concepts provide the best account of the things we intuitively (but incorrectly?) call computers also play a role in the best account of mind? Cf. transparent computationalism (Chrisley).

On the other hand, do not be so inclusive as to trivialise the notion of computation: pan-computationalism?

  • Actually, that’s not the problem with pan-computationalism
  • The real problem is that it has difficulty explaining what is specifically computational about computers (beyond universality)

Computationalism (C): the best explanation (of at least some cases) of cognition will involve (among others) computational concepts (that is, concepts that play a key role in our best explanation of (some) computers, qua computers).

  7. So even if computation is only necessary for the explanation of some kinds of cognition, C is still vindicated.

II: Examples

Consider two kinds of arguments against computationalism: those that rely on enactivism, and those that do not (but which some enactivists rely on).

These summaries are probably unfair, and likely violate corresponding “rules of engagement” concerning enactivism, etc.

Enactivist arguments

  • Enactivism 1:  Self-maintenance
    • E: The operation of a computer/robot, no matter how behaviourally/functionally similar to a human, would not be sufficient for cognition, because it is not alive/self-maintenant
      • E has the same sceptical problems as Zombie/Swampman positions
      • Note: similarity to a human is misleading – it may be that, given its materiality, a computer would have to be very behaviourally/functionally different from a human in order to cognise (2)
    • Why believe E? Because:
      a) meaning for computers is not intrinsic; and
      b) possession of intrinsic meaning is necessary for (explaining?) cognition
    • Why believe b)?
      • Much of our meaning is imposed from outside/others?
      • Even if one accepts b), it only follows that computation can’t explain *all* cognition (7)
      • Even if human cognition has an “intrinsic meaning” core, does that rule out the possibility of cognition that does not? (2)
    • Why believe a)?
      • Reason 1: Because any meaning in computers is imposed from the outside
        • But why should that preclude the system having, partly by virtue of its computational properties, (either distinct or coinciding) intrinsic meaning in addition? (5)
        • Might living systems be examples of such?
      • Reason 2: Because:
        c) intrinsic meaning (only) comes from being alive; and
        d) computers are not alive
    • Why believe d)? (given behavioural/functional identity with living systems)
      • Because computers are made of non-living material: they don’t, e.g., metabolise
        • By definition? Could they? (5)
        • But so are cells: the parts of cells don’t metabolise
      • Because computers are not hierarchies of living systems
        • So they have meaning, just not human meaning? (2, 7)
        • What if we hierarchically arrange them? Why would their computational nature cease to be explanatorily relevant?
    • Why believe c)?
      • Enactivist meaning is imposed by the theorist’s valuing of self-maintenance (3)
    • In any event: E is not enough to defeat C — one must also show that computation is not necessary
  • Enactivism 2a: Basic Minds (Hutto and Myin)
    • Computation involves representation
      • Contentious (e.g., on the mechanistic account) (5)
    • Representation requires content
    • There is a level of basic mind that does not involve content
    • Therefore computationalism is false
      • At most this shows that there must be a non-computational explanation of the most basic forms of cognition (7)
      • But in fact one can have a non-contentful, yet intentional, notion of computation: robust correspondence (5)
  • Enactivism 2b: Content (Hutto and Myin)
    • Computation involves representation
      • As above
    • Representation requires content
    • There is a “hard problem of content” (HPC): no naturalistically acceptable theory of sub-personal content
    • Therefore computationalism is false
      • Impatience: even if true, the lack of a theory is not decisive (3)
      • Some argue the “hard” problem has already been solved, long ago (Miłkowski)
  • Enactivism(?) 3: Fictionalism (after Downey, with apologies)
    • Computation involves representation
    • Although representation is useful for explaining cognition, utility doesn’t imply metaphysical truth
    • Further, considerations like the HPC argue against representation, and therefore against computation
    • So computationalism is (metaphysically) false
      • Relies on the argument of Enactivism 2b – see rejoinders above
      • Only tells against computationalism construed as a metaphysical claim — not a problem for C
      • Yet C, being epistemic/pragmatic, is the one that matters (1)

Non-enactivist arguments against computationalism

(to which enactivists sometimes appeal):

  • Chinese room (Searle)
    • Many objections
    • Against Strong AI (metaphysics), not against C (1)
    • Against sufficiency, not against C (7)
    • Enactivist irony: emphasising the fundamental differences between living/conscious systems and those that are not (such as Macs and PCs) allows one to question the idea that a human (Searle) can perform the same computations as those machines do
      • Proposal: the human has different, intentionality-sensitive counterfactuals that “dead” silicon does not (5)
      • Upshot: the non-living nature of machines is a feature, not a bug — it immunises machine computation against the Chinese room critique
  • Diagonalisation (e.g., Penrose)
    • G: Human mathematicians are not computing a (knowably sound) Turing-computable function when ascertaining the truth of certain mathematical propositions
    • Many objections
    • But even if the argument works, it does not impact on C, since X does not need to compute the same functions as Y in order for X to explain Y
    • That is, C is epistemic, not metaphysical (1)
  • Non-objectivity of computation (Putnam, Searle)
    • Anything can be seen as implementing any Turing machine
      • On some accounts, not all TM instantiations are computers (intentionality is needed) (5)
      • But the triviality argument fails, even for TMs: counterfactuals (see the sketch after this list)
    • More recently, some (Maudlin, Bishop) have argued that, to be explanatory, computational states can only supervene on occurrent physical states, not counterfactual ones
      • But some occurrent states are individuated by their counterfactual properties
      • Counterfactual properties supervene on occurrent states
      • Also: this seems to imply that computational concepts are not suitable for explaining computers (6)
  • Phenomenology (Dreyfus)
    • E.g., experts don’t engage in search algorithms when, e.g., playing chess – they just see the right moves directly
    • Makes unfounded assumptions about what it feels like to be this or that kind of physical (computational) system
      • E.g., a (sub-personal) computation that involves millions of steps may realise an experience with no such complex structure, even skilled coping
    • But even if Dreyfus is right, this does not refute C (7)
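To illustrate the triviality worry and the counterfactual reply mentioned above, here is a minimal Python sketch. The toy parity machine, the “rock”, and the state labels are all invented for illustration — this is the shape of the argument, not anyone’s official formulation:

```python
# Toy finite-state machine: tracks whether an even or odd number of 1s
# has been seen so far (a parity checker).
def parity_fsm(inputs):
    state = "even"
    trace = [state]
    for bit in inputs:
        if bit == 1:
            state = "odd" if state == "even" else "even"
        trace.append(state)
    return trace

# A "physical system" that merely ticks through distinct states over time,
# whatever its input -- think of Putnam's rock or Searle's wall.
def rock_states(n):
    return [f"s{t}" for t in range(n)]

run  = parity_fsm([1, 0, 1, 1])   # ['even', 'odd', 'odd', 'even', 'odd']
rock = rock_states(len(run))      # ['s0', 's1', 's2', 's3', 's4']

# Putnam-style move: gerrymander a state mapping so that, for THIS run,
# the rock "implements" the parity checker.
mapping = dict(zip(rock, run))
assert [mapping[s] for s in rock] == run   # trivial "implementation"

# Counterfactual reply: a genuine implementation must also get other runs
# right. The rock's states are the same whatever the input, so the mapping
# fails for any input stream with a different state trajectory.
print([mapping[s] for s in rock] == parity_fsm([0, 0, 0, 0]))  # False
```

The gerrymandered mapping matches one observed run, but because the rock’s state sequence is insensitive to input, it has no counterfactual support — which is precisely what the standard reply says a genuine implementation requires.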

The limits of the limits of computation

I’m very pleased to have been invited to participate in an exciting international workshop being held at Sussex later this month.


My very brief contribution has the title: “The limits of the limits of computation”.  Here’s the abstract:

The limits of the limits of computation

The most salient results concerning the limits of computation have proceeded by establishing the limits of formal systems. These findings are less damning than they may appear. First, they only tell against real-world (or “physical”) computation (i.e., what it is that my laptop does that makes it so useful) to the extent that real-world computation is best viewed as being formal, yet (at least some forms of) real-world computation are as much embodied, world-involving, dynamics-exploiting phenomena as recent cognitive science takes mind to be. Second, the incomputability results state that formal methods are in some sense outstripped by extra-formal reality, while themselves being formal methods attempting to capture extra-formal reality (real-world computation) — an ironic pronouncement that would make Epimenides blush. One ignores these limits on the incomputability results at one’s peril; a good example is the diagonal argument against artificial intelligence.
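For readers who want the shape of the diagonal move itself, here is the classic halting-problem version as a minimal Python sketch (the names `halts` and `diagonal` are illustrative; `halts` is the hypothetical decider the argument refutes):

```python
# Suppose, for contradiction, that we had a total formal procedure
# deciding whether any program halts on a given argument.
def halts(program, argument) -> bool:
    """Hypothetical decider: True iff program(argument) halts."""
    raise NotImplementedError  # the argument shows no such total procedure exists

# Diagonal construction: a program that does the opposite of whatever
# the decider predicts it will do when run on itself.
def diagonal(program):
    if halts(program, program):
        while True:   # predicted to halt, so loop forever
            pass
    return            # predicted to loop forever, so halt at once

# Feeding diagonal to itself yields a contradiction either way:
# halts(diagonal, diagonal) can be neither True nor False.
```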


Roles for Morphology in Computation


From Pfeifer, Iida and Lungarella (2014)

Tomorrow I’m giving an invited talk in Gothenburg at the Symposium on Morphological Computing and Cognitive Agency, as part of the International Society for Information Studies Summit 2017 (entitled — deep breath — “DIGITALISATION FOR A SUSTAINABLE SOCIETY: Embodied, Embedded, Networked, Empowered through Information, Computation & Cognition!”).  Here’s my title and abstract:

Roles for Morphology in Computation

The morphological aspects of a system are the shape, geometry, placement and compliance properties of that system. On the rather permissive construal of computation as transformations of information, a correspondingly permissive notion of morphological computation can be defined: cases of information transformation performed by the morphological aspects of a system. This raises the question of what morphological computation might look like under different, less inclusive accounts of computation, such as the view that computation is essentially semantic. I investigate the possibilities for morphological computation under a particular version of the semantic view. First, I make a distinction between two kinds of role a given aspect might play in computations that a system performs: foreground role and background role. The foreground role of a computational system includes such things as rules, state, algorithm, program, bits, data, etc. But these can only function as foreground by virtue of other, background aspects of the same system: the aspects that enable the foreground to be brought forth, made stable/reidentifiable, and to have semantically coherent causal effect. I propose that this foreground/background distinction cross-cuts the morphological/non-morphological distinction. Specifically, morphological aspects of a system may play either role.
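As a concrete illustration of the permissive construal (not of the semantic view the abstract goes on to develop), here is a minimal Python sketch, with invented parameter values, of a passive mass-spring-damper whose morphology transforms information: it smooths (low-pass filters) an input force signal without running any explicit program:

```python
import math

# A passive mass-spring-damper: its morphology (mass m, stiffness k,
# damping c) transforms an input force signal into a smoothed
# displacement signal -- a low-pass filter with no explicit program.
m, k, c = 1.0, 20.0, 4.0   # illustrative (invented) physical parameters
dt = 0.001                 # integration time step, in seconds

def respond(force_signal):
    x, v = 0.0, 0.0        # displacement and velocity
    out = []
    for f in force_signal:
        a = (f - c * v - k * x) / m   # Newton: m*a = f - c*v - k*x
        v += a * dt
        x += v * dt
        out.append(x)
    return out

# Input: a slow signal corrupted by fast jitter. The morphology passes
# the slow component and strongly attenuates the fast one.
t = [i * dt for i in range(5000)]
noisy = [math.sin(2 * math.pi * 1.0 * s) + 0.5 * math.sin(2 * math.pi * 80.0 * s)
         for s in t]
smoothed = respond(noisy)
```

The point of the sketch is only that the transformation is effected by mass, stiffness and compliance — morphological aspects — rather than by rules or symbols; whether it counts as computation on stricter construals is exactly the question the talk addresses.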

The Symposium will be chaired by Rob Lowe and Gordana Dodig Crnkovic, and the other speakers include Christian Balkenius, Lorenzo Magnani, Yulia Sandamirskaya, Jordi Vallverdú, and John Spencer (and maybe Tom Ziemke and Marcin Schroeder?).

I’m also giving an invited talk the next day (Tuesday) as part of a plenary panel entitled: “What Would It Take For A Machine To Have Non-Reductive Consciousness?”  My talk is entitled “Computation and the Fate of Qualia”.  The other speakers are Piotr Bołtuć (moderator), Jack Copeland, Igor Aleksander, and Keith W. Miller.

Should be a fantastic few days — a shame I can’t stay for the full meeting, but I have to be back at Sussex in time for the Robot Opera Mini-Symposium on Thursday!


Functionalism, Revisionism, and Qualia

A paper by myself and Aaron Sloman, “Functionalism, Revisionism, and Qualia”, has just been published in the APA Newsletter on Philosophy and Computing. (The whole issue looks fantastic – I’m looking forward to reading all of it, especially the other papers in the “Mind Robotics” section, and most especially the papers by Jun Tani and Riccardo Manzotti.) Our contribution is a kind of follow-up to our 2003 paper “Virtual Machines and Consciousness”. There’s no abstract, so let me just list here a few of the more controversial things we claim (and in some cases, even argue for!):

  • Even if our concept of qualia is true of nothing, qualia might still exist (we’re looking at you, Dan Dennett!)
  • If qualia exist, they are physical – or at least their existence alone would not imply the falsity of physicalism (lots of people we’re looking at here)
  • We might not have qualia: The existence of qualia is an empirical matter.
  • Even if we don’t have qualia, it might be possible to build a robot that does!
  • The question of whether inverted qualia spectra are possible is, in a sense, incoherent.

If you get a chance to read it, I’d love to hear what you think.

Ron

The Embodied Nature of Computation


The next E-Intentionality seminar will be held Wednesday, June 8th from 13:00 to 14:50 in Pevensey 1 1A3.  Ron Chrisley will speak on “The Embodied Nature of Computation” as a dry run of his talk at a symposium (“Embodied Cognition: Constructivist and Computationalist Perspectives”) at IACAP 2016 next week:


Although embodiment-based critiques of computation’s role in explaining mind have at times been overstated, there are important lessons from embodiment which computationalists would do well to learn. For example, orthodox schemes for individuating computations are individualist, atemporal, and anti-semantical (formal), but considering the role of the body in cognition suggests by analogy that — even to explain extant information-processing systems unrelated to cognitive science and artificial intelligence contexts — computations should instead be characterised in terms that are world-involving, dynamical and intentional/meaningful. Further, the counterfactual-involving nature of computational state individuation implies that sameness of computation is not in general preserved when one substitutes a non-living computational component with a living, autonomous, free organism that merely intends to realise the same functional profile as the component being replaced. Thus, contra computational orthodoxy, there is no sharp divide between the computational facts and what is usually thought of as the implementational facts, even for unambiguously computational systems. The implications of this point for some famous disputes concerning group minds and strong AI will be identified.

Image from digitalmediatheory.files.wordpress.com

The Mereological Constraint


Image from brainworldmagazine.com

E-Intentionality, February 26th 2016, Pevensey 2A11, 12:00-12:50

Ron Chrisley: The Mereological Constraint

I will discuss what I call the mereological constraint, which can be traced back at least as far as Putnam’s writings in the 1960s, and which is, roughly, the idea that a mind cannot have another mind as a proper constituent.  I show that the implications (benefits?) of such a constraint, if true, would be far-ranging, allowing one to finesse the Chinese room and Chinese nation arguments against computationalism, reject certain notions of extended mind, reject most group minds, make a ruling on the modality of sensory substitution, etc.  But is the mereological constraint true?  I will look at some possible arguments for it, including one that appeals to the fact that rationality must be grounded in the non-rational, and one that attempts to derive the constraint from a comparable one concerning the individuation of computational states.  I will also consider an objection: that the constraint would confer on us a priori knowledge of facts that are, intuitively, empirical.

Audio (28.5 MB, .mp3)