Robot Opera coverage in “Viva Lewes”

The September 2017 issue of Viva Lewes magazine features a two-page spread by Jacqui Bealing on the robot opera project that Evelyn Ficarra, Ed Hughes and I have been collaborating on (as detailed in earlier updates on this blog).  The article is available at:

For convenience, I include a copy of the article below.



Epistemic Consistency in Knowledge-Based Systems


Today I was informed that my extended abstract, “Epistemic Consistency in Knowledge-Based Systems”, has been accepted for presentation at PT-AI 2017 in Leeds in November. The text of the extended abstract is below.  The copy-paste job I’ve done here loses all the italics, etc.; the proper version is at:

Comments welcome, especially pointers to similar work, papers I should cite, etc.

Epistemic Consistency in Knowledge-Based Systems (extended abstract)

Ron Chrisley
Centre for Cognitive Science,
Sackler Centre for Consciousness Science, and Department of Informatics
University of Sussex, Falmer, United Kingdom

1 Introduction

One common way of conceiving the knowledge-based systems approach to AI is as the attempt to give an artificial agent knowledge that P by putting a (typically linguaform) representation that means P into an epistemically privileged database (the agent’s knowledge base). That is, the approach typically assumes, either explicitly or implicitly, that the architecture of a knowledge-based system (including initial knowledge base, rules of inference, and perception/action systems) is such that the following sufficiency principle should be respected:

  • Knowledge Representation Sufficiency Principle (KRS Principle): if a sentence that means P is in the knowledge base of a KBS, then the KBS knows that P.

The KRS Principle is so strong that, although it might be able to be respected by KBSs that deal exclusively with a priori matters (e.g., theorem provers), most if not all empirical KBSs will, at least some of the time, fail to meet it. Nevertheless, it remains an ideal toward which KBS design might be thought to strive.

Accordingly, it is commonly acknowledged that knowledge bases for KBSs should be consistent, since classical rules of inference permit the addition of any sentence to an inconsistent KB. Consequently, much effort has been spent on devising tractable ways to ensure consistency or otherwise prevent inferential explosion.
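The explosion worry can be made concrete with a toy resolution step: once a KB contains a sentence and its negation, any sentence Q becomes derivable (weaken P to P ∨ Q, then resolve with ¬P). A minimal sketch in Python, with the clause representation and names invented for illustration:

```python
# Clauses as frozensets of literals; a literal is a (name, polarity) pair.
# From an inconsistent KB {P, ~P}, classical rules license any Q:
# weaken P to (P or Q), then resolve with ~P to obtain Q on its own.

def resolve(c1, c2, name):
    """Resolve two clauses on the literal `name`, if possible."""
    if (name, True) in c1 and (name, False) in c2:
        return (c1 - {(name, True)}) | (c2 - {(name, False)})
    return None

P = frozenset({('P', True)})
not_P = frozenset({('P', False)})
P_or_Q = P | {('Q', True)}   # disjunction introduction (weakening)

derived = resolve(P_or_Q, not_P, 'P')
print(derived == frozenset({('Q', True)}))  # True: an arbitrary Q follows
```

The point is only that Q played no role in the premises: any sentence whatsoever could have been introduced this way, which is why consistency maintenance matters.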

2 Propositional epistemic consistency

However, it has not been appreciated that for certain kinds of KBSs, a further constraint, which I call propositional epistemic consistency, must be met. To explain this constraint, some notions must be defined:

  • An epistemic KBS is one that can represent propositions attributing propositional knowledge to subjects (such as that expressed by “Dave knows the mission is a failure”).
  • An autoepistemic KBS is an epistemic KBS that is capable of representing, and therefore of attributing propositional knowledge to, itself (e.g., “HAL knows that Dave knows that the mission is a failure” in the case of the KBS HAL).

All autoepistemic systems (natural or artificial) suffer from epistemic blindspots (Sorensen, 1984):

  • A proposition P is an epistemic blindspot for a KBS X if P is consistent, but the proposition that X knows that P is not consistent.

Thus, if an autoepistemic KBS is to respect the KRS Principle, no epistemic blindspots (for that KBS) can appear in its knowledge base.

Despite this, it is of course not logically impossible that a sentence S expressing an epistemic blindspot for a KBS X may end up in X’s KB. If this were to happen, it follows that X would not respect the KRS Principle. Worse, the fact that epistemic blindspots are consistent means that this possibility remains even if X has perfect, ideal methods of normal consistency maintenance. S being in X’s KB yields a kind of inconsistency distinct from normal inconsistency (since it can occur even when X’s KB, including S, is consistent). Accordingly, X’s KB being free of epistemic blindspots for X is a kind of consistency beyond consistency simpliciter; this is what I call propositional epistemic consistency. To ensure that a KBS respects the KRS Principle, then, it is not sufficient to ensure that its KB is consistent in the normal manner; one must also ensure that it is propositionally epistemically consistent.

Ensuring propositional epistemic consistency for a KBS X amounts to taking two precautions:

  1. Ensuring that there are no epistemic blindspots for X in the initial KB;
  2. When any sentence S is about to be added to the KB (via inference, perception, etc.), checking that S is not an epistemic blindspot for X.

Both steps involve checking that a given sentence is not an epistemic blindspot for a given system X. Beyond checking the consistency of S (and the consistency of S with the current KB), this amounts to checking whether it would be a contradiction to suppose that S is known by X. In turn, this amounts to expressing S in conjunctive normal form, where the first conjunct is the proposition P, and the second conjunct is of the form ¬K(x,P), where x refers to X.
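The syntactic core of this check can be sketched in a few lines of Python. The representation below is invented for illustration: sentences are nested tuples built from constructors 'and', 'not' and 'K', and 'hal' is a stand-in agent name. It matches only the Moore-paradoxical form P ∧ ¬K(x,P); as noted in the text, a purely syntactic test of this kind cannot settle whether a term really refers to the system itself.

```python
# Sentences as nested tuples: ('and', A, B), ('not', A), ('K', agent, P).
# These constructors and the agent name 'hal' are illustrative only.

def is_blindspot(sentence, agent):
    """Syntactic check for the form P & ~K(agent, P).

    A match means that, although the sentence itself may be consistent,
    the supposition that `agent` knows it is contradictory: knowing
    P & ~K(agent, P) would require both K(agent, P) and ~K(agent, P).
    """
    if not (isinstance(sentence, tuple) and len(sentence) == 3
            and sentence[0] == 'and'):
        return False
    _, p, q = sentence
    return q == ('not', ('K', agent, p))

# "The mission is a failure, and HAL does not know that it is."
s = ('and', 'mission-failed', ('not', ('K', 'hal', 'mission-failed')))
print(is_blindspot(s, 'hal'))   # True: a blindspot for HAL
print(is_blindspot(s, 'dave'))  # False: Dave could know this
```

A fuller version would also normalise S into conjunctive normal form first and check the conjuncts in either order; this sketch assumes the canonical ordering.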

Unfortunately, this last condition implies that unlike for consistency simpliciter, checking for propositional epistemic consistency cannot proceed purely syntactically. Simple consistency is a matter of what holds in all models, and is therefore an a priori matter independent of the state of affairs in the actual world. But whether or not an expression in fact refers to a given individual does depend on the state of affairs in the actual world, and cannot be determined via a priori means alone.

In the face of this apparent intractability, and the fact that it derives from a kind of unrestricted self-reference, one might be tempted to reduce propositional epistemic consistency checking to simple consistency checking in a way parallel to the way Prior proposes for dealing with the paradox of the liar. Prior suggests that we understand each sentence to be implicitly asserting “this sentence is true” (Prior, 1976). This renders such sentences as “This sentence is not true” as straightforwardly false, and thus non-paradoxical. A parallel move would be to suggest that every KBS’s KB is implicitly asserting the negation of every epistemic blindspot for that KBS. This would render every epistemic blindspot for that KBS inconsistent with that KBS’s KB, allowing it to be excluded via simple consistency maintenance. But this is overkill: epistemic blindspots are not, in general, false. And the ones that are problematic are so because they are true, so having their negations in the KB violates the KRS Principle.

3 Inferential epistemic consistency

There are similar, problematic interactions concerning inference. Consider inference G:

  1. HAL has made more than two inferences
  2. HAL has made fewer than four inferences
  3. If someone has made more than two inferences and fewer than four inferences, they have made three inferences
  4. Therefore, HAL has made three inferences

On the face of it, G is a valid argument; the rules of inference it employs are valid in that they guarantee the truth of the conclusion, given the truth of the premises. And such an analysis is correct (or at least seems so) for the case of you or me putting forward G, or making the inference it licenses. But the case of HAL carrying out this inference is another matter entirely. If HAL makes this inference, HAL comes to believe something false, since after the inference is made, HAL believes that HAL has made three inferences, when in fact HAL has made four. HAL’s KB would exhibit inferential epistemic inconsistency.

On the standard view, one makes an inference by first determining if the premises are true and the transitions from premise to conclusion are valid. If they are, then one should believe the conclusion. Unfortunately, such an approach would license HAL to make inference G.

Prompted by these considerations, and taking a more participatory view of inference, I propose that when one is about to make an inference, in addition to checking the soundness and validity of the inference, one should consider the nearest possible world in which one carries out the inference. Only if the conclusion still follows validly from true premises in that world should one make the inference and believe the conclusion (in this world). On this view, HAL would not be entitled to make the inference in G, as its conclusion is false in the nearest possible world in which HAL makes the inference.
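The proposed check can be illustrated with a toy model of HAL’s situation. The class and method names below are invented for illustration; the point is only that the conclusion is evaluated both in the current world and in the nearest world in which the inference has been carried out (here, a world where the inference count is one higher):

```python
# Toy sketch of the participatory check on inference G. All names are
# illustrative; "nearest possible world" is modelled crudely as the
# state in which the inference counter has already been incremented.

class Agent:
    def __init__(self):
        self.inferences_made = 3  # HAL has made three inferences so far

    def conclusion_holds(self, count):
        # Conclusion of G: "HAL has made three inferences"
        return count == 3

    def may_infer(self):
        # Standard view: check the conclusion in the current world only.
        current = self.conclusion_holds(self.inferences_made)
        # Proposed view: also check it in the world where the inference
        # itself has been made (the act of inferring adds one inference).
        after = self.conclusion_holds(self.inferences_made + 1)
        return current and after

hal = Agent()
print(hal.may_infer())  # False: true now, but false once the inference is made
```

On the standard view only `current` would be consulted, and HAL would be licensed to infer a conclusion that its own act of inferring falsifies.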

Notice that, like the epistemic blindspots considered earlier, the conclusion of G that HAL is not entitled to believe is, nevertheless, consistent: possibly true. The conclusion is not, however, a blindspot: the proposition that HAL knows the conclusion of G is not a contradiction. Nor is it just an inferential variation on an epistemic blindspot.

4 Conclusion

The primary conclusion of the foregoing is that designers of autoepistemic KBSs must supplement consistency checks with epistemic consistency checks of two kinds (propositional and inferential) in order to:

  • Respect the KRS Principle that underlies all KBS use;
  • Ensure the validity of inferences KBSs make about themselves;
  • Ensure consistency of KBS knowledge bases;
  • Prevent the introduction of false propositions into KBS knowledge bases.


  • Prior, A. (1976). Papers in logic and ethics. Duckworth.
  • Sorensen, R. (1984). Conditional blindspots and the knowledge squeeze: a solution to the prediction paradox. Australasian J. Phil., 62, 126-135.

The Future of Smart Living



Image from Culture Vulture Issue 09: Smart Living, MindShare: 2017.

I’ve just posted on LinkedIn a rare (for me!) piece of near-futurology:

This article is an expansion of “The Shift From Conscious To Unconscious Data” that I wrote earlier this year for Culture Vulture Issue 09: Smart Living, pp 48-49, MindShare.

For convenience, I’ve included the text here.

The future of smart living

The move to unconscious data and AI beyond deep learning will require substantial algorithmic – and ethical – innovation

In a way, the hype is right: the robots are here. It might not look like it, but they are. If we understand robots to be artificial agents that can, based on information they receive from their environment, autonomously take action in the world, then robots are in our cars, homes, hospitals, schools, workplaces, and our own bodies, even if they don’t have the expected humanoid shape and size. And more are on the way. What will it be like to live in the near-future world full of sensing, adapting, and acting technologies? Will it be like things are now, but more so (whatever that might mean)? Or will it be qualitatively different?

There are several indications that the technological changes about to occur will result in qualitative shifts in the structure of our lives.


One example involves sensors. We can expect a dramatic increase in the quantity, kinds, and temporal resolution of the sensors our embedded smart technologies will use. And in many cases these sensors will be aimed directly at us, the users. Most significantly, we will see a shift from technologies that solely use symbolic, rational-level data that we consciously provide (our purchasing history, our stated preferences, the pages we “like”, etc.) to ones that use information about us that is even more revealing, despite (or because) it is unconscious/not under our control. It will start with extant, ubiquitous input devices used in novel ways (such as probing your emotional state or unexpressed preferences by monitoring the dynamics of your mouse trajectories over a web page), but will quickly move to an uptake and exploitation of sensors that more directly measure our bio-indicators, such as eye trackers, heart rate monitors, pupillometry, etc.

We can expect an initial phase of applications and systems that are designed to shift users into purchasing/adopting, becoming proficient with, and actively using these technologies: Entertainment will no doubt lead the way, but other uses of the collected data (perhaps buried in EULAs) will initially piggyback on them. Any intrusiveness or inconvenience of these sensors, initially tolerated for the sake of the game or interactive cinematic experience, will give way to familiarity and acceptance, allowing other applications to follow.


The intimate, sub-rational, continuous, dynamic and temporally-precise data these sensors will provide will enable exquisitely precise user-modelling (or monitoring) of a kind previously unimaginable. This in turn will enable technologies that will be able (or at least seem) to understand our intentions and anticipate our needs and wants. Key issues will involve ownership/sharing/selling/anonymisation of this data, the technologies for and rights to shielding oneself from such sensing (e.g., in public spaces) and the related use of decoys (technologies designed to provide false readings to these sensors), and delimiting the boundaries of responsibility and informed consent in cases where technologies can side-step rational choice and directly manipulate preferences and attitudes.

The engine behind this embedded intelligence will be artificial intelligence. The recent (and pervasively covered) rise of machine learning has mainly been due to advances in two factors: 1) the enormous data sets the internet has created, and 2) blindingly fast hardware such as GPUs. We can continue to expect advances in 1) with the new kinds and quantities of data that the new sensors will provide. The second factor is harder to predict, with experts differing on whether we will continue to reap the benefits of Moore’s Law, and on whether quantum computation is capable of delivering on its theoretical promise anytime soon.

The algorithms exploiting these two factors of data and speed have typically been minor variations on, and recombinations of, those developed in the 80s and 90s. Although quantum computation might (or might not) allow the hardware trend to continue, the addition of further kinds of data will allow novel technologies in all spheres that are exquisitely tuned to the user.

On the other hand, the increased quantity of data, especially its temporal resolution, will require advances in machine learning algorithms – expect a move beyond the simple, feedforward architectures of the 90s to systems that develop expectations about what they will sense (and do), and that use these expectations as a way to manage information overload by attending only to the important parts of the data.

This will yield unprecedented degrees of dynamic integration between us and our technology. What is often neglected in thinking about the pros and cons of such technologies is the way we adapt to them. This mutual adaptation is one of the most exciting prospects, but it also carries unforeseen risks, and it needs to be thought through carefully; in particular, it will require new conceptual tools.


Embedded, autonomous technology will lead to situations that, given our current legal and ethical systems, will appear ambiguous: who is to blame when something goes wrong involving a technology that has adapted to a user’s living patterns? Is it the user, for having a lifestyle that was too far outside of the “normal” lifestyles used in the dynamic technology’s testing and quality control? Or is it the fault of the designer/manufacturer/retailer/provider/procurer of that technology, for not ensuring that the technology would yield safe results in a greater number of user situations, or for not providing clear guidelines to the user on what “normal” use is? Given this conundrum, the temptation will often be to blame neither, but to blame the technology itself instead, especially if it is made to look humanoid, given a name, voice, “personality” etc. We might very well see a phase of cynical, gratuitous use of anthropomorphism whose main function is to misdirect potential blame by “scapegoating the robot”. The sooner we can develop and deploy into society at large a machine ethics that locates responsibility with the correct humans, and not with the technologies themselves, the better.

Machine consciousness at the Brighton Digital Festival


Next Tuesday I’ll be giving a brief talk on machine consciousness prior to a screening of the film Ex Machina, as part of the Brighton Digital Festival. Sackler colleagues Keisuke Suzuki and David Schwartzman will be giving consciousness-illuminating VR demos involving our Nao robots as well. The event is being organised in conjunction with the British Science Association.  More info at

Update, 4 October 2017:

Here are some photos of the event, courtesy of Amber John.  As you can see, the title I settled on was “Turing Tests and Machine Consciousness”.


Minds Online: The Interface between Web Science, Cognitive Science and the Philosophy of Mind


Long-time PAICSer and COGS person, Rob Clowes, has just published an impressive monograph, along with Paul Smart and Richard Heersmink, entitled Minds Online: The Interface between Web Science, Cognitive Science and the Philosophy of Mind.  From the email I received from the publisher today:

I am pleased to announce that Foundations and Trends in Web Science has published the following issue:

Volume 6, Issue 1-2
Minds Online: The Interface between Web Science, Cognitive Science and the Philosophy of Mind
By Paul Smart (University of Southampton, UK), Robert Clowes (Universidade Nova de Lisboa, Portugal) and Richard Heersmink (Macquarie University, Australia)

Complimentary downloads of this article will be available until September 29th, so you should be able to access it directly using the link provided.

Here is the abstract:

Alongside existing research into the social, political and economic impacts of the Web, there is a need to study the Web from a cognitive and epistemic perspective. This is particularly so as new and emerging technologies alter the nature of our interactive engagements with the Web, transforming the extent to which our thoughts and actions are shaped by the online environment. Situated and ecological approaches to cognition are relevant to understanding the cognitive significance of the Web because of the emphasis they place on forces and factors that reside at the level of agent-world interactions. In particular, by adopting a situated or ecological approach to cognition, we are able to assess the significance of the Web from the perspective of research into embodied, extended, embedded, social and collective cognition. The results of this analysis help to reshape the interdisciplinary configuration of Web Science, expanding its theoretical and empirical remit to include the disciplines of both cognitive science and the philosophy of mind.

Minds Online: The Interface between Web Science, Cognitive Science and the Philosophy of Mind heeds the call of the early Web Science pioneers by expanding the interdisciplinary scope of Web Science, specifically, to accommodate the disciplines of cognitive science and the philosophy of mind. There is a substantial literature to support this expansionist agenda. Given the centrality of cognition to our species-specific capabilities, as well as the level of public and scientific interest in the Web, now is arguably an appropriate time to review this literature and explicate the nature of the linkages that connect the science of the Web with the sciences of the mind.

Self-listening for music generation

Next week Sussex will host the third and last workshop of the AHRC Network “Humanising Algorithmic Listening”.  At the end of the first day a few of us with some common interests will be speaking about our recent small project proposals, with the hope of finding some common ground.  Here’s what I’ll be talking about:

Self-listening for music generation

Although it may seem obvious that in order to create interesting music one must be capable of listening to music as music, the ability to listen is often omitted in the design of musical generative systems.  And for those few systems that can listen, the emphasis is almost exclusively on listening to others, e.g., for the purposes of interactive improvisation.  I’ll describe a project that aims to explore the role that a system’s listening to, and evaluating, that system’s own musical performance (as its own musical performance) can play in musical generative systems.  What kinds of aesthetic and creative possibilities are afforded by such a design? How does the role of self-listening change at different timescales? Can self-listening generative systems shed light on neglected aspects of human performance?  A three-component architecture for answering questions such as these will be presented.

The talk immediately before mine, Nicholas Ward & Tom Davis’s “A sense of being ‘listened to’”, focusses on an aspect of performance that my thinking on these issues has neglected.  Specifically, the role that X’s perception of Y’s responses to X’s output can/should play in regulating X’s performance, both in real-time and over longer time scales.  An important component of X perceiving Y’s responses as responses to X is X’s determining whether or not, in the case of auditory/musical output, Y is even listening to X in the first place.  When I say component, I only mean that in the most abstract sense — it need not be a separate explicit module or step, independent of processing others’ responses in general.  And many cases of auditory production are ecologically constrained to make a given auditory source salient/dominant, so that questions like “what (auditory) source is that person responding to?” need not be asked.  But the more general point remains: that responding to the responses of others should be a key component (even) in a robust self-listening system.

Negotiating Computation & Enaction: Rules of Engagement

In July, PAICSers Adrian Downey and Jonny Lee (with Joe Dewhurst) organised an international conference at Sussex entitled “Computation & Representation in Cognitive Science: Enactivism, Ecological Psychology & Cybernetics”.  It was an excellent meeting, with boundary-pushing talks from:

  • Anthony Chemero (University of Cincinnati)
  • Ron Chrisley (University of Sussex)
  • Sabrina Golonka (Leeds Beckett University)
  • Alistair Isaac (University of Edinburgh)
  • Adam Linson (University of Dundee)
  • Marcin Miłkowski (Polish Academy of Sciences)
  • Nico Orlandi (UC Santa Cruz)
  • Mario Villalobos (Universidad de Tarapaca)

I never wrote an abstract for my talk, so below I include the handout instead.  But it probably only makes sense for those who heard the talk (and even then…).

Negotiating Computation & Enaction: Rules of Engagement

Ron Chrisley, University of Sussex, July 10th 2017

Disputes concerning computation and enaction typically centre on (rejection of) computationalism with respect to cognition:  the claim that cognition is computation.

I: Rules of engagement

I propose the following “rules of engagement” (numbered items, below) as a way to prevent talking past one another, arguing against straw men, etc. and instead make progress on the issues that matter. No doubt they fail to be theory neutral, strictly speaking.  And there are of course other principles that are at least as important to adhere to.  But it’s a start.

The proposals can be seen as specific instances of the general admonition: clarify the computationalist claim at issue (cf end of Grush’s review of Ramsey’s Representation Reconsidered).

But even more so: there are many different versions of the claim that “cognition is computation”, depending on the choices of the following variables:

Computationalism schema: {Some|all} cognition {in humans|others} is {actually|necessarily|possibly} {wholly|partly} {best explained in terms of} computation

So need to clarify A) the relation between them, as well as B) cognition and C) computation themselves

A: Relation?

The instantiation of this that I consider most interesting/likely to be true is given below, but for now, start with this:

  1. Computationalism is best understood as an epistemological claim

That is, I plump for the “is best explained in terms of” rather than the simple “is” version of the schema above, as it is the version with the most direct methodological, experimental, even theoretical import (we’ll see the upshot of this later).

B: Cognition?

Given my background in AI and Philosophy (rather than Psychology or Neuroscience), I am interested in cognition in its most general sense: cognition as it could be, not (just) cognition as it is.  Thus:

  2. Avoid inferences such as: “X is involved in (all cases of) human cognition, therefore X is essential to cognition”

Compare flight and feathers.

An interesting case is when computation is not the best account for some particular kind of cognition, yet only it can account for that and some other kind of cognition.

C: Computation?

  3. We should demand no higher rigour/objectivity for computational concepts than we do for other, accepted scientific concepts
  4. Avoid the reductionism Catch-22 (mainly for representation)

That is, some authors seem to insist both that:

  • A notion of representation must be reduceable to something else (preferably non-intentional) to be naturalistically acceptable
  • Any notion of representation that is reduceable to something else can be replaced by that something else and therefore is surplus to requirements.
  5. Be aware that there are distinct construals (inter-theoretic) and varieties (intra-theoretic) of computation
  • Construals: computation as running some program, operation of a Turing machine, mechanistic account (Milkowski, Piccinini), semantic account, etc.
  • Varieties: digital, analogue, quantum, connectionist, etc.

E.g., if you are arguing against a view in which either everything is computational, or there is only one kind of computation, you are unlikely to persuade a nuanced computationalist.

  6. There are computers, and computers are best explained computationally

Does a given account have the implication that computers are not computational? Or that there are no computers?  Prima facie, these should count as points against that account.

And even so, what is to stop someone from claiming that whatever concepts provide the best account of the things we intuitively (but incorrectly?) called computers, also play a role in the best account of mind?  Cf Transparent computationalism (Chrisley)

On the other hand, do not be so inclusive as to trivialise the notion of computation: pan-computationalism?

  • Actually that’s not the problem with pan-computationalism
  • The real problem is that it has difficulty explaining what is specifically computational about computers (beyond universality)

Computationalism (C): the best explanation (of at least some cases) of cognition will involve (among others) computational concepts (that is, concepts that play a key role in our best explanation of (some) computers, qua computers).

  7. So even if computation is only necessary for the explanation of some kinds of cognition, C is still vindicated.

II: Examples

Consider two kinds of arguments against computationalism: those that rely on enactivism, and those that do not (but which some enactivists rely on)

These summaries are probably unfair, and likely violate corresponding “rules of engagement” concerning enactivism, etc.

Enactivist arguments

  • Enactivism 1:  Self-maintenance
    • E: The operation of a computer/robot, no matter how behaviourally/functionally similar to a human, would not be sufficient for cognition because not alive/self-maintenant
      • E has same skeptical problems as Zombie/Swampman positions
      • Note:  similarity to human is misleading – it may be that given its materiality, a computer would have to be very behaviourally/functionally different from a human in order to cognise (2)
    • Why believe E? Because:
  a) meaning for computers is not intrinsic; and
  b) possession of intrinsic meaning is necessary for (explaining?) cognition
  • Why believe b)?
    • Much of our meaning is imposed from outside/others?
    • Even if one accepts b), only follows that computation can’t explain *all* cognition? (7)
    • Even if human cognition has an “intrinsic meaning” core, does that rule out the possibility of cognition that does not? (2)
  • Why believe a)?
    • Reason 1:  Because any meaning in computers is imposed from the outside
      • But why should that preclude that the system might have, partially by virtue of its computational properties, (either distinct or coinciding) intrinsic meaning, in addition? (5)
      • Might living systems be examples of such?
    • Reason 2: Because:
  c) intrinsic meaning (only) comes from being alive; and
  d) computers are not alive
  • Why believe d)? (given behavioural/functional identity with living systems)
    • Because computers are made of non-living material: they don’t, e.g. metabolise
      • By definition?  Could they? (5)
      • But so are cells: the parts of cells don’t metabolise
      • Because computers are not hierarchies of living systems
        • So they have meaning, just not human meaning? (2, 7)
        • What if we hierarchically arrange them?  Why would their computational nature cease to be explanatorily relevant?
      • Why believe c)?
        • Enactivist meaning is imposed by theorist’s valuing of self-maintenance (3)
      • In any event: E is not enough to defeat C — need to show computation is not necessary
  • Enactivism 2a: Basic Minds (Hutto and Myin)
    • Computation involves representation
      • Contentious (e.g., mechanistic account) (5)
    • Representation requires content
    • There is a level of basic mind that does not involve content
    • Therefore computationalism is false
      • At most only shows that there must be a non-computational explanation of the most basic forms of cognition (7)
      • But actually one can have a non-contentful, yet intentional, notion of computation:  robust correspondence (5)
  • Enactivism 2b: Content (Hutto and Myin)
    • Computation involves representation
      • As above
    • Representation requires content
    • There is a “hard problem of content” (HPC): no naturalistically acceptable theory of sub-personal content
    • Therefore computationalism is false
      • Impatience: Even if true, lack of a theory is not decisive (3)
      • Some argue the “hard” problem has already been solved, long ago (Milkowski)
  • Enactivism(?) 3: Fictionalism (after Downey, with apologies)
    • Computation involves representation
    • Although representation is useful for explaining cognition, utility doesn’t imply metaphysical truth
    • Further, considerations like HPC argue against representation, and therefore computation
    • So computationalism is (metaphysically) false
      • Relies on argument Enactivism 2b – see rejoinders above
      • Only tells against computation construed as a metaphysical claim — not a problem for C
      • Yet C, being epistemic/pragmatic, is the one that matters (1)

Non-enactivist arguments against computationalism

(to which enactivists sometimes appeal):

  • Chinese room (Searle)
    • Many objections
    • Against Strong AI (metaphysics), not against C (1)
    • Against sufficiency, not against C (7)
    • Enactivist irony: emphasising the fundamental differences between living/conscious systems and those that are not (such as Macs and PCs) allows one to question the idea that a human (Searle) can perform the same computations as such machines
      • Proposal: Human has different, intentionality-sensitive counterfactuals that “dead” silicon does not (5)
      • Upshot: Non-living nature of machines is a feature, not a bug — immunizes machine computation to Chinese room critique
  • Diagonalisation (e.g., Penrose)
    • G: Human mathematicians are not computing a (knowably sound) Turing-computable function when ascertaining the truth of certain mathematical propositions
    • Many objections
    • But even if the argument works, it does not impact on C, since X does not need to compute the same functions as Y for X to explain Y
    • That is, C is epistemic, not metaphysical (1)
  • Non-objectivity of computation (Putnam, Searle)
    • Anything can be seen as implementing any Turing machine
    • On some accounts, not all TM instantiations are computers (need intentionality) (5)
    • But fails, even for TMs:  counterfactuals
    • More recently, some (Maudlin, Bishop) have argued that to be explanatory, computational states can only supervene on occurrent physical states, not counterfactual ones.
      • But some occurrent states are individuated by their counterfactual properties
      • Counterfactual properties supervene on occurrent state
      • Also: seems to imply computational concepts are not suitable for explaining computers (6)
  • Phenomenology (Dreyfus)
    • E.g., experts don’t engage in search algorithms when, e.g., playing chess – they just see directly the right moves.
    • Makes unfounded assumptions about what it feels like to be this or that kind of physical (computational) system
    • E.g., a (sub-personal) computation that involves millions of steps may realise an experience with no such complex structure, even skilled coping
    • But even if Dreyfus is right, does not refute C (7)