Descartes, Conceivability, and Mirroring Arguments

Yesterday, as part of a panel on machine consciousness, I saw Jack Copeland deliver a razor-sharp talk based on a paper that he published last year with Douglas Campbell and Zhuo-Ran Deng, entitled “The Inconceivable Popularity of Conceivability Arguments”.  To give you an idea of what the paper is about, I reproduce the abstract here:

Famous examples of conceivability arguments include: (i) Descartes’ argument for mind-body dualism; (ii) Kripke’s ‘modal argument’ against psychophysical identity theory; (iii) Chalmers’ ‘zombie argument’ against materialism; and (iv) modal versions of the ontological argument for theism. In this paper we show that for any such conceivability argument, C, there is a corresponding ‘mirror argument’, M. M is deductively valid and has a conclusion that contradicts C’s conclusion. Hence a proponent of C—henceforth, a ‘conceivabilist’—can be warranted in holding that C’s premises are conjointly true only if she can find fault with one of M’s premises. But M’s premises—of which there are just two—are modeled on a pair of C’s premises. The same reasoning that supports the latter supports the former. For this reason a conceivabilist can repudiate M’s premises only on pain of severely undermining C’s premises. We conclude on this basis that all conceivability arguments, including each of (i)—(iv), are fallacious.

It’s a great paper, but I’m not sure the mirroring move against Descartes works, at least not as it is expressed in the paper.  Although the text I quote below is from the paper, I composed this objection while listening to (and recalling) the talk.  I apologise if the paper itself, which I have not read carefully to the end, blocks or anticipates the move I make here (please let me know if it does).

First, the paper defines CEP as the claim that conceivability entails possibility:
(CEP) ⬦cψ → ⬦ψ
Descartes then is quoted:
“I know that everything which I clearly and distinctly understand is capable of being created by God so as to correspond exactly with my understanding of it. Hence the fact that I can clearly and distinctly understand one thing apart from another is enough to make me certain that the two things are distinct, since they are capable of being separated, at least by God… [O]n the one hand I have a clear and distinct idea of myself, in so far as I am simply a thinking, non-extended thing; and on the other hand I have a distinct idea of a body, in so far as this is simply an extended, non-thinking thing. And accordingly, it is certain that I am really distinct from my body, and can exist without it.”(Cottingham, Stoothoff, & Murdoch, 1629, p. 54)
Then it is claimed that Descartes uses CEP:
“Setting φ, ψ, and μ as follows:
φ: Mind=Body
ψ: Mind≠Body
μ: □(Mind≠Body),
we get:
D1. ⬦c(Mind≠Body)
D2. ⬦c(Mind≠Body) → ⬦(Mind≠Body)
D3. ⬦(Mind≠Body) → □(Mind≠Body)
D4. ⬦(Mind=Body) → ¬□(Mind≠Body)
____________________
D5. ¬(Mind=Body)
Here Descartes uses a (theistic) version of CEP to infer that it is possible for mind and body to be distinct. From this he infers they are actually distinct. Why does he think he can make this move from mere possibility to actuality? Presumably because he is assuming D3, or something like it, as a tacit premise (Robinson, 2012).”
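Before turning to my objection, it is worth noting that the reconstructed argument D1-D5 is indeed deductively valid, at least on the standard assumption that metaphysical accessibility is reflexive (the T axiom). As a sanity check, here is a small brute-force search over all two-world Kripke-style models, with separate accessibility relations for conceivability (⬦c) and metaphysical possibility (⬦), treating "Mind = Body" as an unstructured atom. This is my own toy verification, not anything from the paper:

```python
from itertools import product

def diamond(R, val, w):
    # ⬦φ at w: φ holds at some R-successor of w
    return any(val[v] for (u, v) in R if u == w)

def box(R, val, w):
    # □φ at w: φ holds at every R-successor of w
    return all(val[v] for (u, v) in R if u == w)

W = [0, 1]
pairs = [(u, v) for u in W for v in W]

def subsets(xs):
    for bits in product([False, True], repeat=len(xs)):
        yield {x for x, b in zip(xs, bits) if b}

countermodels = 0
for Rc in subsets(pairs):                      # conceivability accessibility
    for Rm in subsets(pairs):                  # metaphysical accessibility
        if not all((w, w) in Rm for w in W):   # T axiom: Rm must be reflexive
            continue
        for bits in product([False, True], repeat=len(W)):
            e = dict(zip(W, bits))             # e: "Mind = Body"
            d = {w: not e[w] for w in W}       # d: "Mind ≠ Body"
            for w in W:
                D1 = diamond(Rc, d, w)
                D2 = (not diamond(Rc, d, w)) or diamond(Rm, d, w)
                D3 = (not diamond(Rm, d, w)) or box(Rm, d, w)
                D4 = (not diamond(Rm, e, w)) or (not box(Rm, d, w))
                D5 = not e[w]
                if D1 and D2 and D3 and D4 and not D5:
                    countermodels += 1

print("countermodels:", countermodels)  # no countermodels: D1-D4 entail D5
```

The search finds no world in any such model where D1-D4 all hold but D5 fails, so the validity of the reconstruction is not in dispute; my worry below concerns only the warrant for its premises.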
But that reasoning is questionable.  Surely none of D1-4 is equivalent to CEP.  So what the authors must mean is that one of D1-4 (i.e., D2) relies on CEP.  But in the quoted passage, Descartes does not appeal to (or argue for) CEP.  Rather, he argues for a more restricted claim, one that more closely resembles D2 in structure, in that it infers specifically the possibility of distinctness from the conceivability of distinctness.  That is, rather than CEP, it seems to me that Descartes argues for, and uses, CDEPD (Conceivability of Distinctness Entails Possibility of Distinctness):
(CDEPD) ⬦c(φ≠ψ) → ⬦(φ≠ψ)
It is prima facie possible to hold CDEPD without holding CEP.  Further, there are arguments (such as the one Descartes puts forward in the quoted passage) that support CDEPD but do not prima facie support CEP.  That is, one can accept what Descartes says in the quoted passage but, it seems, reject any attempt at an analogous (dare I say “mirroring”?) argument:
Hence the fact that I can clearly and distinctly understand one thing as being the same as another is enough to make me certain that the two things are the same, since they are capable of being ?, at least by God.
What could we put in place of the question mark to yield a proposition that is true?  To yield a proposition that is implied by what Descartes says in Meditations or elsewhere?  To yield a proposition that is required for Descartes’s conceivability argument to proceed?
There seems to be an asymmetry here between the conceivability of difference and the conceivability of sameness that allows Descartes to get by with CDEPD, rather than having to employ CEP.
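The formal independence of CDEPD (⬦c(φ≠ψ) → ⬦(φ≠ψ)) from CEP can be illustrated with a toy two-world model of my own devising, under the simplifying assumptions that the distinctness claim is treated as an unstructured atom d and that conceivability gets its own accessibility relation. In this model CDEPD holds (if only vacuously), while CEP fails for some other proposition p; the point is purely formal, not an interpretation of Descartes:

```python
def diamond(R, val, w):
    # ⬦φ at w: φ holds at some R-successor of w
    return any(val[v] for (u, v) in R if u == w)

W = [0, 1]
Rc = {(0, 1)}                    # conceivability accessibility
Rm = {(0, 0), (1, 1)}            # metaphysical accessibility (reflexive)

d = {0: False, 1: False}         # d: "Mind ≠ Body", true at no world here
p = {0: False, 1: True}          # p: some other proposition

# CDEPD, restricted to d, holds at every world of this model...
cdepd_holds = all((not diamond(Rc, d, w)) or diamond(Rm, d, w) for w in W)

# ...but CEP fails: at world 0, p is conceivable yet not possible.
cep_fails = any(diamond(Rc, p, w) and not diamond(Rm, p, w) for w in W)

print(cdepd_holds, cep_fails)
```

So nothing in the modal machinery itself forces someone who grants the distinctness-restricted principle to grant the fully general one; any bridge between them must come from substantive argument, which is exactly what I am claiming the quoted passage does not supply.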
Why does this matter?  It matters because the mirroring argument the authors make against Descartes effectively says:  “Descartes helped himself to CEP, so we can do the same.  Only instead of applying CEP to a proposition about the conceivability of distinctness, we will apply it to a proposition about the conceivability of sameness.”  If what I have said above is right, then it is possible that Descartes was not helping himself to CEP in general, but to a more restricted claim about propositions involving distinctness, CDEPD.  Thus a mirroring argument would not be able to help itself to CEP, and thus would not be able to derive the required, contrary conclusion that way.  Further, there is no way to derive such a contrary conclusion using CDEPD instead.  So the mirroring argument against Descartes’s conceivability argument fails.
I have not yet checked to see if there are similar moves that Kripke and Chalmers can make to “break the mirror” in their respective cases.

(Another) joint paper with Aaron Sloman published

The proceedings of EUCognition 2016 in Vienna, co-edited by myself, Vincent Müller, Yulia Sandamirskaya and Markus Vincze, have just been published online (free access):

In it is a joint paper by Aaron Sloman and myself, entitled “Architectural Requirements for Consciousness”.  Here is the abstract:

This paper develops, in sections I-III, the virtual machine architecture approach to explaining certain features of consciousness first proposed in [1] and elaborated in [2], in which particular qualitative aspects of experiences (qualia) are proposed to be particular kinds of properties of components of virtual machine states of a cognitive architecture. Specifically, they are those properties of components of virtual machine states of an agent that make that agent prone to believe the kinds of things that are typically believed to be true of qualia (e.g., that they are ineffable, immediate, intrinsic, and private). Section IV aims to make it intelligible how the requirements identified in sections II and III could be realised in a grounded, sensorimotor, cognitive robotic architecture.

Roles for Morphology in Computation

 


From Pfeifer, Iida and Lungarella (2014)

Tomorrow I’m giving an invited talk in Gothenburg at the Symposium on Morphological Computing and Cognitive Agency, as part of the International Society for Information Studies Summit 2017 (entitled — deep breath — “DIGITALISATION FOR A SUSTAINABLE SOCIETY: Embodied, Embedded, Networked, Empowered through Information, Computation & Cognition!”).  Here’s my title and abstract:

Roles for Morphology in Computation

The morphological aspects of a system are the shape, geometry, placement and compliance properties of that system. On the rather permissive construal of computation as transformations of information, a correspondingly permissive notion of morphological computation can be defined: cases of information transformation performed by the morphological aspects of a system. This raises the question of what morphological computation might look like under different, less inclusive accounts of computation, such as the view that computation is essentially semantic. I investigate the possibilities for morphological computation under a particular version of the semantic view. First, I make a distinction between two kinds of role a given aspect might play in computations that a system performs: foreground role and background role. The foreground role of a computational system includes such things as rules, state, algorithm, program, bits, data, etc. But these can only function as foreground by virtue of other, background aspects of the same system: the aspects that enable the foreground to be brought forth, made stable/reidentifiable, and to have semantically coherent causal effect. I propose that this foreground/background distinction cross-cuts the morphological/non-morphological distinction. Specifically, morphological aspects of a system may play either role.

The Symposium will be chaired by Rob Lowe, and Gordana Dodig Crnkovic, and the other speakers include Christian Balkenius, Lorenzo Magnani, Yulia Sandamirskaya, Jordi Vallverdú, and John Spencer (and maybe Tom Ziemke and Marcin Schroeder?).

I’m also giving an invited talk the next day (Tuesday) as part of a plenary panel entitled: “What Would It Take For A Machine To Have Non-Reductive Consciousness?”  My talk is entitled “Computation and the Fate of Qualia”.  The other speakers are Piotr Bołtuć (moderator), Jack Copeland, Igor Aleksander, and Keith W. Miller.

Should be a fantastic few days — a shame I can’t stay for the full meeting, but I have to be back at Sussex in time for the Robot Opera Mini-Symposium on Thursday!

 

Hands-on learning with social robots in schools

I’ve been working with student assistant Deepeka Khosla to design hands-on social robotics curricula for school students. We delivered three sessions for year 7 and 8 students on January 12th using AIBO and NAO robots, which involved some of the students doing some (very limited) coding of the robots, and inspection of their program and sensory states, a basic form of increasing the “transparency” of social robots.
A key component of making robots more intelligible is the development of “roboliteracy”: a good understanding of what social robots can and cannot (currently) do, and of what can reasonably be expected of them. Familiarity can be a key component of de-mystification and anxiety reduction.
Plans are underway to develop a more advanced, coding-based 3-hour learning session for year 9 students, for delivery over 2017-2018, starting in May. This will be marketed exclusively to girls. During my recent visit to the UAE I was inspired by what I saw, and the reports I heard, concerning the strong representation of women and girls in robotics education in that part of the world. Letting girls here know about that, showing them photos of female robotics teams from there, etc., might be one way to make the course content match that marketing aim.
Any suggestions/examples concerning robot curriculum in schools would be very welcome!
Support for development and delivery of these sessions has been provided by the Widening Participation initiative at Sussex.

Functionalism, Revisionism, and Qualia

A paper by myself and Aaron Sloman, “Functionalism, Revisionism, and Qualia”, has just been published in the APA Newsletter on Philosophy and Computers. (The whole issue looks fantastic – I’m looking forward to reading all of it, especially the other papers in the “Mind Robotics” section, and most especially the papers by Jun Tani and Riccardo Manzotti). Our contribution is a kind of follow-up to our 2003 paper “Virtual Machines and Consciousness”. There’s no abstract, so let me just list here a few of the more controversial things we claim (and in some cases, even argue for!):

  • Even if our concept of qualia is true of nothing, qualia might still exist (we’re looking at you, Dan Dennett!)
  • If qualia exist, they are physical – or at least their existence alone would not imply the falsity of physicalism (lots of people we’re looking at here)
  • We might not have qualia: The existence of qualia is an empirical matter.
  • Even if we don’t have qualia, it might be possible to build a robot that does!
  • The question of whether inverted qualia spectra are possible is, in a sense, incoherent.

If you get a chance to read it, I’d love to hear what you think.

Ron

CFP: Cognitive Robot Architectures

Recently I was appointed to the Editorial Board of the journal Cognitive Systems Research. We have just announced a call for submissions to a special issue that I am co-editing along with the other organisers of EUCognition 2016.  Although we expect some authors of papers for that meeting to submit their papers for inclusion in this special issue, this is an open call: one need not attend EUCognition 2016 to submit something for inclusion in the special issue.  The call, reproduced below, can also be found at:

http://www.journals.elsevier.com/cognitive-systems-research/call-for-papers/special-issue-on-cognitive-robot-architecture

Special Issue on Cognitive Robot Architectures


Research into cognitive systems is distinct from artificial intelligence in general in that it seeks to design complete artificial systems in ways that are informed by, or that attempt to explain, biological cognition. The emphasis is on systems that are autonomous, robust, flexible and self-improving in pursuing their goals in real environments.  This special issue of Cognitive Systems Research will feature recent work in this area that is pitched at the level of the cognitive architecture of such designs and systems.  Cognitive architectures are the underlying, relatively invariant structural and functional constraints that make possible cognitive processes such as perception, action, reasoning, learning and planning.  In particular, this issue will focus on cognitive architectures for robots that are designed either using insights from natural cognition, or to help explain natural cognition, or both.

Papers included in this issue will address such questions/debates as:

Architectural Requirements for Consciousness

I’ll be giving a talk at the EUCog2016 conference in Vienna this December, presenting joint work with Aaron Sloman.  Here is the extended abstract:

Architectural requirements for consciousness
Ron Chrisley and Aaron Sloman

This paper develops the virtual machine architecture approach to explaining certain features of consciousness first proposed in (Sloman and Chrisley 2003) and elaborated in (Chrisley and Sloman 2016), in which the particular qualitative aspects of experiences (qualia) are identified as being particular kinds of properties of components of virtual machine states of a cognitive architecture. Specifically, they are those properties of components of virtual machine states of agent A that make A prone to believe:

  1. That A is in a state S, the aspects of which are knowable by A directly, without further evidence (immediacy);
  2. That A’s knowledge of these aspects is of a kind such that only A could have such knowledge of those aspects (privacy);
  3. That these states have these aspects intrinsically, not by virtue of, e.g., their functional role (intrinsicness);
  4. That these aspects of S cannot be completely communicated to an agent that is not A (ineffability).

A crucial component of the explanation, which we call the Virtual Machine Functionalism (VMF) account of qualia, is that propositions 1-4 need not be true in order for qualia to make A prone to believe them. In fact, it is arguable that nothing could possibly render all of 1-4 true simultaneously. But this would not imply that there are no qualia, since qualia only require that agents that have them be prone to believe 1-4.

It is an open empirical question whether, in some or all humans, the properties underlying the dispositions to believe 1-4 have a unified structure that would render reference to them a useful move in providing a causal explanation of such beliefs. Thus, according to the VMF account of qualia, it is an open empirical question whether qualia exist in any given human. By the same token, however, it is an open engineering question whether, independently of the human case, it is possible or feasible to design an artificial system that a) is also prone to believe 1-4 and b) is so disposed because of a unified structure. This talk will: a) look at the requirements that must be in place for a system to believe 1-4, and b) sketch a design in which the propensities to believe 1-4 can be traced to a unified virtual machine structure, underwriting talk of such a system having qualia.

a) General requirements for believing 1-4:

These include the requirements for being a system that can be said to have beliefs and propensities to believe at all. Further, having the propensities to believe 1-4 requires the possibility of having beliefs about oneself, one’s knowledge, possibility/impossibility, and other minds. At a minimum, such constraints require a cognitive architecture with reactive, deliberative and meta-management components (Sloman and Chrisley 2003), with at least two layers of meta-cognition: (i) detection and use of various states of internal VM components; and (ii) holding beliefs/theories about those components.

 

b) A qualia-supporting design:

  • A propensity to believe in immediacy (1) can be explained in part as the result of the meta-management layer of a deliberating/justifying but resource- bounded architecture needing a basis for terminating deliberation/justification in a way that doesn’t itself prompt further deliberation or justification.
  • A propensity to believe in privacy (2) can be explained in part as the result of a propensity to believe in immediacy (1), along with a policy of *normally* conceiving of the beliefs of others as making evidential and justificatory impact on one’s own beliefs. To permit the termination of deliberation and justification, some means must be found to discount, at some point, the relevance of others’ beliefs, and privacy provides prima facie rational grounds for doing this.
  • A propensity to believe in intrinsicness (3) can also be explained in part as the result of a propensity to believe in immediacy, since states having the relevant aspects non-intrinsically (i.e., by virtue of relational or systemic facts) would be difficult to reconcile with the belief that one’s knowledge of these aspects does not require any (further) evidence.
  • An account of a propensity to believe in ineffability (4) requires some nuance, since unlike 1-3, 4 is in a sense true, given the causally indexical nature of some virtual machine states and their properties, as explained in (Chrisley and Sloman 2016). However, properly appreciating the truth of 4 requires philosophical sophistication, and so its truth alone cannot explain the conceptually primitive propensity to believe it; some alternative explanations will be offered.
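The layered requirements in (a), and the first two explanatory moves in (b), can be caricatured in code. The following is a purely illustrative toy sketch, not anything from the paper or talk: all class, method, and state names are invented, and the “beliefs” are just strings standing in for the meta-level attitudes discussed above.

```python
class ToyArchitecture:
    """Illustrative three-layer agent: reactive, deliberative, meta-management."""

    def __init__(self, deliberation_budget=3):
        self.deliberation_budget = deliberation_budget  # resource bound
        self.beliefs = []

    def reactive(self, stimulus):
        # Reactive layer: produce an internal virtual-machine state from input.
        return {"percept": stimulus, "vm_state": "q-" + stimulus}

    def deliberate(self, state):
        # Deliberative layer: a resource-bounded justification loop.
        steps = 0
        while steps < self.deliberation_budget:
            steps += 1  # stand-in for generating/checking justifications
        # Budget exhausted: hand off to meta-management to terminate cleanly.
        self.meta_manage(state)
        return state

    def meta_manage(self, state):
        # Meta-management layer: (i) detect the internal VM state, and
        # (ii) record beliefs *about* it that terminate deliberation without
        # inviting further justification (cf. immediacy and privacy above).
        self.beliefs.append(
            f"I know {state['vm_state']} directly, without further evidence")
        self.beliefs.append(
            f"only I could know {state['vm_state']} in this way")

agent = ToyArchitecture()
agent.deliberate(agent.reactive("red-patch"))
print(agent.beliefs)
```

The point of the caricature is only structural: the propensity to form the two recorded beliefs is traced to a single mechanism (the meta-management cut-off), mirroring the claim that a unified virtual machine structure could underwrite the dispositions to believe 1-4.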

 

References:

Sloman, A. and Chrisley, R. (2003) “Virtual Machines and Consciousness”. Journal of Consciousness Studies 10 (4-5), 133-172.

Chrisley, R. and Sloman, A. (2016, in press) “Functionalism, Revisionism and Qualia”. APA Newsletter on Philosophy and Computers 16 (1).