Simon Bowes: Natural Minds

Congratulations to Simon Bowes, who just passed his DPhil viva with (very) minor corrections.  The internal examiner was Blay Whitby, and the external examiner was Robin Hendry.

Natural Minds: Summary

My project is an empirically informed investigation of the philosophical problem of mental causation, and simultaneously a philosophical investigation of the status of cognitive scientific generalisations. If there is such a thing as mental causation (mental states making things happen qua mental states, not as unnecessary accompaniments of physical causes whose full description requires reference only to fundamental physical entities and their interactions), and if we can classify the mental states involved in these causes in a way useful for making predictions and giving explanations (that is, if there can be a science of mental states), then these states will be natural kinds, in a sense that it will be part of my task to spell out. The second part of my task is to say how the scientific statements made using these mental kinds are not susceptible to reduction to statements about physical kinds, and in fact require taking into account facts at many levels of explanation, including the biological and social levels. Lastly, I will set out the case for Virtual Machine Functionalism being the correct account of the relationship between cognitive states and the broader physical world.
A theme of this thesis is that although there may be something problematic about traditional accounts of natural kinds and representations when it comes to contemporary cognitive science, this is no reason for thinking that those terms are not useful; we should refine rather than eliminate them. Some may say that whether or not to retain such technical terms is little more than a matter of taste, as what is important is the understanding, not the name; on that view, many may prefer to lose the name in order to distance themselves from the details of the theories associated with it. I think, however, that something would be lost in our understanding if we rejected these terms and the theoretical understanding they describe, something that was perhaps there before the wrong-headed theoretical details came to be associated with the terms. The accumulations of mistaken theorising should be thrown out with the murky bathwater. Some terms that have been coined in the development of our understanding of the mind, such as ‘qualia’, should indeed be washed away, but others should be dried off and clothed anew.
Broadly speaking, my argument will be that squaring the widely held but somewhat contradictory intuitions of Physicalism and Anti-reductionism regarding mental states will require rejecting or modifying two other commonly held intuitions, namely Physical Causal Closure (which some take as necessary for Physicalism) and Supervenience (with Physical Realisation).
Another way of stating my aim is in terms of defending the intuitive distinction between metaphorical and literal uses of intentional vocabulary, such as ‘wanting’ and ‘trying’. I have been surprised, in discussions with colleagues of a mathematical and scientific bent who take a physicalistic stance on questions of consciousness, that this distinction is seen by them as purely verbal: they see no meaningful difference between saying of a drop of water that it is ‘trying’ to get to the bottom of the window pane, and using that word to say of a person that he or she is trying to get to the top of the mountain. Much of what follows is an attempt to describe a metaphysics that is rigorously materialist and scientific, but in which the difference between the two cases has a natural place.
In brief, the difference lies in the notion of intentionality: in the case of my desiring something, there is a mental state ‘in’ me that has evolved for the purpose of directing my actions towards a state of affairs ‘out there’ in the world. Such states are things that can be scientifically studied, and a scientific account of human action would be incomplete if it did not refer to such states. In the case of the water drop, there are no similar states without which the scientific understanding of water droplet action would be incomplete. The temptation to elide the distinction between intentional and non-intentional descriptions is based on a simplistic belief that since all causation is physical, there is no meaningful distinction between the kinds of causes that make water drops drop, and those that make climbers climb. This is reinforced by the feeling that reference to such things as personal agency in intentional explanations of action allows in a disagreeable form of dualism. I disagree, and argue that a complete, physicalistic, scientific account of human behaviour must include reference to irreducible mental kinds, such as beliefs and desires.
The form of the argument follows the content, having as it does Natural Kinds at its centre, and arguing as it does that Natural Kinds are at the centre of the web of concepts that form our understanding of mind and its place in the natural world. The starting point is simple folk explanations of human actions, for example, ‘He ate the apple because…’ followed by a set of conditions including combinations of beliefs and desires that together constitute sufficient reasons for the act of eating. Many would say such purported explanations are mere folk tales, fictions that mask our ignorance of the true story, which will, when we know how to tell it, have a much reduced cast of characters, an exclusive set of ‘purely’ physical types.
This is well trodden, and by now quite muddy, ground. Enter the various adherents of the new ‘embodied cognitive science’. However, it is not clear whose side they are on, whether they could tip the balance in favour of one side or the other, or indeed whether they will speak with one voice on this question. One of the aims of the present piece is to analyse what positions within the debate the embodiment theorist is most likely to take. The triangulation points for mapping this terrain will be arguments for the autonomy of sciences of the mind, where Natural Kinds will take centre stage, and counterarguments that rely on the notion of supervenience. An account of Natural Kinds, which I call the Topographical view, will be outlined, which, I claim, avoids the problems of other accounts, and is suitable for use in the generalisations constructed in embodied cognitive science. Following that, we will plug this account into the debate around the autonomy of special sciences in general, and the problem of mental causation in particular. Flowing from that the discussion will broaden out into an investigation of causation, including a refined understanding of causal closure and explanation. After applying the results of these discussions to our understanding of the supervenience relation, a defensible account of emergentism will be given. Next, we will move on to laying out a picture of the kinds of properties of mental states that may be referred to in explanations of rational action, namely, the representational contents of mental states. In order to understand the nature of these states, the feedback dynamics between hierarchically structured levels of cognition involved in their evolution will be emphasised, leading to a picture of embodied cognition that is broad and externalist as opposed to narrow and internalistic. 
Finally, we will look at consciousness as a necessary accompaniment of true intentional action, as opposed to behaviour of which intentional language may be used only metaphorically, showing that subjects of intentional actions are emergent from brain/body/world dynamics. We will finish with a look at the implications of the refined functionalist account defended herein for the metaphysical notions we started with. The conclusion drawn will be that we can indeed refer to genuine mental causes which ground the non-metaphorical use of intentional explanations. I will end by sketching some implications for the notion of free will.



Robot opera: Robert Gyorgyi interviews Ron Chrisley

Robert Gyorgyi, a Music student here at Sussex, recently interviewed me for his dissertation on robot opera.  He asked me about my recent collaborations, in which I programmed Nao robots to perform in operas composed for them.  Below is the transcript.


Interview with Dr Ron Chrisley, 20 April 2018, 12:00, University of Sussex

Bold text: Interviewer (Robert Gyorgyi); [R]: Dr Ron Chrisley

NB: The names ‘Ed’ and ‘Evelyn’ often come up within the interview. ‘Ed’ refers to Ed Hughes, composer of Opposite of Familiarity (2017), and ‘Evelyn’ to Evelyn Ficarra, composer of O, One (2017).

How did you hear about the project? Was it a sort of group brainstorming or was the idea proposed to you?

[R] Evelyn approached me; then we had a meeting at which she explained her vision to me.

These NAO robots are social robots designed to speak, not to sing. Was the assignment of their new task your main challenge? How did you do that?

Machine Messiah: Lessons for AI in “Destination: Void”

Tomorrow is the first day of a two-day conference to be held at Jesus College, Cambridge on the topic “Who’s afraid of the Super-Machine? AI in Sci-Fi Film and Literature”, hosted by the Science & Human Dimension division of the AI & The Future of Humanity Project.


I’m speaking on Friday:

Machine Messiah: Lessons for AI in Destination: Void

In Destination: Void (1965), Frank Herbert anticipates many current and future ethical, social and philosophical issues arising from humanity’s ambitions to create artificial consciousness.  Unlike his well-known Dune milieu, which explicitly sidesteps such questions via the inclusion of a universally respected taboo against artificial intelligence, Destination: Void features moon-based scientist protagonists who explicitly aim to create artificial consciousness, despite previous disastrous attempts.  A key aspect of their strategy is to relinquish direct control of the process of creation, instead designing combinations of resources (a cloned spaceship crew of scientists, engineers, psychiatrists and chaplains with interlocking personality types) and catalytic situations (a colonising space mission that is, unknown to the clone crew, doomed, with scheduled crises) that the moon-based scientists hope will impel the crew members to bring about, if not explicitly design, an artificial consciousness based in the ship’s computer.  As with Herbert’s other works, there is a strong emphasis on the messianic and the divine, but here it is in the context of a superhuman machine, and the ethics of building one.  I will aim to extract from Herbert’s incredibly prescient story several lessons, ranging from the practical to the theological, concerning artificial consciousness, including: the engineering of emergence and conceptual change; intelligent design and “Adam as the first AI”; the naturalisation of spiritual discourse; and the doctrine of the Imago Dei as a theological injunction to engage in artificial consciousness research.

Revisionism about Qualia: Prospects for a Reconciliation Between Physicalism and Qualia


On January 30th I’ll be presenting joint work with Aaron Sloman (“Revisionism about Qualia: Prospects for a Reconciliation Between Physicalism and Qualia”) at a conference in Antwerp on Dan Dennett’s work in philosophy of mind (sponsored by the Centre for Philosophical Psychology and European Network for Sensory Research).  Both Aaron and Dan will be in attendance.  I don’t have an abstract of our talk, but it will be based on a slimmed-down version of our 2016 paper (with some additions, hopefully taking into account some recent developments in Dan’s position on qualia).

The official deadline for registration has passed, but if you are interested in attending perhaps Bence Nanay, the organiser, can still accommodate you?  Below please find the list of speakers and original calls for registration and papers.

Centre for Philosophical Psychology and European Network for Sensory Research

Call for registration 

Conference with Daniel Dennett on his work in philosophy of mind. January 30, 2018. 


  • Daniel Dennett (Tufts)
  • Elliot Carter (Toronto)
  • Ron Chrisley and Aaron Sloman (Sussex)
  • Krzysztof Dolega (Bochum)
  • Markus Eronen (Leuven)
  • Csaba Pleh (CEU)
  • Anna Strasser (Berlin)

This conference accompanies Dennett’s delivery of the 7th Annual Marc Jeannerod Lecture (attendance at this public lecture is free).

Registration (for the conference, not the public lecture): 100 Euros (including conference dinner – negotiable if you don’t want the conference dinner). Send an email to Nicolas Alzetta ( to register. Please register by December 21.

Workshop with Daniel Dennett, January 30, 2018

Call for papers!

Daniel Dennett will give the Seventh Annual Marc Jeannerod Lecture (on empirically grounded philosophy of mind) in January 2018. To accompany this lecture, the University of Antwerp organizes a workshop on Dennett’s philosophy of mind on January 30, 2018, where he will be present.

There are no parallel sessions. Only blinded submissions are accepted.

Length: 3000 words. Single spaced!

Deadline: October 15, 2017. Papers should be sent to

Epistemic Consistency in Knowledge-Based Systems


Today I was informed that my extended abstract, “Epistemic Consistency in Knowledge-Based Systems”, has been accepted for presentation at PT-AI 2017 in Leeds in November. The text of the extended abstract is below.  The copy-paste job I’ve done here loses all the italics, etc.; the proper version is at:

Comments welcome, especially pointers to similar work, papers I should cite, etc.

Epistemic Consistency in Knowledge-Based Systems (extended abstract)

Ron Chrisley
Centre for Cognitive Science,
Sackler Centre for Consciousness Science, and Department of Informatics
University of Sussex, Falmer, United Kingdom

1 Introduction

One common way of conceiving the knowledge-based systems approach to AI is as the attempt to give an artificial agent knowledge that P by putting a (typically linguaform) representation that means P into an epistemically privileged database (the agent’s knowledge base). That is, the approach typically assumes, either explicitly or implicitly, that the architecture of a knowledge-based system (including initial knowledge base, rules of inference, and perception/action systems) is such that the following sufficiency principle should be respected:

  • Knowledge Representation Sufficiency Principle (KRS Principle): if a sentence that means P is in the knowledge base of a KBS, then the KBS knows that P.

The KRS Principle is so strong that, although it might be respected by KBSs that deal exclusively with a priori matters (e.g., theorem provers), most if not all empirical KBSs will, at least some of the time, fail to meet it. Nevertheless, it remains an ideal toward which KBS design might be thought to strive.
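The KRS Principle can be pictured with a minimal sketch (my illustration here, not part of the abstract; the class and sentence names are hypothetical) in which knowing that P is, by construction, just having a sentence meaning P in the knowledge base:

```python
# A toy knowledge-based system in which the KRS Principle holds by fiat:
# membership of a sentence in the KB suffices for the system to "know" it.

class KBS:
    def __init__(self, sentences=None):
        # The epistemically privileged database: a set of sentence strings.
        self.kb = set(sentences or [])

    def tell(self, sentence):
        """Add a sentence to the knowledge base."""
        self.kb.add(sentence)

    def knows(self, sentence):
        """Under the KRS Principle, KB membership is sufficient for knowledge."""
        return sentence in self.kb

agent = KBS()
agent.tell("mission_is_a_failure")
assert agent.knows("mission_is_a_failure")
assert not agent.knows("mission_is_a_success")
```

The point of the abstract, of course, is that for real empirical systems this sufficiency can fail; the sketch just makes explicit what the idealisation assumes.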

Accordingly, it is commonly acknowledged that knowledge bases for KBSs should be consistent, since classical rules of inference permit the addition of any sentence to an inconsistent KB. Thus much effort has been spent on devising tractable ways to ensure consistency or otherwise prevent inferential explosion.
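To see why inconsistency is so damaging: from P and not-P, classical rules license any Q whatsoever (disjunction introduction gives P-or-Q; disjunctive syllogism with not-P then gives Q). A toy check for this sort of explosion, using a hypothetical string encoding of sentences that is mine rather than the abstract's, might look like:

```python
# Classical explosion (ex falso quodlibet): if a KB contains both a sentence
# and its negation, any sentence whatsoever is derivable from it.
#   1. P           (in KB)
#   2. P or Q      (disjunction introduction from 1)
#   3. not P       (in KB)
#   4. Q           (disjunctive syllogism from 2 and 3)

def explodes(kb, q):
    """Return True if q follows trivially from kb by explosion, i.e. if kb
    contains some sentence p together with its negation 'not p'."""
    for p in kb:
        if ("not " + p) in kb or (p.startswith("not ") and p[4:] in kb):
            return True  # anything, including q, is classically derivable
    return False

assert explodes({"P", "not P"}, "Q")        # inconsistent KB: Q follows
assert not explodes({"P"}, "Q")             # consistent KB: no explosion
```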

2 Propositional epistemic consistency

However, it has not been appreciated that for certain kinds of KBSs, a further constraint, which I call propositional epistemic consistency, must be met. To explain this constraint, some notions must be defined:

  • An epistemic KBS is one that can represent propositions attributing propositional knowledge to subjects (such as that expressed by “Dave knows the mission is a failure”).
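One way such knowledge-attributing propositions might be encoded (a hypothetical representation of my own, not the one in the paper) is with tagged tuples whose content slot can itself be another knowledge attribution, allowing arbitrary nesting:

```python
# A nested-tuple encoding of knowledge-attributing propositions, e.g.
# "Dave knows the mission is a failure", and iterated attributions such as
# "HAL knows that Dave knows the mission is a failure".

Knows = "Knows"

# ("Knows", subject, proposition) -- the proposition may itself be nested.
dave_knows = (Knows, "Dave", "mission_is_a_failure")
hal_knows_dave_knows = (Knows, "HAL", dave_knows)

def subject(p):
    """The knower attributed by a knowledge-ascribing proposition."""
    return p[1]

def content(p):
    """The proposition the subject is said to know."""
    return p[2]

assert subject(dave_knows) == "Dave"
assert content(hal_knows_dave_knows) == dave_knows
```

An epistemic KBS, in the abstract's sense, is one whose KB can contain representations of this general kind, whatever the concrete encoding.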
