Robot opera: Robert Gyorgyi interviews Ron Chrisley

Robert Gyorgyi, a Music student here at Sussex, recently interviewed me for his dissertation on robot opera.  He asked me about my recent collaborations, in which I programmed Nao robots to perform in operas composed for them.  Below is the transcript.


Interview with Dr Ron Chrisley, 20 April 2018, 12:00, University of Sussex

Bold text: interviewer (Robert Gyorgyi); [R]: Dr Ron Chrisley

NB: The names ‘Ed’ and ‘Evelyn’ come up often in the interview. ‘Ed’ refers to Ed Hughes, composer of Opposite of Familiarity (2017), and ‘Evelyn’ to Evelyn Ficarra, composer of O, One (2017).

How did you hear about the project? Was it a sort of group brainstorming or was the idea proposed to you?

[R] Evelyn approached me; then we had a meeting at which she explained her vision to me.

These NAO robots are social robots designed to speak, not to sing. Was assigning them this new task your main challenge? How did you do that?


Robot Opera Mini-Symposium Video


Last June I participated in the Robot Opera Mini Symposium organised by the Centre for Research in Opera and Music Theatre (CROMT) at Sussex.  A video of all the talks, and the robot opera performances themselves, is available below.  My 17-minute talk begins at 08:40 in the video.

Machine Messiah: Lessons for AI in “Destination: Void”

Tomorrow is the first day of a two-day conference to be held at Jesus College, Cambridge on the topic “Who’s afraid of the Super-Machine?  AI in Sci-Fi Film and Literature”, hosted by the Science & Human Dimension division of the AI & The Future of Humanity Project.


I’m speaking on Friday:

Machine Messiah: Lessons for AI in Destination: Void

In Destination: Void (1965), Frank Herbert anticipates many current and future ethical, social and philosophical issues arising from humanity’s ambitions to create artificial consciousness.  Unlike his well-known Dune milieu, which explicitly sidesteps such questions via the inclusion of a universally-respected taboo against artificial intelligence, the moon-based scientist protagonists in Destination: Void explicitly aim to create artificial consciousness, despite previous disastrous attempts.  A key aspect of their strategy is to relinquish direct control of the process of creation, instead designing combinations of resources (a cloned spaceship crew of scientists, engineers, psychiatrists and chaplains with interlocking personality types) and catalytic situations (a colonising space mission that is, unknown to the clone crew, doomed with scheduled crises) that the moon-based scientists hope will impel the crew members to bring about, if not explicitly design, an artificial consciousness based in the ship’s computer.  As with Herbert’s other works, there is a strong emphasis on the messianic and the divine, but here it is in the context of a superhuman machine, and the ethics of building one.  I will aim to extract from Herbert’s incredibly prescient story several lessons, ranging from the practical to the theological, concerning artificial consciousness, including: the engineering of emergence and conceptual change; intelligent design and “Adam as the first AI”; the naturalisation of spiritual discourse; and the doctrine of the Imago Dei as a theological injunction to engage in artificial consciousness research.

Revisionism about Qualia: Prospects for a Reconciliation Between Physicalism and Qualia


On January 30th I’ll be presenting joint work with Aaron Sloman (“Revisionism about Qualia: Prospects for a Reconciliation Between Physicalism and Qualia”) at a conference in Antwerp on Dan Dennett’s work in philosophy of mind (sponsored by the Centre for Philosophical Psychology and European Network for Sensory Research).  Both Aaron and Dan will be in attendance.  I don’t have an abstract of our talk, but it will be based on a slimmed-down version of our 2016 paper (with some additions, hopefully taking into account some recent developments in Dan’s position on qualia).

The official deadline for registration has passed, but if you are interested in attending, perhaps Bence Nanay, the organiser, can still accommodate you.  Below please find the list of speakers and the original calls for registration and papers.

Centre for Philosophical Psychology and European Network for Sensory Research

Call for registration 

Conference with Daniel Dennett on his work in philosophy of mind. January 30, 2018. 


  • Daniel Dennett (Tufts)
  • Elliot Carter (Toronto)
  • Ron Chrisley and Aaron Sloman (Sussex)
  • Krzysztof Dolega (Bochum)
  • Markus Eronen (Leuven)
  • Csaba Pleh (CEU)
  • Anna Strasser (Berlin)

This conference accompanies Dennett’s delivery of the 7th Annual Marc Jeannerod Lecture (attendance at this public lecture is free). 

Registration (for the conference, not the public lecture): 100 euros, including the conference dinner (negotiable if you don’t want the conference dinner). Send an email to Nicolas Alzetta to register. Please register by December 21. 

Workshop with Daniel Dennett, January 30, 2018

Call for papers!

Daniel Dennett will give the Seventh Annual Marc Jeannerod Lecture (on empirically grounded philosophy of mind) in January 2018. To accompany this lecture, the University of Antwerp organizes a workshop on Dennett’s philosophy of mind on January 30, 2018, at which he will be present.

There are no parallel sessions. Only blinded submissions are accepted.

Length: 3000 words. Single spaced!

Deadline: October 15, 2017. Papers should be sent to

Robot Opera coverage in “Viva Lewes”

The September 2017 issue of Viva Lewes magazine features a two-page spread by Jacqui Bealing on the robot opera project that Evelyn Ficarra, Ed Hughes and I have been collaborating on (as detailed in earlier updates on this blog).  The article is available at:

For convenience, I include a copy of the article below.

[Scan of the Viva Lewes article]

Epistemic Consistency in Knowledge-Based Systems


Today I was informed that my extended abstract, “Epistemic Consistency in Knowledge-Based Systems”, has been accepted for presentation at PT-AI 2017 in Leeds in November. The text of the extended abstract is below.  The copy-paste job I’ve done here loses all the italics, etc.; the proper version is at:

Comments welcome, especially pointers to similar work, papers I should cite, etc.

Epistemic Consistency in Knowledge-Based Systems (extended abstract)

Ron Chrisley
Centre for Cognitive Science,
Sackler Centre for Consciousness Science, and Department of Informatics
University of Sussex, Falmer, United Kingdom

1 Introduction

One common way of conceiving the knowledge-based systems approach to AI is as the attempt to give an artificial agent knowledge that P by putting a (typically linguaform) representation that means P into an epistemically privileged database (the agent’s knowledge base). That is, the approach typically assumes, either explicitly or implicitly, that the architecture of a knowledge-based system (including initial knowledge base, rules of inference, and perception/action systems) is such that the following sufficiency principle should be respected:

  • Knowledge Representation Sufficiency Principle (KRS Principle): if a sentence that means P is in the knowledge base of a KBS, then the KBS knows that P.

The KRS Principle is so strong that, although KBSs that deal exclusively with a priori matters (e.g., theorem provers) may be able to respect it, most if not all empirical KBSs will, at least some of the time, fail to meet it. Nevertheless, it remains an ideal toward which KBS design might be thought to strive.
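As a minimal sketch (all names hypothetical, not from any real KBS implementation), the KRS Principle amounts to treating knowledge-base membership as sufficient for knowledge:

```python
# Sketch of the KRS Principle as a design assumption: the system "knows that P"
# just in case a sentence meaning P sits in its knowledge base.
# Purely illustrative; an empirical KBS may hold false or unjustified
# sentences, which is exactly why the principle is so hard to meet.

class KBS:
    def __init__(self, sentences):
        self.kb = set(sentences)

    def knows(self, p):
        # KRS Principle: membership in the KB is taken as sufficient
        # for knowledge that p.
        return p in self.kb

agent = KBS({"mission_ok", "fuel_low"})
print(agent.knows("fuel_low"))   # True
print(agent.knows("crew_safe"))  # False
```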

Accordingly, it is commonly acknowledged that knowledge bases for KBSs should be consistent, since classical rules of inference permit the addition of any sentence to an inconsistent KB. Much effort has therefore been spent on devising tractable ways to ensure consistency or otherwise prevent inferential explosion.
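To make the explosion concrete, here is a toy illustration (hypothetical code, not from the abstract): from P and ~P, disjunction introduction gives (P or Q), and disjunctive syllogism with ~P then yields Q, for any sentence Q whatsoever.

```python
# Toy demonstration of why an inconsistent KB is classically explosive
# (ex falso quodlibet). Sentences are strings; '~' marks negation.

def explode(kb, q):
    """Return q if the KB contains a direct contradiction, simulating the
    classical derivation P |- (P or Q); (P or Q), ~P |- Q."""
    for p in kb:
        if "~" + p in kb:  # contradiction found: P and ~P both present
            return q       # so any sentence q is classically derivable
    return None            # no direct contradiction; this shortcut fails

print(explode({"mission_ok", "~mission_ok"}, "moon_is_cheese"))  # moon_is_cheese
print(explode({"mission_ok", "fuel_low"}, "moon_is_cheese"))     # None
```

This only detects direct (pairwise) contradictions; real consistency checking is harder, which is why the tractability effort mentioned above matters.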

2 Propositional epistemic consistency

However, it has not been appreciated that for certain kinds of KBSs, a further constraint, which I call propositional epistemic consistency, must be met. To explain this constraint, some notions must be defined:

  • An epistemic KBS is one that can represent propositions attributing propositional knowledge to subjects (such as that expressed by “Dave knows the mission is a failure”).
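One way such a knowledge attribution might be encoded (a purely illustrative representation, not taken from the abstract) is as a nested structure applying an epistemic operator to a subject and a proposition:

```python
# Illustrative encoding of a knowledge attribution such as
# "Dave knows the mission is a failure": nested tuples standing in
# for the syntax of an epistemic logic. Hypothetical names throughout.

def knows(subject, proposition):
    """Build a sentence attributing knowledge of `proposition` to `subject`."""
    return ("KNOWS", subject, proposition)

attribution = knows("Dave", ("FAILURE", "mission"))
print(attribution)  # ('KNOWS', 'Dave', ('FAILURE', 'mission'))
```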
