Robert Gyorgyi, a Music student here at Sussex, recently interviewed me for his dissertation on robot opera. He asked me about my recent collaborations, in which I programmed Nao robots to perform in operas composed for them. Below is the transcript.
Interview with Dr Ron Chrisley, 20 April 2018, 12:00, University of Sussex
Bold text: interviewer (Robert Gyorgyi); [R]: Dr Ron Chrisley
NB: The names ‘Ed’ and ‘Evelyn’ often come up within the interview. ‘Ed’ refers to Ed Hughes, composer of Opposite of Familiarity (2017), and ‘Evelyn’ to Evelyn Ficarra, composer of O, One (2017).
How did you hear about the project? Was it a sort of group brainstorming or was the idea proposed to you?
[R] Evelyn approached me, then we had a meeting at which she explained her vision to me.
These NAO robots are social robots designed to speak, not to sing. Was the assignment of their new task your main challenge? How did you do that?
Last June I participated in the Robot Opera Mini Symposium organised by the Centre for Research in Opera and Music Theatre (CROMT) at Sussex. A video of all the talks, and of the robot opera performances themselves, is available below. My 17-minute talk begins at 08:40 into the video.
The September 2017 issue of Viva Lewes magazine features a two-page spread by Jacqui Bealing on the robot opera project that Evelyn Ficarra, Ed Hughes and I have been collaborating on (as detailed in earlier updates on this blog). The article is available at:
For convenience, I include a copy of the article below.
Next week Sussex will host the third and last workshop of the AHRC Network “Humanising Algorithmic Listening”. At the end of the first day a few of us with some common interests will be speaking about our recent small project proposals, with the hope of finding some common ground. Here’s what I’ll be talking about:
Self-listening for music generation
Although it may seem obvious that in order to create interesting music one must be capable of listening to music as music, the ability to listen is often omitted from the design of musical generative systems. And in those few systems that can listen, the emphasis is almost exclusively on listening to others, e.g. for the purposes of interactive improvisation. I’ll describe a project that aims to explore the role that a system’s listening to, and evaluating, its own musical performance (as its own musical performance) can play in musical generative systems. What kinds of aesthetic and creative possibilities are afforded by such a design? How does the role of self-listening change at different timescales? Can self-listening generative systems shed light on neglected aspects of human performance? A three-component architecture for answering questions such as these will be presented.
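The abstract above leaves the three components unspecified. Purely as an illustration of the general idea, and not as a description of the project's actual architecture, a self-listening generative loop can be sketched as a generator, a listener that evaluates the system's own output as music, and an adapter that feeds that evaluation back into generation. All names and the toy evaluation metric below are my own assumptions:

```python
import random

random.seed(0)  # deterministic for the sake of the example

def generate(state):
    """Produce a short phrase (MIDI-like pitch numbers) from the current state."""
    return [random.randint(state["low"], state["high"]) for _ in range(8)]

def listen(phrase):
    """Evaluate the system's OWN output as music. As a toy stand-in for a real
    musical listener, score the fraction of stepwise intervals (<= 2 semitones)."""
    intervals = [abs(a - b) for a, b in zip(phrase, phrase[1:])]
    return sum(1 for i in intervals if i <= 2) / len(intervals)

def adapt(state, score, threshold=0.5):
    """Let self-evaluation shape future generation: narrow the pitch range
    when the phrase was judged too disjunct."""
    if score < threshold:
        state["high"] = max(state["low"] + 4, state["high"] - 2)
    return state

state = {"low": 60, "high": 84}
for _ in range(20):
    phrase = generate(state)       # component 1: generate
    score = listen(phrase)         # component 2: listen to one's own output
    state = adapt(state, score)    # component 3: adapt in light of that listening
```

The point of the sketch is only the closed loop: the evaluation is applied to the system's own performance, at each timescale of the loop, rather than to another agent's output.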
The talk immediately before mine, Nicholas Ward & Tom Davis’s “A sense of being ‘listened to’”, focusses on an aspect of performance that my thinking on these issues has neglected: the role that X’s perception of Y’s responses to X’s output can and should play in regulating X’s performance, both in real time and over longer timescales. An important component of X perceiving Y’s responses as responses to X is X’s determining whether or not, in the case of auditory/musical output, Y is even listening to X in the first place. When I say “component”, I mean that only in the most abstract sense: it need not be a separate, explicit module or step, independent of the processing of others’ responses in general. And many cases of auditory production are ecologically constrained to make a given auditory source salient or dominant, so that questions like “what (auditory) source is that person responding to?” need not be asked. But the more general point remains: responding to the responses of others should be a key component (even) in a robust self-listening system.