I’m writing this from Zürich airport, on my way back to England after an excellent sojourn at the Dharma Sangha Zen Centre (www.dharma-sangha.de) on the German/Swiss frontier. I was there for a cosy meeting of the Society for Mind-Matter Research (www.mindmatter.de) on the topic of embodiment. My talk gave a brief overview of six ways in which my research has investigated the role of embodiment in mind and computation. You can view my slides here: prezi.com/view/TLzIVu5YT
Robert Gyorgyi, a Music student here at Sussex, recently interviewed me for his dissertation on robot opera. He asked me about my recent collaborations, in which I programmed Nao robots to perform in operas composed for them. Below is the transcript.
Interview with Dr Ron Chrisley, 20 April 2018, 12:00, University of Sussex
Bold text indicates the interviewer (Robert Gyorgyi); [R] indicates Dr Ron Chrisley.
NB: The names ‘Ed’ and ‘Evelyn’ come up often in the interview. ‘Ed’ refers to Ed Hughes, composer of Opposite of Familiarity (2017); ‘Evelyn’ refers to Evelyn Ficarra, composer of O, One (2017).
How did you hear about the project? Was it a sort of group brainstorming or was the idea proposed to you?
[R] Evelyn approached me; then we had a meeting at which she explained her vision to me.
These NAO robots are social robots designed to speak, not to sing. Was the assignment of their new task your main challenge? How did you do that?
Last June I participated in the Robot Opera Mini Symposium organised by the Centre for Research in Opera and Music Theatre (CROMT) at Sussex. A video of all the talks, and the robot opera performances themselves, is available below. My 17-minute talk begins at 08:40 in the video.
The September 2017 issue of Viva Lewes magazine features a two-page spread by Jacqui Bealing on the robot opera project that Evelyn Ficarra, Ed Hughes and I have been collaborating on (as detailed in earlier updates on this blog). The article is available at:
For convenience, I include a copy of the article below.
Next Thursday, November 17th, at 13:00 I’ll be leading the E-Intentionality seminar in Freeman G22. I’ll be using this seminar as a dry run for the first part of my keynote lecture at the UAE Social Robotics meeting next week. It builds on work that I first presented at Tufts in 2014.
Since robots will not, in the near future, be responsible agents, avoiding some moral hazards (e.g., that of abdication of responsibility) will require designs that assist in tracing complex lines of responsibility backwards from outcomes, through the robot, and back to the appropriate humans and/or social institutions. I look at one approach to ethically designing robots, that of designing ethical robots – robots that are given a set of rules intended to encode an ethical system, which the robot is to apply in generating its behaviour. I argue that this approach will in many cases obfuscate, rather than clarify, the lines of responsibility involved (resulting in “moral murk”), and can lead to ethically adverse situations. After giving an example of such a case, I offer an alternative approach to the ethical design of robots, one that does not presuppose that notions of obligation and permission apply to the robot in question, thereby avoiding the problems of moral murk and ethical adversity.