I’m writing this from Zürich airport, on my way back to England after an excellent sojourn at the Dharma Sangha Zen Centre (www.dharma-sangha.de) on the German/Swiss frontier. I was there for a cosy meeting of the Society for Mind-Matter Research (www.mindmatter.de) on the topic of embodiment. My talk gave a brief overview of six ways in which my research has investigated the role of embodiment in mind and computation. You can view my slides here: prezi.com/view/TLzIVu5YT
Robert Gyorgyi, a Music student here at Sussex, recently interviewed me for his dissertation on robot opera. He asked me about my recent collaborations, in which I programmed Nao robots to perform in operas composed for them. Below is the transcript.
Interview with Dr Ron Chrisley, 20 April 2018, 12:00, University of Sussex
Bold text: Interviewer (Robert Gyorgyi); [R]: Dr Ron Chrisley
NB: The names ‘Ed’ and ‘Evelyn’ often come up within the interview. ‘Ed’ refers to Ed Hughes, composer of Opposite of Familiarity (2017), and ‘Evelyn’ to Evelyn Ficarra, composer of O, One (2017).
How did you hear about the project? Was it a sort of group brainstorming or was the idea proposed to you?
[R] Evelyn approached me; we then had a meeting at which she explained her vision to me.
These NAO robots are social robots designed to speak, not to sing. Was the assignment of their new task your main challenge? How did you do that?
Last June I participated in the Robot Opera Mini Symposium organised by the Centre for Research in Opera and Music Theatre (CROMT) at Sussex. A video of all the talks, and the robot opera performances themselves, is available below. My 17-minute talk begins at 08:40 in the video.
The September 2017 issue of Viva Lewes magazine features a two-page spread by Jacqui Bealing on the robot opera project that Evelyn Ficarra, Ed Hughes and I have been collaborating on (as detailed in earlier updates on this blog). The article is available at:
For convenience, I include a copy of the article below.
Next Thursday, November 17th, at 13:00 I’ll be leading the E-Intentionality seminar in Freeman G22. I’ll be using this seminar as a dry run for the first part of my keynote lecture at the UAE Social Robotics meeting next week. It builds on work that I first presented at Tufts in 2014.
Since robots will not, in the near future, be responsible agents, avoiding some moral hazards (e.g., that of abdication of responsibility) will require designs that assist in tracing complex lines of responsibility backwards from outcomes, through the robot, and back to the appropriate humans and/or social institutions. I look at one approach to ethically designing robots, that of designing ethical robots – robots that are given a set of rules that are intended to encode an ethical system, and which are to be applied by the robot in the generation of its behaviour. I argue that this approach will in many cases obfuscate, rather than clarify, the lines of responsibility involved (resulting in “moral murk”), and can lead to ethically adverse situations. After giving an example of such a case, I offer an alternative approach to the ethical design of robots, one that does not presuppose that notions of obligation and permission apply to the robot in question, thereby avoiding the problems of moral murk and ethical adversity.
I forgot to mention in the update posted earlier today that fellow PAICSer, Steve Torrance, will also be a keynote speaker at the 2nd Joint UAE Symposium on Social Robotics. Here are his title and abstract.
Artificial social agents in a world of conscious beings.
It is an important fact about each of us that we are conscious beings, and that the others we interact with in our social world are also conscious beings. Yet we appear to be on the edge of a revolution in new social relationships – interactions and intimacies with a variety of non-conscious artificial social agents (ASAs) – both virtual and physical. Granted, we often behave, in the company of such ASAs, as though they are conscious, and as though they are social beings. But in essence we still think of them, at least in our more reflective moments, as “tools” or “systems” – smart, and getting smarter, but lacking phenomenal awareness or real emotion.
In my talk I will discuss ways in which reflection on consciousness – both natural and (would-be) artificial – impacts on our intimate social relationships with robots. And I will propose some implications for human responsibilities in developing these technologies.
I will focus on two questions: (1) What would it take for an ASA to be conscious in a way that “matters”? (2) Can we talk of genuine social relationships or interactions with agents that have no consciousness?
On question (1), I will look at debates in the fields of machine consciousness and machine ethics, in order to examine the range of possible positions that may be taken. I will suggest that there is a close relation between thinking of a being as having a conscious phenomenology, and adopting a range of ethical attitudes towards that being. I will also discuss an important debate between those who take a “social-relational” approach to phenomenological and ethical attributions, and those who take an “objectivist” approach. I will offer ways to resolve that debate. This will help provide guidance, I hope, to those who are developing the technologies for smarter ASAs, which possibly may have stronger claims to be taken as conscious.

On question (2), I will suggest that, even for ASAs that are acknowledged not to be conscious, it is possible that there could be a range of ethical roles that they could come to occupy, in a way that would justify our talking of “artificial social agents” in a rich sense, one that would imply that they had both genuine ethico-social responsibilities and ethico-social entitlements.
The spread of ASAs – whether or not genuinely conscious, social or ethical – will impose heavy responsibilities upon technologists, and those they work with, to guide the social impacts of such agents in acceptable directions, as such agents increasingly inter-operate with us and with our lives. I will thus conclude by pointing to some issues of urgent social concern that are raised by the likely proliferation of ASAs in the coming years and decades.
Next month I’m giving a keynote address at the 2nd Joint UAE Symposium on Social Robotics, entitled: “Human Responsibility, Robot Mind: Conceptual Design Constraints for Social Robots”
Abstract: Advances in social robot design will be achieved hand-in-hand with increased clarity in our concepts of responsibility, folk psychology, and (machine) consciousness.

1) Since robots will not, in the near future, be responsible agents, avoiding some moral hazards (e.g., that of abdication of responsibility) will require designs that assist in tracing complex lines of responsibility backwards from outcomes, through the robot, and back to the appropriate humans and/or social institutions.

2) An intuitive understanding by human users of the (possibly quite alien) perceptual and cognitive predicament of robots will be essential to improving cooperation with them, as well as assisting diagnosis, robot training, and the design process itself. Synthetic phenomenology is the attempt to combine robot designs with assistive technologies such as virtual reality to make the experience-like states of cognitive robots understandable to users.

3) Making robot minds more like our own would be facilitated by finding designs that make robots susceptible to the same (mis-)conceptions concerning perception, experience and consciousness that humans have. Making a conscious-like robot will thus involve making robots that find it natural to believe that their inner states are private and non-material.

In all three cases, improving robot-human interaction will be as much about an increased understanding of human responsibility, folk psychology and consciousness as it will be about technological breakthroughs in robot hardware and architecture.
Former PAICS researcher Tony Morse has just published, with Angelo Cangelosi, the lead article in the upcoming issue of Cognitive Science.
Why Are There Developmental Stages in Language Learning? A Developmental Robotics Model of Language Development
Most theories of learning would predict a gradual acquisition and refinement of skills as learning progresses, and while some highlight exponential growth, this fails to explain why natural cognitive development typically progresses in stages. Models that do span multiple developmental stages typically have parameters to “switch” between stages. We argue that by taking an embodied view, the interaction between learning mechanisms, the resulting behavior of the agent, and the opportunities for learning that the environment provides can account for the stage-wise development of cognitive abilities. We summarize work relevant to this hypothesis and suggest two simple mechanisms that account for some developmental transitions: neural readiness focuses on changes in the neural substrate resulting from ongoing learning, and perceptual readiness focuses on the perceptual requirements for learning new tasks. Previous work has demonstrated these mechanisms in replications of a wide variety of infant language experiments, spanning multiple developmental stages. Here we piece this work together as a single model of ongoing learning with no parameter changes at all. The model, an instance of the Epigenetic Robotics Architecture (Morse et al., 2010) embodied on the iCub humanoid robot, exhibits ongoing multi-stage development while learning pre-linguistic and then basic language skills.
The article is available for Early View here: http://onlinelibrary.wiley.com/doi/10.1111/cogs.12390/abstract?campaign=wolearlyview
Present or former PAICS members who would like to feature their recent research on this site should email me with the details.