Hands-on learning with social robots in schools

I’ve been working with student assistant Deepeka Khosla to design hands-on social robotics curricula for school students. On January 12th we delivered three sessions for year 7 and 8 students using AIBO and NAO robots. The sessions involved some of the students doing some (very limited) coding of the robots, and inspecting the robots’ program and sensory states – a basic form of increasing the “transparency” of social robots.
A key component of making robots more intelligible is the development of “roboliteracy”: a good understanding of what can and cannot (currently) be done by, or expected of, social robots. Familiarity can be a key component of de-mystification and anxiety reduction.
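To give a flavour of the coding side of the sessions, here is a minimal sketch of the kind of transparency exercise one can do with NAO: reading a raw sensor value and having the robot announce its own state aloud. This is an illustration, not the exact session code; it assumes the NAOqi Python SDK, a robot at a hypothetical network address, and a sensor key that may vary across robot models.

```python
# Minimal sketch (assumptions noted below): inspect a NAO robot's sensory
# state and have the robot report it aloud, as a simple "transparency" demo.
from naoqi import ALProxy

ROBOT_IP = "192.168.1.10"  # hypothetical address of the robot on the local network
PORT = 9559                # NAOqi's default port

memory = ALProxy("ALMemory", ROBOT_IP, PORT)
tts = ALProxy("ALTextToSpeech", ROBOT_IP, PORT)

# Read a raw sensor value from ALMemory -- here, the front head touch sensor.
touched = memory.getData("Device/SubDeviceList/Head/Touch/Front/Sensor/Value")

# Make the robot's internal state inspectable: it says what it senses.
if touched:
    tts.say("My front head sensor is being touched.")
else:
    tts.say("Nobody is touching my head right now.")
```

Even an exercise this small lets students see that the robot’s “awareness” of its surroundings is just a readable value in memory, which is exactly the de-mystification that roboliteracy aims at.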
Plans are underway to develop a more advanced, coding-based three-hour learning session for year 9 students, for delivery over 2017-2018, starting in May. This will be marketed exclusively to girls. During my recent visit to the UAE I was inspired by what I saw, and by the reports I heard, concerning the strong representation of women and girls in robotics education in that part of the world. Simply letting girls here know about that, showing them photos of female robotics teams from there, and so on, might be one way to make the course content match that marketing aim.
Any suggestions/examples concerning robot curriculum in schools would be very welcome!
Support for development and delivery of these sessions has been provided by the Widening Participation initiative at Sussex.

Ethically designing robots without designing ethical robots

Next Thursday, November 17th, at 13:00 I’ll be leading the E-Intentionality seminar in Freeman G22. I’ll be using this seminar as a dry run for the first part of my keynote lecture at the UAE Social Robotics meeting next week. It builds on work that I first presented at Tufts in 2014.

Abstract:

Since robots will not, in the near future, be responsible agents, avoiding some moral hazards (e.g., that of abdication of responsibility) will require designs that assist in tracing complex lines of responsibility backwards from outcomes, through the robot, and back to the appropriate humans and/or social institutions. I look at one approach to ethically designing robots, that of designing ethical robots – robots that are given a set of rules that are intended to encode an ethical system, and which are to be applied by the robot in the generation of its behaviour. I argue that this approach will in many cases obfuscate, rather than clarify, the lines of responsibility involved (resulting in “moral murk”), and can lead to ethically adverse situations. After giving examples of such cases, I offer an alternative approach to ethical design of robots, one that does not presuppose that notions of obligation and permission apply to the robot in question, thereby avoiding the problems of moral murk and ethical adversity.
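As a concrete (and entirely illustrative) rendering of what “tracing responsibility through the robot” might look like in a design, consider a simple audit-trail data structure that links each robot action back to the human and institutional decisions that shaped it. The names and scenario below are my own hypothetical construction, not anything from the talk:

```python
# Illustrative sketch only: a responsibility audit trail that links each
# robot action back to the humans/institutions whose decisions shaped it,
# rather than attributing obligations to the robot itself.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Provenance:
    source: str    # e.g. "manufacturer", "deploying clinic", "operator"
    decision: str  # the design or deployment decision being recorded

@dataclass
class ActionRecord:
    action: str
    provenance: List[Provenance] = field(default_factory=list)

    def trace(self) -> str:
        """Render the backwards chain from outcome to responsible parties."""
        links = " <- ".join(f"{p.source}: {p.decision}" for p in self.provenance)
        return f"{self.action} <- {links}"

# Hypothetical usage: the robot logs the chain, so responsibility remains
# traceable to people and institutions rather than being abdicated.
record = ActionRecord(
    action="withheld medication reminder",
    provenance=[
        Provenance("operator", "enabled quiet hours 22:00-07:00"),
        Provenance("deploying clinic", "approved quiet-hours policy"),
        Provenance("manufacturer", "shipped default reminder scheduler"),
    ],
)
print(record.trace())
```

The point of such a structure is that nothing in it encodes an ethical rule for the robot to follow; it merely makes the lines of responsibility inspectable after the fact.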

Artificial social agents in a world of conscious beings

I forgot to mention in the update posted earlier today that fellow PAICSer, Steve Torrance, will also be a keynote speaker at the 2nd Joint UAE Symposium on Social Robotics. Here are his title and abstract.

Artificial social agents in a world of conscious beings.

Steve Torrance

Abstract

It is an important fact about each of us that we are conscious beings, and that the others we interact with in our social world are also conscious beings. Yet we appear to be on the edge of a revolution in new social relationships – interactions and intimacies with a variety of non-conscious artificial social agents (ASAs) – both virtual and physical. Granted, we often behave, in the company of such ASAs, as though they are conscious, and as though they are social beings. But in essence we still think of them, at least in our more reflective moments, as “tools” or “systems” – smart, and getting smarter, but lacking phenomenal awareness or real emotion.

In my talk I will discuss ways in which reflection on consciousness – both natural and (would-be) artificial – impacts on our intimate social relationships with robots. And I will propose some implications for human responsibilities in developing these technologies.

I will focus on two questions: (1) What would it take for an ASA to be conscious in a way that “matters”? (2) Can we talk of genuine social relationships or interactions with agents that have no consciousness?

On question (1), I will look at debates in the fields of machine consciousness and machine ethics, in order to examine the range of possible positions that may be taken. I will suggest that there is a close relation between thinking of a being as having a conscious phenomenology and adopting a range of ethical attitudes towards that being. I will also discuss an important debate between those who take a “social-relational” approach to phenomenological and ethical attributions, and those who take an “objectivist” approach. I will offer ways to resolve that debate. This will help provide guidance, I hope, to those who are developing the technologies for smarter ASAs, which may have stronger claims to be taken as conscious.

On question (2), I will suggest that, even for ASAs that are acknowledged not to be conscious, there could be a range of ethical roles that they could come to occupy, in a way that would justify our talking of “artificial social agents” in a rich sense, one that would imply that they had both genuine ethico-social responsibilities and ethico-social entitlements.

The spread of ASAs – whether or not genuinely conscious, social or ethical – will impose heavy responsibilities upon technologists, and those they work with, to guide the social impacts of such agents in acceptable directions, as such agents increasingly inter-operate with us and with our lives. I will thus conclude by pointing to some issues of urgent social concern that are raised by the likely proliferation of ASAs in the coming years and decades.

Human Responsibility, Robot Mind: Conceptual Design Constraints for Social Robots

Next month I’m giving a keynote address at the 2nd Joint UAE Symposium on Social Robotics, entitled “Human Responsibility, Robot Mind: Conceptual Design Constraints for Social Robots”.

Abstract: Advances in social robot design will be achieved hand-in-hand with increased clarity in our concepts of responsibility, folk psychology, and (machine) consciousness. 1) Since robots will not, in the near future, be responsible agents, avoiding some moral hazards (e.g., that of abdication of responsibility) will require designs that assist in tracing complex lines of responsibility backwards from outcomes, through the robot, and back to the appropriate humans and/or social institutions. 2) An intuitive understanding by human users of the (possibly quite alien) perceptual and cognitive predicament of robots will be essential to improving cooperation with them, as well as assisting diagnosis, robot training, and the design process itself. Synthetic phenomenology is the attempt to combine robot designs with assistive technologies such as virtual reality to make the experience-like states of cognitive robots understandable to users. 3) Making robot minds more like our own would be facilitated by finding designs that make robots susceptible to the same (mis-)conceptions concerning perception, experience and consciousness that humans have. Making a conscious-like robot will thus involve making robots that find it natural to believe that their inner states are private and non-material. In all three cases, improving robot-human interaction will be as much about an increased understanding of human responsibility, folk psychology and consciousness as it will be about technological breakthroughs in robot hardware and architecture.
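On point (2), here is a minimal sketch of what a synthetic-phenomenology interface might look like in practice: the robot periodically publishes a machine-readable snapshot of its perceptual predicament, which a visualization or VR client could then render for users. Everything here (the field names, the stubbed values, the choice of a JSON stream) is a hypothetical illustration of the concept, not an existing system:

```python
# Illustrative sketch of the "synthetic phenomenology" idea: the robot
# periodically publishes a machine-readable snapshot of its perceptual
# predicament so an external viewer (e.g. a VR client) can render it.
# All field names here are hypothetical, not from any existing system.
import json
import sys
import time

def perceptual_snapshot():
    """Collect what the robot currently 'experiences' -- stubbed values here."""
    return {
        "timestamp": time.time(),
        "visual_field": {"objects": ["cup"], "recognition_confidence": 0.62},
        "attention": "cup",  # what the control system is currently prioritizing
        "self_model": {"battery": 0.41, "goal": "fetch cup"},
    }

def publish(snapshot, stream):
    # One JSON object per line; a viewer consumes and visualizes the stream.
    stream.write(json.dumps(snapshot) + "\n")

if __name__ == "__main__":
    publish(perceptual_snapshot(), sys.stdout)
```

The design choice worth noting is that the snapshot exposes the robot’s possibly quite alien perceptual predicament (low-confidence recognition, narrow attention) rather than a sanitized summary, since it is precisely that predicament the user needs to grasp.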

Why Are There Developmental Stages in Language Learning? A Developmental Robotics Model of Language Development

Former PAICS researcher Tony Morse has just published, with Angelo Cangelosi, the lead article in the upcoming issue of Cognitive Science.

Why Are There Developmental Stages in Language Learning? A Developmental Robotics Model of Language Development

Most theories of learning would predict a gradual acquisition and refinement of skills as learning progresses, and while some highlight exponential growth, this fails to explain why natural cognitive development typically progresses in stages. Models that do span multiple developmental stages typically have parameters to “switch” between stages. We argue that by taking an embodied view, the interaction between learning mechanisms, the resulting behavior of the agent, and the opportunities for learning that the environment provides can account for the stage-wise development of cognitive abilities. We summarize work relevant to this hypothesis and suggest two simple mechanisms that account for some developmental transitions: neural readiness focuses on changes in the neural substrate resulting from ongoing learning, and perceptual readiness focuses on the perceptual requirements for learning new tasks. Previous work has demonstrated these mechanisms in replications of a wide variety of infant language experiments, spanning multiple developmental stages. Here we piece this work together as a single model of ongoing learning with no parameter changes at all. The model, an instance of the Epigenetic Robotics Architecture (Morse et al., 2010) embodied on the iCub humanoid robot, exhibits ongoing multi-stage development while learning pre-linguistic and then basic language skills.
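A toy numerical sketch (my own, and vastly simpler than the Epigenetic Robotics Architecture itself) may help convey how the perceptual-readiness idea can yield stage-like transitions with no parameter switches: one fixed learning rule throughout, with a later skill gated only by how far an earlier skill has progressed.

```python
# Toy sketch (not the paper's model): stage-like development from a single,
# unchanging learning rule. Word learning can only proceed as fast as the
# perceptual skill it depends on allows, so progress looks stage-wise even
# though nothing in the learner is switched between stages.
RATE = 0.05  # one fixed learning rate for both skills -- no stage parameters

perceptual_skill = 0.0  # e.g. reliably segmenting objects from a scene
word_skill = 0.0        # e.g. mapping heard words onto those objects

for step in range(200):
    # Perceptual learning proceeds continuously from experience.
    perceptual_skill += RATE * (1.0 - perceptual_skill)
    # Word learning is gated by perceptual readiness: poor perception means
    # poor learning opportunities, not a disabled learning mechanism.
    word_skill += RATE * perceptual_skill * (1.0 - word_skill)
    if step % 40 == 0:
        print(f"step {step:3d}  perception={perceptual_skill:.2f}  words={word_skill:.2f}")
```

Run it and word learning starts near-flat, then accelerates once perception matures: a qualitative “stage transition” produced by interaction between mechanisms rather than by any switch.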

You can find the article available for early view here: http://onlinelibrary.wiley.com/doi/10.1111/cogs.12390/abstract?campaign=wolearlyview

Present or former PAICS members who would like to feature their recent research on this site should email me with the details.

Robot crime?

Yesterday I was interviewed by Radio Sputnik to comment on some recent claims about robot/AI crime. They have made a transcription and recording of the interview available here.

Some highlights:

“We need to be worried about criminals using AI in three different ways. One is to evade detection: if one has some artificial intelligence technology, one might be able, for instance, to engage in certain kinds of financial crimes in a way that can be randomized in a particular way that avoids standard methods of crime detection. Or criminals could use computer programs to notice patterns in security systems that a human couldn’t notice, and find weaknesses that a human would find very hard to identify… And then finally a more common use might be of AI to just crack passwords and codes, and access accounts and data that people previously could leave secure. So these are just three examples of how AI would be a serious threat to security of people in general if it were in the hands of the wrong people.”

“I think it would be a tragedy if we let fear of remote possibilities of AI systems committing crimes, if that fear stopped us from investigating artificial intelligence as a positive technology that might help us solve some of the problems our world is facing now. I’m an optimist in that I think that AI as a technology can very well be used for good, and if we’re careful, can be of much more benefit than disadvantage.”

“I think that as long as legislators and law enforcement agencies understand what the possibilities are, and understand that the threat is humans committing crimes with AI rather than robots committing crimes, then I think we can head off any potential worries with the appropriate kinds of regulations and updating of our laws.”