Hands-on learning with social robots in schools

I’ve been working with student assistant Deepeka Khosla to design hands-on social robotics curricula for school students. On January 12th we delivered three sessions for year 7 and 8 students using AIBO and NAO robots. Some of the students did some (very limited) coding of the robots and inspected their program and sensory states, a basic way of increasing the “transparency” of social robots.
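For anyone curious what the students’ coding looked like, here is a minimal sketch of the kind of exercise involved, written against the NAOqi Python SDK. The robot address and the particular ALMemory keys below are illustrative assumptions (they vary by robot and firmware); the point is simply that students can query the robot’s own sensory state and have it report that state aloud.

```python
# A minimal "transparency" exercise: the robot reads some of its own sensory
# state from ALMemory and reports it aloud. Assumes the NAOqi Python SDK and
# a reachable NAO; the address and memory keys are illustrative and may
# differ across robots and firmware versions.
from naoqi import ALProxy

ROBOT_IP = "nao.local"   # hypothetical address of the classroom robot
PORT = 9559              # default NAOqi port

tts = ALProxy("ALTextToSpeech", ROBOT_IP, PORT)
memory = ALProxy("ALMemory", ROBOT_IP, PORT)

# Example memory keys students might inspect (battery charge, head touch).
keys = {
    "battery charge": "Device/SubDeviceList/Battery/Charge/Sensor/Value",
    "front head touch": "Device/SubDeviceList/Head/Touch/Front/Sensor/Value",
}

for label, key in keys.items():
    value = memory.getData(key)   # read one element of the robot's own state
    tts.say("My %s reading is %s" % (label, round(float(value), 2)))
```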
A key component of making robots more intelligible is the development of “roboliteracy”: a good understanding of what social robots can and cannot (currently) do, and of what can reasonably be expected of them. Familiarity is itself an important part of de-mystification and anxiety reduction.
Plans are underway to develop a more advanced, coding-based three-hour learning session for year 9 students, to be delivered over 2017-2018, starting in May. This will be marketed exclusively to girls. During my recent visit to the UAE I was inspired by what I saw, and by the reports I heard, of the strong representation of women and girls in robotics education in that part of the world. Simply letting girls here know about that, for example by showing them photos of female robotics teams from the region, might be one way to make the course content match that marketing aim.
Any suggestions/examples concerning robot curriculum in schools would be very welcome!
Support for development and delivery of these sessions has been provided by the Widening Participation initiative at Sussex.

Functionalism, Revisionism, and Qualia

A paper by Aaron Sloman and me, “Functionalism, Revisionism, and Qualia”, has just been published in the APA Newsletter on Philosophy and Computers. (The whole issue looks fantastic – I’m looking forward to reading all of it, especially the other papers in the “Mind Robotics” section, and most especially the papers by Jun Tani and Riccardo Manzotti.) Our contribution is a kind of follow-up to our 2003 paper “Virtual Machines and Consciousness”. There’s no abstract, so let me just list here a few of the more controversial things we claim (and in some cases, even argue for!):

  • Even if our concept of qualia is true of nothing, qualia might still exist (we’re looking at you, Dan Dennett!)
  • If qualia exist, they are physical – or at least their existence alone would not imply the falsity of physicalism (lots of people we’re looking at here)
  • We might not have qualia: The existence of qualia is an empirical matter.
  • Even if we don’t have qualia, it might be possible to build a robot that does!
  • The question of whether inverted qualia spectra are possible is, in a sense, incoherent.

If you get a chance to read it, I’d love to hear what you think.

Ron

CFP: Cognitive Robot Architectures

Recently I was appointed to the Editorial Board of the journal Cognitive Systems Research. We have just announced a call for submissions to a special issue that I am co-editing along with the other organisers of EUCognition2016. Although we expect some authors of papers for that meeting to submit their papers for inclusion in this special issue, this is an open call: one need not attend EUCognition2016 to submit something for the special issue. The call, reproduced below, can also be found at:

http://www.journals.elsevier.com/cognitive-systems-research/call-for-papers/special-issue-on-cognitive-robot-architecture

Special Issue on Cognitive Robot Architectures


Research into cognitive systems is distinct from artificial intelligence in general in that it seeks to design complete artificial systems in ways that are informed by, or that attempt to explain, biological cognition. The emphasis is on systems that are autonomous, robust, flexible and self-improving in pursuing their goals in real environments.  This special issue of Cognitive Systems Research will feature recent work in this area that is pitched at the level of the cognitive architecture of such designs and systems.  Cognitive architectures are the underlying, relatively invariant structural and functional constraints that make possible cognitive processes such as perception, action, reasoning, learning and planning.  In particular, this issue will focus on cognitive architectures for robots that are designed either using insights from natural cognition, or to help explain natural cognition, or both.

Papers included in this issue will address a range of questions and debates in this area; see the full call at the link above for details.

Architectural Requirements for Consciousness

I’ll be giving a talk at the EUCog2016 conference in Vienna this December, presenting joint work with Aaron Sloman.  Here is the extended abstract:

Architectural requirements for consciousness
Ron Chrisley and Aaron Sloman

This paper develops the virtual machine architecture approach to explaining certain features of consciousness first proposed in (Sloman and Chrisley 2003) and elaborated in (Chrisley and Sloman 2016), in which the particular qualitative aspects of experiences (qualia) are identified as being particular kinds of properties of components of virtual machine states of a cognitive architecture. Specifically, they are those properties of components of virtual machine states of agent A that make A prone to believe:

  1. That A is in a state S, the aspects of which are knowable by A directly, without further evidence (immediacy);
  2. That A’s knowledge of these aspects is of a kind such that only A could have such knowledge of those aspects (privacy);
  3. That these states have these aspects intrinsically, not by virtue of, e.g., their functional role (intrinsicness);
  4. That these aspects of S cannot be completely communicated to an agent that is not A (ineffability).

A crucial component of the explanation, which we call the Virtual Machine Functionalism (VMF) account of qualia, is that the propositions 1-4 need not be true in order for qualia to make A prone to believe those propositions. In fact, it is arguable that nothing could possibly render all of 1-4 true simultaneously. But this would not imply that there are no qualia, since qualia only require that agents that have them be prone to believe 1-4.

It is an open empirical question whether, in some or all humans, the properties underlying the dispositions to believe 1-4 have a unified structure that would render reference to them a useful move in providing a causal explanation of such beliefs. Thus, according to the VMF account of qualia, it is an open empirical question whether qualia exist in any given human. By the same token, however, it is an open engineering question whether, independently of the human case, it is possible or feasible to design an artificial system that a) is also prone to believe 1-4 and b) is so disposed because of a unified structure. This talk will: a) look at the requirements that must be in place for a system to believe 1-4, and b) sketch a design in which the propensities to believe 1-4 can be traced to a unified virtual machine structure, underwriting talk of such a system having qualia.

a) General requirements for believing 1-4:

These include those for being a system that can be said to have beliefs and propensities to believe. Further, having the propensities to believe 1-4 requires the possibility of having beliefs about oneself, one’s knowledge, possibility/impossibility, and other minds. At a minimum, such constraints require a cognitive architecture with reactive, deliberative and meta-management components (Sloman and Chrisley 2003), with at least two layers of meta-cognition: (i) detection and use of various states of internal VM components; and (ii) holding beliefs/theories about those components.
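The following toy skeleton is not from the paper; it is only a hedged illustration of what an architecture meeting these minimal requirements might look like in code: reactive, deliberative and meta-management layers, with the meta-management layer both (i) detecting states of internal virtual-machine components and (ii) forming beliefs about them. All class and method names are my own invention for the sketch.

```python
# Toy sketch (not from the paper) of a three-layer architecture with two
# layers of meta-cognition: (i) detection of internal VM states and
# (ii) beliefs/theories about those states.

class ReactiveLayer:
    def act(self, percept):
        # fast, stimulus-driven response; no explicit reasoning
        return {"reflex": percept}

class DeliberativeLayer:
    def plan(self, percept, self_beliefs):
        # slower, explicit reasoning that can draw on the agent's beliefs
        return {"plan": ("respond_to", percept), "drawing_on": list(self_beliefs)}

class MetaManagementLayer:
    def __init__(self):
        self.self_beliefs = []              # (ii) beliefs about own VM states

    def monitor(self, vm_state):
        detected = dict(vm_state)           # (i) detection of VM component states
        # form a belief *about* those components, e.g. the immediacy belief (1)
        self.self_beliefs.append(("I am in state", detected, "known directly"))
        return detected

class Agent:
    def __init__(self):
        self.reactive = ReactiveLayer()
        self.deliberative = DeliberativeLayer()
        self.meta = MetaManagementLayer()

    def step(self, percept):
        reflex = self.reactive.act(percept)
        plan = self.deliberative.plan(percept, self.meta.self_beliefs)
        self.meta.monitor({"reflex": reflex, "plan": plan})
        return plan
```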

 

b) A qualia-supporting design:

  • A propensity to believe in immediacy (1) can be explained in part as the result of the meta-management layer of a deliberating/justifying but resource-bounded architecture needing a basis for terminating deliberation/justification in a way that doesn’t itself prompt further deliberation or justification (a toy sketch of this termination mechanism is given after this list).
  • A propensity to believe in privacy (2) can be explained in part as the result of a propensity to believe in immediacy (1), along with a policy of *normally* conceiving of the beliefs of others as making evidential and justificatory impact on one’s own beliefs. To permit the termination of deliberation and justification, some means must be found to discount, at some point, the relevance of others’ beliefs, and privacy provides prima facie rational grounds for doing this.
  • A propensity to believe in intrinsicness (3) can also be explained in part as the result of a propensity to believe in immediacy, since states having the relevant aspects non-intrinsically (i.e., by virtue of relational or systemic facts) would be difficult to reconcile with the belief that one’s knowledge of these aspects does not require any (further) evidence.
  • An account of a propensity to believe in ineffability (4) requires some nuance, since unlike 1-3, 4 is in a sense true, given the causally indexical nature of some virtual machine states and their properties, as explained in (Chrisley and Sloman 2016). However, properly appreciating the truth of 4 requires philosophical sophistication, and so its truth alone cannot explain the conceptually primitive propensity to believe it; some alternative explanations will be offered.
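As promised above, here is a toy sketch of the termination mechanism behind the immediacy belief (1): a resource-bounded deliberator must eventually stop chasing justifications, and it tags the state at which it stops as needing no further evidence, which also gives it prima facie grounds for discounting others’ evidence (privacy, 2). The function and its inputs are hypothetical; nothing in the paper commits to this particular code.

```python
# Toy illustration of resource-bounded deliberation terminating in a state
# tagged as "known directly" (immediacy) and as not answerable to others'
# evidence (privacy). Names and structure are hypothetical.

def deliberate(claim, justify, budget=3):
    """Chase justifications for `claim` until the budget runs out.

    `justify` maps a claim to a further justifying claim, or None if the
    agent has no further justification to offer.
    """
    current = claim
    while budget > 0:
        reason = justify(current)
        if reason is None:
            break
        current, budget = reason, budget - 1
    # Deliberation must end here without prompting further justification,
    # so the terminal state is self-ascribed as immediate and private.
    return current, {"known_directly": True, "others_evidence_relevant": False}

# Hypothetical usage: an endless chain of reasons is cut off by the budget.
final_claim, self_belief = deliberate("the light looks red",
                                      lambda c: "because " + c)
```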

 

References:

Sloman, A. and Chrisley, R. (2003) “Virtual Machines and Consciousness”. Journal of Consciousness Studies 10 (4-5), 133-172.

Chrisley, R. and Sloman, A. (2016, in press) “Functionalism, Revisionism and Qualia”. APA Newsletter on Philosophy and Computers 16 (1).

A Role for Introspection in Anthropic AI

Congratulations to Sam Freed, who yesterday passed his Ph.D. viva with minor corrections! The examiners were Mike Wheeler and Blay Whitby. Sam was co-supervised by myself and Chris Thornton, and Steve Torrance was on his research committee.

A Role for Introspection in Anthropic AI

SUMMARY

The main thesis is that introspection is recommended for the development of anthropic AI.

Human-like AI, distinct from rational AI, would suit robots for elderly care and for other tasks that require interaction with naïve humans. “Anthropic AI” is a sub-type of human-like AI, aiming for the pre-cultured, universal intelligence that is available to healthy humans regardless of time and civilisation. This is contrasted with the western, modern, well-trained and adult intelligence that is often the focus of AI. Anthropic AI would pick up local cultures and habits, ignoring optimality. Introspection is recommended for the AI developer as a source of ideas for designing an artificial mind, in the context of technology rather than science. Existing notions of introspection are analysed, and the aspiration for “clean” or “good” introspection is exposed as a mirage. Nonetheless, introspection is shown to be a legitimate source of ideas for AI, using considerations of the contexts of discovery vs. justification. Moreover, introspection is shown to be a positively plausible basis for ideas for AI: just as a teacher can use introspection to extract mental skills from themselves in order to transmit them to a student, so an AI developer can use introspection to uncover the human skills that they want to transfer to a computer. Methods and pitfalls of this approach are detailed, including the common error of polluting one’s introspection with highly-educated notions such as mathematical methods.

Examples are coded and run, showing promising learning behaviour. This is interpreted as a compromise between Classic AI and Dreyfus’s tradition. So far AI practitioners have largely ignored the subjective, while the Phenomenologists have not written code – this thesis bridges that gap. One of the examples is shown to have Gadamerian characteristics, as recommended by Winograd and Flores (1986). This also serves as a response to Dreyfus’s more recent publications critiquing AI (Dreyfus, 2007, 2012).

Human Responsibility, Robot Mind: Conceptual Design Constraints for Social Robots

Next month I’m giving a keynote address at the 2nd Joint UAE Symposium on Social Robotics, entitled “Human Responsibility, Robot Mind: Conceptual Design Constraints for Social Robots”.

Abstract: Advances in social robot design will be achieved hand-in-hand with increased clarity in our concepts of responsibility, folk psychology, and (machine) consciousness. 1) Since robots will not, in the near future, be responsible agents, avoiding some moral hazards (e.g., that of abdication of responsibility) will require designs that assist in tracing complex lines of responsibility backwards from outcomes, through the robot, and back to the appropriate humans and/or social institutions. 2) An intuitive understanding by human users of the (possibly quite alien) perceptual and cognitive predicament of robots will be essential to improving cooperation with them, as well as assisting diagnosis, robot training, and the design process itself. Synthetic phenomenology is the attempt to combine robot designs with assistive technologies such as virtual reality to make the experience-like states of cognitive robots understandable to users. 3) Making robot minds more like our own would be facilitated by finding designs that make robots susceptible to the same (mis-)conceptions concerning perception, experience and consciousness that humans have. Making a conscious-like robot will thus involve making robots that find it natural to believe that their inner states are private and non-material. In all three cases, improving robot-human interaction will be as much about an increased understanding of human responsibility, folk psychology and consciousness as it will be about technological breakthroughs in robot hardware and architecture.
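To make point 1 slightly more concrete, here is a hypothetical sketch (not from the talk) of the kind of design aid it envisages: a provenance record attached to robot decisions, so that an outcome can be traced backwards through the robot to the humans and institutions whose choices contributed to it. All types and field names are assumptions made for illustration.

```python
# Hypothetical sketch of a provenance log for tracing responsibility from an
# outcome, back through the robot's decisions, to human/institutional inputs.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Contribution:
    source: str      # e.g. "manufacturer", "care-home operator", "user"
    kind: str        # e.g. "design choice", "configuration", "command"
    detail: str

@dataclass
class Decision:
    action: str
    contributions: List[Contribution]          # inputs to this robot decision
    caused_by: Optional["Decision"] = None     # earlier decision it depends on

def trace_responsibility(outcome: Decision) -> List[Contribution]:
    """Walk backwards from an outcome, collecting human/institutional inputs."""
    chain, node = [], outcome
    while node is not None:
        chain.extend(node.contributions)
        node = node.caused_by
    return chain

# Hypothetical example: a mistimed medication reminder.
setup = Decision("load schedule",
                 [Contribution("care-home operator", "configuration",
                               "uploaded dosage timetable")])
outcome = Decision("announce medication at 14:00",
                   [Contribution("user", "command", "confirmed reminder")],
                   caused_by=setup)
for c in trace_responsibility(outcome):
    print(c.source, "-", c.kind, "-", c.detail)
```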

Why Are There Developmental Stages in Language Learning? A Developmental Robotics Model of Language Development

Former PAICS researcher Tony Morse has just published, with Angelo Cangelosi, the lead article in the upcoming issue of Cognitive Science.

Why Are There Developmental Stages in Language Learning? A Developmental Robotics Model of Language Development

Most theories of learning would predict a gradual acquisition and refinement of skills as learning progresses, and while some highlight exponential growth, this fails to explain why natural cognitive development typically progresses in stages. Models that do span multiple developmental stages typically have parameters to “switch” between stages. We argue that by taking an embodied view, the interaction between learning mechanisms, the resulting behavior of the agent, and the opportunities for learning that the environment provides can account for the stage-wise development of cognitive abilities. We summarize work relevant to this hypothesis and suggest two simple mechanisms that account for some developmental transitions: neural readiness focuses on changes in the neural substrate resulting from ongoing learning, and perceptual readiness focuses on the perceptual requirements for learning new tasks. Previous work has demonstrated these mechanisms in replications of a wide variety of infant language experiments, spanning multiple developmental stages. Here we piece this work together as a single model of ongoing learning with no parameter changes at all. The model, an instance of the Epigenetic Robotics Architecture (Morse et al. 2010) embodied on the iCub humanoid robot, exhibits ongoing multi-stage development while learning pre-linguistic and then basic language skills.
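The sketch below is emphatically not the Epigenetic Robotics Architecture or the iCub model; it is a minimal toy, with assumptions of my own, of how the two readiness mechanisms can yield stage-like transitions without any stage-switching parameters: a later “task” contributes nothing to learning until earlier learning has made its percept available.

```python
# Toy illustration (not the Epigenetic Robotics Architecture) of stage-like
# development emerging from neural and perceptual readiness, with no
# parameters switched between stages.
import random

def learn(stimuli, epochs=300):
    strengths = {}                       # crude "neural" association strengths
    for _ in range(epochs):
        label, perceivable = random.choice(stimuli)
        # perceptual readiness: a stimulus contributes nothing until the
        # learner's current state makes its percept available
        if not perceivable(strengths):
            continue
        # neural readiness: the update depends on what is already in place,
        # not on an externally imposed stage parameter
        prior = strengths.get(label, 0.0)
        strengths[label] = prior + 0.1 * (1.0 - prior)
    return strengths

# Hypothetical setup: single words are always perceivable; a two-word
# combination only becomes perceivable once both words are well learned,
# producing an apparent "stage" transition.
stimuli = [
    ("ball", lambda s: True),
    ("red", lambda s: True),
    ("red ball", lambda s: s.get("ball", 0) > 0.6 and s.get("red", 0) > 0.6),
]
print(learn(stimuli))
```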

You can find the article available for early view here:  http://onlinelibrary.wiley.com/doi/10.1111/cogs.12390/abstract?campaign=wolearlyview

Present or former PAICS members who would like to feature their recent research on this site should email me with the details.