Human Responsibility, Robot Mind: Conceptual Design Constraints for Social Robots

Next month I’m giving a keynote address at the 2nd Joint UAE Symposium on Social Robotics, entitled: “Human Responsibility, Robot Mind: Conceptual Design Constraints for Social Robots”.

Abstract: Advances in social robot design will be achieved hand-in-hand with increased clarity in our concepts of responsibility, folk psychology, and (machine) consciousness. 1) Since robots will not, in the near future, be responsible agents, avoiding some moral hazards (e.g., that of abdication of responsibility) will require designs that assist in tracing complex lines of responsibility backwards from outcomes, through the robot, to the appropriate humans and/or social institutions. 2) An intuitive understanding by human users of the (possibly quite alien) perceptual and cognitive predicament of robots will be essential to improving cooperation with them, as well as assisting diagnosis, robot training, and the design process itself. Synthetic phenomenology is the attempt to combine robot designs with assistive technologies such as virtual reality to make the experience-like states of cognitive robots understandable to users. 3) Making robot minds more like our own would be facilitated by finding designs that make robots susceptible to the same (mis-)conceptions concerning perception, experience and consciousness that humans have. Making a conscious-like robot will thus involve making robots that find it natural to believe that their inner states are private and non-material. In all three cases, improving robot-human interaction will be as much about an increased understanding of human responsibility, folk psychology and consciousness as it will be about technological breakthroughs in robot hardware and architecture.

Machine consciousness: Moving beyond “Is it possible?”

The next E-Intentionality seminar will be held Monday, June 20th, from 13:00 to 14:50 in Fulton 102. Ron Chrisley will speak on “Machine consciousness: Moving beyond ‘Is it possible?’” as a dry run of his talk at the “Mind, Selves & Technology” workshop later that week in Lisbon:

Philosophical contributions to the field of machine consciousness have been preoccupied with questions such as: Could a machine be conscious? Could a computer be conscious solely by virtue of running the right program? How would we know if we had achieved machine consciousness? And so on. I propose that this preoccupation constitutes a dereliction of philosophical duty. Philosophers will do better at helping to solve conceptual problems in machine consciousness (and at exploiting insights from machine consciousness to help solve conceptual problems in consciousness studies more generally) once they replace those general questions, fascinating as they are, with ones that a) reflect a broader understanding of what machine consciousness is or could be; and b) are better grounded in empirical machine consciousness research.