Prediction Machines

This Thursday, November 3rd, from 13:00-13:50 in Freeman G31, Simon McGregor will lead the CogPhi discussion of Chapter 1 (“Prediction Machines”) of Andy Clark’s Surfing Uncertainty: Prediction, Action and the Embodied Mind.  Have your comments and questions ready beforehand.  In fact, feel free to post them in advance, here, as comments on this post.

How we represent emotion in the face: processing the content of information from and to the environment

The next E-Intentionality meeting will be Thursday, October 27th, in Freeman G31. Please note that David has offered to take preliminary comments in advance via email (D.A.Booth@sussex.ac.uk).

[Image: keyboard-character (ASCII) faces; credit: fonzu.deviantart.com]

David Booth – ‘How we represent emotion in the face: processing the content of information from and to the environment’


This talk briefly presents an experiment which illustrates the scientific theory that embodied and acculturated systems (such as you and me) represent information in the environment by causally processing its content in mathematically determinate ways. Three colleagues stated the strengths of emotions they saw in sets of keyboard characters that (badly) mimicked mobile parts of the human face. The mechanisms by which they rated the emoticons are given by formulae constructed deductively from discrimination distances between the presented diagrams and the memory of their features on occasions when a face has signalled the named emotional reaction to a situation. Five of the basic formulae of this theory of a mind have structures corresponding to classic conscious psychological subfunctions such as perceiving, describing, reasoning, intending and ‘emoting’, and one to unconscious mental processing. Each formula specifies the interactions among mental events which, on the evidence, generated my colleagues’ answers to my questions. The calculations are totally dependent on prior and current material and societal affordances but say nothing about the development or ongoing execution of the neural or linguistic mechanisms involved, any more than do attractors, connectionist statistics or list programs. Functional accounts calculate merely amounts of information or other probabilistic quantities. Distinguishing among contents is equivalent to causal processing. Hence the plurality of mental, cultural and material systems in persons may accommodate a causation monism.
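
To give a rough sense of the kind of formula at issue, here is a minimal illustrative sketch in Python. It is not Booth’s actual model: the feature names, the city-block metric and the linear mapping from distance to rating are all assumptions made up for the example. It merely shows how a rated emotion strength could fall off with the discrimination distance between a presented emoticon and the remembered features of a face signalling that emotion.

```python
# Illustrative sketch only: not the formulae from the talk. Feature names, the
# city-block metric and the linear distance-to-rating mapping are assumptions.

def discrimination_distance(presented: dict, remembered: dict) -> float:
    """City-block distance between presented and remembered feature values,
    each expressed in units of one just-noticeable difference (JND)."""
    return sum(abs(presented[f] - remembered[f]) for f in remembered)

def rated_strength(presented: dict, remembered: dict, max_rating: float = 7.0) -> float:
    """A presented face identical to the remembered one gets the maximum rating;
    each JND of discrimination distance lowers the rating by one step (floored at 0)."""
    return max(0.0, max_rating - discrimination_distance(presented, remembered))

# Hypothetical remembered features of a face signalling 'happiness', in JND units
happy_memory = {"mouth_curvature": 3.0, "eye_openness": 1.0}

# A keyboard-character face such as :-) coded on the same features
emoticon = {"mouth_curvature": 2.0, "eye_openness": 1.0}

print(rated_strength(emoticon, happy_memory))  # 6.0 - one JND short of the remembered face
```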

Guessing Games and The Power of Prediction

The CogPhi reading group resumes next week.  CogPhi offers the chance to read through and discuss recent literature in the Philosophy of Artificial Intelligence and Cognitive Science.  Each week a different member of the group leads the others through the chosen reading for that week. This term we’ll be working through Andy Clark’s new book on predictive processing, Surfing Uncertainty: Prediction, Action and the Embodied Mind.

CogPhi meets fortnightly, sharing the same time slot and room as E-Intentionality, which meets fortnightly in the alternate weeks. Although CogPhi announcements will be made on the E-Int mailing list, attendance at one  seminar series is not required for attendance at the other.  CogPhi announcements will also be made here.

Next week, October 20th, from 13:00-13:50 in Freeman G31, Jonny Lee will lead the discussion of the Introduction (“Guessing Games”) and Chapter 1 (“Prediction Machines”).  Have your comments and questions ready beforehand.  In fact, feel free to post them in advance, here, as comments on this post.

EDIT:  Jonny sent out the following message yesterday, the 19th:

It’s been brought to my attention that covering both the introduction and chapter 1 might be too much material for one meeting. As such, let’s say we’ll just stick to the introduction. If you’ve already read chapter 1, apologies, but you’ll be ahead of the game. On the other hand, if the amount of reading was putting you off, you’ve now only got 10 pages to get through!

 

A Role for Introspection in Anthropic AI

Congratulations to Sam Freed, who yesterday passed his Ph.D. viva with minor corrections!  The examiners were Mike Wheeler and Blay Whitby.  Sam was co-supervised by myself and Chris Thornton, and Steve Torrance was on his research committee.

A Role for Introspection in Anthropic AI

SUMMARY

The main thesis is that introspection is recommended for the development of anthropic AI.

Human-like AI, distinct from rational AI, would suit robots for elderly care and other tasks that require interaction with naïve humans. “Anthropic AI” is a sub-type of human-like AI, aiming for the pre-cultured, universal intelligence that is available to healthy humans regardless of time and civilisation. This is contrasted with the western, modern, well-trained and adult intelligence that is often the focus of AI. Anthropic AI would pick up local cultures and habits, ignoring optimality. Introspection is recommended for the AI developer as a source of ideas for designing an artificial mind, in the context of technology rather than science. Existing notions of introspection are analysed, and the aspiration for “clean” or “good” introspection is exposed as a mirage. Nonetheless, introspection is shown to be a legitimate source of ideas for AI, using considerations of the contexts of discovery vs. justification. Moreover, introspection is shown to be a positively plausible basis for ideas in AI: just as a teacher uses introspection to extract mental skills from themselves in order to transmit them to a student, an AI developer can use introspection to uncover the human skills they want to transfer to a computer. Methods and pitfalls of this approach are detailed, including the common error of polluting one’s introspection with highly-educated notions such as mathematical methods.

Examples are coded and run, showing promising learning behaviour. This is interpreted as a compromise between Classic AI and Dreyfus’s tradition. So far AI practitioners have largely ignored the subjective, while the Phenomenologists have not written code; this thesis bridges that gap. One of the examples is shown to have Gadamerian characteristics, as recommended by Winograd and Flores (1986). This serves also as a response to Dreyfus’s more recent publications critiquing AI (Dreyfus, 2007, 2012).

The Two Dimensions of Representation: Function vs. Content

Dear all,

The first E-Int seminar of the term will be this Thursday, October 13th, 13:00-13:50 in the Freeman Centre, room FRE-G22.  Jonny Lee, our E-Int seminar organiser, will speak.

Jonny Lee: The Two Dimensions of Representation: Function vs. Content 

The concept of mental representation features heavily in scientific explanations of cognition. At the same time, there is no consensus amongst philosophers about which things (if any) are mental representations, and in particular how we can account (if we can) for the semantic properties paradigmatic of ordinary representation. In this paper I will discuss a recent development in the literature which distinguishes between the ‘function’ and ‘content’ dimensions of mental representation, in an attempt to cast light on what a complete account of mental representation must achieve. I will argue that though the distinction is useful, chiefly because it shows where past philosophical projects have erred, there remain three “worries” about prising apart function and content. In elucidating these worries, I point to the possibility of an alternative to a traditional, essentialist theory of content, one which says that content comes part and parcel of how we treat mechanisms as functioning as representations.

 

Artificial social agents in a world of conscious beings

I forgot to mention in the update posted earlier today that fellow PAICSer, Steve Torrance, will also be a keynote speaker at the 2nd Joint UAE Symposium on Social Robotics.  Here are his title and abstract.

Artificial social agents in a world of conscious beings.

Steve Torrance

Abstract

It is an important fact about each of us that we are conscious beings, and that the others we interact with in our social world are also conscious beings. Yet we appear to be on the edge of a revolution in new social relationships – interactions and intimacies with a variety of non-conscious artificial social agents (ASAs) – both virtual and physical. Granted, we often behave, in the company of such ASAs, as though they are conscious and as though they are social beings. But in essence we still think of them, at least in our more reflective moments, as “tools” or “systems” – smart, and getting smarter, but lacking phenomenal awareness or real emotion.

In my talk I will discuss ways in which reflection on consciousness – both natural and (would-be) artificial – impacts on our intimate social relationships with robots. And I will propose some implications for human responsibilities in developing these technologies.

I will focus on two questions: (1) What would it take for an ASA to be conscious in a way that “matters”? (2) Can we talk of genuine social relationships or interactions with agents that have no consciousness?

On question (1), I will look at debates in the fields of machine consciousness and machine ethics, in order to examine the range of possible positions that may be taken. I will suggest that there is a close relation between thinking of a being as having a conscious phenomenology, and adopting a range of ethical attitudes towards that being. I will also discuss an important debate between those who take a “social-relational” approach to phenomenological and ethical attributions, and those who take an “objectivist” approach. I will offer ways to resolve that debate. This will help provide guidance, I hope, to those who are developing the technologies for smarter ASAs, which may have stronger claims to be taken as conscious. On (2), I will suggest that, even for ASAs that are acknowledged not to be conscious, there could be a range of ethical roles that they could come to occupy, in a way that would justify our talking of “artificial social agents” in a rich sense, one that would imply that they had both genuine ethico-social responsibilities and ethico-social entitlements.

The spread of ASAs – whether or not genuinely conscious, social or ethical – will impose heavy responsibilities upon technologists, and those they work with, to guide the social impacts of such agents in acceptable directions, as such agents increasingly inter-operate with us and with our lives. I will thus conclude by pointing to some issues of urgent social concern that are raised by the likely proliferation of ASAs in the coming years and decades.

Human Responsibility, Robot Mind: Conceptual Design Constraints for Social Robots

Next month I’m giving a keynote address at the 2nd Joint UAE Symposium on Social Robotics, entitled: “Human Responsibility, Robot Mind: Conceptual Design Constraints for Social Robots”

Abstract: Advances in social robot design will be achieved hand-in-hand with increased clarity in our concepts of responsibility, folk psychology, and (machine) consciousness. 1) Since robots will not, in the near future, be responsible agents, avoiding some moral hazards (e.g., that of abdication of responsibility) will require designs that assist in tracing complex lines of responsibility backwards from outcomes, through the robot, and back to the appropriate humans and/or social institutions. 2) An intuitive understanding by human users of the (possibly quite alien) perceptual and cognitive predicament of robots will be essential to improving cooperation with them, as well as assisting diagnosis, robot training, and the design process itself. Synthetic phenomenology is the attempt to combine robot designs with assistive technologies such as virtual reality to make the experience-like states of cognitive robots understandable to users. 3) Making robot minds more like our own would be facilitated by finding designs that make robots susceptible to the same (mis-)conceptions concerning perception, experience and consciousness that humans have. Making a conscious-like robot will thus involve making robots that find it natural to believe that their inner states are private and non-material. In all three cases, improving robot-human interaction will be as much about an increased understanding of human responsibility, folk psychology and consciousness as it will be about technological breakthroughs in robot hardware and architecture.
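
As a purely hypothetical illustration of constraint (1), the sketch below shows one way a robot’s decisions could carry explicit back-links to the humans and institutions standing behind them, so that responsibility can be traced backwards from an outcome. Every class name and field is invented for the example; nothing in it is taken from the talk or from any particular robot architecture.

```python
# Hypothetical sketch of constraint (1): each robot decision records which human
# roles and institutions stand behind it, so an outcome can be traced backwards.
# All names and fields are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class Decision:
    action: str
    made_by: str                      # the robot component that chose the action
    responsible_parties: list[str]    # humans/institutions behind that component
    inputs: list["Decision"] = field(default_factory=list)  # upstream decisions

def trace_responsibility(outcome: Decision) -> set[str]:
    """Walk backwards from an outcome through the chain of decisions, collecting
    every human or institutional party that contributed to it."""
    parties = set(outcome.responsible_parties)
    for upstream in outcome.inputs:
        parties |= trace_responsibility(upstream)
    return parties

# Example: a care robot dispenses medication on the basis of a perception module's output
perception = Decision("classified pill as prescribed", "vision module",
                      ["vision-system supplier", "hospital procurement"])
dispense = Decision("dispensed medication", "task planner",
                    ["robot manufacturer", "attending clinician"], inputs=[perception])

print(trace_responsibility(dispense))  # all four parties behind the final outcome
```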