CFP: Cognitive Robot Architectures

Recently I was appointed to the Editorial Board of the journal Cognitive Systems Research. We have just announced a call for submissions to a special issue that I am co-editing along with the other organisers of EUCognition2016. Although we expect some authors of papers for that meeting to submit them for inclusion in this special issue, this is an open call: one need not attend EUCognition2016 to submit. The call, reproduced below, can also be found at:

http://www.journals.elsevier.com/cognitive-systems-research/call-for-papers/special-issue-on-cognitive-robot-architecture

Special Issue on Cognitive Robot Architectures


Research into cognitive systems is distinct from artificial intelligence in general in that it seeks to design complete artificial systems in ways that are informed by, or that attempt to explain, biological cognition. The emphasis is on systems that are autonomous, robust, flexible and self-improving in pursuing their goals in real environments.  This special issue of Cognitive Systems Research will feature recent work in this area that is pitched at the level of the cognitive architecture of such designs and systems.  Cognitive architectures are the underlying, relatively invariant structural and functional constraints that make possible cognitive processes such as perception, action, reasoning, learning and planning.  In particular, this issue will focus on cognitive architectures for robots that are designed either using insights from natural cognition, or to help explain natural cognition, or both.
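To make the notion concrete, here is a toy sketch, in Python, of the distinction the call is drawing: the architecture is the relatively invariant control structure, while the particular perceptual, planning and learning competences are interchangeable components slotted into it. Every name below is illustrative only, not anything specified in the call.

# Toy sketch: the step() cycle is the "relatively invariant" architecture;
# the components passed in are the variable cognitive competences.

class ToyArchitecture:
    def __init__(self, perceive, deliberate, act, learn):
        self.perceive = perceive      # sensing: environment -> percept
        self.deliberate = deliberate  # reasoning/planning: state -> action
        self.act = act                # acting: action -> feedback
        self.learn = learn            # self-improvement: feedback -> update
        self.state = {}

    def step(self, environment):
        """One pass of the invariant cycle."""
        self.state["percept"] = self.perceive(environment)
        action = self.deliberate(self.state)
        feedback = self.act(action, environment)
        self.learn(self.state, feedback)
        return action

Swapping in a different planner changes such a system's competence; the architecture, the fixed cycle above, does not change.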

Papers included in this issue will address open questions and debates in this area.

Move Over, Truth: An Instrumental Metaphysics

The next E-Intentionality seminar will be 13:00-13:50 on Thursday, November 10th 2016, in room Freeman G22 (not G31, where all the EI/CogPhi meetings have been held so far this term). Simon McGregor will present his research:

Move Over, Truth: An Instrumental Metaphysics
Most analytic philosophers are wedded to a realist metaphysics in which what matters is the truth or otherwise of philosophical assertions. I will argue for an utterly different metaphysical mode of thought, which focuses on reflective cognitive practice in the context of one’s lived concerns. This perspective understands rationality in terms of experienced instrumental justification, even for cognitive practices such as forming truth judgements.

Architectural Requirements for Consciousness

I’ll be giving a talk at the EUCog2016 conference in Vienna this December, presenting joint work with Aaron Sloman.  Here is the extended abstract:

Architectural requirements for consciousness
Ron Chrisley and Aaron Sloman

This paper develops the virtual machine architecture approach to explaining certain features of consciousness first proposed in (Sloman and Chrisley 2003) and elaborated in (Chrisley and Sloman 2016), in which the particular qualitative aspects of experiences (qualia) are identified as being particular kinds of properties of components of virtual machine states of a cognitive architecture. Specifically, they are those properties of components of virtual machine states of agent A that make A prone to believe:

  1. That A is in a state S, the aspects of which are knowable by A directly, without further evidence (immediacy);
  2. That A’s knowledge of these aspects is of a kind such that only A could have such knowledge of those aspects (privacy);
  3. That these states have these aspects intrinsically, not by virtue of, e.g., their functional role (intrinsicness);
  4. That these aspects of S cannot be completely communicated to an agent that is not A (ineffability).

A crucial component of the explanation, which we call the Virtual Machine Functionalism (VMF) account of qualia, is that propositions 1-4 need not be true in order for qualia to make A prone to believe those propositions. In fact, it is arguable that nothing could possibly render all of 1-4 true simultaneously. But this would not imply that there are no qualia, since qualia only require that agents that have them be prone to believe 1-4.

It is an open empirical question whether, in some or all humans, the properties underlying the dispositions to believe 1-4 have a unified structure that would render reference to them a useful move in providing a causal explanation of such beliefs. Thus, according to the VMF account of qualia, it is an open empirical question whether qualia exist in any given human. By the same token, however, it is an open engineering question whether, independently of the human case, it is possible or feasible to design an artificial system that a) is also prone to believe 1-4 and b) is so disposed because of a unified structure. This talk will: a) look at the requirements that must be in place for a system to believe 1-4, and b) sketch a design in which the propensities to believe 1-4 can be traced to a unified virtual machine structure, underwriting talk of such a system having qualia.

a) General requirements for believing 1-4:

These include those for being a system that can be said to have beliefs and propensities to believe. Further, having the propensities to believe 1-4 requires the possibility of having beliefs about oneself, one’s knowledge, possibility/impossibility, and other minds. At a minimum, meeting these constraints requires a cognitive architecture with reactive, deliberative and meta-management components (Sloman and Chrisley 2003), with at least two layers of meta-cognition: (i) detection and use of various states of internal VM components; and (ii) holding beliefs/theories about those components.
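As a rough illustration of that layering (a sketch of ours under simplifying assumptions, not a description of any implemented system), the two meta-cognitive layers can be seen as sitting on top of the reactive and deliberative components:

# Minimal sketch of reactive / deliberative / meta-management layering,
# with the two meta-cognitive layers marked. Names are illustrative only.

class Reactive:
    def respond(self, percept):
        # Fast, automatic percept-to-response mapping; no deliberation.
        return {"reflex_to": percept}

class Deliberative:
    def plan(self, percept):
        # Slower processing: generates and selects among alternatives.
        return {"chosen_action": percept, "alternatives_considered": 3}

class MetaManagement:
    def __init__(self):
        self.beliefs_about_self = []

    def detect(self, vm_components):
        # Layer (i): detect and use states of internal VM components.
        return {name: state for name, state in vm_components.items()}

    def theorise(self, detected):
        # Layer (ii): hold beliefs/theories about those detected states.
        for name, state in detected.items():
            self.beliefs_about_self.append(f"my {name} is in state {state!r}")
        return self.beliefs_about_self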


b) A qualia-supporting design:

  • A propensity to believe in immediacy (1) can be explained in part as the result of the meta-management layer of a deliberating/justifying but resource-bounded architecture needing a basis for terminating deliberation/justification in a way that doesn’t itself prompt further deliberation or justification (a toy rendering of this and the next bullet appears after this list).
  • A propensity to believe in privacy (2) can be explained in part as the result of a propensity to believe in immediacy (1), along with a policy of *normally* treating the beliefs of others as having evidential and justificatory bearing on one’s own beliefs. To permit the termination of deliberation and justification, some means must be found to discount, at some point, the relevance of others’ beliefs, and privacy provides prima facie rational grounds for doing so.
  • A propensity to believe in intrinsicness (3) can also be explained in part as the result of a propensity to believe in immediacy, since states having the relevant aspects non-intrinsically (i.e., by virtue of relational or systemic facts) would be difficult to reconcile with the belief that one’s knowledge of these aspects does not require any (further) evidence.
  • An account of a propensity to believe in ineffability (4) requires some nuance, since unlike 1-3, 4 is in a sense true, given the causally indexical nature of some virtual machine states and their properties, as explained in (Chrisley and Sloman 2016). However, properly appreciating the truth of 4 requires philosophical sophistication, and so its truth alone cannot explain the conceptually primitive propensity to believe it; some alternative explanations will be offered.
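The toy rendering promised above, covering the first two bullets (ours, under heavy simplifying assumptions; a sketch, not the design itself):

# A resource-bounded agent must terminate its justification regress
# somewhere; the terminal state gets marked as needing no further
# evidence (immediacy), which in turn licenses discounting others'
# testimony about it (privacy). Illustrative only.

def terminate_deliberation(initial_state, justify, budget=3):
    """Chase justifications until the resource budget is exhausted."""
    state = initial_state
    for _ in range(budget):
        further = justify(state)
        if further is None:
            break
        state = further
    # The stopping point must not itself prompt further deliberation,
    # so it is tagged with beliefs that make stopping seem rational:
    return {
        "state": state,
        "believed_immediate": True,  # (1) knowable directly, without evidence
        "believed_private": True,    # (2) others' beliefs about it discountable
    }

# An otherwise endless regress of justifications, halted by the budget:
result = terminate_deliberation("I seem to see red",
                                lambda s: f"grounds for ({s})")
print(result["state"])  # a justification three links down the chain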


References:

Sloman, A. and Chrisley, R. (2003) “Virtual Machines and Consciousness”. Journal of Consciousness Studies 10 (4-5), 133-172.

Chrisley, R. and Sloman, A. (2016, in press) “Functionalism, Revisionism and Qualia”. APA Newsletter on Philosophy and Computers 16 (1).

Prediction Machines

This Thursday, November 3rd, from 13:00-13:50 in Freeman G31, Simon McGregor will lead the CogPhi discussion of Chapter 1 (“Prediction Machines”) of Andy Clark’s Surfing Uncertainty: Prediction, Action and the Embodied Mind. Have your comments and questions ready beforehand. In fact, feel free to post them in advance, here, as comments on this post.

How we represent emotion in the face: processing the content of information from and to the environment

The next E-Intentionality meeting will be Thursday, October 27th, in Freeman G31. Please note that David has offered to take preliminary comments in advance via email (D.A.Booth@sussex.ac.uk).

[Image: ASCII-art facial expressions (fonzu.deviantart.com)]

David Booth – ‘How we represent emotion in the face: processing the content of information from and to the environment’


This talk briefly presents an experiment which illustrates the scientific theory that embodied and acculturated systems (such as you and me) represent information in the environment by causally processing its content in mathematically determinate ways. Three colleagues stated the strengths of emotions they saw in sets of keyboard characters that (badly) mimicked mobile parts of the human face. The mechanisms by which they rated the emoticons are given by formulae constructed deductively from discrimination distances between the presented diagrams and the memory of their features on occasions when a face has signalled the named emotional reaction to a situation. Five of the basic formulae of this theory of a mind have structures corresponding to classic conscious psychological subfunctions such as perceiving, describing, reasoning, intending and ‘emoting’, and one to unconscious mental processing. Each formula specifies the interactions among mental events which, on the evidence, generated my colleagues’ answers to my questions. The calculations are totally dependent on prior and current material and societal affordances but say nothing about the development or ongoing execution of the neural or linguistic mechanisms involved, any more than do attractors, connectionist statistics or list programs. Functional accounts calculate merely amounts of information or other probabilistic quantities. Distinguishing among contents is equivalent to causal processing. Hence the plurality of mental, cultural and material systems in persons may accommodate a causation monism.
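As a deliberately crude caricature of the discrimination-distance idea (the feature encoding and the linear falloff below are assumptions for illustration; they are not David’s actual deductively constructed formulae):

# Caricature: rated strength of an emotion falls off with the
# discrimination distance between the presented emoticon's features and
# remembered features of faces that signalled that emotion.

def rated_strength(presented, remembered, half_range=4.0):
    """Return a 0..1 rating: 1 at zero distance, 0 at half_range or more."""
    distance = sum(abs(p - r) for p, r in zip(presented, remembered))
    return max(0.0, 1.0 - distance / half_range)

# Hypothetical features: (mouth curvature, eye openness, brow angle)
remembered_joy = (1.0, 0.5, 0.0)   # remembered face signalling joy
emoticon = (0.8, 0.5, 0.1)         # features mimicked by ":-)"
print(rated_strength(emoticon, remembered_joy))  # ~0.925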

Guessing Games and The Power of Prediction

The CogPhi reading group resumes next week. CogPhi offers the chance to read through and discuss recent literature in the Philosophy of Artificial Intelligence and Cognitive Science. Each week a different member of the group leads the others through the chosen reading for that week. This term we’ll be working through Andy Clark’s new book on predictive processing, Surfing Uncertainty: Prediction, Action and the Embodied Mind.

CogPhi meets fortnightly, sharing the same time slot and room as E-Intentionality, which meets fortnightly in the alternate weeks. Although CogPhi announcements will be made on the E-Int mailing list, attendance at one seminar series is not required for attendance at the other. CogPhi announcements will also be made here.

Next week, October 20th, from 13:00-13:50 in Freeman G31, Jonny Lee will lead the discussion of the Introduction (“Guessing Games”) and Chapter 1 (“Prediction Machines”).  Have your comments and questions ready beforehand.  In fact, feel free to post them in advance, here, as comments on this post.

EDIT: Jonny sent out the following message yesterday, the 19th:

It’s been brought to my attention that covering both the introduction and chapter 1 might be too much material for one meeting. As such, let’s say we’ll just stick to the introduction. If you’ve already read chapter 1, apologies, but you’ll be ahead of the game. On the other hand, if the amount of reading was putting you off, you’ve now only got 10 pages to get through!


A Role for Introspection in Anthropic AI

Congratulations to Sam Freed, who yesterday passed his Ph.D. viva with minor corrections! The examiners were Mike Wheeler and Blay Whitby. Sam was co-supervised by myself and Chris Thornton, and Steve Torrance was on his research committee.

A Role for Introspection in Anthropic AI

SUMMARY

The main thesis is that introspection is recommended for the development of anthropic AI.

Human-like AI, distinct from rational AI, would suit robots for caring for the elderly and for other tasks that require interaction with naïve humans. “Anthropic AI” is a sub-type of human-like AI, aiming for the pre-cultured, universal intelligence that is available to healthy humans regardless of time and civilisation. This is contrasted with the western, modern, well-trained and adult intelligence that is often the focus of AI. Anthropic AI would pick up local cultures and habits, ignoring optimality. Introspection is recommended for the AI developer, as a source of ideas for designing an artificial mind, in the context of technology rather than science. Existing notions of introspection are analysed, and the aspiration for “clean” or “good” introspection is exposed as a mirage. Nonetheless, introspection is shown to be a legitimate source of ideas for AI, using considerations of the contexts of discovery vs. justification. Moreover, introspection is shown to be a positively plausible basis for ideas for AI: if a teacher can use introspection to extract mental skills from themselves in order to transmit them to a student, an AI developer can likewise use introspection to uncover the human skills that they want to transfer to a computer. Methods and pitfalls of this approach are detailed, including the common error of polluting one’s introspection with highly-educated notions such as mathematical methods.

Examples are coded and run, showing promising learning behaviour. This is interpreted as a compromise between Classic AI and Dreyfus’s tradition. So far AI practitioners have largely ignored the subjective, while the Phenomenologists have not written code – this thesis bridges that gap. One of the examples is shown to have Gadamerian characteristics, as recommended by (Winograd & Flores, 1986). This serves also as a response to Dreyfus’s more recent publications critiquing AI (Dreyfus, 2007, 2012).