(Another) joint paper with Aaron Sloman published

The proceedings of EUCognition 2016 in Vienna, co-edited by myself, Vincent Müller, Yulia Sandamirskaya and Markus Vincze, have just been published online (free access):

In it is a joint paper by Aaron Sloman and myself, entitled “Architectural Requirements for Consciousness”. Here is the abstract:

This paper develops, in sections I-III, the virtual machine architecture approach to explaining certain features of consciousness first proposed in [1] and elaborated in [2], in which particular qualitative aspects of experiences (qualia) are proposed to be particular kinds of properties of components of virtual machine states of a cognitive architecture. Specifically, they are those properties of components of virtual machine states of an agent that make that agent prone to believe the kinds of things that are typically believed to be true of qualia (e.g., that they are ineffable, immediate, intrinsic, and private). Section IV aims to make it intelligible how the requirements identified in sections II and III could be realised in a grounded, sensorimotor, cognitive robotic architecture.
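
To make the proposal a little more concrete, here is a toy sketch of my own (plain Python; the class and method names are illustrative inventions, not code from the paper). The idea it tries to capture: an agent whose introspective channel has direct but limited access to components of its virtual machine state will, on this view, be prone to report exactly the classic claims about qualia.

```python
# Toy sketch (my own illustration, not from the paper): an agent whose
# introspection over virtual-machine state components yields the classic
# claims about qualia.

from dataclasses import dataclass


@dataclass
class StateComponent:
    """One component of the agent's virtual machine state, e.g. an
    intermediate result of processing a red patch in the visual field."""
    label: str          # internal handle, not externally meaningful
    activation: tuple   # high-dimensional value with no compact description


class Agent:
    def __init__(self):
        self.vm_state = []

    def perceive(self, stimulus):
        # Processing a stimulus creates a VM state component whose full
        # content exceeds what the introspective channel can report.
        activation = tuple(hash((stimulus, i)) % 100 for i in range(64))
        self.vm_state.append(StateComponent(f"percept:{stimulus}", activation))

    def introspect(self):
        """Report on the agent's own state components. The limits of this
        access channel are what generate the classic qualia claims."""
        beliefs = []
        for c in self.vm_state:
            # Access is direct, not inferred from behaviour: "immediate".
            # The 64-dimensional activation cannot be serialised: "ineffable".
            # The component is reported as it is, not via its causes: "intrinsic".
            # No other agent has this access channel: "private".
            for claim in ("immediate", "ineffable", "intrinsic", "private"):
                beliefs.append(f"my {c.label} state is {claim}")
        return beliefs


agent = Agent()
agent.perceive("red-patch")
for belief in agent.introspect():
    print(belief)
```

Nothing metaphysically extravagant is needed to produce those reports: they fall out of the architecture of the access channel, which is the point of the virtual machine approach.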

The existence of qualia does not entail dualism

Our next E-Intentionality seminar is this Thursday, December 1st, at 13:00 in Freeman G22. This will be a dry run of a talk I’ll be giving as part of EUCognition 2016, entitled “Architectural Requirements for Consciousness”. You can read the abstract here, along with an extended clarificatory discussion prompted by David Booth’s comments.

A Role for Introspection in Anthropic AI

Congratulations to Sam Freed, who yesterday passed his Ph.D. viva with minor corrections! The examiners were Mike Wheeler and Blay Whitby. Sam was co-supervised by myself and Chris Thornton, and Steve Torrance was on his research committee.

A Role for Introspection in Anthropic AI

SUMMARY

The main thesis is that introspection is recommended for the development of anthropic AI.

Human-like AI, as distinct from rational AI, would suit robots for care of the elderly and for other tasks that require interaction with naïve humans. “Anthropic AI” is a sub-type of human-like AI, aiming for the pre-cultured, universal intelligence that is available to healthy humans regardless of time and civilisation. This is contrasted with the western, modern, well-trained and adult intelligence that is often the focus of AI. An anthropic AI would pick up local cultures and habits, ignoring optimality. Introspection is recommended to the AI developer as a source of ideas for designing an artificial mind, in the context of technology rather than science. Existing notions of introspection are analysed, and the aspiration for “clean” or “good” introspection is exposed as a mirage. Nonetheless, introspection is shown to be a legitimate source of ideas for AI, using the distinction between the contexts of discovery and justification. Moreover, introspection is shown to be a positively plausible basis for such ideas: just as a teacher uses introspection to extract mental skills from themselves in order to transmit them to a student, an AI developer can use introspection to uncover the human skills they want to transfer to a computer. Methods and pitfalls of this approach are detailed, including the common error of polluting one’s introspection with highly educated notions such as mathematical methods.
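
As a rough illustration of the “local habits, not optimality” idea, here is a minimal sketch of my own (none of the names or details come from the thesis): an agent that simply entrenches whatever responses it observes around it, with no reward signal or optimality criterion at all.

```python
# Minimal sketch (my own, not from the thesis): a "habit learner" that
# adopts whatever response it most often observes in its local environment,
# with no notion of reward or optimality.

from collections import Counter, defaultdict


class HabitLearner:
    def __init__(self):
        self.habits = defaultdict(Counter)  # situation -> response counts

    def observe(self, situation, response):
        """Watch a local agent respond to a situation."""
        self.habits[situation][response] += 1

    def act(self, situation):
        """Reproduce the locally dominant habit; hesitate if unseen."""
        seen = self.habits[situation]
        if not seen:
            return "hesitate"  # no habit formed yet
        return seen.most_common(1)[0][0]


learner = HabitLearner()
# Dropped into a culture where people bow in greeting...
for _ in range(10):
    learner.observe("greeting", "bow")
learner.observe("greeting", "handshake")  # a rare exception

print(learner.act("greeting"))  # "bow": the local habit, optimal or not
print(learner.act("farewell"))  # "hesitate": no observations yet
```

Nothing here optimises anything; the agent simply entrenches what it sees, which is the contrast with rational AI that the summary is drawing.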

Examples are coded and run, showing promising learning behaviour. This is interpreted as a compromise between Classic AI and Dreyfus’s tradition: so far, AI practitioners have largely ignored the subjective, while the phenomenologists have not written code – this thesis bridges that gap. One of the examples is shown to have Gadamerian characteristics, as recommended by Winograd and Flores (1986). This also serves as a response to Dreyfus’s more recent publications critiquing AI (Dreyfus, 2007, 2012).