Ron Chrisley will be speaking on “Varieties of depiction for synthetic phenomenology”
4:30 p.m., 26 April 2007
Pevensey I 1A1
Not all research in machine consciousness aims to instantiate phenomenal states in artifacts. For example, there is work that uses artifacts that do not themselves have phenomenal states, merely to simulate or model organisms that do. Nevertheless, one might refer to all of these pursuits – instantiating, simulating or modeling phenomenal states in an artifact – as “synthetic phenomenality”[1]. But there is another way in which artificial agents (be they simulated or real) may play a crucial role in understanding or creating consciousness: “synthetic phenomenology”. Explanations involving specific experiential events require a means of specifying the contents of experience, and not all such contents can be specified linguistically. One alternative, at least in the case of visual experience, is to use depictions that either evoke or refer to the content of the experience. Practical considerations concerning the generation and integration of such depictions argue in favour of a synthetic approach: generating depictions through the use of an embodied, perceiving and acting agent, either virtual or real. Synthetic phenomenology, then, is the attempt to use the states, interactions and capacities of an artificial agent to specify the content of experience. This talk discusses work with Joel Parthemore on using a robot to specify the non-conceptual content of the visual experience of a (hypothetical) organism that the robot models.
I gave a talk on this topic to E-Intentionality last autumn; today's talk will focus on new developments and findings since then.
[1] Thanks to Rob Clowes for suggesting the term “synthetic phenomenality”.