Functionalism, Revisionism, and Qualia

A paper by Aaron Sloman and me, “Functionalism, Revisionism, and Qualia”, has just been published in the APA Newsletter on Philosophy and Computers. (The whole issue looks fantastic – I’m looking forward to reading all of it, especially the other papers in the “Mind Robotics” section, and most especially the papers by Jun Tani and Riccardo Manzotti.) Our contribution is a kind of follow-up to our 2003 paper “Virtual Machines and Consciousness”. There’s no abstract, so let me just list here a few of the more controversial things we claim (and in some cases, even argue for!):

  • Even if our concept of qualia is true of nothing, qualia might still exist (we’re looking at you, Dan Dennett!)
  • If qualia exist, they are physical – or at least their existence alone would not imply the falsity of physicalism (lots of people we’re looking at here)
  • We might not have qualia: The existence of qualia is an empirical matter.
  • Even if we don’t have qualia, it might be possible to build a robot that does!
  • The question of whether inverted qualia spectra are possible is, in a sense, incoherent.

If you get a chance to read it, I’d love to hear what you think.

Ron

The existence of qualia does not entail dualism

Our next E-Intentionality seminar is this Thursday, December 1st, at 13:00 in Freeman G22. This will be a dry run of a talk I’ll be giving as part of EUCognition2016, entitled “Architectural Requirements for Consciousness”. You can read the abstract here, along with an extended clarificatory discussion prompted by David Booth’s comments.

Architectural Requirements for Consciousness

I’ll be giving a talk at the EUCog2016 conference in Vienna this December, presenting joint work with Aaron Sloman.  Here is the extended abstract:

Architectural requirements for consciousness
Ron Chrisley and Aaron Sloman

This paper develops the virtual machine architecture approach to explaining certain features of consciousness first proposed in (Sloman and Chrisley 2003) and elaborated in (Chrisley and Sloman 2016), in which the particular qualitative aspects of experiences (qualia) are identified as being particular kinds of properties of components of virtual machine states of a cognitive architecture. Specifically, they are those properties of components of virtual machine states of agent A that make A prone to believe:

  1. That A is in a state S, the aspects of which are knowable by A directly, without further evidence (immediacy);
  2. That A’s knowledge of these aspects is of a kind such that only A could have such knowledge of those aspects (privacy);
  3. That these states have these aspects intrinsically, not by virtue of, e.g., their functional role (intrinsicness);
  4. That these aspects of S cannot be completely communicated to an agent that is not A (ineffability).

A crucial component of the explanation, which we call the Virtual Machine Functionalism (VMF) account of qualia, is that the propositions 1-4 need not be true in order for qualia to make A prone to believe those propositions. In fact, it is arguable that nothing could possibly render all of 1-4 true simultaneously. But this would not imply that there are no qualia, since qualia only require that agents that have them be prone to believe 1-4.

It is an open empirical question whether, in some or all humans, the properties underlying the dispositions to believe 1-4 have a unified structure that would render reference to them a useful move in providing a causal explanation of such beliefs. Thus, according to the VMF account of qualia, it is an open empirical question whether qualia exist in any given human. By the same token, however, it is an open engineering question whether, independently of the human case, it is possible or feasible to design an artificial system that a) is also prone to believe 1-4 and b) is so disposed because of a unified structure. This talk will: a) look at the requirements that must be in place for a system to believe 1-4, and b) sketch a design in which the propensities to believe 1-4 can be traced to a unified virtual machine structure, underwriting talk of such a system having qualia.

a) General requirements for believing 1-4:

These include those for being a system that can be said to have beliefs and propensities to believe. Further, having the propensities to believe 1-4 requires the possibility of having beliefs about oneself, one’s knowledge, possibility/impossibility, and other minds. At a minimum, such constraints require a cognitive architecture with reactive, deliberative and meta-management components (Sloman and Chrisley 2003), with at least two layers of meta-cognition: (i) detection and use of various states of internal VM components; and (ii) holding beliefs/theories about those components.
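For concreteness, here is a minimal illustrative sketch of such an architecture, in Python. Everything in it (class names, percept fields, the scoring rule) is invented for illustration and is not from the paper or any actual implementation; it is only meant to show where the two layers of meta-cognition sit relative to the reactive and deliberative components.

```python
# Purely illustrative sketch (all names invented, not from the paper):
# a toy agent with reactive, deliberative and meta-management components,
# plus two layers of meta-cognition over its internal virtual-machine states.

class ReactiveLayer:
    """Maps percepts directly to responses, with no deliberation."""
    def step(self, percept):
        return "withdraw" if percept.get("threat", 0.0) > 0.5 else "idle"


class DeliberativeLayer:
    """Weighs a few candidate actions against goals and picks one."""
    def step(self, percept, goals):
        options = ["approach", "avoid", "explore"]
        def score(option):
            penalty = percept.get("threat", 0.0) if option == "approach" else 0.0
            return goals.get(option, 0.0) - penalty
        return max(options, key=score)


class MetaManagementLayer:
    """(i) Detects states of internal VM components; (ii) holds beliefs about them."""
    def __init__(self):
        self.beliefs = []

    def observe(self, vm_state):
        detected = dict(vm_state)  # layer (i): detection and use of internal states
        # Layer (ii): beliefs/theories about those states, including the
        # self-ascriptions (e.g. immediacy) discussed in the abstract.
        self.beliefs.append(("I am in state", detected))
        self.beliefs.append(("I know this directly, without further evidence", detected))
        return detected


class Agent:
    def __init__(self):
        self.reactive = ReactiveLayer()
        self.deliberative = DeliberativeLayer()
        self.meta = MetaManagementLayer()

    def step(self, percept, goals):
        reflex = self.reactive.step(percept)
        choice = self.deliberative.step(percept, goals)
        self.meta.observe({"reflex": reflex, "choice": choice})
        return choice


if __name__ == "__main__":
    agent = Agent()
    print(agent.step({"threat": 0.2}, {"explore": 1.0}))  # -> "explore"
    print(agent.meta.beliefs)
```

On this toy picture, the self-ascriptions of interest live entirely in the meta-management layer’s beliefs about the lower layers, not in the lower layers themselves.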


b) A qualia-supporting design:

  • A propensity to believe in immediacy (1) can be explained in part as the result of the meta-management layer of a deliberating/justifying but resource-bounded architecture needing a basis for terminating deliberation/justification in a way that doesn’t itself prompt further deliberation or justification (a toy sketch of this termination idea follows the list below).
  • A propensity to believe in privacy (2) can be explained in part as the result of a propensity to believe in immediacy (1), along with a policy of *normally* conceiving of the beliefs of others as making evidential and justificatory impact on one’s own beliefs. To permit the termination of deliberation and justification, some means must be found to discount, at some point, the relevance of others’ beliefs, and privacy provides prima facie rational grounds for doing this.
  • A propensity to believe in intrinsicness (3) can also be explained in part as the result of a propensity to believe in immediacy, since states having the relevant aspects non-intrinsically (i.e., by virtue of relational or systemic facts) would be difficult to reconcile with the belief that one’s knowledge of these aspects does not require any (further) evidence.
  • An account of a propensity to believe in ineffability (4) requires some nuance, since unlike 1-3, 4 is in a sense true, given the causally indexical nature of some virtual machine states and their properties, as explained in (Chrisley and Sloman 2016). However, properly appreciating the truth of 4 requires philosophical sophistication, and so its truth alone cannot explain the conceptually primitive propensity to believe it; some alternative explanations will be offered.
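To make the first two bullet points above a little more concrete, here is a toy sketch of a resource-bounded justification routine. The names and the depth bound are invented for illustration (this is not the authors’ design); the point is only that a termination policy of this shape would leave an agent prone to the immediacy and privacy self-ascriptions.

```python
# Toy illustration only (not from the paper): a resource-bounded agent has to
# stop deliberating/justifying somewhere; terminating by marking a state as
# "known directly" (and discounting others' reports about it) mirrors the
# proposed sources of the beliefs in immediacy (1) and privacy (2).

MAX_JUSTIFICATION_DEPTH = 3  # the resource bound


def justify(claim, evidence, depth=0):
    """Return a chain of justifications for `claim`, terminating at the bound."""
    if claim in evidence:
        return [f"'{claim}' is supported by available evidence"]
    if depth >= MAX_JUSTIFICATION_DEPTH:
        # Termination that does not itself invite further deliberation:
        return [f"'{claim}' is known directly, without further evidence",  # immediacy
                "others' reports carry no evidential weight here"]         # privacy
    # Otherwise, keep asking "why?" one level deeper.
    return [f"asking why '{claim}' holds (depth {depth})"] + justify(claim, evidence, depth + 1)


if __name__ == "__main__":
    for step in justify("this looks red to me", evidence=set()):
        print(step)
```

Nothing metaphysically special attaches to the terminating state in this sketch; the immediacy and privacy ascriptions fall out of the termination policy alone, which is the shape of explanation the bullets above gesture at.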


References:

Sloman, A. and Chrisley, R. (2003) “Virtual Machines and Consciousness”. Journal of Consciousness Studies 10 (4-5), 133-172.

Chrisley, R. and Sloman, A. (2016, in press) “Functionalism, Revisionism and Qualia”. APA Newsletter on Philosophy and Computers 16 (1).

Artificial social agents in a world of conscious beings

I forgot to mention in the update posted earlier today that fellow PAICSer, Steve Torrance, will also be a keynote speaker at the 2nd Joint UAE Symposium on Social Robotics. Here are his title and abstract.

Artificial social agents in a world of conscious beings.

Steve Torrance

Abstract

It is an important fact about each of us that we are conscious beings, and that the others we interact with in our social world are also conscious beings. Yet we appear to be on the edge of a revolution in new social relationships – interactions and intimacies with a variety of non-conscious artificial social agents (ASAs) – both virtual and physical. Granted, we often behave, in the company of such ASAs, as though they are conscious, and as though they are social beings. But in essence we still think of them, at least in our more reflective moments, as “tools” or “systems” – smart, and getting smarter, but lacking phenomenal awareness or real emotion.

In my talk I will discuss ways in which reflection on consciousness – both natural and (would-be) artificial – impacts on our intimate social relationships with robots. And I will propose some implications for human responsibilities in developing these technologies.

I will focus on two questions: (1) What would it take for an ASA to be conscious in a way that “matters”? (2) Can we talk of genuine social relationships or interactions with agents that have no consciousness?

On question (1), I will look at debates in the fields of machine consciousness and machine ethics, in order to examine the range of possible positions that may be taken. I will suggest that there is a close relation between thinking of a being as having a conscious phenomenology, and adopting a range of ethical attitudes towards that being. I will also discuss an important debate between those who take a “social-relational” approach to phenomenological and ethical attributions, and those who take an “objectivist” approach. I will offer ways to resolve that debate. This will help provide guidance, I hope, to those who are developing the technologies for smarter ASAs, which may have stronger claims to be taken as conscious. On (2), I will suggest that, even for ASAs that are acknowledged not to be conscious, there could be a range of ethical roles that they could come to occupy, in a way that would justify our talking of “artificial social agents” in a rich sense, one that would imply that they had both genuine ethico-social responsibilities and ethico-social entitlements.

The spread of ASAs – whether or not genuinely conscious, social or ethical – will impose heavy responsibilities upon technologists, and those they work with, to guide the social impacts of such agents in acceptable directions, as such agents increasingly inter-operate with us and with our lives. I will thus conclude by pointing to some issues of urgent social concern that are raised by the likely proliferation of ASAs in the coming years and decades.

Human Responsibility, Robot Mind: Conceptual Design Constraints for Social Robots

Next month I’m giving a keynote address at the 2nd Joint UAE Symposium on Social Robotics, entitled “Human Responsibility, Robot Mind: Conceptual Design Constraints for Social Robots”.

Abstract: Advances in social robot design will be achieved hand-in-hand with increased clarity in our concepts of responsibility, folk psychology, and (machine) consciousness. 1) Since robots will not, in the near future, be responsible agents, avoiding some moral hazards (e.g., that of abdication of responsibility) will require designs that assist in tracing complex lines of responsibility backwards from outcomes, through the robot, and back to the appropriate humans and/or social institutions. 2) An intuitive understanding by human users of the (possibly quite alien) perceptual and cognitive predicament of robots will be essential to improving cooperation with them, as well as assisting diagnosis, robot training, and the design process itself. Synthetic phenomenology is the attempt to combine robot designs with assistive technologies such as virtual reality to make the experience-like states of cognitive robots understandable to users. 3) Making robot minds more like our own would be facilitated by finding designs that make robots susceptible to the same (mis-)conceptions concerning perception, experience and consciousness that humans have. Making a conscious-like robot will thus involve making robots that find it natural to believe that their inner states are private and non-material. In all three cases, improving robot-human interaction will be as much about an increased understanding of human responsibility, folk psychology and consciousness as it will be about technological breakthroughs in robot hardware and architecture.

Machine consciousness: Moving beyond “Is it possible?”

The next E-Intentionality seminar will be held Monday, June 20th from 13:00 to 14:50 in Fulton 102. Ron Chrisley will speak on “Machine consciousness: Moving beyond ‘Is it possible?’” as a dry run of his talk at the “Mind, Selves & Technology” workshop later that week in Lisbon:

Philosophical contributions to the field of machine consciousness have been preoccupied with questions such as: Could a machine be conscious? Could a computer be conscious solely by virtue of running the right program?  How would we know if we achieved machine consciousness? etc.  I propose that this preoccupation constitutes a dereliction of philosophical duty. Philosophers do better at helping solve conceptual problems in machine consciousness (and do better at exploiting insights from machine consciousness to help solve conceptual problems in consciousness studies in general) once they replace those general questions, as fascinating as they are, with ones that a) reflect a broader understanding of what machine consciousness is or could be; and b) are better grounded in empirical machine consciousness research.