Architectural Requirements for Consciousness

I’ll be giving a talk at the EUCog2016 conference in Vienna this December, presenting joint work with Aaron Sloman.  Here is the extended abstract:

Architectural requirements for consciousness
Ron Chrisley and Aaron Sloman

This paper develops the virtual machine architecture approach to explaining certain features of consciousness first proposed in (Sloman and Chrisley 2003) and elaborated in (Chrisley and Sloman 2016), in which the particular qualitative aspects of experiences (qualia) are identified as being particular kinds of properties of components of virtual machine states of a cognitive architecture. Specifically, they are those properties of components of virtual machine states of agent A that make A prone to believe:

  1. That A is in a state S, the aspects of which are knowable by A directly, without further evidence (immediacy);
  2. That A’s knowledge of these aspects is of a kind such that only A could have such knowledge of those aspects (privacy);
  3. That these states have these aspects intrinsically, not by virtue of, e.g., their functional role (intrinsicness);
  4. That these aspects of S cannot be completely communicated to an agent that is not A (ineffability).

A crucial component of the explanation, which we call the Virtual Machine Functionalism (VMF) account of qualia, is that the propositions 1-4 need not be true in order for qualia to make A prone to believe those propositions. In fact, it is arguable that nothing could possibly render all of 1-4 true simultaneously. But this would not imply that there are no qualia, since qualia only require that agents that have them be prone to believe 1-4.

It is an open empirical question whether, in some or all humans, the properties underlying the dispositions to believe 1-4 have a unified structure that would render reference to them a useful move in providing a causal explanation of such beliefs. Thus, according to the VMF account of qualia, it is an open empirical question whether qualia exist in any given human. By the same token, however, it is an open engineering question whether, independently of the human case, it is possible or feasible to design an artificial system that a) is also prone to believe 1-4 and b) is so disposed because of a unified structure. This talk will: a) look at the requirements that must be in place for a system to believe 1-4, and b) sketch a design in which the propensities to believe 1-4 can be traced to a unified virtual machine structure, underwriting talk of such a system having qualia.

a) General requirements for believing 1-4:

These include those for being a system that can be said to have beliefs and propensities to believe. Further, having the propensities to believe 1-4 requires the possibility of having beliefs about oneself, one’s knowledge, possibility/impossibility, and other minds. At a minimum, such constraints require a cognitive architecture with reactive, deliberative and meta-management components (Sloman and Chrisley 2003), with at least two layers of meta-cognition: (i) detection and use of various states of internal VM components; and (ii) holding beliefs/theories about those components.
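
To make the layering concrete, here is a minimal sketch in Python of the kind of architecture just described: reactive, deliberative and meta-management components, with meta-cognitive layer (i) detecting states of internal VM components and layer (ii) forming beliefs about those components. All class and method names are hypothetical illustrations of the idea, not the design described in (Sloman and Chrisley 2003).

```python
from dataclasses import dataclass, field


@dataclass
class VMComponent:
    """A virtual-machine component whose internal state can be detected
    (meta-cognitive layer i) and theorised about (layer ii)."""
    name: str
    state: dict = field(default_factory=dict)


class ReactiveLayer:
    """Fast, non-deliberative stimulus-to-response mapping."""
    def respond(self, stimulus):
        return f"reflex({stimulus})"


class DeliberativeLayer:
    """Resource-bounded deliberation: considers at most `budget` options."""
    def deliberate(self, goal, budget=3):
        options = [f"plan-{i} for {goal}" for i in range(budget)]
        return options[0]


class MetaManagementLayer:
    """Layer (i): detect and use states of internal VM components.
    Layer (ii): hold beliefs/theories about those components."""
    def __init__(self, components):
        self.components = components
        self.beliefs = []

    def detect(self):
        # Layer (i): read off the current states of the VM components.
        return {c.name: dict(c.state) for c in self.components}

    def theorise(self):
        # Layer (ii): form beliefs *about* the detected states.
        for name, state in self.detect().items():
            self.beliefs.append(f"I am in state {state!r} of component {name}")
        return self.beliefs


if __name__ == "__main__":
    colour = VMComponent("colour-discriminator", {"activation": 0.9})
    print(ReactiveLayer().respond("bright light"))
    print(DeliberativeLayer().deliberate("reach the apple"))
    print(MetaManagementLayer([colour]).theorise())
```

The point of the sketch is only that beliefs about one’s own VM states require a layer that can both read those states and represent them as objects of belief.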

 

b) A qualia-supporting design:

  • A propensity to believe in immediacy (1) can be explained in part as the result of the meta-management layer of a deliberating/justifying but resource-bounded architecture needing a basis for terminating deliberation/justification in a way that doesn’t itself prompt further deliberation or justification.
  • A propensity to believe in privacy (2) can be explained in part as the result of a propensity to believe in immediacy (1), along with a policy of *normally* conceiving of the beliefs of others as making evidential and justificatory impact on one’s own beliefs. To permit the termination of deliberation and justification, some means must be found to discount, at some point, the relevance of others’ beliefs, and privacy provides prima facie rational grounds for doing this.
  • A propensity to believe in intrinsicness (3) can also be explained in part as the result of a propensity to believe in immediacy, since states having the relevant aspects non-intrinsically (i.e., by virtue of relational or systemic facts) would be difficult to reconcile with the belief that one’s knowledge of these aspects does not require any (further) evidence.
  • An account of a propensity to believe in ineffability (4) requires some nuance, since unlike 1-3, 4 is in a sense true, given the causally indexical nature of some virtual machine states and their properties, as explained in (Chrisley and Sloman 2016). However, properly appreciating the truth of 4 requires philosophical sophistication, and so its truth alone cannot explain the conceptually primitive propensity to believe it; some alternative explanations will be offered. (A schematic sketch of how a single virtual machine structure might generate the propensities to believe 1-3 is given below.)
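
As a rough illustration of b), the following toy sketch (hypothetical names; not the design to be presented in the talk) shows how a single virtual machine structure, a resource-bounded deliberation-terminating mechanism in the meta-management layer, could generate the propensities to believe in immediacy, privacy and intrinsicness as byproducts of closing off deliberation:

```python
class DeliberationTerminator:
    """Hypothetical unified VM structure: a resource-bounded mechanism that
    closes off deliberation/justification in a way that does not itself
    invite further deliberation."""

    def __init__(self, budget=3):
        self.budget = budget

    def settle(self, question):
        """Spend one unit of deliberation; if the budget runs out, terminate
        with a package of self-ascriptions corresponding to beliefs 1-3."""
        self.budget -= 1
        if self.budget > 0:
            return None  # keep deliberating / seeking justification
        return {
            # Terminating move: treat being in the state as settling the question.
            "immediacy": f"My being in this state settles {question!r}; "
                         "no further evidence is needed.",
            # Byproduct: others' testimony is discounted so the question stays closed.
            "privacy": "Only I could know this aspect of my state.",
            # Byproduct: aspects known without further evidence are hard to square
            # with their holding only in virtue of relational/functional facts.
            "intrinsicness": "This state has these aspects intrinsically.",
        }


if __name__ == "__main__":
    terminator = DeliberationTerminator(budget=1)
    print(terminator.settle("what is it like to see this red patch?"))
```

On this toy picture, the belief in ineffability (4) would need the separate treatment noted in the final bullet above.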

 

References:

Sloman, A. and Chrisley, R. (2003) “Virtual Machines and Consciousness”. Journal of Consciousness Studies 10 (4-5), 133-172.

Chrisley, R. and Sloman, A. (2016, in press) “Functionalism, Revisionism and Qualia”. APA Newsletter on Philosophy and Computers 16 (1).


26 thoughts on “Architectural Requirements for Consciousness”

  1. Ron & Aaron, your analysis of qualia is very attractive. I like the effort to formulate proposals in terms that [software] engineering scientists might understand. A good number of psychologists down the years have thought that this is what theory in fundamental psychological science should be doing.

    My comment is in the form of questions. I think they could be used to construct some arguments but that is beyond my capacity, even if it were appropriate to this medium.

    1. Do your requirements distinguish qualia from (say) allegiances, or even opinions?

    2. Can the concept of ownership be explicated well enough to address these issues? If so, how far does it cover the problem? People are prone to believe that subjective experiences are their own. (Empirically, this sense of ownership can be illusory, e.g. the belief in non-ownership of voices in ‘auditory hallucinations’.)

    3a. You describe a number of partial or directional interdependencies among the four requirements. Is it possible that they reduce to a single requirement (excluding ineffability perhaps: see below)? My bet would be on privacy, understood as what the agent takes the experience to be like (“as if”), regardless of beliefs about how things are (which is public).

    3b. You specify ‘immediacy’ two ways: (i) direct knowledge; (ii) without further evidence. I’ve never understood what epistemic “direct”ness could be. Is not the requirement a state / belief / view without regard to any evidence at all? The 19th century Introspectionists struggled mightily to exclude any beliefs about the world from their accounts of the contents of consciousness (“stimulus error”).

    4. Why do you give such prominence to the evidential role of others’ beliefs merely? What about the actual (public) evidence, regardless of what other particular people believe (if they even know about some evidence)?

    5. Is it true that aspects of qualia cannot be communicated completely? Yes, artists are thought often to be dissatisfied with their expression of their experience (if that is what they consider they are doing). Yet have no artists ever been satisfied with the completeness of their communication? What an agent can never transfer (at all) to another agent is ownership of the qualia. Both sender and receiver might be satisfied with the re-creation of the experience – prone to believe it is identical in content.

    – David B (at US)

    P.S. I’ve now ticked the notification box.

  2. David,

    Thank you for your thought-provoking questions.

    “1. Do your requirements distinguish qualia from (say) allegiances, or even opinions?”

    Just to be clear: on our model, qualia are not beliefs, or even tendencies to believe. If they exist, they are aspects of a subject’s virtual machine architecture that explain the tendency of the subject to believe certain things (1-4) about its experience. That said, I think explaining tendencies to hold allegiances or opinions similar to 1-4 might also qualify something as being the referent of “qualia”.

    “2. Can the concept of ownership be explicated well enough to address these issues? If so, how far does it cover the problem? People are prone to believe that subjective experiences are their own. (Empirically, this sense of ownership can be illusory, e.g. the belief in non-ownership of voices in ‘auditory hallucinations’.)”

    I’m not sure what “these issues” or “the problem” refer to, but yes, ownership is important. I wonder if it might fall out of 1-4? Stating the facts here is tricky: subjects are not prone to believe that all subjective experiences are their own, just some of them. Which ones? It’s tempting to say “theirs”, but isn’t that circular? I’ll need to think more on this.

    “3a. You describe a number of partial or directional interdependencies among the four requirements. Is it possible that they reduce to a single requirement (excluding ineffability perhaps: see below)? My bet would be on privacy, understood as what the agent takes the experience to be like (“as if”), regardless of beliefs about how things are (which is public).”

    Yes, you could be right here.

    “3b. You specify ‘immediacy’ two ways: (i) direct knowledge; (ii) without further evidence. I’ve never understood what epistemic “direct”ness could be. Is not the requirement a state / belief / view without regard to any evidence at all? The 19th century Introspectionists struggled mightily to exclude any beliefs about the world from their accounts of the contents of consciousness (“stimulus error”).”

    I see your point, but I suppose I was trying to avoid ruling out the possibility of an agent believing that some of its states are epistemically self-justifying. To believe in these is to believe that one’s knowledge of them is based on evidence, but the evidence is nothing more than being in the state itself.

    “4. Why do you give such prominence to the evidential role of others’ beliefs merely? What about the actual (public) evidence, regardless of what other particular people believe (if they even know about some evidence)?”

    That prominence is given in the account of the belief in privacy, which is a contrast between one’s own epistemic relation to one’s states and others’ (possible) epistemic relations to those same states. I’m not sure I see how to explain privacy without that prominence.

    “5. Is it true that aspects of qualia cannot be communicated completely? Yes, artists are thought often to be dissatisfied with their expression of their experience (if that is what they consider they are doing). Yet have no artists ever been satisfied with the completeness of their communication? What an agent can never transfer (at all) to another agent is ownership of the qualia. Both sender and receiver might be satisfied with the re-creation of the experience – prone to believe it is identical in content.”

    Of course, they might be satisfied, in some pragmatic or conventional sense. They may even believe as you say. But there will still be a tendency, if they are asked things like “might it not be that the qualia of your two functionally similar experiences are inverted, or at least different?”, to say yes, and to believe that there is a possible difference in experience that there is no possibility of establishing or ruling out.

    There is another kind of ineffability, having to do with the richness of experience outstripping our concepts of it, which we did not attempt to address.

    Ron

    • Thank you, Ron, for editing my comment nicely into your response in PAICS.

      We may be agreed about the ‘centrality’ of privacy to distinguishing qualia from other states. However, I’m beginning to be worried about the role of “evidence”. I shall try to be a bit more specific, briefly, following my numbering as you do.

      1. Yes, I mean that (just) the private aspect of an allegiance has/is qualia. If so, that seems a lot broader than an account of states like experiencing [traditionally] the redness of a patch, regardless of how red it would be identified as being in normal contexts of lighting etc.

      2, 3a, 3b, 4. Are you doing more than offering requirements for distinguishing My Privacy from Another’s Privacy? Are you explicating ‘belief’ [in your “prone to believe”] as, in part, having “evidence”? Isn’t there ‘non-rational’ belief? Aren’t there non-empirical beliefs, e.g. that killing is wrong or even that a sunset is beautiful? I myself would be as happy with “prone to treat qualia [?the experienced content of mental states]” as (e.g.) my own private impressions! / ideas! / what it’s like to be me! (! = a quote from the classics).

      5. If instances of a category of qualia end up being partly incommunicable, does that constitute those instances being qualia? Would they cease to be my private experiences if they were completely communicable? Part of the point that I have in mind is that deciding on success or otherwise of communication of qualia can only be two people each commenting on [a subset] of their own qualia (and memories of them?), not consulting any evidence. [That sharing has to be a lot more difficult to execute than treating an aspect of one’s state as a quale.]

      – David

      • David,

        Thanks again for your comments. I think we are quickly getting into details that outstrip what can be said in an abstract. As it is, you are in the unenviable position of trying to infer what Aaron and I are proposing from some rather elliptical, simplistic remarks. If you like, I can send you the uncorrected proofs of our paper, which would give you a (hopefully) clearer idea of how our account works. But I will do my best to respond to your comments without requiring you to read the paper.

        “We may be agreed about the ‘centrality’ of privacy to distinguishing qualia from other states. However, I’m beginning to be worried about the role of “evidence”. I shall try to be a bit more specific, briefly, following my numbering as you do.

        1. Yes, I mean that (just) the private aspect of an allegiance has/is qualia.”

        That is an interesting suggestion, but it is not what Aaron and I are proposing. Qualia are whatever features of virtual machine states and architectures make it tempting/likely/advantageous(?) for an agent to ascribe to themselves states that have properties that are private, immediate, intrinsic and ineffable. It is unlikely that an allegiance/belief, or an aspect of an allegiance/belief, plays this role. I suppose if it did, we would just want to look at the properties of the system that make *that* allegiance/belief tempting/likely/advantageous, and explain qualia in terms of those properties. Further, we do not want to explain the ascription of, e.g., privacy to one’s states in terms of privacy; the whole point is to understand that self-ascription without assuming that what is being self-ascribed is true, exists, etc. We are thus engaging in Dennettian heterophenomenology, as I understand it.

        ” If so, that seems a lot broader than an account of states like experiencing [traditionally] the redness of a patch, regardless of how red it would be identified as being in normal contexts of lighting etc.”

        Yes, if we were doing what you mention, we would be attempting to account for “cognitive phenomenology”, I suppose. But we aren’t doing that, I don’t think.

        “2, 3a, 3b, 4. Are you doing more than offering requirements for distinguishing My Privacy from Another’s Privacy?”

        I suppose we might be doing that, but we are doing more, since what we are really doing is looking at requirements for belief in My Privacy, which might be present long before a belief in Another’s Privacy.

        “Are you explicating ‘belief’ [in your “prone to believe”] as, in part, having “evidence”? Isn’t there ‘non-rational’ belief? Aren’t there non-empirical beliefs, e.g. that killing is wrong or even that a sunset is beautiful? I myself would be as happy with “prone to treat qualia [?the experienced content of mental states]” as (e.g.) my own private impressions! / ideas! / what it’s like to be me! (! = a quote from the classics).”

        We are not committed to any precise kind of causal route that must mediate between the features of a virtual machine and the tendency of a subject realising that virtual machine to ascribe qualia to itself. There could be some brute causal link, but that would be very mysterious. It would be more explanatory to show how the tendency to self-ascribe arises as a byproduct of mechanisms, policies, etc. that are independently valuable/adaptive/natural.

        “5. If instances of a category of qualia end up being partly incommunicable, does that constitute those instances being qualia?”

        If they are already qualia, then how can the way they “end up” constitute them as being qualia?

        On our account, I suppose partial incommunicability could be a cause of a belief in absolute incommunicability, although that is not the possibility we explore in our paper.

        ” Would they cease to be my private experiences if they were completely communicable?”

        They would not cease to be qualia, even if they were completely communicable, as long as they were still the same kind of thing that has caused/usually tends to cause self-ascription of (incommunicable) qualia.

        ” Part of the point that I have in mind is that deciding on success or otherwise of communication of qualia can only be two people each commenting on [a subset] of their own qualia (and memories of them?), not consulting any evidence. [That sharing has to be a lot more difficult to execute than treating an aspect of one’s state as a quale.]”

        I’m not sure I agree, but I’m not sure I understand either. Perhaps you could make your point another way?

        Ron

  3. Thank you for your patience, Ron.

    I’m sure we’re all engaged in the enterprise of attempting to explain scientifically how things seem to an agent. I don’t know enough of Dennett’s writings to be sure of his position but I’m very suspicious of any view which formulates what is experienced as proneness to assert propositions, such as [instances of] “Only I know what I’m experiencing”, or even just “I am experiencing X”. [If it’s not “know that” but “know” = acquainted with, then I’m not sure what “Only I’m acquainted with (?experiencing) what I’m experiencing” adds to the analysis (of privacy).]

    Would it do violence to your approach to replace “A’s propensity / proneness to believe that A is in a state S …” by “A’s capacity to be in a state S”?

    My contributions to ‘heterophenomenology’ start from the position that the contents of consciousness can only be expressed, not asserted. Such expressible ‘what it’s like’ can be explained along the lines of use of memory to generate a [vivid, convincing?, personalised] analogy of my own, to which evidence is not relevant and of which there need be no function nor any incommunicable residue.

    A plausible subjective correlate for intending to act loyally or disloyally could be a feeling of loyalty. What matters is the construct used to have the experience, not the observable achievement that may (or may not) go along with that quale. There may be another explanation than an act of allegiance or even a loyal act, such as a rousing speech by a claimant on allegiance.

    Best regards. – David

  4. I’m sure we’re all engaged in the enterprise of attempting to explain scientifically how things seem to an agent. I don’t know enough of Dennett’s writings to be sure of his position but I’m very suspicious of any view which formulates what is experienced as proneness to assert propositions, such as [instances of] “Only I know what I’m experiencing”, or even just “I am experiencing X”. [If it’s not “know that” but “know” = acquainted with, then I’m not sure what “Only I’m acquainted with (?experiencing) what I’m experiencing” adds to the analysis (of privacy).]

    There are two common ways of understanding “what is experienced”. One way is opaque/on the level of sense: what is experienced is the way that I am experiencing, say when I am seeing an apple. Another understanding is transparent/referential: it is the thing that I am having the experience of (e.g., the apple itself), which may have properties that are not reflected in my experience.

    So there are two questions, corresponding to the two understandings: Are Aaron and I saying that what is experienced (opaque sense) is the proneness to assert a proposition? No. Certainly not in the case of non-introspective experience, such as seeing an apple. The way one experiences the apple is as red, as round, as edible, as in reach, etc. What about the case of introspective experience? Since most people do not hold our view of what qualia are, it is unlikely that their experience in such moments presents qualia as that which makes one prone to believe that one is in a private, immediate, intrinsic, ineffable state. Even Aaron and I, as authors of the theory, may have difficulty experiencing qualia that way.

    So what about the transparent understanding? Is the thing that I am related to, when I have a non-introspective experience, a proneness to believe/assert? No, it is an apple. What about introspective experience? Our theory says that *if* there are qualia, then they are the *features of my architecture* that make me more likely to believe/assert that I am in a private, immediate, intrinsic, ineffable state. So the proneness is not “what is experienced”; it is some feature or features of my virtual machine. This fits with the claim that experiences are features of a virtual machine: that’s why being in intentional states that have them as an object counts as introspection.

    So perhaps you still have suspicions, but they should be different from the ones you mentioned above.

    Would it do violence to your approach to replace “A’s propensity / proneness to believe that A is in a state S …” by “A’s capacity to be in a state S”?

    Absolutely it would. The whole point of our approach is to explain why A believes that they are in S without agreeing with A that they are in S.

    My contributions to ‘heterophenomenology’ start from the position that the contents of consciousness can only be expressed, not asserted. Such expressible ‘what it’s like’ can be explained along the lines of use of memory to generate a [vivid, convincing?, personalised] analogy of my own, to which evidence is not relevant and of which there need be no function nor any incommunicable residue.

    If I understand that right, then there is little there for Aaron and me to disagree with. But we are adding to accounts such as yours an explanation of (the belief in) qualia.

    A plausible subjective correlate for intending to act loyally or disloyally could be a feeling of loyalty. What matters is the construct used to have the experience, not the observable achievement that may (or may not) go along with that quale. There may be another explanation than an act of allegiance or even a loyal act, such as a rousing speech by a claimant on allegiance.

    I am not sure what to say about that.

    The dialectical situation Aaron and I are in is this:

    • Q: How to explain the mind?
    • A: Non-dualistic cognitive science
    • Q: But how to explain consciousness?
    • A: The same: non-dualistic cognitive science
    • Q: That solves the easy problems, but qualia raise the Hard Problem. How do you solve That?
    • A: We use the same science to explain why people tend to think that there are private, intrinsic, ineffable, immediate features of experience (that qualia exist).
    • Q: Ah, that’s Dennett’s schtick: he argues that qualia can’t exist. But he’s obviously wrong.
    • A: Actually, we disagree with Dennett. Qualia might exist, even if there are no private, intrinsic, ineffable, immediate features of experience, if there is something that we were talking about when we falsely believed that that thing was private, intrinsic, ineffable and immediate.
    • Q: Oh. I hadn’t thought of that.
  5. We/I have two problems in your response, Ron.

    The first part seems not to recognise that I was intending to paraphrase your use of “believe” and “know” in your explanation, not to imply that you were claiming that the agent uses propositions in qualia. I wrote:-
    “I’m very suspicious of any view which formulates what is experienced as proneness to assert propositions, such as [instances of] “Only I know what I’m experiencing”, or even just “I am experiencing X”. [If it’s not “know that” but “know” = acquainted with, then I’m not sure what “Only I’m acquainted with (?experiencing) what I’m experiencing” adds to the analysis (of privacy).] ”
    That was meant to build on the opening paragraph of your abstract, “[qualia are] states of agent A that make A prone to believe … [t]hat A is in a state S,….” [and you go on to state your first requirement].
    I don’t see that A necessarily believes or knows anything about that state: the agent A simply experiences the qualia (maybe?) when in that state, if the requirements of the theory you propose are all met, regardless of whether A believes the qualia to be private etc.

    [I have some difficulty fitting that first half of your reply to part of your final Answer, however. “Qualia might exist, even if there are no private, intrinsic, ineffable, immediate features of experience,” is fine but you go on to say, “[Qualia might exist] … if there is something that we were talking about when we falsely believed that that thing was private, intrinsic, ineffable and immediate.” Perhaps those “we”s are the scientists, not the experiencers.]

    The dialectic in your second half may or may not indicate other sorts of trouble.

    (a) I see no reason to doubt that we (human beings), and maybe some future active virtual machines, have subjective experiences, regardless of philosophers’ accounts of them or scientists’ explanations of them. Are you making a point in the philosophy of scientific explanations in the case of phenomenology, rather more than outlining an architecture specific to qualia?

    (b) What does “non-dualist” mean? – Denying the existence of a Cartesian substance of introspectables in defence of the position that there is only material substance? I can’t see how qualia, or any other states of the human mind or an active virtual machine, can be nothing but physical, because social traditions are inherent in such mental states – acquired by education and programming respectively. (I’ve yet to see a physicalist account of social causation that goes beyond a sketchily structured re-assertion of a physicalist position.) Redness is a social construct (expressed verbally).

    – David

  6. David, you wrote:

    The first part seems not to recognise that I was intending to paraphrase your use of “believe” and “know” in your explanation, not to imply that you were claiming that the agent uses propositions in qualia. I wrote:-

    “I’m very suspicious of any view which formulates what is experienced as proneness to assert propositions, such as [instances of] “Only I know what I’m experiencing”, or even just “I am experiencing X”. [If it’s not “know that” but “know” = acquainted with, then I’m not sure what “Only I’m acquainted with (?experiencing) what I’m experiencing” adds to the analysis (of privacy).] ”

    That was meant to build on the opening paragraph of your abstract, “[qualia are] states of agent A that make A prone to believe … [t]hat A is in a state S,….” [and you go on to state your first requirement].
    I don’t see that A necessarily believes or knows anything about that state: the agent A simply experiences the qualia (maybe?) when in that state, if the requirements of the theory you propose are all met, regardless of whether A believes the qualia to be private etc.

    We agree: Nothing about our account requires A to believe or know anything about the state it is in in order for A to be in a state with qualia. It only has to be in a state (or be running a virtual machine with certain features) that explains why the kinds of beliefs I enumerated are compelling.

    [I have some difficulty fitting that first half of your reply to part of your final Answer, however. “Qualia might exist, even if there are no private, intrinsic, ineffable, immediate features of experience,” is fine but you go on to say, “[Qualia might exist] … if there is something that we were talking about when we falsely believed that that thing was private, intrinsic, ineffable and immediate.” Perhaps those “we”s are the scientists, not the experiencers.]

    Let me rephrase it to make it clearer: “Qualia might exist if there is something X that explains why people who speak/think of qualia are prone to say/believe that qualia (and therefore, extensionally, X) are private, intrinsic, ineffable and immediate.”

    The dialectic in your second half may or may not indicate other sorts of trouble.

    (a) I see no reason to doubt that we (human beings), and maybe some future active virtual machines, have subjective experiences, regardless of philosophers’ accounts of them or scientists’ explanations of them. Are you making a point in the philosophy of scientific explanations in the case of phenomenology, rather more than outlining an architecture specific to qualia?

    No, nothing in our account casts any doubt on the fact that we have subjective experiences. We are only suggesting (like Dennett) that some of our beliefs about those experiences may be false, and (unlike Dennett) that qualia might exist despite those beliefs being false.

    (b) What does “non-dualist” mean? – Denying the existence of a Cartesian substance of introspectables in defence of the position that there is only material substance? I can’t see how qualia, or any other states of the human mind or an active virtual machine, can be nothing but physical, because social traditions are inherent in such mental states – acquired by education and programming respectively. (I’ve yet to see a physicalist account of social causation that goes beyond a sketchily structured re-assertion of a physicalist position.) Redness is a social construct (expressed verbally).

    Then it seems that you might not be the principal target of our dialectical move. Our main argument is against those who think there is a Hard Problem that prevents a (virtual machine) functionalist explanation of qualia. On the other hand, other physicalists have claimed the same before, but we were not always persuaded by their arguments, despite agreeing with their conclusion.

    • Ron, your title in this PAICS thread for your E-Int seminar this Thursday – “The existence of qualia does not entail dualism” – brings the ‘dialectic’ (above) to the fore.

      It appears from your phrase “other physicalists” in the last paragraph above that you regard your “nondualist” argument from features of a running virtual machine triggered by qualia as being within physicalist monism.

      The idea of a somehow purely physical functioning or architecture which causes beliefs (true or false) then becomes the key problem, rather than the contents of those beliefs that the believers take to be about qualia. I have every confidence in your assurance that your and Aaron’s argument is better than earlier ones but I could not agree that an uneducated or unprogrammed dynamic structure of silicon or carbon can have (or cause) beliefs.

      Some acculturated and embodied systems which have beliefs might be clever enough also to have qualia (“as if”s). To determine that, we need to characterise qualia themselves, if/when they occur – as certain sorts of state-to-state processes presumably, not substances or properties, nor (I’d say) functions. Processes that monitor processes and other architecture you, Aaron and others have long discussed would be important. This however is mental causation built on and distinct from social causation as well as physical causation. Claiming the word cognitive for functioning neurons or computers does not negate mental performance of systems having such ‘brains’ nor the social as well as eco-genomic basis of cognition / mentation.

      Have a good E-Int and conference talk. I’m sorry but Thursdays at lunchtime are always very difficult for me. – David

      • David,

        You say:

        “The idea of a somehow purely physical functioning or architecture which causes beliefs (true or false) then becomes the key problem, rather than the contents of those beliefs that the believers take to be about qualia.”

        Yes, I had been assuming that I was trying to persuade someone who thinks that qualia, not beliefs, are the Hard Problem for physicalism. I can’t promise I will be able to find the time to do your view justice, but anyway: what are your arguments for the inadequacy of physicalism for an account of belief?

        “I could not agree that an uneducated or unprogrammed dynamic structure of silicon or carbon can have (or cause) beliefs.”

        But an educated or programmed dynamic structure of silicon or carbon can?

        “To determine that, we need to characterise qualia themselves, if/when they occur – as certain sorts of state-to-state processes presumably, not substances or properties, nor (I’d say) functions.”

        Aaron corrects me every time I try to limit the possible referents of “qualia” to states or properties, since he is completely open-minded about what they might be, including processes. But if you have a good argument for why qualia *must* be processes rather than states or properties, I’d be very interested.

        “Processes that monitor processes and other architecture you, Aaron and others have long discussed would be important. This however is mental causation built on and distinct from social causation as well as physical causation.”

        Of course, I agree that mental causation is distinct from physical causation in the sense that not all physical causation is mental causation. But you seem to mean distinct in a stronger sense, such that mental causation cannot be explained by, nor be seen as a special case of, physical causation. What are your arguments for that view?

        Best,

        Ron

      • Thank you, Ron, for the opportunity to explain my position a bit more through this online written medium, in addition to contributions to previous f2f meetings and my formal documents.

        You ask:-
        “… what are your arguments for the inadequacy of physicalism for an account of belief?” [1]
        (Quoting my words ‘…’), “[Can] ‘an educated or programmed dynamic structure of silicon or carbon’ …’have (or cause) beliefs’?” [2]
        “What are your arguments for th[e] view [that] mental causation cannot be explained by, nor be seen as a special case of, physical causation?” [3]

        The basis of my answer to the question shared by 1-3 is that entertaining a belief, having a quale, and other mental processes, depend on both material processes (physical causation; ’embodiment’, particularly in natural or engineered brains and interfaces) and societal processes (social causation; ‘acculturation’, especially in traditions of communication such as facial movements and words). Adequately interactive building and operating of the distinct physical brain, body and ecology and the distinct societal languages, relationships and institutions creates and sustains the distinct mental asserting of propositions, (access to and) acquaintance with contents of consciousness, intending, perceiving, emoting etcetera by an individual human being and any animal or machine in which sufficient capacities emerge.

        This I take to be a systems triism within a causal monism, covering questions of science and history but not issues of evaluation. It implies a material-mental dualism (and another societal-mental dualism) but not Cartesian dualism nor the Hard Problem. Being conscious of X is a particular type of mentation.

        You also ask for an “… argument for why qualia *must* be processes rather than states or properties …”. I’m not entirely clear whether a quale is a mental process or a state generated by a mental process. With that caveat, I take a process to be a succession of one state by another, and social, physical and mental processes to be causal processes (each within its closed system). That is, each state of one of these causal systems is subject to influence from some other state(s) of that system and liable to influence some other state(s) of that system.

        I take Aaron to be insisting that a programmed computer can have a virtual machine in its memory (virtually!), even with the power off, but only an actively processing virtual machine may be influenced by and have qualia, believe propositions and form responsible intentions.

        – David

      • Perhaps qualia may be understood as sensory-emotional experiences arising from mental processes (sensory organs and associated processing areas in the brain together with the amygdala). As such qualia may fail to present with the more rational clarity associated with experiences resulting from processing involving the cerebral cortex. Beliefs/belief states lack the sensorily evidenced or rationally evidenced transparency and relative reliability of knowledge states, hence (perhaps) may lend themselves to modelling using fuzzy/defeasible logic/ontologies. (?)


  8. The first thing that springs to mind is ‘encapsulation’ and variation in the design of individual machines – re privacy and to some extent ineffability. Regarding belief states – perhaps a set of (adaptable) core beliefs accompanied by an internal abstract argumentation system with encapsulated biases/preferences might be useful.

    • Sarah,

      Thanks for your comments. I’d have to know more about the specific kind of encapsulation being proposed before evaluating its role in these matters. And I’m not sure that an argumentation system is required – mere architecture-based dispositions to believe (perhaps what you are calling biases/preferences) might be enough for qualia on their own.

      In your other comment you contrast qualia with the rational, but at least some people believe in cognitive phenomenology, which might scupper a clean division. Also, some beliefs may be just as “sensorily evidenced” or transparent, so that might not be a sharp distinction either.

      Ron

  9. David,

    We’re far off the topic of my abstract, but while we’re here…

    You say:

    “The basis of my answer to the question shared by 1-3 is that entertaining a belief, having a quale, and other mental processes, depend on both material processes (physical causation; ’embodiment’, particularly in natural or engineered brains and interfaces) and societal processes (social causation; ‘acculturation’, especially in traditions of communication such as facial movements and words). ”

    So now I ask, why do you believe that societal processes are not, at root, physically explicable?

    In any case, you seem to agree that there is no Hard Problem for understanding how beliefs can exist in a physical world. Some others (not you) who agree with this think that there *is* a Hard Problem for qualia. Aaron and I are trying to solve that Hard Problem by appealing to belief, for which those people agree there is no Hard Problem.

    As for the process issue, I have no strong feelings here, but it seems to me that it could be the case that only a process can have qualia, without qualia being processes. For example, they might be properties of processes, or perhaps even states that can only arise out of processes, or properties of such states, etc.

    Ron

    • Ron, you say that your and Aaron’s abstract is set in a dialectic between physicalists and Cartesian dualists. [I do understand the appropriateness of your invocation of beliefs to tackle the supposition of a Hard Problem.] There are two implications.

      1. A questioning of (eliminative) physicalism is right on the topic of your abstract (and the title of your E-Int seminar about it).

      2. There is such a thing as a [set of] process[es] of a dialectic, or indeed of a plain dialogue between parties or individuals within them. I take these processes to be non-physically (and non-mentally) causal. For example, agreement on meanings influences the exchange of ideas (and vice versa) – for example, agreeing on what is agreed and what is disagreed (and what is unclear) affects subsequent discussion.

      “Explicable” needs to be tied down. There are many types of explanation.

      What sort of physical explanation would you offer for the sense of a proposition, an effect of a vote in parliament, a process of being convicted of a crime, the [state? or process? of] academic dominance of physicalism …? [That of course *is* off your abstract!]

      Have a good seminar. – David

  10. David,

    Think of my talk as asserting and arguing for this conditional: if one can give a physicalist explanation of belief, then one can give a physicalist explanation of qualia. My talk will have little of interest in it for those who deny the antecedent.

    You answered my questions with more questions. So I take it that your reason for rejecting physicalism is that you think the questions you asked me have not been and cannot be answered?

    Let’s move from explanation back to metaphysics. I believe that if you fix the physical facts (to be safe, let’s not just fix the current ones, but the physical history too), you thereby fix the social and mental facts. Do you deny this? (I think you already answered this at IDS one week, but I forget now what you said.)

    Ron

    • Ron

      I’m taking your questions seriously. In whatever sense of explicability you intend, please point me to attempts to explain social causality physically.

      Unfortunately, there are also at least two senses of ‘facts’, e.g. (1) the reference of true propositions and (2) occurrent states of affairs.

      (Re 1) Do you believe that true propositions about matter fix true propositions about society? If so, can you sketch an example or point me to an extended discussion of that claim?

      (Re 2) As we’ve just touched on, states of affairs run into trouble with a causal process ontology: any societal state of affairs subsists only as a consequence and as a source of causal social processes. Has anyone tried to fix (some) ongoing social causality by physical causality?

      (3) If you hold to another account of facts, please point me to a social and natural scientifically informed exposition of how physical facts fix social facts.

      Please note that, like most/all sociologists and anthropologists but unlike a good number of psychologists, I do not think that social processes can be reduced to (individuals’) mental processes.

      – David.

      • Ok, you are again replying to my questions with questions. I will try to break the impasse, even though I have tried to stress that my talk is not meant to be an argument for physicalism tout court, only that *IF* you believe that the Easy Problems of consciousness can be solved physicalistically, then there is a way to solve the Hard Problem physicalistically.

        Please note that I am at this point not advocating reductionism, only monism. Once monism is in place, we can then consider what explanatory modes might be relevant to explaining the mental in terms of the physical.

        I’ll need an explanation from you why the distinction between the reference of true propositions and occurrent states of affairs might matter here. Until then, you’ll have to excuse me if I don’t respect that distinction.

        “(Re 1) Do you believe that true propositions about matter fix true propositions about society?”

        I already said:

        “I believe that if you fix the physical facts (to be safe, let’s not just fix the current ones, but the physical history too), you thereby fix the social and mental facts. Do you deny this?”

        Feel free to translate this to either:

        “I believe that if you fix the truth value of physical propositions (to be safe, let’s not just fix the ones about the present, but the ones about the past too), you thereby fix the truth value of social and mental propositions. Do you deny this?”

        or

        “I believe that if you fix the occurrent physical state of affairs (to be safe, let’s also fix the previously occurrent physical states of affairs), you thereby fix the occurrent social and mental states of affairs. Do you deny this?”

        Or you could explain to me why one might want to assent to one but deny the other.

        But in any case, it would be nice if you could answer some of my questions before asking me more. If I’m not precise enough, you could go ahead and make the distinctions that would resolve the ambiguity and then answer the (revised) questions.

        “If so, can you sketch an example or point me to an extended discussion of that claim?”

        The reasons for monism remain the same, it seems to me, no matter which alternative to monism is being considered. So I would argue for supervenience of the social on the physical for the same reasons I would argue for supervenience of the mental on the physical. Have a look at https://plato.stanford.edu/entries/supervenience/ . This passage can be found there:

        “One important example is the supervenience of the mental on the physical. Just about everyone, even a Cartesian dualist, believes some version of this supervenience claim.”

        Do you? If so, which version? Or are you an outlier?

        “Has anyone tried to fix (some) ongoing social causality by physical causality?”

        The short answer is: Yes, in thought experiments. Less flippantly: the notion of “fix” I am using is not something that anyone could do in practice, perhaps not even God? It is a metaphor used to help express an ontological relation.

        By the way, do you believe that physical events can cause social (and mental) events? Or are you a non-interactionist dualist (pluralist)?

        “If you hold to another account of facts, please point me to a social and natural scientifically informed exposition of how physical facts fix social facts.”

        As I said before, the reasons for believing in supervenience are general, and have nothing to do with the mental or the social in particular. Specifically, the general consensus is that if the mental or social do not at least supervene on the physical, then there can be no cases of mental-physical causation or social-physical causation, and thus no explanation of why these realms evolve in a coherent manner.

        It’s OK if we disagree on these matters, David.

        Ron

      • Thank you for your patience, Ron. I’m trying to reply to your questions without putting words into your mouth.

        I thought you had supervenience in mind and drafted part of a reply using that term but deleted that sentence or two as you had not used it.

        I have of course read around the concept quite a bit. As I said in a small discussion in which we were a part, I have great difficulty in finding more in the various uses of that term than a confession of faith in physicalism (to which I recall you saying something like “You may be right in that”). Therefore I have grasped no content sufficient to deny (or assert) in my criticisms of physicalist monism, physicalist epistemology or empirical method and theory (science) presupposing physicalism.

        Similarly, as I have explained in this exchange (and in more detail in seminars etc), my monism is causal[ist], with each human person (at least) involving / involved in (at least) three distinct types of causality – societal, material and mental. In this sense of causation, each type of causal process can only affect and be affected by a causal process of the same type, i.e. in that sense is in a closed system. There I agree with physicalist monists about purely physical causation and with some radical social constructionists about purely societal causation (if they believe in social causality).

        Something I have long asserted which you might regard as relevant is that a (natural or engineered) entity sufficiently acculturated into a society is entirely composed of social causation, sufficiently embodied within a material universe is entirely composed of physical causation, and sufficiently developed from social and material information is entirely composed of mental causation. That position may well both deny and assert both versions of your question about the facts (or their explicability). This need not be a contradiction, working within that systems-pluralist version of a monism of causes, rather than of substances, properties, events, facts or non-causal processes.

        Regarding your further “by the way”, I am not of course a non-interactionist among society, matter and mind. A mind exchanges highly specific contents of information with both society and matter. In that sense, both mind and society and also mind and matter influence each other. Furthermore both mechanistic and counterfactual conditional analyses of causation seem to me to encompass those exchanges of information. However this has to be a different category of causation from either societal events influencing societal events or physical events influencing physical events, and also from mental events influencing mental events. Talk of upward and downward causation fails to make distinctions that seem to me to be unavoidable.

        – David

  11. David,

    So you are claiming that you cannot “grasp the content” of these statements?

    Fixing the truth value of all physical propositions about the past and present thereby fixes the truth value of all social and mental propositions.

    Fixing the occurrent, and all previous, physical states of affairs thereby fixes the occurrent social and mental states of affairs.

    Well, that’s better than denying them, I suppose! Insofar as you don’t deny them, perhaps we have finally reached the common ground which I was seeking, in order to frame our differences.

    But that will have to be done some other time now.

    Ron

    • I’m glad you have perhaps found some common ground for other discussion between us, Ron.

      Just to make sure that we and any readers are clear, I do of course understand your plain English! Inability to “grasp the content” is of course a polite assertion that, from my causal monist and systems pluralist position, the statements seem meaningless (in a purely philosophical sense of that term which a Verificationist, in a different way, might have applied to statements of value judgments). So I cannot legitimately assert or deny them.

      Given what (I believe) we now know about causation in mental, social and material systems and about how mental causation relates to both material and social causation, I can’t imagine how so many theoretical mechanisms would operate together to fix these truths or not.

      Yes, let’s leave it at that now.

      All the best. – David

  12. Hi Ron,

    In your talk, a crucial move was the use of a causal theory of reference. When I questioned this, you replied that the reference of the term ‘qualia’ could be fixed by a form of introspective ostensive definition. I should have pushed you on that point more, I realised later, since it is not clear to me that it works: is it not vulnerable to objections along the lines of the private language argument?

    Best,

    Simon

    • Simon, good point, and an odd coincidence – I myself brought up the Private Language Arguments in a similar context when I was speaking in Vienna, just the day before you posted your comment.

      I think Wittgenstein might be right that reference cannot be secured by inner ostension toward a private particular, whatever that might mean. But that causes no problems for Aaron and me – we are assuming as much. On our account, there may be interoceptively-guided ostension toward a public particular/feature (such as some aspect of a virtual machine), and it is this public particular/feature that is (falsely) believed to be private.

      Does that quell the worry?

      • Ron – & Simon

        Maybe I’m beginning to understand why the later Wittgenstein has been accused of being Behaviourist, in the sense of denying the existence of qualia. He argues in effect that the contents of consciousness are public empirical processes, which is my view (‘clever’ achievements using shared constructs in analogy) and what I believe you, Ron, and Aaron are arguing. None of us denies the existence of consciousness – the reverse, we are explaining the phenomenon. Wittgenstein and Chrisley/Sloman are agnostic on the existence of qualia. I’ve never thought of them as other than subject to quantifiers at most, as I’m so wary of the hypostatic fallacy. (Consciousness is not central to the (empirical) existence of non-physical and non-social mental causation in systems where it may or may not involve qualia, depending on what they could be!)
        P.S. I’ve seen the recent argument that a physicalist account of qualia implies pan-psychism. To my mind, that’s a reductio ad absurdum of such an account. In my view, any account of minds as causal systems must be *restricted* to acculturated (“socially intelligent”) as well as embodied (“physical” – materially ‘intelligent’ indeed) entities, natural or engineered.

        – David
