What is represented in a representation?

…and by what mechanism were those contents generated?

Nicola Yuill (nicolay@sussex.ac.uk) is coordinating a reading group on embodied cognition, currently meeting weekly to discuss Tony Chemero’s book Radical Embodied Cognitive Science.

Some people who could not get to the meetings have been circulating comments via the email list, but it was decided that the PAICS blog might be a better home for this discussion.  To start things off, below please find a comment from Simon McGregor, David Booth’s reply, etc.

From: Simon McGregor McGill [londonien@gmail.com]
Sent: 11 February 2016 18:37
Subject: Re: Embodied Cog Reading Group – 18th Feb

Hi all,

I have read (in moderate depth) one of the papers Chemero cites in Chapter V: the Stephen et al. paper on the gear-solving task (Case 8, p93). In case I can’t attend the seminar on Thursday 18th, I thought I should summarise my thoughts on it.

My view is that it fails to support Chemero’s argument that Stephen et al. “explain what cognitive psychologists typically refer to as a change in representation… without using representations as part of the explanation.” The authors’ proposed account, even if correct, is only an explanation in a very general sense: it explains why particular statistical patterns are observed prior to a (particular sort of) qualitative change in behaviour. However, it does not explain in what circumstances such changes will eventually occur, or what the changes will look like if they do happen.

Here’s a link to the paper online.

In a nutshell, the researchers propose that the adoption of a new cognitive strategy towards a task should be understood as a phase shift in a thermodynamically open system, driven by an increase in entropy beyond the current dissipative structure’s capacity to dissipate. Empirically, they found a way to predict the emergence of a new task-solving strategy during a particular trial (as measured by observer coding from videotapes), from the movements of participants’ fingers during previous trials.

Their argument is that emergence of the new strategy can be understood as a shift to a new attractor (i.e. a phase change), and that this is generally preceded in dynamical systems by a peak in entropy (i.e. a dramatic increase, followed by a dramatic fall). They produced estimates of entropy during a trial, attractor strength during a trial, and maximum entropy in recent trials. They then found that these variables could be used to predict emergence of the new strategy in the next trial.
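To make the shape of that analysis concrete, here is a minimal sketch (not the authors’ actual pipeline; the window length, bin count, peak threshold and toy data are illustrative assumptions, not values from the paper) of the general idea: estimate Shannon entropy in successive windows of a movement series and ask whether an entropy peak – a rise followed by a fall – shows up before the regime changes.

```python
# Minimal sketch of an entropy-peak analysis, NOT Stephen et al.'s pipeline.
# Window length, bin count, the peak margin and the toy data are illustrative.
import numpy as np

def windowed_entropy(series, window=200, bins=16):
    """Shannon entropy (bits) of each non-overlapping window, using bin edges
    fixed over the whole series so that windows are comparable."""
    edges = np.linspace(series.min(), series.max(), bins + 1)
    entropies = []
    for start in range(0, len(series) - window + 1, window):
        counts, _ = np.histogram(series[start:start + window], bins=edges)
        p = counts / counts.sum()
        p = p[p > 0]                      # ignore empty bins
        entropies.append(-np.sum(p * np.log2(p)))
    return np.array(entropies)

def has_entropy_peak(entropies, margin=1.5):
    """True if some window's entropy exceeds both of its neighbours by `margin` bits."""
    return any(entropies[i] - entropies[i - 1] >= margin and
               entropies[i] - entropies[i + 1] >= margin
               for i in range(1, len(entropies) - 1))

# Toy movement series: a settled strategy, a burst of disorder, then a new regime.
rng = np.random.default_rng(0)
trial = np.concatenate([rng.normal(0.0, 0.1, 1000),   # settled strategy
                        rng.normal(0.0, 2.0, 200),    # entropy peak ("exploration")
                        rng.normal(3.0, 0.1, 1000)])  # new strategy adopted
H = windowed_entropy(trial)
print(np.round(H, 2), has_entropy_peak(H))
```

Whether the entropy estimated in this way is the same quantity as the thermodynamic entropy invoked in the explanation is, of course, exactly the question raised below.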

Although I would like their explanation to be correct, for technical reasons I am also not sure how strongly this paper provides evidential support for it. Specifically, I need to be convinced that the (thermodynamic) entropy in their “explanation” is the same thing as the entropy they measure in their experiment, and I also want to see more intelligible summaries of their data.

Warm regards,

Simon

 

From: David Booth   11 February 2016 22:31
To: Simon McGregor and EmbCog RG email list

If I understand you correctly, Simon, you are arguing that Stephen et al. are not really “explaining” a qualitative change in performance (even of the limited sort they consider) because they give no account of what performance changes to what – only, at best, that there has been some change. Your comments on Stephen & co look entirely plausible to me, but I have not read their paper and so I should not comment on that content (sic!) of your email.

What I can offer is the comment that what you say 'must' be true – because entropy, P values, Bayesian optimisation and all other probabilistic measures of mere amount of information can never give an account of a particular performance (a piece of “behaviour”, as you and many others call it), i.e. of the informational content of what the *performer* represents (not the brain or any part of it, or of the mind, especially not a representing homunculus). Shannon first hammered home this argument: the bit measures only how much info. is in a message (or memory), not the meaning, content or structure of the message (or of its residue in the receiver). Working out what a message means requires measurements of sufficient specifics of other output and input processed to provide a usable context for receiver and sender (including when they are the same info-transferring system). [I’m told that this is also an account of quantum entanglement and de-encryption.]

I’m very sorry that I’ve been unable to attend any of the discussions so far. – David.

David A. Booth
School of Psychology, University of Sussex
Profile: http://www.sussex.ac.uk/profiles/335100
Email: D.A.Booth@sussex.ac.uk
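As a toy illustration of the Shannon point above (a sketch that simply models a “message” as a string of characters): scrambling a message destroys whatever it said, yet leaves Shannon’s measure of how much information it carries untouched, because that measure depends only on the symbol statistics.

```python
# Toy illustration: Shannon's measure quantifies amount, not meaning.
from collections import Counter
from math import log2

def bits_per_symbol(message):
    """Empirical Shannon entropy of the character distribution, in bits per symbol."""
    counts = Counter(message)
    n = len(message)
    return -sum((c / n) * log2(c / n) for c in counts.values())

original = "attack at dawn"
scrambled = "".join(sorted(original))     # same characters, no longer says anything
print(bits_per_symbol(original))          # roughly 2.75 bits/symbol
print(bits_per_symbol(scrambled))         # exactly the same value
```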

 

From Simon McGregor

Hi David,

I suspect we disagree fundamentally about the extent to which Shannon’s framework of information theory can underwrite a theory of mental content for embodied agents; I don’t really want to go into it via email, because I currently have about forty pages of draft material on the topic!

My criticism of Chemero is more neutral (and arguably more fundamental): even if we treat a person as a purely mechanical physical system, Stephen et al. don’t give a very meaningful explanatory account of how that system behaves (in the physical-sciences sense of “behaviour”, i.e. the way a system’s state changes over time).

Suppose I am studying volcanoes, and want to understand better why they erupt. An account similar to Stephen et al.’s would essentially say, “a phase change is induced by too much input entropy, and will be preceded by certain changes in observable statistics”. Even if that’s true, it doesn’t tell me why the phase change should be an eruption, rather than some other behaviour; hence, it doesn’t seem like an explanation of the phenomenon of eruption. It’s just a very general (putative) explanation of a big change of some unspecified sort.

Warm regards,

Simon

P.S. I was dubious about Stephen et al.’s discussion of criticality, so I spoke last night to a mathematical colleague who works on systems in critical states. He confirmed that he would not expect power law exponents to peak prior to a phase transition, as reported by Stephen et al., and would expect any such finding to be a statistical artefact.

 

From David Booth
Hi, Simon

We agree that Chemero is inferring too much from the statistics but appear to disagree about what is important (including to Chemero) about what at least his stats leave out. (My argument is general and fundamental but presumably in a different way from yours.)

You define behaviour as state change. I regard it as performance [of a system on an environment].

The phenomenon of a volcano’s eruption is not (merely) a change in the state of the volcano: it is a change in the volcano’s effects on the physical environment – and, granting the inadequacy of your analogy (or reduction) to events in (describable) human history, a change in the eruption’s status in human society too. This whole story about the eruption can be regarded as the volcano’s performance without a whisper of anthropomorphism (cp. a car’s performance on the road or racetrack).

Best wishes for your 40 pages. I know the feeling.

– David

P.S. Your colleague’s mathematical criticism of Chemero’s stats is interesting but seems to bear no relation to the difficulties with power functions that I face (down) in the performance of an embodied and acculturated system when changing output/input relations to its bio-social niche. – DAB

 

From David Leavens
To EmbCog RG

Dear Colleagues,

Please let me know when the next meeting is, and I’ll put it in my diary. By way of distant input to the meeting, considering how many new people we have, my concern is not so much whether it is reasonable to postulate representations as components in a psychological process model (although I don’t see the necessity in many quotidian contexts), but, rather, what might constitute unambiguous evidence for such representational components in organisms that do not speak (human infants, other animals).

Cheers,

Dave

 

From: Simon McGregor McGill [londonien@gmail.com]
Sent: 18 February 2016 13:28
To: David Leavens

Hi David,

You seem to be assuming that the representational stance is a scientific hypothesis (roughly speaking, that it is evidenceable and falsifiable). I am not sure this is a good way to think about representations. Instead, I see them as part and parcel of a particular explanatory language, whose value (as with all explanatory tools) depends not only on the phenomenon it attempts to explain, but also the explanatory purposes and cognitive resources of the theorist.

For many theorists, it is useful to summarise patterns in rat locomotion by invoking the idea that the firing states of place cells and grid cells in a rat’s hippocampus “represent” (some of) the rat’s belief about its position in a familiar environment. We can probably agree on what empirical consequences this scientific hypothesis implies, even if we have some metaphysical objection to the way it is phrased.

In my opinion, the relationship described by “represent” here is important enough to deserve a succinct vocabulary term, and technical usages of ordinary language terms often stretch their meaning much further than in this case!

Warm regards,

Simon

 

From David Booth 18 February 2016 16:23

Hi there (or not there this week!) all EmbCog RGers,
Just in case of confusion, there’s a Leavens on Nicola’s list who signs himself “Dave” and a Booth who signs himself “David”.

Hi, Simon (from David B)

We agree that neural representation is not an empirically testable hypothesis, as you reply to Dave L. However, we disagree about your account of the example of hippocampal place cells. I see no justification for introducing the notion of that place recognition (locating) network *representing* the rat’s position (or the rat’s multimodal configuration). A theorist says nothing more by letting “a rat’s hippocampus “represent” (some of) the rat’s belief about its position in a familiar environment” than by stating that “a rat’s hippocampus processes its position in a familiar environment” – in the sense of “processing” the information content that the whole *animal* can represent, i.e. has extracted from past and present locations.

You may think my verbal hygiene is over-sensitivity to slippery slopes for cognitive neuroscience theorists, down to homunculi, infinite regresses etc. Yet the scientist’s job differs from the philosopher’s sorting out of viable conceptual options: among other things, neuroscience, psychology and cultural science have to explain HOW (by what mechanisms) a bio-social system manages to represent its niche and identity. A bunch of “representing” subsystems just doesn’t cut it. That’s why I want the vector state of a learnt multimodal configural ‘stimulus’ as a theoretical candidate for allocentric position.

I’m quite amazed that you add the term belief as well. I’d be happy to argue the toss over whether it’s useful or even true to say that rats have beliefs but, as I illustrate above, the idea adds nothing in this example. (Granted too, cognitive scientists need mental mechanisms, with their neural and cultural bases, for achievements such as believing something.)

(I hope that this is not too far off Chemero to substitute for a little of this week’s missed/postponed discussion!).

Best regards.

– David

 

From Simon McGregor 19 February 2016 10:36
To David Booth

There’s quite a lot of argument [above], but the brunt of the disagreement can be summarised as follows:

* Scientifically speaking (not metaphysically), what *harm* do you believe the notion of representation causes to an explanation? Will it produce false empirical predictions, and if so, how?
* What is your rationale for believing that this harm will continue, even if the notion of representation comes to have a well-thought-out and clearly-defined scientific meaning?

“We agree that neural representation is not an empirically testable hypothesis, as you reply to Dave L. However, we disagree about your account of the example of hippocampal place cells. I see no justification for introducing the notion of that place recognition (locating) network *representing* the rat’s position (or the rat’s multimodal configuration). A theorist says nothing more by letting “a rat’s hippocampus “represent” (some of) the rat’s belief about its position in a familiar environment” than by stating that “a rat’s hippocampus processes its position in a familiar environment” – in the sense of “processing” the information content that the whole *animal* can represent, i.e. has extracted from past and present locations.”

To my mind, the lexical terms used in scientific discourse are of relatively minor import, provided that they are unambiguous in context. The primary question is whether the concept they express is valuable scientifically. We agree that it is useful to have a concept that can relate the state of the rat’s brain (in this case, specifically the state of the hippocampus) to some feature of its umwelt (in this case, its location within a familiar environment) in a particular way. I use the lexical term “represent” to refer to this phenomenon; you prefer the lexical term “process” (which I think is even worse than “represent”, for several reasons that I will address later).

“You may think my verbal hygiene is over-sensitivity to slippery slopes for cognitive neuroscience theorists, down to homunculi, infinite regresses etc. Yet the scientist’s job differs from the philosopher’s sorting out of viable conceptual options: among other things, neuroscience, psychology and cultural science have to explain HOW (by what mechanisms) a bio-social system manages to represent its niche and identity.”

Yes, and:

1) the core of scientific research is the substance, not the wording, of this explanation;
2) the job of cognitive science as a whole is not to construct one single canonical explanation, but a whole panoply of different explanations at different levels that fit different explanatory needs.

Scientists are entitled to use terms in a semi-technical sense as part of the process of developing scientifically rigorous concepts. Eventually those terms take on a precise technical meaning that is defined in textbooks, and naive understandings are knocked out of undergraduates early on. The important thing is the development of the concept, not the word.

Alchemists used the word “aqua” (water) to mean a clear solvent. Nitric acid was aqua fortis; a mixture of nitric acid and hydrochloric acid was aqua regia. We now understand that there are more fundamental dimensions of similarity between different liquids, and reserve the word “water” for H2O in liquid form. Similarly, many of the things scientists call “water” in everyday usage would be described as “an aqueous solution” in a chemical journal, because they aren’t pure H2O. It’s no big deal that the technical meaning differs from the everyday or historical meanings; Gricean implicature disambiguates.

At present, cognitive scientists are often naive about the relationship they describe by “representation”, ignoring relevant context by paying insufficient attention to embodied, situated and complex emergent considerations. The problem isn’t a lexical term; it’s a scientifically blinkered way of thinking and an immature theory of cognition.

“A bunch of “representing” subsystems just doesn’t cut it.”

For your explanatory needs, that may be so; I am not sure you have the authority to speak for everyone.

“That’s why I want the vector state of a learnt multimodal configural ‘stimulus’ as a theoretical candidate for allocentric position.”

I find this sentence as confusing as any that use the term “representation”. Personally, I am interested in a particular sort of systematic empirical relationship between two sets of physical variables: some in the rat hippocampus and some expressing the physical position of the rat. This sort of relationship should be familiar to any cognitive scientist; it is the one generally referred to as “representation”. While the importance of both normativity and embodiment for a rigorous scientific version of the concept is not well-recognised within cognitive science, it is reasonably well captured by Millikan’s excellent teleological theory (at least as Chemero describes it). You may not find this a useful thing to talk about, but your aims are presumably different from mine.

It’s also worth pointing out that in your anti-representational zeal, you have completely elided the nature of your candidate relationship between the vector state and the position. We don’t need candidates for the location of a rat, since we already have a mature scientific theory regarding the position of objects relative to other objects. We need candidates for physical variables that have the “representation”-like relationship between the rat’s physical internal state and physical features of the rat’s umwelt (given some assumptions about the rat’s goals).

“I’m quite amazed that you add the term belief as well. I’d be happy to argue the toss over whether it’s useful or even true to say that rats have beliefs but, as I illustrate above, the idea adds nothing in this example. (Granted too, cognitive scientists need mental mechanisms, with their neural and cultural bases, for achievements such as believing something.)”

I find your argument unconvincing. Arguably, what you’ve illustrated is that clear sentences about the “representation”-like relationship are difficult to construct without using the verb “represent” 😉

The notion of belief is important because the external behaviour of the rat (in certain environments over particular timescales) depends not on the actual location of the rat relative to other environmental features, but on the state of the rat’s brain.
Ordinarily, the rat will navigate the familiar maze in a way that is appropriate to the goal of finding the cheese. Suppose we – from the rat’s point of view, undetectably and unprecedentedly – rotate the rest of the maze around the chamber the rat is in, changing its relative position within the maze. We would intuitively predict that the rat will initially behave in a way that would be appropriate in the unrotated maze, but is no longer appropriate in the rotated maze; the rat will eventually change its behaviour, but only when unexpected information reaches the rat. Furthermore, this intuition can be rigorously justified from physical first principles: there is a causal Markov blanket around the rat because the Universe is spatially local.

In ordinary language, we would say that the rat wrongly thinks (or believes) it is in a particular relative position within the maze. I prefer to use “belief” when referring to the whole agent, and “representation” when referring to physical variables or notional subpersonal cognitive variables.

When you unnecessarily *omit* the notions of belief and representation, you end up with sentences like “a rat’s hippocampus processes its position in a familiar environment”. The phrasing of this sentence carries several very unfortunate implications:

1) the hippocampus directly processes the position of the rat (I would say the hippocampus processes the rat’s *sensory information* about the rat’s location);
2) our best empirical understanding of the relation between the rat’s place / grid cells and its position is in terms of the *dynamical processing* done by those cells (I would say it’s in terms of their *firing rate* – something more elegantly modelled as a state variable).

It also makes use of the contentious word “processing” in an avowedly non-standard (and not clearly defined) sense. I personally don’t object to the concept of processing, but I do think it’s even woollier than “representation”. Incidentally, even the idea of processing information relies on some intensional (in the philosophical sense) relationship between the internals of a system and its external environment: it doesn’t seem like a big leap from that to representations.

By the way, there is an implied teleological context to my amended “processing” sentence: the hippocampus processes sensory information about the rat’s location, in order for the rat to achieve locomotory goals.

Warm regards,

Simon

 

From David Booth 22 February 2016 17:37
To Simon McGregor

Thank you very much Simon for your careful and courteous response to my further comment.

We agree (well enough) on so much, so may I go straight to the key points about “processing” – and just to the empirical content of the example of hippocampal [information-]processing of position?

You end:

“…”a rat’s hippocampus processes its position in a familiar environment”. The phrasing of this sentence carries several very unfortunate implications:
“1) the hippocampus directly processes the position of the rat (I would say the hippocampus processes the rat’s *sensory information* about the rat’s location);
“2) our best empirical understanding of the relation between the rat’s place / grid cells and its position is in terms of the *dynamical processing* done by those cells (I would say it’s in terms of their *firing rate* – something more elegantly modelled as a state variable).

“It also makes use of the contentious word “processing” in an avowedly non-standard (and not clearly defined) sense. I personally don’t object to the concept of processing, but I do think it’s even woollier than “representation”. Incidentally, even the idea of processing information relies on some intensional (in the philosophical sense) relationship between the internals of a system and its external environment: it doesn’t seem like a big leap from that to representations.

“By the way, there is an implied teleological context to my amended “processing” sentence: the hippocampus processes sensory information about the rat’s location, in order for the rat to achieve locomotory goals.”

Working backwards through your points, this reference to goals is not teleological, in the sense of anthropomorphic purpose. Precisely what hippocampal activity contributes to the rat’s life is control of its translocatory *acts*. One of the basic sorts of formulae in the theory of mental mechanisms, another part of which I illustrated, I indeed call an *intentional* type of process. This is central to Chemero and many others now taking up “Actionism” – e.g., perception serves action, which updates perception enabling further action – for the rat to get ‘there’ or to whatever it wants (and believes is there). These are mental processes dependent on the rat’s expertise with the aspects of our physical universe, and so the expert(!) language of folk psychology is the first choice for labelling mechanisms subserving perception, belief, verbal or graphic representation etc.

Next upwards in your points, precisely the nature of mental processes (unlike sub-quantum information or uninterpreted security intelligence) is intensionality – relating the “internals” to the “externals” of a system, as you put it. I’ve inserted those quotes to indicate that I would not agree to a boundary at the skin or at the membrane around the central and peripheral nervous system. The ‘boundary’ is between [the system’s] representING and the representED – and in the cases of the content of consciousness and of entertained counterfactual conditionals, what is represented is not in the environment at all.

Your 1) and 2) at the top of the quote bring us to the heart of the matter.

1) You, not I, have introduced the term “directly”. That was J.J. Gibson’s mistake. I say that (like the rest of the brain where there are adaptive synapses) the HC processes currently sensed information (yes, of course) but also sensed information retained from past exposures to that environment (‘familiar’). Claiming that perception is an internal process “directly” representing(?) the affordances in the external environment added nothing in Gibson’s case except philosophical confusion.

The present and past sensed information precisely is ([intensionally] about) the position of the rat relative to other objects in the environment (allocentric, as distinct from egocentric).

2) Our “best understanding” of the information processed by the HC (in relation to neocortex, especially sensory and motor systems) is along the lines I stated, “a learnt multimodal configural ‘stimulus’ “. (Processing can’t be anything else besides dynamical.)

The physical basis of this mental processing (by the whole animal, an entirely mental and entirely physical being) can’t be axonal firing: that only transmits. The best bet I know is transformation of patterns at fields of synapses into patterns on dendrites. In some cases the field’s activity may be accessible to fMRI with 7T magnets. When so, convergence among sets of psychological tests could in principle identify the sub-component of the (mental) information being transferred physically. (I have no problem with the hidden variables in a learning machine being based on active (?dynamical) setting of switches – unlocalisable in a hash memory, unlike in a brain for certain sensory and motor variables, although not in general.)

I hope this helps my statement to seem rather fortunate!

Best regards.

– David


6 thoughts on “What is represented in a representation?”

  1. From: Simon McGregor McGill 01 March 2016 10:37
    To: David Booth
    << with some initial brief responses from David (at <<)

    Hi David,

    Thanks for sorting out the PAICS blog!

    << Thanks to Ron for sorting ME out!

    My objection to anti-representationalists is that I find the notion of representation useful for my explanatory purposes; I don't insist that they use it for theirs. It is often frustrating to argue on these points, because substantive scientific claims are met with "You aren't allowed to say that. You have to say it in a different way." This mode of argument is not a scientific argument; it is an attempt to censor scientific debate. When one is forced to use a vocabulary suited entirely to the purposes of someone who holds a contrary view, it will always be difficult to defend one's own position, regardless of the merits of one's position.

    << I have never tried to stop anyone saying what they want. I have only ever pointed out difficulties I see with what they've said (when I've commented at all). Also, by the way, as a reviewer and editor I have always 'spoken' and acted against private attempts to stop material being published without opportunity for public defence by the author against the criticism made public – I’ve published quotations of anonymous criticism myself on occasion

    I'd be grateful if you could respond to my core challenge:

    * Scientifically speaking (not metaphysically), what *harm* do you believe the notion of representation causes to an explanation? Will it produce false empirical predictions, and if so, how?
    * What is your rationale for believing that this harm will continue, even if the notion of representation comes to have a well-thought-out and clearly-defined scientific meaning?

    << I'll reply in more detail to your re-statement above, and to the rest of your response below, in a subsequent comment on this blog post. (Regarding your P.S. below, I too find written tone difficult, but spoken tone can also be problematic. I do apologise if my emails have seemed at all abrasive at times.)

    << You ask about "harm" in the past, present or future of the use of the verb "to represent" in scientific accounts of neural processing. In the first instance I'm not alleging harm (or error): I'm merely asking what the idea adds to scientific explanation. However, the answers I've come across to that question so far have not pointed to any possibility that I can see of explicating theoretical mechanisms in a way that improves such explanation in terms that do not employ such an extension of the canonical concept.

    << I take that canon (not rule of censorship) to include statements like the following three.

    << Howard's "step into the light" represents his objections to Cameron's "leap into the dark."

    << Munch's painting "The Scream" represents existential agony.

    << The above examples represent my off-the-cuff efforts to illustrate that representing is something a person does, using a public medium of communication, such as a language or a piece of (?representational) art.

    << No more comments from David below – maybe in a later comment on this blogpost.

    "Working backwards through your points, this reference to goals is not teleological, in the sense of anthropomorphic purpose."

    This is tangential, but I'll bite! 😉 I have heard the claim several times before that the notions of goals, preferences and beliefs are anthropomorphic. I don't buy it; I think that we understand other animals as agents in their own right, not purely as proxy humans. Indeed, I think people expert in animal interaction sometimes understand humans as proxy horses, wolves, etc. (The prevalence of animal metaphors in historical periods when we were more expert in animal interaction provides some indirect evidence of this.)

    "Precisely what hippocampal activity contributes to the rat's life is control of its translocatory *acts*. One of the basic sorts of formulae in the theory of mental mechanisms, another part of which I illustrated, I indeed call an *intentional* type of process. This is central to Chemero and many others now taking up "Actionism" – e.g., perception serves action, which updates perception enabling further action – for the rat to get 'there' or to whatever it wants (and believes is there). These are mental processes dependent on the rat's expertise with the aspects of our physical universe, and so the expert(!) language of folk psychology is the first choice for labelling mechanisms subserving perception, belief, verbal or graphic representation etc."

    Yes, I agree; in general I prefer to use "belief" rather than "representation", unless I want to make a specific claim about cognitive or neurocognitive mechanisms (and hence about the systematic structure of the dynamical cognitive process and/or its relationship with physical microstates, e.g.: systematic errors; systematic variations in response times; systematic effects of direct interventions on the brain).

    "Next upwards in your points, precisely the nature of mental processes (unlike sub-quantum information or uninterpreted security intelligence) is intensionality – relating the "internals" to the "externals" of a system, as you put it. I've inserted those quotes to indicate that I would not agree to a boundary at the skin or at the membrane around the central and peripheral nervous system. The 'boundary' is between [the system's] representING and the representED – and in the cases of the content of consciousness and of entertained counterfactual conditionals, what is represented is not in the environment at all."

    I've tried to be meticulous about referring to the boundary between the rat's *body* and its physical environment, rather than referring to a boundary between "the rat" and the environment – are you genuinely challenging the existence of such a boundary? The fact of this boundary is central to my motivation for using the notion of representation. I'm sure you would agree that you need to understand what my motivation is before you can impugn it!

    "Your 1) and 2) at the top of the quote bring us to the heart of the matter."

    "1) You, not I, have introduced the term "directly". That was J.J. Gibson's mistake. I say that (like the rest of the brain where there are adaptive synapses) the HC processes currently sensed information (yes, of course) but also sensed information retained from past exposures to that environment ('familiar'). Claiming that perception is an internal process "directly" representing(?) the affordances in the external environment added nothing in Gibson's case except philosophical confusion."

    Sorry – I wasn't suggesting that you meant "directly". I was simply observing that it is clearer to say "the hippocampus processes sensory information about the rat's position" than to say "the hippocampus processes the rat's position", because the latter can be seriously misinterpreted. The former also explicitly names all three relevant things involved in the representation-like relationship: the hippocampus, the sensory information and the position.

    "The present and past sensed information precisely is ([intensionally] about) the position of the rat relative to other objects in the environment (allocentric, as distinct from egocentric)."

    For my purposes the distinction between allocentric and egocentric representations of position is not relevant, because the point I made was that there is mutual Shannon information between two purely physical variables: the firing rate of neurons in the rat's hippocampus, and the position of the rat's body relative to the rest of the environment. Mathematically, that relation is the same whether you understand it allocentrically or egocentrically. I did say physical position and Shannon information, but those crucial details seem to have got lost in your response.

    2) Our "best understanding" of the information processed by the HC (in relation to neocortex, especially sensory and motor systems) is along the lines I stated, "a learnt multimodal configural 'stimulus' ". (Processing can't be anything else besides dynamical.)

    "The physical basis of this mental processing (by the whole animal, an entirely mental and entirely physical being) can't be axonal firing: that only transmits. The best bet I know is transformation of patterns at fields of synapses into patterns on dendrites. In some cases the field's activity may be accessible to fMRI with 7T magnets. When so, convergence among sets of psychological tests could in principle identify the sub-component of the (mental) information being transferred physically. (I have no problem with the hidden variables in a learning machine being based on active (?dynamical) setting of switches – unlocalisable in a hash memory, unlike in a brain for certain sensory and motor variables, although not in general.)"

    I said nothing about mental processing "by the whole animal"; I made the empirical physical-sciences claim that there is mutual Shannon information between firing rates and position. This claim can in principle be tested by a machine that understands nothing about rats or positions. Strictly speaking I should have said "Shannon information (for the physical theorist)", but otherwise the claim stands or falls purely on empirical evidence.
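    A minimal sketch of the sort of purely empirical test described here, using made-up numbers rather than recordings: estimate the mutual Shannon information between a synthetic "place cell" firing rate and a position variable from their joint histogram – something a machine could compute without knowing what either variable is about. The bin count, the Gaussian "place field" and the spike-count model are illustrative assumptions.

    ```python
    # Sketch: mutual information between two purely physical variables.
    import numpy as np

    def mutual_information(x, y, bins=10):
        """Plug-in estimate of I(X;Y) in bits from a joint histogram of two samples."""
        joint, _, _ = np.histogram2d(x, y, bins=bins)
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0
        return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

    rng = np.random.default_rng(1)
    position = rng.uniform(0, 1, 5000)                  # track position, arbitrary units
    rate = np.exp(-((position - 0.3) ** 2) / 0.01)      # Gaussian "place field" at 0.3
    rate = rng.poisson(20 * rate)                       # noisy spike counts
    print(mutual_information(rate, position))              # substantially > 0
    print(mutual_information(rng.permutation(rate), position))  # close to 0 once the link is broken
    ```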

    Warm regards,

    Simon

    P.S. It is sometimes difficult to phrase one's opinions in the right "voice" via email, especially when the exchange is very animated! It would be nice to have a quick Skype chat if that would suit you?

  2. To Simon from David (B)

    Just to clear up a mis-reading, which anyway is tangential as you state.

    I wrote, “Working backwards through your points, this reference to goals is not teleological, in the sense of anthropomorphic purpose.”

    You responded, “This is tangential, but I’ll bite! 😉 I have heard the claim several before that the notions of goals, preferences and beliefs are anthropomorphic. I don’t buy it; I think that we understand other animals as agents in their own right, not purely as proxy humans. Indeed, I think people expert in animal interaction sometimes understand humans as proxy horses, wolves, etc. (The prevalence of animal metaphors in historical periods when we were more expert in animal interaction provides some indirect evidence of this.)”

    We agree. You are re-stating what I wrote. The notion of goal is not (necessarily) anthropomorphic.

    At the start of cybernetics, mindless but autonomously *guided* missiles had *targets*. They were chosen by human beings, as also are the targets set by a humanly programmed computation when the US president presses the red button. Nevertheless, while being also a tool in human society, the digital or electromechanical machinery operates autonomously to minimise the disparity between the sensed goal position and the set goal position on the GPS, or whatever.

    Similarly the hippocampal network sub-system – with more complex movements than rocket-powered flight and even more complex sensory processing: “learnt [2016 Brain Prize!] multimodal configuring”.
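    A toy sketch of that kind of mindless goal-directedness (the target, gain and number of steps below are arbitrary illustrative values): a loop that simply keeps reducing the disparity between a sensed position and a set goal position, with no anthropomorphic purpose involved.

    ```python
    # Toy error-reduction loop: autonomous guidance without anthropomorphic purpose.
    target = 100.0            # set goal position (chosen by the humans who launched it)
    position = 0.0            # sensed current position
    gain = 0.3                # fraction of the disparity corrected per step

    for step in range(20):
        disparity = target - position     # set goal position vs. sensed position
        position += gain * disparity      # autonomous correction

    print(round(position, 2))             # closes in on 100.0
    ```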

    More later. – David.

  3. Hi, Simon

    This is my 3rd & final bite of your latest lovely ripe bag of cherries. 😉

    [If we continue to quote or interleave comments, could we please mark the author of each paragraph in some way? It’s very difficult to sort out who wrote what otherwise.]

    Your posting above (kindly given a new label by Ron (“A thought …”), to widen the page again) has a theme that runs through your remaining paragraphs for my comment. This returns us to the start of our exchange: my initial and continuing impression is that you are using data on only the amount of information. I am talking only about mechanisms that deliver the content of information.

    In the case of hippocampal place cells, an example of such info contents is the actual numbers of the coordinates of a tower the rat previously sat on under the milky water it is swimming in, relative to a marker such as a spotlight on the ceiling above the tank. (The standard position of the tower relative to the spotlight is not the same information [content] as the spotlight’s distance and angle from the rat’s transient present position, although both these egocentric data and that allocentric memory are needed for optimum performance in the task, e.g. shortest swimming times.)
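    As an illustration, with made-up coordinates, of the two information contents distinguished above: the allocentric position of the remembered platform relative to the ceiling spotlight, which stays fixed, versus the egocentric bearing of the spotlight from the rat, which changes as the rat swims. All values and the frame are invented for the example.

    ```python
    # Made-up coordinates illustrating allocentric vs. egocentric information contents.
    import numpy as np

    spotlight = np.array([0.0, 0.0])    # ceiling landmark, arbitrary room frame (metres)
    platform  = np.array([0.6, -0.4])   # remembered platform position, same frame
    rat       = np.array([-0.8, 0.5])   # rat's current position
    heading   = np.radians(30)          # rat's current heading

    # Allocentric memory: platform relative to spotlight (unchanged as the rat swims).
    platform_re_spotlight = platform - spotlight

    # Egocentric datum: spotlight relative to the rat, rotated into the rat's own frame
    # (changes continuously with the rat's position and heading).
    offset = spotlight - rat
    rot = np.array([[np.cos(-heading), -np.sin(-heading)],
                    [np.sin(-heading),  np.cos(-heading)]])
    spotlight_egocentric = rot @ offset

    print(platform_re_spotlight, spotlight_egocentric)
    ```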

    Our own understanding of the rat's allocentric and egocentric positions is not the issue – you didn't mean "we" literally, I assume. We should be talking about the rat's sensory processing of the spotlight's egocentric vector at present and of that light's allocentric configuration to the place where the rat sat previously.

    I can only say that Shannon information omits the crucial details, as Shannon rightly emphasised, and that the 'fixers' of the conceptual brain since then (whom I've read) have also missed them out (e.g. Tononi, by resorting to an additional unexplicated factor to insert structure).

    Regarding your final paragraph, it's tough to reconcile your earlier claim to be talking about the rat's sensorily based beliefs regarding the positions of rest-tower and room light with your statement here that you are not talking about the whole rat with respect to its sensory processing in the hippocampus.

    In addition, you imply here that you are not talking about the actual rat’s hippocampus: any suitable machinery will do to test your argument empirically. So the scientific issue I posed about firing rates (whichever axons you are referring to) does not matter to your theory.

    My interest as a scientist is only in actual mechanisms – physical, mental and societal – that exist in human and other species, and in lines of engineering development that are physically and socially intelligent, when the evidence on observed operating mechanisms is treated with conceptual clarity and the simplest relevant mathematics.

    I hope to keep up with your work and am grateful for your patience with this very limited medium of brief written exchanges.

    Best regards. – David

  4. David, you said:

    “[Ron, please consider starting a new ‘topic’ under this title.]”

    Please feel free to start a new topic yourself. (That goes for all of you who use this site. If you would like to be given posting permissions, please contact me).

  5. Pingback: Information contents | PAICS: Philosophy of AI and Cognitive Science

  6. Ron, this has gone into Uncategorised, not EmbCog.

    Please tell us all exactly how to start a blogpost with reply box within the EmbCog category specifically and only.

    Many thanks. – David.
