A piece in today’s Guardian (“The Science Behind the Dress Colour Illusion”) quotes me as a primary source, but the exigencies of copy deadlines mean that in a few places my intended meaning was lost. Here are my (unedited) notes on the matter, which should clarify those comments in the Guardian article that might have left more than a few people scratching their heads:
What’s striking about this is not (just) the illusion itself (there are many examples of how context can affect our colour judgements), but the sharp social disagreement on this issue. Most people fall into one of two camps, with each side being sure they are right and the other side wrong, to such an extent that many cannot see it as an illusion at all: the dress simply is the colour it seems to them, and anyone who says differently is either having a laugh or can’t see colours properly. Here at the Sackler Centre for Consciousness Science, according to an unscientific straw poll conducted by my colleague Jim Parkinson this afternoon, we actually divided equally into three camps: black/blue, white/gold, and brown-gold/blue. But let’s stick to the dispute between the first two camps: what’s going on?
Most illusions have the form: things seem one way, but this can be shown to be at odds with the way things objectively are (e.g., the length of the lines in the Müller-Lyer illusion, or the sameness of the reflectance properties of the two tiles in the checker shadow illusion). But with the dress illusion, things are different. As things stand now, we only have the picture of the dress, the two groups of people with their different experiences of the dress’s colour, and the assurances (of some people) that despite the dress looking white and gold (to some people), it actually is blue and black. If a white-and-golder asks how this could possibly be the case, they are just referred to a different picture, which looks blue and black to everyone, and are told that the second picture is of the same dress as the first picture. Unsurprisingly, this is not persuasive to the white-and-gold crowd: How *could* it be a picture of the same dress? Do you mean the same style of dress, but in a different colour? Etc. Imagine how it would be for the checker shadow illusion if one only saw the first of the three images in the Wikipedia entry (http://en.wikipedia.org/wiki/Checker_shadow_illusion), and not the other two images which demonstrate that the reflectance properties of the tiles are the same. When confronted with scientists’ claims that despite appearances the tiles are of the same colour, one would just say “rubbish!” and go about one’s business.
So the first step in reaching a truce in the “dress wars” is to construct a demonstration that can show to the white-and-gold crowd how the very same dress can also look blue and black under different conditions.
(A good start is this image, which my Sackler colleague Keisuke Suzuki found on Twitter:
The right half of each image is exactly the same. But in the context of the two different left halves, it is interpreted as being either white and gold, or blue and black.)
But there is still more going on here. Another striking thing about the illusion is that it is quite unlike, e.g., the Müller-Lyer and checker shadow illusions, in that not all people experience it, and those who do often do so differently. It is as if there is a perceptual equivalent of those who can roll their tongues and those who can’t. But it is too early to say whether the difference is genetic, as with tongue-rolling ability, or something affected by learning and personality, such as being a night-owl (as Bevil Conway from Wellesley College has suggested), or one’s particular sensitivity to context in perception, as I and fellow Sackler colleague Acer Chang speculate.
I think the most promising account of visual experience we have at present is the idea that what we see has as much to do with what inputs our brain expects to receive in a given situation as it does with what inputs our brain actually receives. But how one brain negotiates expected vs actual inputs to construct a colour experience might differ from how another brain does it. Thus, some people might perceive colours more on the basis of what is in front of them, while others might (unconsciously) take into account such things as: what kind of light source is likely to have been used in this photograph? And even among those who do take more context into account, how they do so might vary from person to person, depending on their experience, interests, expertise, etc. Consider, for example, the night-owl/day-person difference mentioned above. Or the brain of a photographer or designer used to dealing with images in Photoshop, adjusting white balance, etc., may very well use context to “create” colour experience in a way that is different from someone without that expertise/experience. (They might have a better understanding of how colours “behave” under a wide variety of lighting conditions, and so are not tricked into seeing the dress as gold and white.) There may even be low-level physiological/anatomical differences that determine exactly how and whether one will be sensitive to contextual effects when experiencing colour. But so far as I know, no one has yet identified the differences of context-sensitivity that are in play with this particular (and now notorious) dress.
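The white-balance point can be made concrete with a toy computation (entirely my own illustration, not a model anyone has proposed for the dress, and using made-up pixel and illuminant values): the very same measured RGB pixel yields different estimated surface colours depending on which illuminant the observer’s visual system assumes.

```python
# Toy illustration of illuminant discounting ("white balance").
# The same measured pixel is interpreted against two different
# assumed light sources, yielding different surface-colour estimates.

def discount_illuminant(pixel, assumed_illuminant):
    """Von Kries-style correction: divide each channel by the
    assumed illuminant's channel value and rescale to 0-255."""
    return tuple(
        min(255, round(255 * p / i))
        for p, i in zip(pixel, assumed_illuminant)
    )

# A bluish-grey pixel, as it might appear in a photograph.
pixel = (120, 130, 160)

# Observer A assumes cool, bluish daylight: the blue channel is
# discounted, and the estimated surface comes out near-neutral
# (whitish) -- all three channels roughly equal.
print(discount_illuminant(pixel, (200, 210, 255)))

# Observer B assumes warm, yellowish indoor light: the red and green
# channels are discounted, and the estimated surface comes out
# distinctly blue -- the blue channel dominates.
print(discount_illuminant(pixel, (255, 240, 190)))
```

The point of the sketch is only that “discounting the illuminant” is an inference from an assumption, so two brains making different assumptions about the light source can arrive at different colour experiences from identical input.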
Director, Centre for Cognitive Science
Faculty, Sackler Centre for Consciousness Science
University of Sussex
Comments/corrections to the above largely off-the-cuff and unresearched opinions very welcome!
A deflationary view of morally competent robots.
Long before there are robots that are true, morally responsible agents,
many of them (“m-robots”) will have strong behavioural and functional
similarities to human moral agents. The design and evaluation of
m-robots should (both in the interests of producing the best designs,
and of doing what is right) eschew conceptualisations which view the
m-robot as a moral agent. Rather, I argue, those engaging in such
activities should adopt the deflationary view of m-robot morality: the
ethical questions around an m-robot’s actions concern not the purported
moral standing of the m-robot itself, but rather and solely the moral
standing of the relevant humans and human organisations involved in the
design, manufacture, and deployment of m-robots. An extreme version of
the deflationary view, which I will not defend, maintains that there is
no difference in kind between the ethical questions raised by robot
action and those raised by any other technology. Instead, I will
acknowledge the novelty of the ethical questions raised by m-robots, but
claim that they are best solved by re-conceptualising them in a
deflationary manner. Consequently, some specific recommendations are
offered concerning what our goals should be in designing m-robots, and
what kind of architectures might best achieve those goals.
Caring robots – more dangerous than killer robots?
It might seem, at first glance, that military robotics raises many more
ethical worries than does the use of robots in caring roles. However,
this superficial impression deserves revision for a number of reasons.
Firstly, there is overwhelming evidence that robots are a very effective
tool with which to manipulate human emotional responses. It might
theoretically be possible to do this only in ethical ways of benefit to
individuals and society. Unfortunately there has been little or no
discussion of exactly what these ways might be. For the caring robots
now being developed by the private sector there is no guidance
whatsoever on these issues. We can therefore expect, at best, the
manipulation of emotions in order to maximize profits. At worst, we
can expect dangerous mistakes and disreputable deceit.
There has also been very little discussion outside the specialist field
of robot ethics of just which caring roles are suitable for robots and
which roles we might wish, on good reasoned grounds, to reserve for
humans. This is surely a matter that deserves widespread public debate.
Finally, there is now a large number of international conventions,
legislation, and rules of engagement which directly impact on the
development and deployment of military robots. In complete contrast, the
field of social, domestic, and caring robots is without any significant
legislation or ethical oversight. Caring, not killing, is now the wild
lawless frontier of robotics.
It’s easy to be unaware of the fact that notions similar to, if not identical with, the concept of the “extended mind” were in circulation before, say, 1998. Yet there were writers advocating active (as opposed to philosophical) externalism before that date. I have noted before that Tuomela 1989 is one such source:
“The main arguments in [this] paper are directed against the latter thesis, according to which internal (or autonomous or narrow) psychological states as opposed to noninternal ones suffice for explanation in psychology. Especially, feedback-based actions are argued to require indispensable reference to noninternal explanantia, often to explanatory common causes.” — Raimo Tuomela, “Methodological Solipsism and Explanation in Psychology”, Philosophy of Science, Vol. 56, No. 1 (Mar. 1989), pp. 23–47.
But there is an even clearer statement of the thesis dating back a decade before that, in Aaron Sloman’s The Computer Revolution in Philosophy (available for free here):
“Because these ideas have been made precise and implemented in the design of computing systems, we can now, without being guilty of woolly and unpackable metaphors, say things like: the environment is part of the mechanism (or its mind), and the mechanism is simultaneously part of (i.e. ‘in’) the environment!” — Aaron Sloman, The Computer Revolution in Philosophy: Philosophy, science and models of mind, Harvester Press, 1978, Section 6.5.
Here we have not only the extended mind, but situatedness as well!
Admittedly, not everything Sloman says in that book is friendly to an externalist perspective on mind, but I doubt he would take that to be a criticism.
David Leavens reminded me of Gregory Bateson saying similar things in 1972:
“… we may say that ‘mind’ is immanent in those circuits of the brain which are complete within the brain. Or that mind is immanent in circuits that are complete within the system, brain plus body. Or, finally, that mind is immanent in the larger system — man plus environment.”
In “Intelligence as a Way of Life” (2000), I note, in precisely this context (the precursors of active externalism), that Bateson’s 1971 “The Cybernetics of ‘Self’: A Theory of Alcoholism” says “the mental characteristics of the system are immanent not in some part, but in the system as a whole”, and also:
“The computer is only an arc of a larger circuit which always includes a man and an environment from which information is received and upon which efferent messages from the computer have effect. This total system, or ensemble, may legitimately be said to show mental characteristics”.
I then explicitly link his remarks to Tuomela 1989 and Clark and Chalmers 1998. Thanks again, David.
This morning, Tad Zawidzki drew my attention to the publication on Tuesday of this paper: Multisensory Integration in Complete Unawareness. What Faivre et al. report there is exactly the kind of phenomenon that Ryan Scott, Jason Samaha, Zoltan Dienes and I have been investigating. In fact, we have been aware of Faivre et al.’s study and cite it in our paper (currently under review).
Their work is good, but ours goes further. Specifically, we show that:
- a) Cross-modal associations can be learned when neither of the stimuli in the two modalities is consciously perceived (whereas the Faivre et al. study relies on previously learned associations between consciously perceived stimuli).
- b) Such learning can occur with non-linguistic stimuli.
Together, a) and b) really strengthen the case against accounts that assert that consciousness is required for multi-sensory integration (e.g., Global Workspace Theory). Some defenders of such theories might try to brush aside results like those of Faivre et al. by revising their theories to say that consciousness is only required for higher-level cognition, such as learning; and/or by setting aside linguistic stimuli as a special case of (consciously) pre-learned cross-modal associations which can be exploited by unconscious processes to achieve the appearance of multi-sensory integration. Our results block both of these attempts to save (what we refer to as) integration theories.
The principle of embodiment in cognitive science emphasises that the main object of cognition is to reason about systems which the agent itself is part of and can affect through its actions. I propose that particular real-world circumstances can undermine the assumption that the process of reasoning does not affect the systems being reasoned about, and explore why this is a problem for typical conceptions of rationality. We will also discuss how Sorensen’s concept of epistemic blind spots could affect mathematical reasoning, in light of the Lucas-Penrose argument about human transcendence of mechanism. But it will come as a surprise.
Wed 9th Apr, 1:30-3:00, Keith Wilson, ‘The Argument from Looks: A Plea for Representational Humility’
The assumption that perceptual experience (seeing, hearing, and so on) is fundamentally representational is common in much recent philosophy and cognitive science. It is an assumption, however, that is rarely argued for or examined in detail. According to this assumption, perceptual experience (as distinct from judgement or belief) represents the world as being, or as seeming to be, some particular way. That is, each experience has a determinate set of truth conditions. In this paper, I present an argument, inspired by Travis (2004), that aims to challenge this orthodoxy, instead claiming that there is no single representational content of experience. Consequently, whilst the argument does not entirely rule out the existence of perceptual representations, it does highlight a fundamental tension in the way philosophers and scientists of perception have thought about such representation that severely constrains its explanatory role, raising a number of questions that have yet to be satisfactorily answered by proponents of the representational view.
Wed 12th March 12:30-14:00
‘Epistemic and Inferential Consistency in Knowledge-Based Systems’
One way to understand the knowledge-based systems approach to AI is as the attempt to give an artificial agent knowledge (or give it the ability to act like a human that has that knowledge) by putting linguaform representations of that knowledge into the agent’s database (its knowledge base). The agent can then add to its knowledge base by applying rules of inference to the sentences in it. An important desideratum for this process is that only true sentences are added (else they cannot be knowledge). Since typical rules of inference would allow the addition of any sentences, including false ones, to an inconsistent database, care must be taken to ensure that knowledge bases are consistent. Much effort has been expended on devising tractable ways to do this (e.g., truth maintenance systems, assumption-based truth maintenance systems, partitioned paraconsistent knowledge bases that are locally consistent but may be globally inconsistent, etc.). I argue that for certain kinds of knowledge representation languages (autoepistemic logics), a further constraint, which I call epistemic consistency, must be met. I argue for the need to check for epistemic consistency despite the fact that, unlike for consistency simpliciter, failing to meet this constraint is not a logical possibility. The most basic form of checking that this constraint is met is to ensure that there are no sentences in an agent’s knowledge base that constitute what Sorensen has called an epistemic blindspot for that agent (e.g., “It is raining, but Hal doesn’t know it”, for the agent Hal). This constraint must be maintained both when initialising the knowledge base and when applying rules of inference, a fact which requires generalising from Sorensen’s notion of an epistemic blindspot to the concept of epistemic blindspot sets (a move that is independently motivated in applying Sorensen’s surprise examination paradox solution to the strengthened paradox of the toxin).
In addition, and along similar lines, I argue that another form of consistency, which I call inferential consistency, must be maintained. Inferential consistency does not involve epistemically problematic sentences, but rather epistemically problematic inferences, such as ones concerning the number of inferences one has made. I consider one way of dealing with such cases, which has the alarming consequence of rendering all rules of inference strictly invalid. Specifically, I argue that the validity of a rule of inference can only be retained if a semantic restriction (that of excluding reference to the inference process itself) is placed on the sentences over which it can operate.
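The basic blindspot check described in the abstract can be sketched in a few lines. This is my own illustrative encoding, not anything from the talk itself: sentences are represented with an optional “unknown to agent X” conjunct, and a knowledge base is epistemically consistent for an agent just in case it contains no sentence of the form “p, but X doesn’t know it” where X is that very agent.

```python
# Minimal sketch of checking a knowledge base for epistemic blindspots:
# sentences of the form "p, and <agent> does not know that p", which can
# be true, but can never be known by <agent>. The representation here is
# deliberately toy-like; a real system would use a proper autoepistemic
# logic rather than a flag on a dataclass.

from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Sentence:
    content: str                      # the factual conjunct, e.g. "it is raining"
    unknown_to: Optional[str] = None  # agent named in a "...doesn't know it" conjunct

def is_blindspot_for(sentence: Sentence, agent: str) -> bool:
    """True if this sentence is an epistemic blindspot for `agent`:
    it asserts something while denying that `agent` knows it."""
    return sentence.unknown_to == agent

def epistemically_consistent(kb, agent: str) -> bool:
    """A KB is epistemically consistent for `agent` iff it contains
    no blindspot sentences for that agent."""
    return not any(is_blindspot_for(s, agent) for s in kb)

kb = [
    Sentence("it is raining"),
    Sentence("it is raining", unknown_to="Hal"),  # blindspot for Hal
]

print(epistemically_consistent(kb, "Hal"))   # the same KB is fine for Dave,
print(epistemically_consistent(kb, "Dave"))  # but not for Hal
```

Note that the second sentence is perfectly consistent in the ordinary sense (it could be true), which is exactly why this check has to be separate from consistency simpliciter.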
Fellow Sackler member Jim Parkinson brought to my attention the fact that this year’s Flame Challenge – explaining science to 11-year-olds in less than 300 words – is on the topic “What is Color?”. I decided to take up the challenge; here’s my entry (299 words!):
The question “what is color?” is tricky. Understood one way, it hardly needs answering for people with normal vision, who have no problem learning how to use the word “color” and what the names for different colors are: color is just part of the way that things look. But that answer would be of little use to a blind person, since for them objects don’t “look” any way at all. Science should try to explain things for everyone, so here’s an explanation of color that works for all people, sighted or blind.

Light is a collection of extremely small particles called photons. A photon might begin its journey at a lamp, bounce off an object (such as a book), and end its journey by being absorbed by one of the cells that line the back wall inside your eye. Photons wiggle while moving – some wiggle slowly, some quickly.

The color of an object is the mixture of wiggle speeds of photons the object gives off in normal light.

Sighted people can see an object’s color because the way a photon affects their eye cells depends on its wiggle speed. For example, if your eye absorbs a slow wiggling photon, you see red; a fast wiggling photon, you see blue. Mixtures of wiggle speeds have a mixture of effects on your eye cells, letting you see a mixture of colors. Something colored white gives off photons of all wiggle speeds.

If you shine red light on a white ball it looks red, but its actual color is still white because if it were in normal light it would give off photons of all wiggle speeds. Similarly, a blue book in the dark is still blue because it would still give off fast wiggling photons were it in normal light.
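The mapping from wiggle speed to colour name that the entry relies on can be sketched as a small lookup. The frequency bands below are rough round-number values for illustration only (visible light spans roughly 400–790 THz, with red at the slow end and blue/violet at the fast end), not precise physics:

```python
# Toy version of the "wiggle speed" story above: slow-wiggling visible
# photons look red, fast-wiggling ones look blue, and photons outside
# the visible band aren't seen at all. Band boundaries are approximate.

def name_for_wiggle_speed(terahertz: float) -> str:
    """Map a photon's wiggle speed (frequency in THz) to a rough colour name."""
    if terahertz < 400:
        return "infrared (invisible)"
    elif terahertz < 480:
        return "red"            # slow-wiggling visible photons
    elif terahertz < 530:
        return "orange/yellow"
    elif terahertz < 600:
        return "green"
    elif terahertz < 790:
        return "blue/violet"    # fast-wiggling visible photons
    else:
        return "ultraviolet (invisible)"

print(name_for_wiggle_speed(430))  # red
print(name_for_wiggle_speed(650))  # blue/violet
```

On this picture, an object’s colour is fixed by the mixture of such frequencies it would emit in normal light, which is why the white ball under red light is still white.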
Working on thesis.
Working on Joint Session talk. Thought my subject – panpsychism and the composition problem – would be a welcome change from natural kinds and downward causation, but it turns out that deproblematising composition and adding the idea of the mind being composed of multiple virtual machines is a good way of arguing for non-reductive, downwardly causal mental properties.
Working on talk for E-int and Joint Session.
Went to 1st person approach conference in Berkeley – changed plan and gave a response to Susan Stewart’s criticism of synthetic phenomenology work.
Gave talk last week to philosophy faculty research progress meeting.
Going to Sweden on Monday till August.
Supervising MSc student – implementing web browsing advisor built on architecture inspired by Bernard Baars global workspace theory.
Preparing for presentation & working on thesis.
1 – The philosophy of mind reading group (see http://www.ifl.pt/index.php?id1=3&id2=8) had a meeting on a draft chapter of my book: Cognitive Technologies in Everyday Life: Tools for Thinking and Feeling. It generated some interesting discussion and it was very nice for me after all the time I’ve put into this.
2 – I’ve started organizing a research in progress group modelled on … you’ve guessed it E-I which will hopefully meet for the first time next week.
3 – Trying to finish a review for JCS of The Crucible of Consciousness by Zoltan Torey, which is supposed to be in by Friday.
Working on Joint Session talk.