PAICS: Philosophy of AI and Cognitive Science

What’s really interesting about the dress colour illusion

A piece in today’s Guardian (“The Science Behind the Dress Colour Illusion”) quotes me as a primary source, but the exigencies of copy deadlines meant that in a few places my intended meaning was lost. Here are my (unedited) notes on the matter, which should clarify any of my comments in the Guardian article that might have left more than a few people scratching their heads:

What’s striking about this is not (just) the illusion itself (there are many examples of how context can affect our colour judgements), but the sharp social disagreement on this issue. Most people fall into one of two camps, with each side being sure they are right and the other side wrong, to such an extent that many cannot see it as an illusion at all: the dress simply is the colour it seems to them, and anyone who says differently is either having a laugh or can’t see colours properly. Here at the Sackler Centre for Consciousness Science, according to an unscientific straw poll conducted by my colleague Jim Parkinson this afternoon, we actually divided equally into three camps: black/blue, white/gold, and brown-gold/blue. But let’s stick to the dispute between the first two camps: what’s going on?

Most illusions have the form: things seem one way, but this can be shown to be at odds with the way things objectively are (e.g., the length of the lines in the Müller-Lyer illusion, or the sameness of the reflectance properties of the two tiles in the checker shadow illusion). But with the dress illusion, things are different. As things stand now, we only have the picture of the dress, the two groups of people with their different experiences of the dress’s colour, and the assurances (of some people) that despite the dress looking white and gold (to some people), it actually is blue and black. If a white-and-golder asks how this could possibly be the case, they are just referred to a different picture, which looks blue and black to everyone, and are told that the second picture is of the same dress as the first picture. Unsurprisingly, this is not persuasive to the white-and-gold crowd: How *could* it be a picture of the same dress? Do you mean the same style of dress, but in a different colour? Etc. Imagine how it would be for the checker shadow illusion if one only saw the first of the three images in the Wikipedia entry (http://en.wikipedia.org/wiki/Checker_shadow_illusion), and not the other two images which demonstrate that the reflectance properties of the tiles are the same. When confronted with scientists’ claims that despite appearances, the tiles are of the same colour, one would just say “rubbish!” and go about one’s business.

So the first step in reaching a truce in the “dress wars” is to construct a demonstration that can show to the white-and-gold crowd how the very same dress can also look blue and black under different conditions.

(A good first step is this image, which my Sackler colleague Keisuke Suzuki found on Twitter:

https://twitter.com/namin3485/status/571148630855254016/photo/1

The right half of each image is exactly the same.  But in the context of the two different left halves, it is interpreted as being either white and gold, or blue and black.)

But there is still more going on here. Another striking thing about the illusion is that it is quite unlike, e.g., the Müller-Lyer and checker shadow illusions, in that not all people experience it, and those who do often experience it differently. It is as if there is a perceptual equivalent of those who can roll their tongues and those who can’t. But it is too early to say whether the difference is genetic, as with tongue-rolling ability; or something affected by learning and personality, such as being a night-owl (as Bevil Conway from Wellesley College has suggested), or one’s particular sensitivity to context in perception, as I and fellow Sackler colleague Acer Chang speculate.

I think the most promising account of visual experience we have at present is the idea that what we see has as much to do with what inputs our brain expects to receive in a given situation as it does with what inputs our brain actually receives. But how one brain negotiates expected vs actual inputs to construct a colour experience might differ from how another brain does it. Thus, some people might perceive colours more on the basis of what is in front of them, while others might (unconsciously) take into account such things as: what kind of light source is likely to have been used in this photograph? And then even for those who do take more context into account, how they do so might vary from person to person, depending on their experience, interests, expertise, etc. Take, for example, the night-owl/day-person difference mentioned above. Or the brain of a photographer or designer used to dealing with images in Photoshop, adjusting white balance, etc., may very well use context to “create” colour experience in a way that is different from someone without that expertise/experience. (They might have a better understanding of how colours “behave” under a wide variety of lighting conditions, and so are not tricked into seeing the dress as white and gold.) There may even be low-level physiological/anatomical differences that determine exactly how and whether one will be sensitive to contextual effects when experiencing colour. But so far as I know, no one has yet identified the differences of context-sensitivity that are in play with this particular (and now notorious) dress.
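To make the “discounting the illuminant” idea concrete, here is a minimal sketch in Python. It is not a model anyone at the Sackler Centre has proposed for the dress; the pixel and illuminant values are purely illustrative assumptions, chosen only to show how the very same input can yield a whitish-gold or a bluish inferred surface colour depending on what light source the brain assumes.

```python
def infer_surface_colour(observed_rgb, assumed_illuminant):
    """Von Kries-style discounting: divide each channel of the observed pixel
    by the assumed illuminant, then rescale so the result fits 0-255."""
    reflectance = [o / i for o, i in zip(observed_rgb, assumed_illuminant)]
    peak = max(reflectance)
    return tuple(round(255 * r / peak) for r in reflectance)

# A bluish-grey pixel, roughly like the lighter stripes in the photograph.
pixel = (120, 130, 170)

# Observer A assumes cool, bluish daylight falling on the dress:
# discounting the blue pushes the inferred surface towards white/gold.
print(infer_surface_colour(pixel, (0.8, 0.9, 1.2)))   # (255, 246, 241)

# Observer B assumes warm, yellowish indoor light:
# discounting the warmth pushes the inferred surface towards blue.
print(infer_surface_colour(pixel, (1.2, 1.0, 0.8)))   # (120, 156, 255)
```

The point of the toy example is just that the disagreement need not be about the pixels at all: it can arise entirely from the (unconscious) prior about the lighting that each brain brings to the same data.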

Ron Chrisley
Director, Centre for Cognitive Science
Faculty, Sackler Centre for Consciousness Science
University of Sussex

Comments/corrections to the above largely off-the-cuff and unresearched opinions very welcome!

February 28, 2015

A deflationary view of morally competent robots

Today, fellow PAICS-er Blay Whitby and I are invited speakers for a mini-workshop at Tufts University on moral competence in autonomous robots. Here’s my abstract:

A deflationary view of morally competent robots.
Ron Chrisley

Long before there are robots that are true, morally responsible agents, many of them (“m-robots”) will have strong behavioural and functional similarities to human moral agents. The design and evaluation of m-robots should (both in the interests of producing the best designs, and of doing what is right) eschew conceptualisations which view the m-robot as a moral agent. Rather, I argue, those engaging in such activities should adopt the deflationary view of m-robot morality: the ethical questions around an m-robot’s actions concern not the purported moral standing of the m-robot itself, but rather and solely the moral standing of the relevant humans and human organisations involved in the design, manufacture, and deployment of m-robots. An extreme version of the deflationary view, which I will not defend, maintains that there is no difference in kind between the ethical questions raised by robot action and those raised by any other technology. Instead, I will acknowledge the novelty of the ethical questions raised by m-robots, but claim that they are best solved by re-conceptualising them in a deflationary manner. Consequently, some specific recommendations are offered concerning what our goals should be in designing m-robots, and what kind of architectures might best achieve those goals.

And here is Blay’s:

Caring robots – more dangerous than killer robots?
Blay Whitby

It might seem, at first glance, that military robotics raises many more ethical worries than does the use of robots in caring roles. However, this superficial impression deserves revision for a number of reasons.

Firstly, there is overwhelming evidence that robots are a very effective tool with which to manipulate human emotional responses. It might theoretically be possible to do this only in ethical ways of benefit to individuals and society. Unfortunately, there has been little or no discussion of exactly what these ways might be. For the caring robots now being developed by the private sector there is no guidance whatsoever on these issues. We can therefore expect, at best, the manipulation of emotions in order to maximize profits. At worst, we can expect dangerous mistakes and disreputable deceit.

There has also been very little discussion outside the specialist field of robot ethics of just which caring roles are suitable for robots and which roles we might wish, on good reasoned grounds, to reserve for humans. This is surely a matter that deserves widespread public debate.

Finally, there is now a large number of international conventions, legislation, and rules of engagement which directly impact on the development and deployment of military robots. In complete contrast, the field of social, domestic, and caring robots is without any significant legislation or ethical oversight. Caring, not killing, is now the wild lawless frontier of robotics.

December 18, 2014

Aaron Sloman on the Extended Mind – in 1978

It’s easy to be unaware of the fact that notions similar to, if not identical with, the concept of the “extended mind” were in circulation before, say, 1998. Yet there were writers advocating active (as opposed to philosophical) externalism before that date. I have noted before that Tuomela 1989 is one such source:

“The main arguments in [this] paper are directed against the latter thesis, according to which internal (or autonomous or narrow) psychological states as opposed to noninternal ones suffice for explanation in psychology. Especially, feedback-based actions are argued to require indispensable reference to noninternal explanantia, often to explanatory common causes.” — Raimo Tuomela, “Methodological Solipsism and Explanation in Psychology”, Philosophy of Science, Vol. 56, No. 1 (Mar. 1989), pp. 23-47.

But there is an even clearer statement of the thesis dating back a decade before that, in Aaron Sloman’s The Computer Revolution in Philosophy (available for free here):

“Because these ideas have been made precise and implemented in the design of computing systems, we can now, without being guilty of woolly and unpackable metaphors, say things like: the environment is part of the mechanism (or its mind), and the mechanism is simultaneously part of (i.e. ‘in’) the environment!” — Aaron Sloman, The Computer Revolution in Philosophy: Philosophy, science and models of mind, Harvester Press, 1978, Section 6.5.

Here we have not only the extended mind, but situatedness as well!

Admittedly, not everything Sloman says in that book is friendly to an externalist perspective on mind, but I doubt he would take that to be a criticism.

Ron

UPDATE

David Leavens reminded me of Gregory Bateson saying similar things in 1972:

“… we may say that ‘mind’ is immanent in those circuits of the brain which are complete within the brain. Or that mind is immanent in circuits that are complete within the system, brain plus body. Or, finally, that mind is immanent in the larger system — man plus environment.”

In “Intelligence as a Way of Life” (2000), I note, in precisely this context (the precursors of active externalism), that Bateson’s 1971 “The Cybernetics of ‘Self’: A Theory of Alcoholism” says “the mental characteristics of the system are immanent not in some part, but in the system as a whole”, and also:

“The computer is only an arc of a larger circuit which always includes a man and an environment from which information is received and upon which efferent messages from the computer have effect. This total system, or ensemble, may legitimately be said to show mental characteristics”.

I then explicitly link his remarks to Tuomela 1989 and Clark and Chalmers 1998. Thanks again, David.

November 17, 2014

Multi-sensory integration without consciousness

This morning, Tad Zawidzki drew my attention to the publication on Tuesday of this paper: Multisensory Integration in Complete Unawareness. What Faivre et al. report there is exactly the kind of phenomenon that Ryan Scott, Jason Samaha, Zoltan Dienes and I have been investigating. In fact, we have been aware of Faivre et al.’s study and cite it in our paper (which is currently under review).

Their work is good, but ours goes further. Specifically, we show that:

  • a) Cross-modal associations can be learned when neither of the stimuli in the two modalities is consciously perceived (whereas the Faivre et al. study relies on previously learned associations between consciously perceived stimuli).
  • b) Such learning can occur with non-linguistic stimuli.

Together, a) and b) really strengthen the case against accounts that assert that consciousness is required for multi-sensory integration (e.g., Global Workspace Theory). Some defenders of such theories might try to brush aside results like those of Faivre et al. by revising their theories to say that consciousness is only required for higher-level cognition, such as learning; and/or by setting aside linguistic stimuli as a special case of (consciously) pre-learned cross-modal associations which can be exploited by unconscious processes to achieve the appearance of multi-sensory integration. Our results block both of these attempts to save (what we refer to as) integration theories.

October 2, 2014

Wed 16 Apr, 1:30-3:00, Fulton 101, Simon McGregor, ‘What Happens When Reasoning Has Side Effects?’

The principle of embodiment in cognitive science emphasises that the main object of cognition is to reason about systems which the agent itself is part of and can affect through its actions. I propose that particular real-world circumstances can undermine the assumption that the process of reasoning does not affect the systems being reasoned about, and explore why this is a problem for typical conceptions of rationality. We will also discuss how Sorensen’s concept of epistemic blind spots could affect mathematical reasoning, in light of the Lucas-Penrose argument about human transcendence of mechanism. But it will come as a surprise.
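As a purely hypothetical warm-up for the kind of situation Simon has in mind, here is a tiny Python sketch of reasoning with side effects: the quantity being predicted depends on the prediction itself, so a reasoner that treats the system as unaffected by its own reasoning gets the wrong answer, while one that models its own influence has to look for a fixed point. The “crowd size” scenario and all numbers are my own illustrative assumptions, not Simon’s.

```python
def crowd_size(predicted_size):
    """Toy 'world': the larger the predicted attendance at an event,
    the more people stay away to avoid the crowd."""
    return max(0, 100 - predicted_size // 2)

# A detached reasoner, assuming its prediction has no side effects, is wrong:
naive_prediction = 100
print(crowd_size(naive_prediction))        # prints 50, not 100

# A reasoner that models its own influence iterates towards a fixed point:
prediction = 0
for _ in range(50):
    prediction = crowd_size(prediction)
print(prediction, crowd_size(prediction))  # prints 67 67, a self-consistent estimate
```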

April 14, 2014

Updates

Updates 29/6/11

Apologies: Blay

Paul

Working on thesis.

Simon

Working on Joint Session talk. Thought my subject – panpsychism and the composition problem – would be a welcome change from natural kinds and downward causation, but it turns out that deproblematising composition and adding the idea of the mind being composed of multiple virtual machines is a good way of arguing for non-reductive, downwardly causal mental properties.

Tom

Working on talk for E-int and Joint Session.

Ron

Went to 1st person approach conference in Berkeley – changed plan and gave a response to Susan Stewart’s criticism of synthetic phenomenology work.

Gave talk last week to philosophy faculty research progress meeting.

Going to Sweden on Monday till August.

Supervising an MSc student – implementing a web-browsing advisor built on an architecture inspired by Bernard Baars’s global workspace theory.

July 3, 2011

Updates

Updates

Paul
Preparing for presentation & working on thesis.

Rob
1 – The philosophy of mind reading group (see http://www.ifl.pt/index.php?id1=3&id2=8) had a meeting on a draft chapter of my book: Cognitive Technologies in Everyday Life: Tools for Thinking and Feeling. It generated some interesting discussion and it was very nice for me after all the time I’ve put into this.

2 – I’ve started organizing a research-in-progress group modelled on … you’ve guessed it, E-I, which will hopefully meet for the first time next week.

3 – Trying to finish a review for JCS of The Crucible of Consciousness by Zoltan Torey, which is supposed to be in by Friday.

Simon
Working on Joint Session talk.

June 17, 2011

Updates

  • Wrote paper with Blay for “What Makes Us Moral?” conference in Amsterdam at month’s end and submitted it to the conference website. Presented the paper at a seminar here for last-minute feedback before submission.
  • Re-wrote Chapter 5 of my thesis “The Limits of Concepts and Conceptual Abilities” into a standalone paper for a course I’m attending at the SweCog National Research School in Cognitive Science. Planning to submit it somewhere by month’s end.
  • My doctoral thesis got lost in the (registered) post. So far neither Sweden nor the UK wants to claim responsibility. Annoying, as this may complicate my pay-grade change to postdoc status (seriously). Did I mention that the first time the bookbinders bound my thesis, they got my name wrong? :-P

Joel

June 12, 2011

Updates

Apologies for the delay.

June 11, 2011

Informatics_paics Updates

  • Collected my bound thesis from the bookbinders on Monday and posted it to Sussex: pretty much the last thing I have to do before I have officially earned my degree!
  • Finished a first re-write of the paper I presented at Toward a Science of Consciousness – Stockholm, hoping to submit in the next few weeks (on the limits of concepts and conceptual abilities).
  • Engaging in some email discussions with Blay and a philosopher here in Lund about compatibilism.
  • Assisting Göran Sonesson with comments on a paper he is submitting, on the ability of chimpanzees to interpret different semiotic resources.
  • Making painfully slow progress on the paper I need to write (so I can present!) at the What Makes Us Moral? conference in Amsterdam later this month.

Joel

June 2, 2011
