Through The Eye of The Robot: Synthetic Phenomenology


In July of 2007 (!), Joel Parthemore and I exhibited an installation as part of the Whittingham Riddell Shrewsbury Open Art Exhibition.  The theme that year was: “Batteries Not Included: Mind as Machine?”.  Joel and I made a video document of our installation, which has collected digital dust on a virtual bookshelf until now.  You can view the video at:

https://tinyurl.com/synthetic-phenomenology-2.

You can see the exhibition catalogue at:

http://users.sussex.ac.uk/~ronc/shrewsbury_open_2007.pdf.

One of the takeaways from the exhibition that still lingers for me is the introduction of a phenomenon that I refer to as “exterospection”.  We were displaying, on a large screen, enactive specifications of the visual experiences that a robot was modelling.  But since the screen was an object in the robot’s visual environment, it came to “have” (model) visual experiences of those specifications.  So like introspection, it could (in theory!) examine its experiences and learn more about them.  But unlike introspection, it would do so not by “looking within” using an “inner glance”, but by exteroception of a particular external thing in a particular way:  exterospection.  This is not the same as an “autocerebroscope”, or looking at one’s own fMRI scan, since images of brain states are not (yet?) in themselves specifications of the contents of experience.


Embodiment: Six Themes


I’m writing this from Zürich airport, on my way back to England after an excellent sojourn at the Dharma Sangha Zen Centre (www.dharma-sangha.de) on the German/Swiss frontier.  I was there for a cosy meeting of the Society for Mind-Matter Research (www.mindmatter.de) on the topic of embodiment. My talk gave a brief overview of six ways in which my research has investigated the role of embodiment in mind and computation.  You can view my slides here: prezi.com/view/TLzIVu5YT

Prediction and chaos in the markets

It’s been a while.


The May 6th, 2019 UK edition of Metro published an article entitled “Can we trust machines to predict the stock market with 100% accuracy?”, by Sonya Barlow.

The piece, which is more in-depth and better-researched than one might expect, included a single sentence from me, which was a portion of my response to this question from Sonya:  “The more AI [is] used in predictions, the less that AI can predict – Can that really be the case?”

Here is my response in full:

It’s not in general true that the more AI is used in predictions, the less it can predict, if the AI is not a part of the system it is predicting (predicting sunspot activity, for example).  But in the kinds of cases many people are interested in, such as AI for financial prediction, it’s a different matter, because in finance, the AI is typically part of the system it is trying to predict.  That is, the predictions the AI makes are used to take action (buy, sell, short, whatever) in that system.  And the presence of predictors (machine or human) in a system, taking action in that same system on the basis of their predictions, makes the system more difficult to predict (by machines or humans).
Why is this so?  To see why, consider a relatively simple, one-on-one system with two members: you and your opponent.  The best way to predict what your opponent is going to do is to model them: figure out what their strategy is, and predict that they will do whatever that strategy recommends in the current situation.  You then choose your best action given what you predict they will do.  But if they are also a predictor like you, then you both have a problem.  Even if you know what your opponent’s strategy is — it’s to predict what you are going to do, and act appropriately — predicting what they will do depends on what they predict you will do, which in turn depends on your prediction of what they are going to do, which is back where we started. Thus, the behaviour of the system is an unstable, chaotic circle.
This doesn’t mean that we’ll stop using AIs to predict — on the contrary, they will become (even more) obligatory, just to stay in the predictive arms race.  To fail to use them would make you more easily predictable, and thus at a relative disadvantage.
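
To make the mutual-prediction instability concrete, here is a minimal sketch (mine, in Python; not something from the Metro piece or my response) of two predictors embedded in the very system they are predicting. The game is matching pennies: player A wins by matching B’s move, B by mismatching. Each player predicts the other’s next move from the opponent’s history (a simple majority-vote predictor, an illustrative choice) and best-responds to that prediction.

    import random

    def predict(history):
        # Predict the opponent's next move as their most frequent past move.
        if not history:
            return random.randint(0, 1)
        return int(sum(history) > len(history) / 2)

    def best_response(predicted_move, matcher):
        # The matcher wins by copying the opponent; the mismatcher, by differing.
        return predicted_move if matcher else 1 - predicted_move

    a_moves, b_moves = [], []
    for _ in range(30):
        a = best_response(predict(b_moves), matcher=True)   # A models B
        b = best_response(predict(a_moves), matcher=False)  # B models A
        a_moves.append(a)
        b_moves.append(b)

    print("A:", "".join(map(str, a_moves)))
    print("B:", "".join(map(str, b_moves)))

Running this shows the circle described above: as soon as one player’s behaviour becomes predictable, the other exploits it, which changes the first player’s best response in turn, so the joint behaviour flips back and forth in runs rather than converging.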

Second-order change blindness


In recent talks in Warsaw (IACAP) and Krakow (ASSC), I sketched some experimental designs that would allow us to see whether visual experience is backward-looking or forward-looking (do we experience things as they were, or as they will be? Or neither?).  When I shared the early form of these designs with Matt Jaquiery a few years back, he pointed out that they assumed an affirmative answer to a question which had not yet been answered (or asked) in the experimental literature: do people suffer from change blindness with respect to second-order spatio-temporal visual properties?  Specifically, will the usual distractors (white flash, “mud splashes”, etc) result in subjects failing to notice a change of trajectory of a visual object?  Matt then designed and conducted a web-based experiment to answer this question.  Nora Andermane helped out by running a lab-based version.  The answer is yes.  A paper presenting our results is now under review with PLoS ONE.  The bioRxiv preprint (“Trajectory changes are susceptible to change blindness manipulations”) is available now (comments welcome!):

https://www.biorxiv.org/content/early/2018/08/13/391359.

Here’s the abstract:

People routinely fail to notice that things have changed in a visual scene if they do not perceive the changes in the process of occurring, a phenomenon known as ‘change blindness’. The majority of lab-based change blindness studies use static stimuli and require participants to identify simple changes such as alterations in stimulus orientation or scene composition. This study uses a ‘flicker’ paradigm adapted for dynamic stimuli which allowed for both simple orientation changes and more complex trajectory changes. Participants were required to identify a moving rectangle which underwent one of these changes against a background of moving rectangles which did not. The results demonstrated that participants’ ability to correctly identify the target deteriorated with the presence of a visual mask and a larger number of distractor objects, consistent with findings in previous change blindness work. The study provides evidence that the flicker paradigm can be used to induce change blindness with dynamic stimuli, and that changes to predictable trajectories are detected or missed in a similar way to orientation changes.
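
For concreteness, here is a minimal sketch (my reconstruction, in Python; not the authors’ experiment code) of the trial structure the abstract describes: a field of moving rectangles with interleaved blank mask intervals, where the single target rectangle changes trajectory only while the mask is up, so the change itself is never visible. The frame counts, the 45° turn, and the distractor count are all hypothetical placeholders.

    import math
    import random
    from dataclasses import dataclass

    @dataclass
    class Rect:
        x: float
        y: float
        heading: float        # direction of motion, in radians
        speed: float = 2.0

        def step(self):
            self.x += self.speed * math.cos(self.heading)
            self.y += self.speed * math.sin(self.heading)

    def run_trial(n_distractors=7, display_frames=20, mask_frames=5, cycles=4):
        rects = [Rect(random.uniform(0, 400), random.uniform(0, 400),
                      random.uniform(0, 2 * math.pi))
                 for _ in range(n_distractors + 1)]
        target = rects[0]  # the one rectangle that will change trajectory
        for _ in range(cycles):
            for _ in range(display_frames):   # stimulus interval: rects visible, moving
                for r in rects:
                    r.step()
            for _ in range(mask_frames):      # mask interval: blank screen / "mud splash"
                pass
            # The trajectory change is applied during the mask, so participants
            # can only compare pre- and post-change motion, never see the change occur.
            target.heading += math.radians(45)
        return rects

    random.seed(0)
    rects = run_trial()
    print(f"Target ended at ({rects[0].x:.0f}, {rects[0].y:.0f})")

The participant’s task would then be to report which rectangle changed; on the abstract’s findings, accuracy should fall as the mask is introduced and as distractors are added, just as with static change-blindness stimuli.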

Simon Bowes: Natural Minds

Congratulations to Simon Bowes, who just passed his DPhil viva with (very) minor corrections.  The internal examiner was Blay Whitby, and the external examiner was Robin Hendry.

Natural Minds: Summary

My project is an empirically informed investigation of the philosophical problem of mental causation, and simultaneously a philosophical investigation of the status of cognitive scientific generalisations. If there is such a thing as mental causation (mental states making things happen qua mental states, not as unnecessary accompaniments of physical causes the full description of which requires reference only to fundamental physical entities and their interactions), and if we can classify the mental states involved in these causes in a way useful for making predictions and giving explanations (that is, if there can be a science of mental states), then these states will be natural kinds, in a sense that it will be part of my task to spell out. The second part of my task is to say how the scientific statements made using these mental kinds are not susceptible to being reduced to statements about physical kinds, and in fact require taking into account facts at many levels of explanation, including the biological and social levels. Lastly, I will be setting out the case for Virtual Machine Functionalism being the correct account of the relationship between cognitive states and the broader physical world.
A theme of this thesis is that although there may be something problematic with traditional accounts of natural kinds and representations when it comes to contemporary cognitive science, this is no reason for thinking that those terms are not useful; we should refine rather than eliminate them. Some may say that whether or not to retain the use of such technical terms is not much more than a matter of taste, as what is important is the understanding, not the name. So, many may prefer to lose the name in order to distance themselves from the details of the theories associated with them. I think, however, that something would be lost in our understanding if we reject these terms and the theoretical understanding they describe, something that perhaps was there before the wrong-headed theoretical details that came to be associated with the terms. The accumulations of mistaken theorising should be thrown out with the murky bathwater. Some terms that have been coined in the development of our understanding of the mind, such as ‘qualia’, should indeed be washed away, but others should be dried off and clothed anew.
Broadly speaking, my argument will be that to square the widely held but somewhat contradictory intuitions of Physicalism and Anti-reductionism regarding mental states will require rejection or modification of two other commonly held intuitions, namely Physical Causal Closure (some take this as necessary for Physicalism) and Supervenience (with Physical Realisation).
Another way of stating my aim is in terms of defending the intuitive distinction between metaphorical and literal uses of intentional vocabulary, such as ‘wanting’ and ‘trying’. I have been surprised in discussions with colleagues of a mathematical and scientific bent, who take a physicalistic stance on questions of consciousness, that this distinction is seen by them as purely verbal; they see no meaningful difference between saying of a drop of water that it is ‘trying’ to get to the bottom of the window pane, and using that word to say of a person that he or she is trying to get to the top of the mountain. Much of what follows is an attempt to describe a metaphysics that is rigorously materialist and scientific, but in which the difference between the two cases has a natural place.
In brief, the difference lies in the notion of intentionality; the fact that in the case of my desiring something, there is a mental state ‘in’ me that has evolved for the purpose of directing my actions towards a state of affairs ‘out there’ in the world. Such states are things that can be scientifically studied, and a scientific account of human action would be incomplete if it did not refer to such states. In the case of the water drop, there are no similar states without which the scientific understanding of water droplet action would be incomplete. The temptation to elide the distinction between intentional and non-intentional descriptions is based on a simplistic belief that since all causation is physical, there is no meaningful distinction between the kinds of causes that make water drops drop, and those that make climbers climb. This results from the fact that it is sometimes felt that referring to such things as personal agency in intentional explanations of action is to allow in a disagreeable form of dualism. I disagree, and argue that a complete, physicalistic, scientific account of human behaviour must include reference to irreducible, mental kinds, such as beliefs and desires.
The form of the argument follows the content, having as it does Natural Kinds at its centre, and arguing as it does that Natural Kinds are at the centre of the web of concepts that form our understanding of mind and its place in the natural world. The starting point is simple folk explanations of human actions, for example, ‘He ate the apple because…’ followed by a set of conditions including combinations of beliefs and desires that together constitute sufficient reasons for the act of eating. Many would say such purported explanations are mere folk tales, fictions that mask our ignorance of the true story, which will, when we know how to tell it, have a much reduced cast of characters, an exclusive set of ‘purely’ physical types.
This is well trodden, and by now quite muddy, ground. Enter the various adherents of the new ‘embodied cognitive science’. However, it is not clear whose side they are on, whether they could tip the balance in favour of one side or the other, or indeed whether they will speak with one voice on this question. One of the aims of the present piece is to analyse what positions within the debate the embodiment theorist is most likely to take. The triangulation points for mapping this terrain will be arguments for the autonomy of sciences of the mind, where Natural Kinds will take centre stage, and counterarguments that rely on the notion of supervenience. An account of Natural Kinds, which I call the Topographical view, will be outlined, which, I claim, avoids the problems of other accounts, and is suitable for use in the generalisations constructed in embodied cognitive science. Following that, we will plug this account into the debate around the autonomy of special sciences in general, and the problem of mental causation in particular. Flowing from that, the discussion will broaden out into an investigation of causation, including a refined understanding of causal closure and explanation. After applying the results of these discussions to our understanding of the supervenience relation, a defensible account of emergentism will be given. Next, we will move on to laying out a picture of the kinds of properties of mental states that may be referred to in explanations of rational action, namely, the representational contents of mental states. In order to understand the nature of these states, the feedback dynamics between hierarchically structured levels of cognition involved in their evolution will be emphasised, leading to a picture of embodied cognition that is broad and externalist as opposed to narrow and internalistic. Finally, we will look at consciousness, as a necessary accompaniment of true intentional action, as opposed to behaviour of which intentional language may be used metaphorically, showing that subjects of intentional actions are emergent from brain/body/world dynamics. We will finish with a look at the implications of the refined functionalist account defended herein for the metaphysical notions we started with. The conclusion drawn will be that we can indeed refer to genuine mental causes which ground the non-metaphorical use of intentional explanations. I will end by sketching some implications for the notion of free will.

 

Robot opera: Robert Gyorgyi interviews Ron Chrisley

Robert Gyorgyi, a Music student here at Sussex, recently interviewed me for his dissertation on robot opera.  He asked me about my recent collaborations, in which I programmed Nao robots to perform in operas composed for them.  Below is the transcript.


Interview with Dr Ron Chrisley, 20 April 2018, 12:00, University of Sussex

Bold text: Interviewer (Robert Gyorgyi); [R]: Dr Ron Chrisley

NB: The names ‘Ed’ and ‘Evelyn’ often come up within the interview. ‘Ed’ refers to Ed Hughes, the composer of Opposite of Familiarity (2017), and ‘Evelyn’ to Evelyn Ficarra, composer of O, One (2017).

How did you hear about the project? Was it a sort of group brainstorming or was the idea proposed to you?

[R] Evelyn approached me; then we had a meeting at which she explained her vision to me.

These NAO robots are social robots designed to speak, not to sing. Was the assignment of their new task your main challenge? How did you do that?

Machine Messiah: Lessons for AI in “Destination: Void”

Tomorrow is the first day of a two-day conference to be held at Jesus College, Cambridge on the topic: “Who’s afraid of the Super-Machine?  AI in Sci-Fi Film and Literature” (https://science-human.org/upcoming/), hosted by the Science & Human Dimension division of the AI & The Future of Humanity Project.


I’m speaking on Friday:

Machine Messiah: Lessons for AI in Destination: Void

In Destination: Void (1965), Frank Herbert anticipates many current and future ethical, social and philosophical issues arising from humanity’s ambitions to create artificial consciousness.  Unlike in his well-known Dune milieu, which explicitly sidesteps such questions via the inclusion of a universally-respected taboo against artificial intelligence, the moon-based scientist protagonists in Destination: Void explicitly aim to create artificial consciousness, despite previous disastrous attempts.  A key aspect of their strategy is to relinquish direct control of the process of creation, instead designing combinations of resources (a cloned spaceship crew of scientists, engineers, psychiatrists and chaplains with interlocking personality types) and catalytic situations (a colonising space mission that is, unknown to the clone crew, doomed with scheduled crises) that the moon-based scientists hope will impel the crew members to bring about, if not explicitly design, an artificial consciousness based in the ship’s computer.  As with Herbert’s other works, there is a strong emphasis on the messianic and the divine, but here it is in the context of a superhuman machine, and the ethics of building such a machine.  I will aim to extract from Herbert’s incredibly prescient story several lessons, ranging from the practical to the theological, concerning artificial consciousness, including: the engineering of emergence and conceptual change; intelligent design and “Adam as the first AI”; the naturalisation of spiritual discourse; and the doctrine of the Imago Dei as a theological injunction to engage in artificial consciousness research.

Revisionism about Qualia: Prospects for a Reconciliation Between Physicalism and Qualia

 

On January 30th I’ll be presenting joint work with Aaron Sloman (“Revisionism about Qualia: Prospects for a Reconciliation Between Physicalism and Qualia”) at a conference in Antwerp on Dan Dennett’s work in philosophy of mind (sponsored by the Centre for Philosophical Psychology and European Network for Sensory Research).  Both Aaron and Dan will be in attendance.  I don’t have an abstract of our talk, but it will be based on a slimmed-down version of our 2016 paper (with some additions, hopefully taking into account some recent developments in Dan’s position on qualia).

The official deadline for registration has passed, but if you are interested in attending perhaps Bence Nanay, the organiser, can still accommodate you?  Below please find the list of speakers and original calls for registration and papers.

Centre for Philosophical Psychology and European Network for Sensory Research

Call for registration 

Conference with Daniel Dennett on his work in philosophy of mind. January 30, 2018. 

Speakers:

  • Daniel Dennett (Tufts)
  • Elliot Carter (Toronto)
  • Ron Chrisley and Aaron Sloman (Sussex)
  • Krzysztof Dolega (Bochum)
  • Markus Eronen (Leuven)
  • Csaba Pleh (CEU)
  • Anna Strasser (Berlin)

This conference accompanies Dennett’s delivery of the 7th Annual Marc Jeannerod Lecture (attendance at this public lecture is free). 

Registration (for the conference, not the public lecture): 100 Euros (including conference dinner – negotiable if you don’t want conference dinner). Send an email to Nicolas Alzetta (nalzetta@yahoo.com) to register. Please register by December 21. 

Workshop with Daniel Dennett, January 30, 2018

Call for papers!

Daniel Dennett will give the Seventh Annual Marc Jeannerod Lecture (on empirically grounded philosophy of mind) in January 2018. To accompany this lecture, the University of Antwerp organizes a workshop on Dennett’s philosophy of mind on January 30, 2018, where he will be present.

There are no parallel sessions. Only blinded submissions are accepted.

Length: 3000 words. Single spaced!

Deadline: October 15, 2017. Papers should be sent to nanay@berkeley.edu