Simon Bowes: Natural Minds

Congratulations to Simon Bowes, who just passed his DPhil viva with (very) minor corrections.  The internal examiner was Blay Whitby, and the external examiner was Robin Hendry.

Natural Minds: Summary

My project is an empirically informed investigation of the philosophical problem of mental causation, and simultaneously a philosophical investigation of the status of cognitive scientific generalisations. If there is such a thing as mental causation (mental states making things happen qua mental states, not as unnecessary accompaniments of physical causes the full description of which requires reference only to fundamental physical entities and their interactions), and if we can classify the mental states involved in these causes in a way useful for making predictions and giving explanations (that is, if there can be a science of mental states), then these states will be natural kinds, in a sense that it will be part of my task to spell out. The second part of my task is to say how the scientific statements made using these mental kinds are not susceptible to being reduced to statements about physical kinds, and in fact require taking into account facts at many levels of explanation, including the biological and social levels. Lastly, I will set out the case for Virtual Machine Functionalism being the correct account of the relationship between cognitive states and the broader physical world.
A theme of this thesis is that although there may be something problematic about traditional accounts of natural kinds and representations when it comes to contemporary cognitive science, this is no reason for thinking that those terms are not useful; we should refine rather than eliminate them. Some may say that whether or not to retain such technical terms is little more than a matter of taste, since what is important is the understanding, not the name; many may therefore prefer to lose the name in order to distance themselves from the details of the theories associated with it. I think, however, that something would be lost in our understanding if we rejected these terms and the theoretical understanding they describe, something that was perhaps there before the wrong-headed theoretical details came to be associated with the terms. The accumulations of mistaken theorising are the murky bathwater that should be thrown out. Some terms that have been coined in the development of our understanding of the mind, such as ‘qualia’, should indeed be washed away with it, but others should be dried off and clothed anew.
Broadly speaking, my argument will be that squaring the widely held but somewhat contradictory intuitions of Physicalism and Anti-reductionism regarding mental states requires the rejection or modification of two other commonly held intuitions, namely Physical Causal Closure (which some take to be necessary for Physicalism) and Supervenience (with Physical Realisation).
Another way of stating my aim is in terms of defending the intuitive distinction between metaphorical and literal uses of intentional vocabulary, such as ‘wanting’ and ‘trying’. I have been surprised to find, in discussions with colleagues of a mathematical and scientific bent who take a physicalistic stance on questions of consciousness, that they regard this distinction as purely verbal: they see no meaningful difference between saying of a drop of water that it is ‘trying’ to get to the bottom of the window pane, and using that word to say of a person that he or she is trying to get to the top of the mountain. Much of what follows is an attempt to describe a metaphysics that is rigorously materialist and scientific, but in which the difference between the two cases has a natural place.
In brief, the difference lies in the notion of intentionality: the fact that, in the case of my desiring something, there is a mental state ‘in’ me that has evolved for the purpose of directing my actions towards a state of affairs ‘out there’ in the world. Such states are things that can be scientifically studied, and a scientific account of human action would be incomplete if it did not refer to them. In the case of the water drop, there are no similar states without which the scientific understanding of water droplet behaviour would be incomplete. The temptation to elide the distinction between intentional and non-intentional descriptions is based on the simplistic belief that, since all causation is physical, there is no meaningful distinction between the kinds of causes that make water drops drop and those that make climbers climb. Behind this lies the feeling that to refer to such things as personal agency in intentional explanations of action is to allow in a disagreeable form of dualism. I disagree, and argue that a complete, physicalistic, scientific account of human behaviour must include reference to irreducible mental kinds, such as beliefs and desires.
The form of the argument follows the content: it has Natural Kinds at its centre, and argues that Natural Kinds are at the centre of the web of concepts that forms our understanding of mind and its place in the natural world. The starting point is simple folk explanations of human actions, for example, ‘He ate the apple because…’ followed by a set of conditions, including combinations of beliefs and desires, that together constitute sufficient reasons for the act of eating. Many would say such purported explanations are mere folk tales, fictions that mask our ignorance of the true story, which will, when we know how to tell it, have a much reduced cast of characters: an exclusive set of ‘purely’ physical types.
This is well trodden, and by now quite muddy, ground. Enter the various adherents of the new ‘embodied cognitive science’. It is not clear, however, whose side they are on, whether they could tip the balance in favour of one side or the other, or indeed whether they will speak with one voice on this question. One of the aims of the present piece is to analyse which positions within the debate the embodiment theorist is most likely to take. The triangulation points for mapping this terrain will be arguments for the autonomy of the sciences of the mind, where Natural Kinds will take centre stage, and counterarguments that rely on the notion of supervenience. An account of Natural Kinds, which I call the Topographical view, will be outlined; I claim that it avoids the problems of other accounts and is suitable for use in the generalisations constructed in embodied cognitive science. Following that, we will plug this account into the debate around the autonomy of the special sciences in general, and the problem of mental causation in particular. Flowing from that, the discussion will broaden into an investigation of causation, including a refined understanding of causal closure and explanation. After applying the results of these discussions to our understanding of the supervenience relation, a defensible account of emergentism will be given. Next, we will lay out a picture of the kinds of properties of mental states that may be referred to in explanations of rational action, namely, the representational contents of mental states. In order to understand the nature of these states, the feedback dynamics between hierarchically structured levels of cognition involved in their evolution will be emphasised, leading to a picture of embodied cognition that is broad and externalist as opposed to narrow and internalist. Finally, we will look at consciousness as a necessary accompaniment of true intentional action, as opposed to behaviour of which intentional language can be used only metaphorically, showing that the subjects of intentional actions are emergent from brain/body/world dynamics. We will finish with a look at the implications of the refined functionalist account defended herein for the metaphysical notions we started with. The conclusion drawn will be that we can indeed refer to genuine mental causes, which ground the non-metaphorical use of intentional explanations. I will end by sketching some implications for the notion of free will.



What philosophy can offer AI


My piece on “What philosophy can offer AI” is now up at AI firm LoopMe’s blog. This is part of the run-up to my speaking at their event, “Artificial Intelligence: The Future of Us”, to be held at the British Museum next month.  Here’s what I wrote (the final gag is shamelessly stolen from Peter Sagal of NPR’s “Wait Wait… Don’t Tell Me!”):

Despite what you may have heard, philosophy at its best consists in rigorous thinking about important issues, and careful examination of the concepts we use to think about those issues.  Sometimes this analysis is achieved through considering potential exotic instances of an otherwise everyday concept, and considering whether the concept does indeed apply to that novel case — and if so, how.

In this respect, artificial intelligence (AI), of the actual or sci-fi/thought experiment variety, has given philosophers a lot to chew on, providing a wide range of detailed, fascinating instances to challenge some of our most dearly-held concepts:  not just “intelligence”, “mind”, and “knowledge”, but also “responsibility”, “emotion”, “consciousness”, and, ultimately, “human”.

But it’s a two-way street: Philosophy has a lot to offer AI too.

Examining these concepts allows the philosopher to notice inconsistency, inadequacy or incoherence in our thinking about mind, and the undesirable effects this can have on AI design.  Once the conceptual malady is diagnosed, the philosopher and AI designer can work together (they are sometimes the same person) to recommend revisions to our thinking and designs that remove the conceptual roadblocks to better performance.

This symbiosis is most clearly observed in the case of artificial general intelligence (AGI), the attempt to produce an artificial agent that is, like humans, capable of behaving intelligently in an unbounded number of domains and contexts.

The clearest example of the need for philosophical expertise when doing AGI concerns machine consciousness and machine ethics: at what point does an AGI’s claim to mentality become real enough that we incur moral obligations toward it?  Is it at the same time as, or before, the point at which we would say it is conscious?  Or when it has moral obligations of its own?  And is it moral for us to get to the point where we have moral obligations to machines?  Should that even be AI’s goal?

These are important questions, and it is good that they are being discussed more, even though the possibilities they consider aren’t really on the horizon.

Less well-known is that philosophical sub-disciplines other than ethics have been, and will continue to be, crucial to progress in AGI.  

It’s not just philosophers who say so; quantum computation pioneer and Oxford physicist David Deutsch agrees: “The whole problem of developing AGIs is a matter of philosophy, not computer science or neurophysiology”.  That “not” might overstate things a bit (I would soften it to “not only”), but it’s clear that Deutsch’s vision of philosophy’s role in AI is not limited to being a kind of ethics panel that assesses the “real work” done by others.

What’s more, philosophy’s relevance doesn’t just kick in once one starts working on AGI, which substantially increases its market share.  It’s an understatement to say that AGI is a small subset of AI in general: nearly all of the AI at work now providing relevant search results, classifying images, driving cars, and so on is not domain-independent AGI; it is technological, practical AI that exploits the particularities of its domain and relies on human support to compensate for its lack of autonomy in producing a working system. But philosophical expertise can be of use even to this more practical, less Hollywood, kind of AI design.

The clearest point of connection is machine ethics.  

But here the questions are not the hypothetical ones about whether a (far-future) AI has moral obligations to us, or we to it.  Rather, the questions will be more like these:

– How should we trace our ethical obligations to each other when the causal link between us and some undesirable outcome for another is mediated by a highly complex information process that involves machine learning and apparently autonomous decision-making?

– Do our previous ethical intuitions about, e.g., product liability apply without modification, or do we need some new concepts to handle these novel levels of complexity and (at least apparent) technological autonomy?

As with AGI, the connection between philosophy and technological, practical AI is not limited to ethics.  For example, different philosophical conceptions of what it is to be intelligent suggest different kinds of designs for driverless cars.  Is intelligence a disembodied ability to process symbols?  Is it merely an ability to behave appropriately?  Or is it, at least in part, a skill or capacity to anticipate how one’s embodied sensations will be transformed by the actions one takes?  
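To make that last conception concrete, here is a minimal toy sketch of an agent learning a forward model: a mapping from its current sensation and action to a predicted next sensation, improved by reducing prediction error. It is purely illustrative; the linear “world”, the parameter names, and the simple delta-rule update are my own simplifying assumptions, not any particular theory’s proposal.

```python
# Toy sketch of intelligence as sensorimotor anticipation (illustrative only).
# The agent learns a forward model s_next ≈ w_s*s + w_a*a by nudging its
# parameters to reduce prediction error (a simple delta/LMS learning rule).

import random

def world(s, a):
    # Hidden world dynamics the agent must come to anticipate
    # (an assumption of this toy: a fixed linear function).
    return 0.9 * s + 0.5 * a

w_s, w_a = 0.0, 0.0   # the agent's forward-model parameters, initially ignorant
lr = 0.05             # learning rate

for _ in range(10000):
    s = random.uniform(-1.0, 1.0)    # current sensation
    a = random.uniform(-1.0, 1.0)    # action taken
    predicted = w_s * s + w_a * a    # anticipated next sensation
    error = world(s, a) - predicted  # prediction error
    w_s += lr * error * s            # reduce future error by moving the
    w_a += lr * error * a            # model toward what was observed

print(f"learned: s_next ≈ {w_s:.2f}*s + {w_a:.2f}*a  (true: 0.9, 0.5)")
```

On the first conception, a design would centre on symbol manipulation; on the third, a loop like the one above, suitably scaled up, would be the heart of the architecture.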

Contemporary, sometimes technical, philosophical theories of cognition are a good place to start when considering which way of conceptualising the problem and solution will be best for a given AI system, especially in the case of designs that have to be truly groundbreaking to be competitive.

Of course, it’s not all sweetness and light. It is true that there has been some philosophical work that has obfuscated the issues around AI, thereby unnecessarily hindering progress. So, to my recommendation that philosophy play a key role in artificial intelligence, terms and conditions apply.  But don’t they always?

Guessing Games and The Power of Prediction

The CogPhi reading group resumes next week.  CogPhi offers the chance to read through and discuss recent literature in the Philosophy of Artificial Intelligence and Cognitive Science.  Each week a different member of the group leads the others through the chosen reading for that week. This term we’ll be working through Andy Clark’s new book on predictive processing, Surfing Uncertainty: Prediction, Action and the Embodied Mind.

CogPhi meets fortnightly, sharing the same time slot and room as E-Intentionality, which meets in the alternate weeks. Although CogPhi announcements will be made on the E-Int mailing list, attendance at one seminar series is not required for attendance at the other.  CogPhi announcements will also be made here.

Next week, October 20th, from 13:00-13:50 in Freeman G31, Jonny Lee will lead the discussion of the Introduction (“Guessing Games”) and Chapter 1 (“Prediction Machines”).  Have your comments and questions ready beforehand.  In fact, feel free to post them in advance, here, as comments on this post.

EDIT:  Jonny sent out the following message yesterday, the 19th:

It’s been brought to my attention that covering both the introduction and chapter 1 might be too much material for one meeting. As such, let’s say we’ll just stick to the introduction. If you’ve already read chapter 1, apologies, but you’ll be ahead of the game. On the other hand, if the amount of reading was putting you off, you’ve now only got 10 pages to get through!


The Mereological Constraint



E-Intentionality, February 26th 2016, Pevensey 2A11, 12:00-12:50

Ron Chrisley: The Mereological Constraint

I will discuss what I call the mereological constraint, which can be traced back at least as far as Putnam’s writings in the 1960s, and which is, roughly, the idea that a mind cannot have another mind as a proper constituent.  I show that the implications (benefits?) of such a constraint, if true, would be far-ranging, allowing one to finesse the Chinese room and Chinese nation arguments against computationalism, reject certain notions of extended mind, reject most group minds, make a ruling on the modality of sensory substitution, etc.  But is the mereological conjecture true?  I will look at some possible arguments for the conjecture, including one that appeals to the fact that rationality must be grounded in the non-rational, and one that attempts to derive the constraint from a comparable one concerning the individuation of computational states.  I will also consider an objection to the conjecture: that it would confer on us a priori knowledge of facts that are, intuitively, empirical.

Audio (28.5 MB, .mp3)