Wed 16 Apr, 1:30-3:00, Fulton 101
Simon McGregor: What Happens When Reasoning Has Side Effects?
The principle of embodiment in cognitive science emphasises that the main object of cognition is to reason about systems which the agent itself is part of and can affect through its actions. I propose that particular real-world circumstances can undermine the assumption that the process of reasoning does not affect the systems being reasoned about, and explore why this is a problem for typical conceptions of rationality. We will also discuss how Sorensen’s concept of epistemic blind spots could affect mathematical reasoning, in light of the Lucas-Penrose argument about human transcendence of mechanism. But it will come as a surprise.
Wed 9th Apr, 1:30-3:00
Keith Wilson: The Argument from Looks: A Plea for Representational Humility
The assumption that perceptual experience (seeing, hearing, and so on) is fundamentally representational is common in much recent philosophy and cognitive science. It is an assumption, however, that is rarely argued for or examined in detail. According to this assumption, perceptual experience (as distinct from judgement or belief) represents the world as being, or as seeming to be, some particular way. That is, each experience has a determinate set of truth conditions. In this paper, I present an argument, inspired by Travis (2004), that aims to challenge this orthodoxy, instead claiming that there is no single representational content of experience. Consequently, whilst the argument does not entirely rule out the existence of perceptual representations, it does highlight a fundamental tension in the way philosophers and scientists of perception have thought about such representation that severely constrains its explanatory role, raising a number of questions that have yet to be satisfactorily answered by proponents of the representational view.
Wed 12th March, 12:30-14:00, Richmond AS03
Ron Chrisley: Epistemic and Inferential Consistency in Knowledge-Based Systems
One way to understand the knowledge-based systems approach to AI is as the attempt to give an artificial agent knowledge (or give it the ability to act like a human that has that knowledge) by putting linguaform representations of that knowledge into the agent’s database (its knowledge base). The agent can then add to its knowledge base by applying rules of inference to the sentences in it. An important desideratum for this process is that only true sentences are added (else they cannot be knowledge). Since typical rules of inference would allow the addition of any sentences, including false ones, to an inconsistent database, care must be taken to ensure that knowledge bases are consistent. Much effort has been expended on devising tractable ways to do this (e.g., truth maintenance systems, assumption-based truth maintenance systems, partitioned paraconsistent knowledge bases that are locally consistent but may be globally inconsistent, etc.). I argue that for certain kinds of knowledge representation languages (autoepistemic logics), a further constraint, which I call epistemic consistency, must be met. I argue for the need to check for epistemic consistency despite the fact that, unlike for consistency simpliciter, failing to meet this constraint is not a logical possibility. The most basic form of checking that this constraint is met is to ensure that there are no sentences in an agent’s knowledge base that constitute what Sorensen has called an epistemic blindspot for that agent (e.g., “It is raining, but Hal doesn’t know it”, for the agent Hal). This constraint must be maintained both when initialising the knowledge base, and when applying rules of inference, a fact which requires generalising from Sorensen’s notion of an epistemic blindspot to the concept of epistemic blindspot sets (a move that is independently motivated in applying Sorensen’s surprise examination paradox solution to the strengthened paradox of the toxin).
In addition, and along similar lines, I argue that another form of consistency, which I call inferential consistency, must be maintained. Inferential consistency does not involve epistemically problematic sentences, but rather epistemically problematic inferences, such as ones concerning the number of inferences one has made. I consider one way of dealing with such cases, which has the alarming consequence of rendering all rules of inference strictly invalid. Specifically, I argue that the validity of a rule of inference can only be retained if a semantic restriction (that of excluding reference to the inference process itself) is placed on the sentences over which it can operate.
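The blindspot constraint described above can be sketched in code. The following is an illustrative toy only, not the talk’s formalism: the `KnowledgeBase` class, the `tell` method, and the tuple encoding of Moore-style “p, but the agent doesn’t know that p” sentences are all invented for this example.

```python
# Illustrative sketch of an "epistemic consistency" check layered on top of
# ordinary consistency checking. Sentences are plain strings, except that
# ("blindspot", p) encodes the Moore-style sentence
# "p, but <agent> doesn't know that p".

class KnowledgeBase:
    def __init__(self, agent_name):
        self.agent = agent_name
        self.sentences = set()

    def is_blindspot(self, sentence):
        # "p, but <agent> doesn't know p" may be true, but it cannot be
        # knowledge *for that agent*: once the agent accepts it, its second
        # conjunct is false. So it is barred from this agent's own KB.
        return isinstance(sentence, tuple) and sentence[0] == "blindspot"

    def tell(self, sentence):
        if self.is_blindspot(sentence):
            return False  # refuse: epistemically inconsistent for this agent
        self.sentences.add(sentence)
        return True

hal = KnowledgeBase("Hal")
print(hal.tell("It is raining"))                    # ordinary fact: accepted
print(hal.tell(("blindspot", "It is raining")))     # blindspot: rejected
```

Note that the check is agent-relative: the same blindspot sentence could unproblematically appear in some *other* agent’s knowledge base, which is what distinguishes epistemic consistency from consistency simpliciter.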
Working on thesis.
Working on Joint Session talk. Thought my subject – panpsychism and the composition problem – would be a welcome change from natural kinds and downward causation, but it turns out that deproblematising composition and adding the idea of the mind being composed of multiple virtual machines is a good way of arguing for non-reductive, downwardly causal mental properties.
Working on talk for E-int and Joint Session.
Went to 1st person approach conference in Berkeley – changed plan and gave a response to Susan Stewart’s criticism of synthetic phenomenology work.
Gave talk last week to philosophy faculty research progress meeting.
Going to Sweden on Monday till August.
Supervising MSc student – implementing a web browsing advisor built on an architecture inspired by Bernard Baars’s global workspace theory.
Preparing for presentation & working on thesis.
1 – The philosophy of mind reading group (see http://www.ifl.pt/index.php?id1=3&id2=8) had a meeting on a draft chapter of my book: Cognitive Technologies in Everyday Life: Tools for Thinking and Feeling. It generated some interesting discussion and it was very nice for me after all the time I’ve put into this.
2 – I’ve started organizing a research-in-progress group modelled on – you’ve guessed it – E-I, which will hopefully meet for the first time next week.
3 – Trying to finish a review for JCS of The Crucible of Consciousness by Zoltan Torey, which is supposed to be in on Friday.
Working on Joint Session talk.