Epistemic and Inferential Consistency in Knowledge-Based Systems

Ron Chrisley

Wed 12th March, 12:30-14:00, Richmond AS03

One way to understand the knowledge-based systems approach to AI is as the attempt to give an artificial agent knowledge (or give it the ability to act like a human that has that knowledge) by putting linguaform representations of that knowledge into the agent’s database (its knowledge base).  The agent can then add to its knowledge base by applying rules of inference to the sentences in it.  An important desideratum for this process is that only true sentences are added (else they cannot be knowledge).  Since typical rules of inference would allow the addition of any sentence, including false ones, to an inconsistent database, care must be taken to ensure that knowledge bases are consistent.  Much effort has been expended on devising tractable ways to do this (e.g., truth maintenance systems, assumption-based truth maintenance systems, partitioned paraconsistent knowledge bases that are locally consistent but may be globally inconsistent, etc.).

I argue that for certain kinds of knowledge representation languages (autoepistemic logics), a further constraint, which I call epistemic consistency, must be met.  I argue for the need to check for epistemic consistency despite the fact that, unlike for consistency simpliciter, failing to meet this constraint is not a logical possibility.  The most basic form of checking that this constraint is met is to ensure that there are no sentences in an agent’s knowledge base that constitute what Sorensen has called an epistemic blindspot for that agent (e.g., “It is raining, but Hal doesn’t know it”, for the agent Hal).  This constraint must be maintained both when initialising the knowledge base and when applying rules of inference, a fact that requires generalising from Sorensen’s notion of an epistemic blindspot to the concept of epistemic blindspot sets (a move that is independently motivated in applying Sorensen’s surprise examination paradox solution to the strengthened paradox of the toxin).
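The basic blindspot check described above can be sketched in code. This is a minimal illustration, not anything from the talk itself: it assumes a toy representation in which sentences are nested tuples — `("and", p, q)`, `("not", p)`, and `("K", agent, p)` for "agent knows p" — and all function names are hypothetical.

```python
# A minimal sketch of an epistemic blindspot check. Sentences are nested
# tuples: ("and", p, q), ("not", p), ("K", agent, p) for "agent knows p",
# with atomic sentences as strings. All names here are illustrative, not
# drawn from any existing knowledge representation library.

def is_blindspot_for(sentence, agent):
    """True if `sentence` has the Moore-paradoxical form
    'p, but agent does not know that p' (in either conjunct order)."""
    if not (isinstance(sentence, tuple) and sentence[0] == "and"):
        return False
    _, left, right = sentence
    for p, q in ((left, right), (right, left)):
        if q == ("not", ("K", agent, p)):
            return True
    return False

def check_epistemic_consistency(kb, agent):
    """Return the sentences in `kb` that are epistemic blindspots for `agent`."""
    return [s for s in kb if is_blindspot_for(s, agent)]

kb = [
    ("and", "raining", ("not", ("K", "Hal", "raining"))),  # blindspot for Hal
    ("K", "Hal", "sunny"),
    ("and", "cold", ("not", ("K", "Dave", "cold"))),       # blindspot for Dave, not Hal
]
print(check_epistemic_consistency(kb, "Hal"))
# [('and', 'raining', ('not', ('K', 'Hal', 'raining')))]
```

Note that the offending sentence may well be true — that is exactly what makes it a blindspot rather than a contradiction — so a standard consistency check would not catch it.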
In addition, and along similar lines, I argue that another form of consistency, which I call inferential consistency, must be maintained.  Inferential consistency does not involve epistemically problematic sentences, but rather epistemically problematic inferences, such as ones concerning the number of inferences one has made.  I consider one way of dealing with such cases, which has the alarming consequence of rendering all rules of inference strictly invalid.  Specifically, I argue that the validity of a rule of inference can only be retained if a semantic restriction (that of excluding reference to the inference process itself) is placed on the sentences over which it can operate.
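The proposed semantic restriction can be illustrated with a toy forward-chaining loop that simply declines to apply a rule to any sentence referring to the inference process itself. This is my own hypothetical sketch, not the talk's proposal in detail: the tuple representation, the predicate `inferences-made` standing in for self-referential vocabulary, and the function names are all assumptions made for illustration.

```python
# An illustrative sketch of the semantic restriction: a rule of inference is
# applied only to sentences that make no reference to the inference process.
# Sentences are nested tuples with atoms as strings; the atom
# "inferences-made" is a purely hypothetical stand-in for vocabulary about
# the inference process itself.

SELF_REFERENTIAL_ATOMS = {"inferences-made"}

def refers_to_inference_process(sentence):
    """True if the sentence mentions any inference-process vocabulary."""
    if isinstance(sentence, tuple):
        return any(refers_to_inference_process(part) for part in sentence)
    return sentence in SELF_REFERENTIAL_ATOMS

def forward_chain(kb, rules, max_steps=10):
    """Apply modus ponens over ('if', p, q) conditionals, skipping any rule
    whose premise or conclusion violates the semantic restriction."""
    kb = set(kb)
    for _ in range(max_steps):
        new = set()
        for (_, p, q) in rules:
            if refers_to_inference_process(p) or refers_to_inference_process(q):
                continue  # semantic restriction: the rule is not applied here
            if p in kb and q not in kb:
                new.add(q)
        if not new:
            break
        kb |= new
    return kb

kb = {"raining"}
rules = [
    ("if", "raining", "wet"),
    ("if", "wet", ("K", "Hal", "wet")),
    ("if", "raining", ("odd", "inferences-made")),  # blocked by the restriction
]
result = forward_chain(kb, rules)
print(("odd", "inferences-made") in result)  # False: the restricted rule never fires
```

On this sketch the rule itself is unchanged; validity is preserved by narrowing the set of sentences over which it may operate, which is the shape of the restriction proposed above.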
