Creating The Future: Ethics & Implications

Today MindShare hosts their annual London Huddle (http://www.mindshareworld.com/uk/huddle). As part of the event, at 3pm I’ll be chatting with LoopMe’s Head of Agency, Jack Edmonds, about some of the ethical issues concerning, and implications of, artificial intelligence.



Epistemic Consistency in Knowledge-Based Systems


Today I was informed that my extended abstract, “Epistemic Consistency in Knowledge-Based Systems”, has been accepted for presentation at PT-AI 2017 in Leeds in November. The text of the extended abstract is below.  The copy-paste job I’ve done here loses all the italics, etc.; the proper version is at:

http://sussex.ac.uk/Users/ronc/papers/pt-ai-2017-abstract.pdf

Comments welcome, especially pointers to similar work, papers I should cite, etc.


Epistemic Consistency in Knowledge-Based Systems (extended abstract)

Ron Chrisley
Centre for Cognitive Science,
Sackler Centre for Consciousness Science, and Department of Informatics
University of Sussex, Falmer, United Kingdom ronc@sussex.ac.uk

1 Introduction

One common way of conceiving the knowledge-based systems approach to AI is as the attempt to give an artificial agent knowledge that P by putting a (typically linguaform) representation that means P into an epistemically privileged database (the agent’s knowledge base). That is, the approach typically assumes, either explicitly or implicitly, that the architecture of a knowledge-based system (including initial knowledge base, rules of inference, and perception/action systems) is such that the following sufficiency principle should be respected:

  • Knowledge Representation Sufficiency Principle (KRS Principle): if a sentence that means P is in the knowledge base of a KBS, then the KBS knows that P.
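
Schematically (the notation here is mine, not part of the abstract), writing KB_X for the knowledge base of a KBS X, Means(S, P) for “sentence S means P”, and K(X, P) for “X knows that P”:

```latex
% KRS Principle: a sentence meaning P being in the KB suffices for knowledge that P.
\[
\bigl(\mathrm{Means}(S, P) \wedge S \in \mathrm{KB}_X\bigr) \;\Longrightarrow\; K(X, P)
\]
```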

The KRS Principle is so strong that, although it might be respected by KBSs that deal exclusively with a priori matters (e.g., theorem provers), most if not all empirical KBSs will, at least some of the time, fail to meet it. Nevertheless, it remains an ideal toward which KBS design might be thought to strive.

Accordingly, it is commonly acknowledged that knowledge bases for KBSs should be consistent, since classical rules of inference permit the addition of any sentence to an inconsistent KB. Much effort has therefore been spent on devising tractable ways to ensure consistency or otherwise prevent inferential explosion.

2 Propositional epistemic consistency

However, it has not been appreciated that for certain kinds of KBSs, a further constraint, which I call propositional epistemic consistency, must be met. To explain this constraint, some notions must be defined:

  • An epistemic KBS is one that can represent propositions attributing propositional knowledge to subjects (such as that expressed by “Dave knows the mission is a failure”).
  • An autoepistemic KBS is an epistemic KBS that is capable of representing, and therefore of attributing propositional knowledge to, itself (e.g., “HAL knows that Dave knows that the mission is a failure” in the case of the KBS HAL).

All autoepistemic systems (natural or artificial) suffer from epistemic blindspots (Sorensen, 1984):

  • A proposition P is an epistemic blindspot for a KBS X if P is consistent, but the proposition that X knows that P is not consistent.
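
In the same spirit (again, my notation, not Sorensen’s), writing Con(·) for ordinary logical consistency:

```latex
% P is an epistemic blindspot for X iff P is itself consistent,
% but the claim that X knows P is not.
\[
\mathrm{Blindspot}_X(P) \;\iff\; \mathrm{Con}(P) \,\wedge\, \neg\,\mathrm{Con}\bigl(K(X, P)\bigr)
\]
```

The standard Moore-style example is a proposition of the form “Q, and X does not know that Q”: it may well be true, but X cannot know it, since knowing it would require X both to know Q and to know that X does not know Q, which, given that knowledge is factive, is impossible.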

Thus, if an autoepistemic KBS is to respect the KRS Principle, no epistemic blindspots (for that KBS) can appear in its knowledge base.

Despite this, it is of course not logically impossible that a sentence S expressing an epistemic blindspot for a KBS X may end up in X’s KB. If this were to happen, X would not respect the KRS Principle. Worse, the fact that epistemic blindspots are consistent means that this possibility remains even if X has perfect, ideal methods of normal consistency maintenance. S being in X’s KB yields a kind of inconsistency distinct from normal inconsistency (since it can occur even when X’s KB, including S, is consistent). Accordingly, X’s KB being free of epistemic blindspots for X is a kind of consistency beyond consistency simpliciter; this is what I call propositional epistemic consistency. To ensure that a KBS respects the KRS Principle, then, it is not sufficient to ensure that its KB is consistent in the normal manner; one must also ensure that it is propositionally epistemically consistent.

Ensuring propositional epistemic consistency for a KBS X amounts to taking two precautions:

  1. Ensuring that there are no epistemic blindspots for X in the initial KB;
  2. When any sentence S is about to be added to the KB (via inference, perception, etc.), checking that S is not an epistemic blindspot for X.

Both steps involve checking that a given sentence is not an epistemic blindspot for a given system X. Beyond checking the consistency of S (and the consistency of S with the current KB), this amounts to checking whether it would be a contradiction to suppose that S is known by X. In turn, this amounts to expressing S in conjunctive normal form, where the first conjunct is the proposition P, and the second conjunct is of the form ¬K(x,P), where x refers to X.
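
As a minimal sketch of how the two precautions might look in code (my own illustration, not part of the abstract), assume a toy sentence representation and a hypothetical consistent() oracle for the underlying epistemic logic; all names are illustrative, and the sketch simply names the agent directly, sidestepping the reference problem discussed below.

```python
from dataclasses import dataclass

# Toy sentence representation (hypothetical, for illustration only).

@dataclass(frozen=True)
class Atom:
    name: str

@dataclass(frozen=True)
class Not:
    body: object

@dataclass(frozen=True)
class And:
    left: object
    right: object

@dataclass(frozen=True)
class Knows:
    agent: str
    body: object


def moore_blindspot(q, agent):
    """The canonical blindspot pattern for `agent`: Q, and agent does not know Q.
    E.g. moore_blindspot(Atom("the mission is a failure"), "HAL")."""
    return And(q, Not(Knows(agent, q)))


def consistent(sentences):
    """Placeholder for an ordinary consistency check (e.g. a tableau or SAT
    procedure for the underlying epistemic logic). Assumed, not implemented."""
    raise NotImplementedError


def is_epistemic_blindspot(s, agent):
    """s is an epistemic blindspot for `agent` iff s is consistent on its own,
    but the claim that `agent` knows s is not consistent."""
    return consistent([s]) and not consistent([Knows(agent, s)])


def check_initial_kb(agent, kb):
    """Precaution 1: the initial KB is consistent and contains no blindspots
    for the system itself."""
    return consistent(list(kb)) and not any(
        is_epistemic_blindspot(s, agent) for s in kb)


def admit(s, agent, kb):
    """Precaution 2: admit s to the KB (a set of sentences) only if it is
    consistent with the KB (ordinary consistency) and is not an epistemic
    blindspot for the system itself (propositional epistemic consistency)."""
    if not consistent(list(kb) + [s]):
        return False
    if is_epistemic_blindspot(s, agent):
        return False
    kb.add(s)
    return True
```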

Unfortunately, this last condition (that x in fact refer to X) implies that, unlike for consistency simpliciter, checking for propositional epistemic consistency cannot proceed purely syntactically. Simple consistency is a matter of what holds in all models, and is therefore an a priori matter independent of the state of affairs in the actual world. But whether or not an expression in fact refers to a given individual does depend on the state of affairs in the actual world, and cannot be determined via a priori means alone.

In the face of this apparent intractability, and the fact that it derives from a kind of unrestricted self-reference, one might be tempted to reduce propositional epistemic consistency checking to simple consistency checking, in a way parallel to the way Prior proposes for dealing with the paradox of the liar. Prior suggests that we understand each sentence to be implicitly asserting “this sentence is true” (Prior, 1976). This renders such sentences as “This sentence is not true” straightforwardly false, and thus non-paradoxical. A parallel move would be to suggest that every KBS’s KB is implicitly asserting the negation of every epistemic blindspot for that KBS. This would render every epistemic blindspot for that KBS inconsistent with that KBS’s KB, allowing it to be excluded via simple consistency maintenance. But this is overkill: epistemic blindspots are not, in general, false. And the ones that are problematic are so because they are true, so having their negations in the KB violates the KRS Principle.

3 Inferential epistemic consistency

There are similar, problematic interactions concerning inference. Consider inference G:

  1. HAL has made more than two inferences
  2. HAL has made fewer than four inferences
  3. If someone has made more than two inferences and fewer than four inferences, they have made three inferences
  4. Therefore, HAL has made three inferences

On the face of it, G is a valid argument; the rules of inference it employs are valid in that they guarantee the truth of the conclusion, given the truth of the premises. And such an analysis is correct (or at least seems so) for the case of you or me putting forward G, or making the inference it licenses. But the case of HAL carrying out this inference is another matter entirely. If HAL makes this inference, HAL comes to believe something false, since after the inference is made, HAL believes that HAL has made three inferences, when in fact HAL has made four. HAL’s KB would exhibit inferential epistemic inconsistency.

On the standard view, one makes an inference by first determining if the premises are true and the transitions from premise to conclusion are valid. If they are, then one should believe the conclusion. Unfortunately, such an approach would license HAL to make inference G.

Prompted by these considerations, and taking a more participatory view of inference, I propose that when one is about to make an inference, in addition to checking the soundness and validity of the inference, one should consider the nearest possible world in which one carries out the inference. Only if the conclusion still follows validly from true premises in that world should one make the inference and believe the conclusion (in this world). On this view, HAL would not be entitled to make the inference in G, as its conclusion is false in the nearest possible world in which HAL makes the inference.
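
A toy illustration of this extra check (my own sketch, not part of the abstract), using HAL’s inference-count example and treating the only relevant world state as a counter of how many inferences HAL has made so far:

```python
# Toy sketch of the proposed "participatory" inference check for argument G.
# The world state is just a count of the inferences HAL has made so far.

def premises_true(state):
    """Premises of G: HAL has made more than two and fewer than four inferences."""
    return 2 < state["inferences_made"] < 4

def conclusion_true(state):
    """Conclusion of G: HAL has made exactly three inferences."""
    return state["inferences_made"] == 3

def may_infer(state):
    """Standard check: the premises are true now. Proposed extra check: the
    premises and conclusion must still hold in the (nearest) state in which
    the inference has actually been made -- here, after the count ticks up."""
    if not premises_true(state):
        return False
    state_after = dict(state, inferences_made=state["inferences_made"] + 1)
    return premises_true(state_after) and conclusion_true(state_after)

# With three inferences already made, the premises of G hold now, but making
# the inference would be HAL's fourth, so the conclusion fails in the
# resulting state and the inference is (rightly) withheld.
print(may_infer({"inferences_made": 3}))   # False
```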

Notice that, like the epistemic blindspots considered earlier, the conclusion of G that HAL is not entitled to believe is, nevertheless, consistent: possibly true. The conclusion is not, however, a blindspot: the proposition that HAL knows the conclusion of G is not a contradiction. Nor is it just an inferential variation on an epistemic blindspot.

4 Conclusion

The primary conclusion of the foregoing is that designers of autoepistemic KBSs must supplement consistency checks with epistemic consistency checks of two kinds (propositional and inferential) in order to:

  • Respect the KRS Principle that underlies all KBS use;
  • Ensure the validity of inferences KBSs make about themselves;
  • Ensure consistency of KBS knowledge bases;
  • Prevent the introduction of false propositions into KBS knowledge bases.

References

  • Prior, A. (1976). Papers in Logic and Ethics. Duckworth.
  • Sorensen, R. (1984). Conditional blindspots and the knowledge squeeze: a solution to the prediction paradox. Australasian Journal of Philosophy, 62, 126–135.

The Future of Smart Living

 


Image from Culture Vulture Issue 09: Smart Living, MindShare: 2017.

I’ve just posted on LinkedIn a rare (for me!) piece of near-futurology:

https://www.linkedin.com/pulse/future-smart-living-ron-chrisley/

This article is an expansion of “The Shift From Conscious To Unconscious Data” that I wrote earlier this year for Culture Vulture Issue 09: Smart Living, pp 48-49, MindShare.

For convenience, I’ve included the text here.


The future of smart living

The move to unconscious data and AI beyond deep learning will require substantial algorithmic – and ethical – innovation

In a way, the hype is right: the robots are here. It might not look like it, but they are. If we understand robots to be artificial agents that can, based on information they receive from their environment, autonomously take action in the world, then robots are in our cars, homes, hospitals, schools, workplaces, and our own bodies, even if they don’t have the expected humanoid shape and size. And more are on the way. What will it be like to live in the near-future world full of sensing, adapting, and acting technologies? Will it be like things are now, but more so (whatever that might mean)? Or will it be qualitatively different?

There are several indications that the technological changes about to occur will result in qualitative shifts in the structure of our lives.


One example involves sensors. We can expect a dramatic increase in the quantity, kinds, and temporal resolution of the sensors our embedded smart technologies will use. And in many cases these sensors will be aimed directly at us, the users. Most significantly, we will see a shift from technologies that solely use symbolic, rational-level data that we consciously provide (our purchasing history, our stated preferences, the pages we “like”, etc.) to ones that use information about us that is even more revealing, despite (or because of) the fact that it is unconscious and not under our control. It will start with extant, ubiquitous input devices used in novel ways (such as probing your emotional state or unexpressed preferences by monitoring the dynamics of your mouse trajectories over a web page), but will quickly move to an uptake and exploitation of sensors that more directly measure our bio-indicators, such as eye trackers, heart rate monitors, pupillometry, etc.

We can expect an initial phase of applications and systems that are designed to shift users into purchasing/adopting, becoming proficient with, and actively using these technologies: Entertainment will no doubt lead the way, but other uses of the collected data (perhaps buried in EULAs) will initially piggyback on them. Any intrusiveness or inconvenience of these sensors, initially tolerated for the sake of the game or interactive cinematic experience, will give way to familiarity and acceptance, allowing other applications to follow.


The intimate, sub-rational, continuous, dynamic and temporally-precise data these sensors will provide will enable exquisitely precise user-modelling (or monitoring) of a kind previously unimaginable. This in turn will enable technologies that will be able (or at least seem) to understand our intentions and anticipate our needs and wants. Key issues will involve ownership/sharing/selling/anonymisation of this data, the technologies for and rights to shielding oneself from such sensing (e.g., in public spaces) and the related use of decoys (technologies designed to provide false readings to these sensors), and delimiting the boundaries of responsibility and informed consent in cases where technologies can side-step rational choice and directly manipulate preferences and attitudes.

The engine behind this embedded intelligence will be artificial intelligence. The recent (and pervasively covered) rise of machine learning has been mainly due to advances in two factors: 1) the enormous data sets the internet has created, and 2) blindingly fast hardware such as GPUs. We can continue to expect advances in 1) with the new kinds and quantities of data that the new sensors will provide. The second factor is hard to predict, with experts differing on whether we will continue to reap the benefits of Moore’s Law, and on whether quantum computation is capable of delivering on its theoretical promise anytime soon.

The algorithms exploiting these two factors of data and speed have typically been minor variations on, and recombinations of, those developed in the 80s and 90s. Although quantum computation might (or might not) allow the increased hardware trend to continue, the addition of further kinds of data will allow novel technologies in all spheres that are exquisitely tuned to the user.

On the other hand, the increased quantity of data, and especially its temporal resolution, will require advances in machine learning algorithms – expect a move beyond the simple, feedforward architectures of the 90s to systems that develop expectations about what they will sense (and do), and that use these expectations to manage information overload by attending only to the important parts of the data.
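
As a very rough sketch of the kind of mechanism I have in mind (my own toy example, not a reference to any particular product or algorithm): a system keeps a running prediction of its next sensor reading and spends its limited processing only on readings that deviate markedly from that prediction.

```python
# Toy sketch: expectation-driven attention over a sensor stream.
# The system predicts the next reading with an exponential moving average
# and only "attends to" (processes further) readings whose prediction
# error exceeds a threshold.

def attend_to_surprises(readings, alpha=0.2, threshold=10.0):
    """Return the (index, value) pairs that are surprising enough to process."""
    surprising = []
    prediction = readings[0]
    for i, value in enumerate(readings[1:], start=1):
        error = abs(value - prediction)
        if error > threshold:
            surprising.append((i, value))
        # Update the expectation toward what was actually sensed.
        prediction = (1 - alpha) * prediction + alpha * value
    return surprising

# Example: a steady heart-rate stream with one anomalous spike.
print(attend_to_surprises([72, 71, 72, 73, 95, 72, 71]))  # -> [(4, 95)]
```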

This will yield unprecedented degrees of dynamic integration between us and our technology. What is often neglected in thinking about the pros and cons of such technologies is the way we adapt to them. This is one of the most exciting prospects, but it also brings unforeseen risks, and needs to be thought through carefully. In particular, it will require new conceptual tools.


Embedded, autonomous technology will lead to situations that, given our current legal and ethical systems, will appear ambiguous: who is to blame when something goes wrong involving a technology that has adapted to a user’s living patterns? Is it the user, for having a lifestyle that was too far outside of the “normal” lifestyles used in the dynamic technology’s testing and quality control? Or is it the fault of the designer/manufacturer/retailer/provider/procurer of that technology, for not ensuring that the technology would yield safe results in a greater number of user situations, or for not providing clear guidelines to the user on what “normal” use is? Given this conundrum, the temptation will often be to blame neither, but to blame the technology itself instead, especially if it is made to look humanoid and given a name, voice, “personality”, etc. We might very well see a phase of cynical, gratuitous use of anthropomorphism whose main function is to misdirect potential blame by “scapegoating the robot”. The sooner we can develop and deploy into society at large a machine ethics that locates responsibility with the correct humans, and not with the technologies themselves, the better.

(Another) joint paper with Aaron Sloman published

The proceedings of EUCognition 2016 in Vienna, co-edited by myself, Vincent Müller, Yulia Sandamirskaya and Markus Vincze, have just been published online (free access):

In it is a joint paper by Aaron Sloman and myself, entitled “Architectural Requirements for Consciousness”. Here is the abstract:

This paper develops, in sections I-III, the virtual machine architecture approach to explaining certain features of consciousness first proposed in [1] and elaborated in [2], in which particular qualitative aspects of experiences (qualia) are proposed to be particular kinds of properties of components of virtual machine states of a cognitive architecture. Specifically, they are those properties of components of virtual machine states of an agent that make that agent prone to believe the kinds of things that are typically believed to be true of qualia (e.g., that they are ineffable, immediate, intrinsic, and private). Section IV aims to make it intelligible how the requirements identified in sections II and III could be realised in a grounded, sensorimotor, cognitive robotic architecture.

AI: The Future of Us — a fireside chat with Ron Chrisley and Stephen Upstone

As mentioned in a previous post, I was invited to speak at “AI: The Future of Us” at the British Museum earlier this month.  Rather than give a lecture, it was decided that I should have a “fireside chat” with Stephen Upstone, the CEO and founder of LoopMe, the AI company hosting the event.  We had fun, and got some good feedback, so we’re looking into doing something similar this Autumn — watch this space.

Our discussion was structured around the following questions/topics being posed to me:

  • My background (what I do, what is Cognitive Science, how did I start working in AI, etc.)
  • What is the definition of consciousness and at what point can we say an AI machine is conscious?
  • What are the ethical implications for AI? Will we ever reach the point at which we will need to treat AI like a human? And how do we define AI’s responsibility?
  • Where do you see AI 30 years from now? How do you think AI will revolutionise our lives? (looking at things like smart homes, healthcare, finance, saving the environment, etc.)
  • So on your view, how far away are we from creating a super intelligence that will be better than humans in every aspect from mental to physical and emotional abilities? (Will we reach a point when the line between human and machine becomes blurred?)
  • So is AI not a threat? As Stephen Hawking recently said in the Guardian “AI will be either the best or worst thing for humanity”. What do you think? Is AI something we don’t need to be worried about?

You can listen to our fireside chat here.

What philosophy can offer AI


My piece on “What philosophy can offer AI” is now up at AI firm LoopMe’s blog. This is part of the run-up to my speaking at their event, “Artificial Intelligence: The Future of Us”, to be held at the British Museum next month.  Here’s what I wrote (the final gag is shamelessly stolen from Peter Sagal of NPR’s “Wait Wait… Don’t Tell Me!”):

Despite what you may have heard, philosophy at its best consists in rigorous thinking about important issues, and careful examination of the concepts we use to think about those issues.  Sometimes this analysis is achieved through considering potential exotic instances of an otherwise everyday concept, and considering whether the concept does indeed apply to that novel case — and if so, how.

In this respect, artificial intelligence (AI), of the actual or sci-fi/thought experiment variety, has given philosophers a lot to chew on, providing a wide range of detailed, fascinating instances to challenge some of our most dearly-held concepts:  not just “intelligence”, “mind”, and “knowledge”, but also “responsibility”, “emotion”, “consciousness”, and, ultimately, “human”.

But it’s a two-way street: Philosophy has a lot to offer AI too.

Examining these concepts allows the philosopher to notice inconsistency, inadequacy or incoherence in our thinking about mind, and the undesirable effects this can have on AI design.  Once the conceptual malady is diagnosed, the philosopher and AI designer can work together (they are sometimes the same person) to recommend revisions to our thinking and designs that remove the conceptual roadblocks to better performance.

This symbiosis is most clearly observed in the case of artificial general intelligence (AGI), the attempt to produce an artificial agent that is, like humans, capable of behaving intelligently in an unbounded number of domains and contexts.

The clearest example of the requirement of philosophical expertise when doing AGI concerns machine consciousness and machine ethics: at what point does an AGI’s claim to mentality become real enough that we incur moral obligations toward it?  Is it at the same time as, or before, it reaches the point at which we would say it is conscious?  Or when it has moral obligations of its own? And is it moral for us to get to the point where we have moral obligations to machines?  Should that even be AI’s goal?

These are important questions, and it is good that they are being discussed more even though the possibilities they consider aren’t really on the horizon.  

Less well-known is that philosophical sub-disciplines other than ethics have been, and will continue to be, crucial to progress in AGI.  

It’s not just the philosophers that say so; quantum computation pioneer and Oxford physicist David Deutsch agrees: “The whole problem of developing AGIs is a matter of philosophy, not computer science or neurophysiology”.  That “not” might overstate things a bit (I would soften it to “not only”), but it’s clear that Deutsch’s vision of philosophy’s role in AI will not be limited to being a kind of ethics panel that assesses the “real work” done by others.

What’s more, philosophy’s relevance doesn’t just kick in once one starts working on AGI — which substantially increases its market share.  It’s an understatement to say that AGI is a subset of AI in general.  Nearly all of the AI that is at work now providing relevant search results, classifying images, driving cars, and so on is not domain-independent AGI – it is technological, practical AI that exploits the particularities of its domain, and relies on human support to augment its non-autonomy to produce a working system. But philosophical expertise can be of use even to this more practical, less Hollywood, kind of AI design.

The clearest point of connection is machine ethics.  

But here the questions are not the hypothetical ones about whether a (far-future) AI has moral obligations to us, or we to it.  Rather the questions will be more like this: 

– How should we trace our ethical obligations to each other when the causal link between us and some undesirable outcome for another is mediated by a highly complex information process that involves machine learning and apparently autonomous decision-making?

– Do our previous ethical intuitions about, e.g., product liability apply without modification, or do we need some new concepts to handle these novel levels of complexity and (at least apparent) technological autonomy?

As with AGI, the connection between philosophy and technological, practical AI is not limited to ethics.  For example, different philosophical conceptions of what it is to be intelligent suggest different kinds of designs for driverless cars.  Is intelligence a disembodied ability to process symbols?  Is it merely an ability to behave appropriately?  Or is it, at least in part, a skill or capacity to anticipate how one’s embodied sensations will be transformed by the actions one takes?  

Contemporary, sometimes technical, philosophical theories of cognition are a good place to start when considering what way of conceptualising the problem and solution will be best for a given AI system, especially in the case of design that has to be truly ground breaking to be competitive.

Of course, it’s not all sweetness and light. It is true that there has been some philosophical work that has obfuscated the issues around AI, thereby unnecessarily hindering progress. So, to my recommendation that philosophy play a key role in artificial intelligence, terms and conditions apply.  But don’t they always?

The Ethics of AI and Healthcare

I was interviewed by Verdict.co.uk recently about the ethics of AI in healthcare. One or two remarks of mine from that interview are included near the end of this piece that appeared last week:

http://www.verdict.co.uk/the-ai-impact-healthcare-industry-is-changing/

My views on this are considerably more nuanced than these quotes suggest, so I am thinking of turning my extensive prep notes for the interview into a piece to be posted here and/or on a site like TheConversation.com.  These thoughts are completely distinct from the ones included in the paper Steve Torrance and I wrote a few years back, “Modelling consciousness-dependent expertise in machine medical moral agents“.