Embodiment: Six Themes


I’m writing this from Zürich airport, on my way back to England after an excellent sojourn at the Dharma Sangha Zen Centre (www.dharma-sangha.de) on the German/Swiss frontier. I was there for a cosy meeting of the Society for Mind-Matter Research (www.mindmatter.de) on the topic of embodiment. My talk gave a brief overview of six ways in which my research has investigated the role of embodiment in mind and computation. You can view my slides here: prezi.com/view/TLzIVu5YT

HumanE AI

This video interview is a good summary of my take on what we’re trying to do in the European HumanE AI project (humane-ai.eu) – and thus also what can/should be done at Stanford HAI (hai.stanford.edu). Of course I meant to say “overestimate” not “underestimate” near the end!


Prediction and chaos in the markets

It’s been a while.


The May 6th, 2019 UK edition of Metro published an article entitled “Can we trust machines to predict the stock market with 100% accuracy?”, by Sonya Barlow.

The piece, which is more in-depth and better-researched than one might expect, included a single sentence from me, which was a portion of my response to this question from Sonya:  “The more AI [is] used in predictions, the less that AI can predict – Can that really be the case?”

Here is my response in full:

It’s not in general true that the more AI is used in predictions, the less it can predict, if the AI is not a part of the system it is predicting (predicting sunspot activity, for example).  But in the kinds of cases many people are interested in, such as AI for financial prediction, it’s a different matter, because in finance, the AI is typically part of the system it is trying to predict.  That is, the predictions the AI makes are used to take action (buy, sell, short, whatever) in that system.  And the presence of predictors (machine or human) in a system, taking action in that same system on the basis of their predictions, makes the system more difficult to predict (by machines or humans).
Why is this so?  To see why, consider a relatively simple, one-on-one system with two members: you and your opponent.  The best way to predict what your opponent is going to do is to model them: figure out what their strategy is, and predict that they will do whatever that strategy recommends in the current situation.  You then choose your best action given what you predict they will do.  But if they are also a predictor like you, then you both have a problem.  Even if you know what your opponent’s strategy is — it’s to predict what you are going to do, and act appropriately — predicting what they will do depends on what they predict you will do, which in turn depends on your prediction of what they are going to do, which is back where we started. Thus, the behaviour of the system is an unstable, chaotic circle.
This doesn’t mean that we’ll stop using AIs to predict — on the contrary, they will become (even more) obligatory, just to stay in the predictive arms race.  To fail to use them would make you more easily predictable, and thus at a relative disadvantage.
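To make the instability concrete, here is a toy sketch (my own illustration, nothing from the Metro piece): two agents in a matching-pennies-style game, each predicting the other’s next move from its recent history and best-responding to that prediction. Neither settles down, because each action changes what the other will predict next time.

```python
# Toy illustration (hypothetical) of predictors embedded in the system they
# predict: A wants to match B's move, B wants to mismatch A's, and each
# predicts the other's next move from its recent history.
import random

HISTORY = 5  # how many past opponent moves each agent uses as its model

def predict(opponent_moves):
    """Predict the opponent's next move as their most frequent recent move."""
    recent = opponent_moves[-HISTORY:]
    return max(set(recent), key=recent.count) if recent else random.choice([0, 1])

a_moves, b_moves = [], []
for _ in range(20):
    a = predict(b_moves)        # A best-responds by matching its prediction of B
    b = 1 - predict(a_moves)    # B best-responds by mismatching its prediction of A
    a_moves.append(a)
    b_moves.append(b)

print("A:", a_moves)
print("B:", b_moves)
# The joint behaviour keeps cycling rather than converging: each agent's
# action changes what the other will predict on the next round.
```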

Epistemic Consistency in Knowledge-Based Systems


Today I was informed that my extended abstract, “Epistemic Consistency in Knowledge-Based Systems”, has been accepted for presentation at PT-AI 2017 in Leeds in November. The text of the extended abstract is below.  The copy-paste job I’ve done here loses all the italics, etc.; the proper version is at:

http://sussex.ac.uk/Users/ronc/papers/pt-ai-2017-abstract.pdf

Comments welcome, especially pointers to similar work, papers I should cite, etc.


Epistemic Consistency in Knowledge-Based Systems (extended abstract)

Ron Chrisley
Centre for Cognitive Science,
Sackler Centre for Consciousness Science, and Department of Informatics
University of Sussex, Falmer, United Kingdom
ronc@sussex.ac.uk

1 Introduction

One common way of conceiving the knowledge-based systems approach to AI is as the attempt to give an artificial agent knowledge that P by putting a (typically linguaform) representation that means P into an epistemically privileged database (the agent’s knowledge base). That is, the approach typically assumes, either explicitly or implicitly, that the architecture of a knowledge-based system (including initial knowledge base, rules of inference, and perception/action systems) is such that the following sufficiency principle should be respected:

  • Knowledge Representation Sufficiency Principle (KRS Principle): if a sentence that means P is in the knowledge base of a KBS, then the KBS knows that P.

The KRS Principle is so strong that, although it might be respected by KBSs that deal exclusively with a priori matters (e.g., theorem provers), most if not all empirical KBSs will, at least some of the time, fail to meet it. Nevertheless, it remains an ideal toward which KBS design might be thought to strive.
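Read literally, the KRS Principle treats knowledge as mere membership of a suitable sentence in the knowledge base. A minimal sketch of that reading (my illustration, not the abstract’s formalism):

```python
# Hypothetical sketch of the KRS Principle taken at face value: "knowing
# that P" is just the presence, in the agent's knowledge base, of a
# sentence that means P.
kb = {"the_mission_is_a_failure", "the_reactor_is_stable"}

def knows(knowledge_base, sentence):
    # Sufficiency assumption: membership in the KB is taken to suffice for
    # knowledge -- the assumption that, the abstract argues, empirical KBSs
    # will at least sometimes fail to live up to.
    return sentence in knowledge_base

print(knows(kb, "the_mission_is_a_failure"))  # True
```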

Accordingly, it is commonly acknowledged that knowledge bases for KBSs should be consistent, since classical rules of inference permit the addition of any sentence to an inconsistent KB. Much effort has therefore been spent on devising tractable ways to ensure consistency or otherwise prevent inferential explosion.

2 Propositional epistemic consistency

However, it has not been appreciated that for certain kinds of KBSs, a further constraint, which I call propositional epistemic consistency, must be met. To explain this constraint, some notions must be defined:

  • An epistemic KBS is one that can represent propositions attributing propositional knowledge to subjects (such as that expressed by “Dave knows the mission is a failure”).
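For instance (my illustrative encoding, not one from the abstract), an epistemic KBS needs sentences in its KB that embed other propositions under a knowledge operator:

```python
# Hypothetical encoding of an epistemic KBS's sentences: the KB can contain
# sentences that attribute propositional knowledge to a subject.
from dataclasses import dataclass

@dataclass(frozen=True)
class Atom:
    name: str                 # e.g. "the mission is a failure"

@dataclass(frozen=True)
class Knows:
    subject: str              # e.g. "Dave"
    proposition: object       # any sentence, including another Knows(...)

kb = {
    Atom("the mission is a failure"),
    Knows("Dave", Atom("the mission is a failure")),  # "Dave knows the mission is a failure"
}
```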


The Future of Smart Living

 


Image from Culture Vulture Issue 09: Smart Living, MindShare: 2017.

I’ve just posted on LinkedIn a rare (for me!) piece of near-futurology:

https://www.linkedin.com/pulse/future-smart-living-ron-chrisley/

This article is an expansion of “The Shift From Conscious To Unconscious Data” that I wrote earlier this year for Culture Vulture Issue 09: Smart Living, pp 48-49, MindShare.

For convenience, I’ve included the text here.


The future of smart living

The move to unconscious data and AI beyond deep learning will require substantial algorithmic – and ethical – innovation

In a way, the hype is right: the robots are here. It might not look like it, but they are. If we understand robots to be artificial agents that can, based on information they receive from their environment, autonomously take action in the world, then robots are in our cars, homes, hospitals, schools, workplaces, and our own bodies, even if they don’t have the expected humanoid shape and size. And more are on the way. What will it be like to live in a near-future world full of sensing, adapting, and acting technologies? Will it be like things are now, but more so (whatever that might mean)? Or will it be qualitatively different?

There are several indications that the technological changes about to occur will result in qualitative shifts in the structure of our lives.


One example involves sensors. We can expect a dramatic increase in the quantity, kinds, and temporal resolution of the sensors our embedded smart technologies will use. And in many cases these sensors will be aimed directly at us, the users. Most significantly, we will see a shift from technologies that solely use symbolic, rational-level data that we consciously provide (our purchasing history, our stated preferences, the pages we “like”, etc.) to ones that use information about us that is even more revealing, despite (or because of) the fact that it is unconscious and not under our control. It will start with extant, ubiquitous input devices used in novel ways (such as probing your emotional state or unexpressed preferences by monitoring the dynamics of your mouse trajectories over a web page), but will quickly move to an uptake and exploitation of sensors that more directly measure our bio-indicators, such as eye trackers, heart rate monitors, pupillometry, etc.

We can expect an initial phase of applications and systems that are designed to shift users into purchasing/adopting, becoming proficient with, and actively using these technologies: Entertainment will no doubt lead the way, but other uses of the collected data (perhaps buried in EULAs) will initially piggyback on them. Any intrusiveness or inconvenience of these sensors, initially tolerated for the sake of the game or interactive cinematic experience, will give way to familiarity and acceptance, allowing other applications to follow.


The intimate, sub-rational, continuous, dynamic and temporally-precise data these sensors will provide will enable exquisitely precise user-modelling (or monitoring) of a kind previously unimaginable. This in turn will enable technologies that will be able (or at least seem) to understand our intentions and anticipate our needs and wants. Key issues will involve ownership/sharing/selling/anonymisation of this data, the technologies for and rights to shielding oneself from such sensing (e.g., in public spaces) and the related use of decoys (technologies designed to provide false readings to these sensors), and delimiting the boundaries of responsibility and informed consent in cases where technologies can side-step rational choice and directly manipulate preferences and attitudes.

The engine behind this embedded intelligence will be artificial intelligence. The recent (and pervasively covered) rise of machine learning has been mainly due to recent advances in two factors: 1) the enormous data sets the internet has created, and 2) blindingly fast hardware such as GPUs. We can continue to expect advances in 1) with the new kinds and quantities of data that the new sensors will provide. The second factor is harder to predict, with experts differing on whether we will continue to reap the benefits of Moore’s Law, and on whether quantum computation is capable of delivering on its theoretical promise anytime soon.

The algorithms exploiting these two factors of data and speed have typically been minor variations on, and recombinations of, those developed in the 80s and 90s. Although quantum computation might (or might not) allow the increased hardware trend to continue, the addition of further kinds of data will allow novel technologies in all spheres that are exquisitely tuned to the user.

On the other hand, the increased quantity of data, especially its temporal resolution, will require advances in machine learning algorithms – expect a move beyond the simple feedforward architectures of the 90s to systems that develop expectations about what they will sense (and do), and that use these expectations as a way to manage information overload by attending only to the important parts of the data.
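As a rough sketch of that idea (my own toy example, not a claim about any particular system), a system can keep a running expectation of what it will sense and attend only to readings whose prediction error is large:

```python
# Toy sketch (hypothetical) of expectation-driven attention: a running-average
# predictor passes on only those sensor readings that are surprising.
def surprise_filter(readings, threshold=2.0, learning_rate=0.1):
    """Yield (index, reading) pairs the system should attend to."""
    expectation = readings[0]
    for i, value in enumerate(readings):
        if abs(value - expectation) > threshold:   # unexpected: worth attending to
            yield i, value
        expectation += learning_rate * (value - expectation)  # update the expectation

sensor_trace = [10.0, 10.1, 9.9, 10.2, 15.0, 10.1, 10.0]  # one anomalous spike
print(list(surprise_filter(sensor_trace)))  # only the spike at index 4 is flagged
```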

This will yield unprecedented degrees of dynamic integration between us and our technology. What is often neglected in thinking about the pros and cons of such technologies is the way we adapt to them. This adaptation is one of the most exciting prospects, but also a source of unforeseen risks, and it needs to be thought through carefully. In particular, it will require new conceptual tools.


Embedded, autonomous technology will lead to situations that, given our current legal and ethical systems, will appear ambiguous: who is to blame when something goes wrong involving a technology that has adapted to a user’s living patterns? Is it the user, for having a lifestyle that was too far outside of the “normal” lifestyles used in the dynamic technology’s testing and quality control? Or is it the fault of the designer/manufacturer/retailer/provider/procurer of that technology, for not ensuring that the technology would yield safe results in a greater number of user situations, or for not providing clear guidelines to the user on what “normal” use is? Given this conundrum, the temptation will often be to blame neither, but to blame the technology itself instead, especially if it is made to look humanoid, given a name, voice, “personality”, etc. We might very well see a phase of cynical, gratuitous use of anthropomorphism whose main function is to misdirect potential blame by “scapegoating the robot”. The sooner we can develop and deploy into society at large a machine ethics that locates responsibility with the correct humans, and not with the technologies themselves, the better.

(Another) joint paper with Aaron Sloman published

The proceedings of EUCognition 2016 in Vienna, co-edited by myself, Vincent Müller, Yulia Sandamirskaya and Markus Vincze, have just been published online (free access).

In it is a joint paper by Aaron Sloman and myself, entitled “Architectural Requirements for Consciousness”. Here is the abstract:

This paper develops, in sections I-III, the virtual machine architecture approach to explaining certain features of consciousness first proposed in [1] and elaborated in [2], in which particular qualitative aspects of experiences (qualia) are proposed to be particular kinds of properties of components of virtual machine states of a cognitive architecture. Specifically, they are those properties of components of virtual machine states of an agent that make that agent prone to believe the kinds of things that are typically believed to be true of qualia (e.g., that they are ineffable, immediate, intrinsic, and private). Section IV aims to make it intelligible how the requirements identified in sections II and III could be realised in a grounded, sensorimotor, cognitive robotic architecture.