The Future of Smart Living


Image from Culture Vulture Issue 09: Smart Living, MindShare: 2017.

I’ve just posted on LinkedIn a rare (for me!) piece of near-futurology:

https://www.linkedin.com/pulse/future-smart-living-ron-chrisley/

This article is an expansion of “The Shift From Conscious To Unconscious Data” that I wrote earlier this year for Culture Vulture Issue 09: Smart Living, pp 48-49, MindShare.

For convenience, I’ve included the text here.


The future of smart living

The move to unconscious data and AI beyond deep learning will require substantial algorithmic – and ethical – innovation

In a way, the hype is right: the robots are here. It might not look like it, but they are. If we understand robots to be artificial agents that can, based on information they receive from their environment, autonomously take action in the world, then robots are in our cars, homes, hospitals, schools, workplaces, and our own bodies, even if they don’t have the expected humanoid shape and size. And more are on the way. What will it be like to live in a near-future world full of sensing, adapting, and acting technologies? Will it be like things are now, but more so (whatever that might mean)? Or will it be qualitatively different?

There are several indications that the technological changes about to occur will result in qualitative shifts in the structure of our lives.

One example involves sensors. We can expect a dramatic increase in the quantity, kinds, and temporal resolution of the sensors our embedded smart technologies will use. And in many cases these sensors will be aimed directly at us, the users. Most significantly, we will see a shift from technologies that solely use symbolic, rational-level data that we consciously provide (our purchasing history, our stated preferences, the pages we “like”, etc.) to ones that use information about us that is even more revealing, despite (or because of) the fact that it is unconscious and not under our control. It will start with extant, ubiquitous input devices used in novel ways (such as probing your emotional state or unexpressed preferences by monitoring the dynamics of your mouse trajectories over a web page), but will quickly move to the uptake and exploitation of sensors that more directly measure our bio-indicators: eye trackers, heart rate monitors, pupillometers, and the like.

We can expect an initial phase of applications and systems that are designed to shift users into purchasing/adopting, becoming proficient with, and actively using these technologies: Entertainment will no doubt lead the way, but other uses of the collected data (perhaps buried in EULAs) will initially piggyback on them. Any intrusiveness or inconvenience of these sensors, initially tolerated for the sake of the game or interactive cinematic experience, will give way to familiarity and acceptance, allowing other applications to follow.

The intimate, sub-rational, continuous, dynamic and temporally-precise data these sensors will provide will enable exquisitely precise user-modelling (or monitoring) of a kind previously unimaginable. This in turn will enable technologies that will be able (or at least seem) to understand our intentions and anticipate our needs and wants. Key issues will involve ownership/sharing/selling/anonymisation of this data, the technologies for and rights to shielding oneself from such sensing (e.g., in public spaces) and the related use of decoys (technologies designed to provide false readings to these sensors), and delimiting the boundaries of responsibility and informed consent in cases where technologies can side-step rational choice and directly manipulate preferences and attitudes.

The engine behind this embedded intelligence will be artificial intelligence. The recent (and pervasively covered) rise of machine learning has mainly been due to advances in two factors: 1) the enormous data sets the internet has created, and 2) blindingly fast hardware such as GPUs. We can expect continued advances in 1) from the new kinds and quantities of data that the new sensors will provide. The second factor is harder to predict, with experts differing on whether we will continue to reap the benefits of Moore’s Law, and on whether quantum computation is capable of delivering on its theoretical promise anytime soon.

The algorithms exploiting these two factors of data and speed have typically been minor variations on, and recombinations of, those developed in the 80s and 90s. Although quantum computation might (or might not) allow the hardware trend to continue, the addition of further kinds of data will enable novel technologies in all spheres that are exquisitely tuned to the user.

On the other hand, the increased quantity of data, and especially its temporal resolution, will require advances in machine learning algorithms – expect a move beyond the simple, feedforward architectures of the 90s to systems that develop expectations about what they will sense (and do), and that use these expectations to manage information overload by attending only to the important parts of the data.
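To make the idea concrete, here is a minimal toy sketch (all names and parameters are my own illustrative choices, not anything from a real system) of expectation-driven attention: the filter keeps a running prediction of a sensor stream and passes on only readings that deviate surprisingly from that prediction, ignoring the rest.

```python
# Toy sketch of expectation-driven attention over a sensor stream.
# A running prediction stands in for the system's "expectations"; only
# readings whose prediction error is large relative to the typical error
# are attended to. All parameter values here are illustrative.

class SurpriseFilter:
    def __init__(self, alpha=0.1, threshold=2.0):
        self.alpha = alpha          # smoothing factor for the running estimates
        self.threshold = threshold  # how many "typical errors" counts as surprising
        self.prediction = None      # current expectation about the next reading
        self.avg_error = 1.0        # running scale of typical prediction error

    def attend(self, reading):
        """Return True if the reading is surprising enough to process."""
        if self.prediction is None:
            self.prediction = reading
            return True  # no expectations yet: everything is novel
        error = abs(reading - self.prediction)
        surprising = error > self.threshold * self.avg_error
        # Update expectations whether or not we attended.
        self.prediction += self.alpha * (reading - self.prediction)
        self.avg_error += self.alpha * (error - self.avg_error)
        return surprising


# A steady heart-rate stream with one anomalous spike: only the very
# first sample (no expectations yet) and the spike demand attention.
stream = [70, 70, 71, 70, 69, 70, 110, 70, 70]
f = SurpriseFilter()
attended = [x for x in stream if f.attend(x)]
```

The point of the sketch is the asymmetry it creates: the system does bounded work on the vast majority of expected readings and reserves full processing for the small fraction that violate its predictions.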

This will yield unprecedented degrees of dynamic integration between us and our technology. What is often neglected in thinking about the pros and cons of such technologies is the way we adapt to them. This mutual adaptation is one of the most exciting prospects, but it also carries unforeseen risks, and it needs to be thought through carefully – in particular, it will require new conceptual tools.

Embedded, autonomous technology will lead to situations that, given our current legal and ethical systems, will appear ambiguous: who is to blame when something goes wrong involving a technology that has adapted to a user’s living patterns? Is it the user, for having a lifestyle too far outside the “normal” lifestyles used in the dynamic technology’s testing and quality control? Or is it the fault of the designer/manufacturer/retailer/provider/procurer of that technology, for not ensuring that the technology would yield safe results in a greater number of user situations, or for not providing clear guidelines to the user on what “normal” use is? Given this conundrum, the temptation will often be to blame neither, but to blame the technology itself instead, especially if it is made to look humanoid, or given a name, voice, “personality”, etc. We might very well see a phase of cynical, gratuitous use of anthropomorphism whose main function is to misdirect potential blame by “scapegoating the robot”. The sooner we can develop and deploy into society at large a machine ethics that locates responsibility with the correct humans, and not with the technologies themselves, the better.

The Ethics of AI and Healthcare

I was interviewed by Verdict.co.uk recently about the ethics of AI in healthcare. One or two remarks of mine from that interview are included near the end of this piece that appeared last week:

http://www.verdict.co.uk/the-ai-impact-healthcare-industry-is-changing/

My views on this are considerably more nuanced than these quotes suggest, so I am thinking of turning my extensive prep notes for the interview into a piece to be posted here and/or on a site like TheConversation.com. These thoughts are completely distinct from the ones included in the paper Steve Torrance and I wrote a few years back, “Modelling consciousness-dependent expertise in machine medical moral agents”.