Creating The Future: Ethics & Implications

Today MindShare hosts their annual London Huddle (http://www.mindshareworld.com/uk/huddle). As part of the event, at 3pm I’ll be chatting with LoopMe’s Head of Agency, Jack Edmonds, about some of the ethical issues concerning, and implications of, artificial intelligence.


The Future of Smart Living

 


Image from Culture Vulture Issue 09: Smart Living, MindShare: 2017.

I’ve just posted on LinkedIn a rare (for me!) piece of near-futurology:

https://www.linkedin.com/pulse/future-smart-living-ron-chrisley/

This article is an expansion of “The Shift From Conscious To Unconscious Data” that I wrote earlier this year for Culture Vulture Issue 09: Smart Living, pp 48-49, MindShare.

For convenience, I’ve included the text here.


The future of smart living

The move to unconscious data and AI beyond deep learning will require substantial algorithmic – and ethical – innovation

In a way, the hype is right: the robots are here. It might not look like it, but they are. If we understand robots to be artificial agents that can, based on information they receive from their environment, autonomously take action in the world, then robots are in our cars, homes, hospitals, schools, workplaces, and our own bodies, even if they don’t have the expected humanoid shape and size. And more are on the way. What will it be like to live in a near-future world full of sensing, adapting, and acting technologies? Will it be like things are now, but more so (whatever that might mean)? Or will it be qualitatively different?

There are several indications that the technological changes about to occur will result in qualitative shifts in the structure of our lives.

One example involves sensors. We can expect a dramatic increase in the quantity, kinds, and temporal resolution of the sensors our embedded smart technologies will use. And in many cases these sensors will be aimed directly at us, the users. Most significantly, we will see a shift from technologies that solely use symbolic, rational-level data that we consciously provide (our purchasing history, our stated preferences, the pages we “like”, etc.) to ones that use information about us that is even more revealing, despite (or because of) the fact that it is unconscious and not under our control. It will start with extant, ubiquitous input devices used in novel ways (such as probing your emotional state or unexpressed preferences by monitoring the dynamics of your mouse trajectories over a web page), but it will quickly move to the uptake and exploitation of sensors that more directly measure our bio-indicators, such as eye trackers, heart rate monitors, and pupillometry.
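
To make the mouse-trajectory idea concrete, here is a minimal sketch in Python. The feature names and thresholds are illustrative assumptions of my own, not a real tracking API; the point is only how little raw input is needed to extract a sub-rational signal.

```python
# A toy sketch (hypothetical names and thresholds) of reading
# "hesitation" off pointer dynamics, as the essay describes.
import math

def trajectory_features(samples):
    """samples: list of (t_seconds, x, y) mouse positions."""
    speeds, hesitations = [], 0
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        dt = t1 - t0
        if dt <= 0:
            continue
        speed = math.hypot(x1 - x0, y1 - y0) / dt
        speeds.append(speed)
        if speed < 5.0:          # near-stationary pointer: candidate hesitation
            hesitations += 1
    mean_speed = sum(speeds) / len(speeds) if speeds else 0.0
    return {"mean_speed": mean_speed, "hesitations": hesitations}

# A pointer that stalls near a "Buy" button yields a hesitation count
# that a model could treat as a sign of uncertainty or reluctance.
print(trajectory_features([(0.0, 10, 10), (0.1, 12, 11), (0.3, 12, 11), (0.5, 40, 30)]))
```

Even features this crude hint at how much unconscious signal ordinary interaction data can carry.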

We can expect an initial phase of applications and systems that are designed to shift users into purchasing/adopting, becoming proficient with, and actively using these technologies: Entertainment will no doubt lead the way, but other uses of the collected data (perhaps buried in EULAs) will initially piggyback on them. Any intrusiveness or inconvenience of these sensors, initially tolerated for the sake of the game or interactive cinematic experience, will give way to familiarity and acceptance, allowing other applications to follow.

The intimate, sub-rational, continuous, dynamic and temporally-precise data these sensors will provide will enable exquisitely precise user-modelling (or monitoring) of a kind previously unimaginable. This in turn will enable technologies that will be able (or at least seem) to understand our intentions and anticipate our needs and wants. Key issues will involve ownership/sharing/selling/anonymisation of this data, the technologies for and rights to shielding oneself from such sensing (e.g., in public spaces) and the related use of decoys (technologies designed to provide false readings to these sensors), and delimiting the boundaries of responsibility and informed consent in cases where technologies can side-step rational choice and directly manipulate preferences and attitudes.

The engine behind this embedded intelligence will be artificial intelligence. The recent (and pervasively covered) rise of machine learning has mainly been due to advances in two factors: 1) the enormous data sets the internet has created, and 2) blindingly fast hardware such as GPUs. We can expect continued advances in 1) with the new kinds and quantities of data that the new sensors will provide. The second factor is harder to predict, with experts differing on whether we will continue to reap the benefits of Moore’s Law, and on whether quantum computation is capable of delivering on its theoretical promise anytime soon.

The algorithms exploiting these two factors of data and speed have typically been minor variations on, and recombinations of, those developed in the 80s and 90s. Whether or not quantum computation allows the hardware trend to continue, the addition of new kinds of data will enable novel technologies in all spheres, exquisitely tuned to the user.

On the other hand, the increased quantity of data, and especially its temporal resolution, will require advances in machine learning algorithms – expect a move beyond the simple, feedforward architectures of the 90s to systems that develop expectations about what they will sense (and do), and that use these expectations to manage information overload by attending only to the important parts of the data.
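
As a toy illustration of that expectation-driven attention, here is a sketch under my own assumptions: a real system would learn a forecasting model, whereas the running average below is just a stand-in for "expectation".

```python
# A minimal sketch: keep a running prediction per sensor channel and
# route downstream only the readings that violate expectations.
class ExpectationFilter:
    def __init__(self, n_channels, alpha=0.2, threshold=2.0):
        self.expected = [0.0] * n_channels  # running prediction per channel
        self.alpha = alpha                  # how fast expectations adapt
        self.threshold = threshold          # error needed to win attention

    def attend(self, readings):
        """Return indices of channels whose prediction error is large."""
        surprising = []
        for i, value in enumerate(readings):
            if abs(value - self.expected[i]) > self.threshold:
                surprising.append(i)        # attend only to these
            # update the expectation whether or not we attended
            self.expected[i] += self.alpha * (value - self.expected[i])
        return surprising

f = ExpectationFilter(n_channels=3)
for step, frame in enumerate([[0.1, 0.0, 0.2], [0.2, 0.1, 5.0], [0.1, 0.0, 5.1]]):
    print(step, f.attend(frame))  # channel 2 is flagged once it jumps
```

The design point is that most of the data stream is never processed in depth at all: expectation is what makes high-resolution sensing tractable.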

This will yield unprecedented degrees of dynamic integration between us and our technology. What is often neglected in thinking about the pros and cons of such technologies is the way we adapt to them. This adaptation is one of the most exciting prospects, but it also carries unforeseen risks, and it needs to be thought through carefully. In particular, it will require new conceptual tools.

Embedded, autonomous technology will lead to situations that, given our current legal and ethical systems, will appear ambiguous: who is to blame when something goes wrong involving a technology that has adapted to a user’s living patterns? Is it the user, for having a lifestyle too far outside the “normal” lifestyles used in the dynamic technology’s testing and quality control? Or is it the fault of the designer/manufacturer/retailer/provider/procurer of that technology, for not ensuring that the technology would yield safe results in a greater number of user situations, or for not providing clear guidelines to the user on what “normal” use is? Given this conundrum, the temptation will often be to blame neither, but to blame the technology itself instead, especially if it is made to look humanoid, given a name, voice, “personality”, etc. We might very well see a phase of cynical, gratuitous use of anthropomorphism whose main function is to misdirect potential blame by “scapegoating the robot”. The sooner we can develop and deploy into society at large a machine ethics that locates responsibility with the correct humans, and not with the technologies themselves, the better.
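
One way to picture such a machine ethics in engineering terms – purely an illustrative sketch of my own, not an implemented standard – is as a provenance log that ties each robot decision to the human-attributable choices behind it, so that an outcome can be traced back to people and institutions rather than to "the robot":

```python
# A hypothetical sketch of a responsibility-tracing record. All names
# here are illustrative assumptions, not an existing framework.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Provenance:
    source: str        # e.g. "manufacturer-default", "user-override"
    responsible: str   # the human or institution behind the setting
    detail: str

@dataclass
class DecisionRecord:
    action: str
    provenance: List[Provenance] = field(default_factory=list)

log: List[DecisionRecord] = []

def decide_and_log(action: str, provenance: List[Provenance]) -> None:
    log.append(DecisionRecord(action, provenance))

decide_and_log(
    "raise thermostat setpoint to 30C",
    [Provenance("user-override", "householder", "schedule learned from atypical routine"),
     Provenance("manufacturer-default", "vendor QA", "no bound set on learned setpoints")],
)

# After an incident, the log supports asking which human choice, not
# which robot, allowed the outcome.
for rec in log:
    for p in rec.provenance:
        print(rec.action, "<-", p.source, "by", p.responsible)
```

The design choice matters: the record is built to answer “which human choice allowed this?” rather than “what did the robot decide?”, keeping the technology itself out of the blame-bearing role.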

The Ethics of AI and Healthcare

I was interviewed by Verdict.co.uk recently about the ethics of AI in healthcare. One or two remarks of mine from that interview are included near the end of this piece that appeared last week:

http://www.verdict.co.uk/the-ai-impact-healthcare-industry-is-changing/

My views on this are considerably more nuanced than these quotes suggest, so I am thinking of turning my extensive prep notes for the interview into a piece to be posted here and/or on a site like TheConversation.com. These thoughts are completely distinct from the ones included in the paper Steve Torrance and I wrote a few years back, “Modelling consciousness-dependent expertise in machine medical moral agents”.

Ethically designing robots without designing ethical robots

Next Thursday, November 17th, at 13:00 I’ll be leading the E-Intentionality seminar in Freeman G22. I’ll be using this seminar as a dry run for the first part of my keynote lecture at the UAE Social Robotics meeting next week. It builds on work that I first presented at Tufts in 2014.

Abstract:

Since robots will not, in the near future, be responsible agents, avoiding some moral hazards (e.g., that of abdication of responsibility) will require designs that assist in tracing complex lines of responsibility backwards from outcomes, through the robot, and back to the appropriate humans and/or social institutions. I look at one approach to ethically designing robots, that of designing ethical robots – robots that are given a set of rules that are intended to encode an ethical system, and which are to be applied by the robot in the generation of its behaviour. I argue that this approach will in many cases obfuscate, rather than clarify, the lines of responsibility involved (resulting in “moral murk”), and can lead to ethically adverse situations. After giving an example of such cases, I offer an alternative approach to ethical design of robots, one that does not presuppose that notions of obligation and permission apply to the robot in question, thereby avoiding the problems of moral murk and ethical adversity.

Artificial social agents in a world of conscious beings

I forgot to mention in the update posted earlier today that fellow PAICSer, Steve Torrance, will also be a keynote speaker at the 2nd Joint UAE Symposium on Social Robotics. Here are his title and abstract.

Artificial social agents in a world of conscious beings.

Steve Torrance

Abstract

It is an important fact about each of us that we are conscious beings, and that the others we interact with in our social world are also conscious beings. Yet we appear to be on the edge of a revolution in new social relationships – interactions and intimacies with a variety of non-conscious artificial social agents (ASAs) – both virtual and physical. Granted, we often behave, in the company of such ASAs, as though they are conscious, and as though they are social beings. But in essence we still think of them, at least in our more reflective moments, as “tools” or “systems” – smart, and getting smarter, but lacking phenomenal awareness or real emotion.

In my talk I will discuss ways in which reflection on consciousness – both natural and (would-be) artificial – impacts on our intimate social relationships with robots. And I will propose some implications for human responsibilities in developing these technologies.

I will focus on two questions: (1) What would it take for an ASA to be conscious in a way that “matters”? (2) Can we talk of genuine social relationships or interactions with agents that have no consciousness?

On question (1), I will look at debates in the fields of machine consciousness and machine ethics, in order to examine the range of possible positions that may be taken. I will suggest that there is a close relation between thinking of a being as having a conscious phenomenology, and adopting a range of ethical attitudes towards that being. I will also discuss an important debate between those who take a “social-relational” approach to phenomenological and ethical attributions, and those who take an “objectivist” approach. I will offer ways to resolve that debate. This will help provide guidance, I hope, to those who are developing the technologies for smarter ASAs, which may have stronger claims to be taken as conscious. On (2), I will suggest that, even for ASAs that are acknowledged not to be conscious, it is possible that there could be a range of ethical roles that they could come to occupy, in a way that would justify our talking of “artificial social agents” in a rich sense, one that would imply that they had both genuine ethico-social responsibilities and ethico-social entitlements.

The spread of ASAs – whether or not genuinely conscious, social or ethical – will impose heavy responsibilities upon technologists, and those they work with, to guide the social impacts of such agents in acceptable directions, as such agents increasingly inter-operate with us and with our lives. I will thus conclude by pointing to some issues of urgent social concern that are raised by the likely proliferation of ASAs in the coming years and decades.

Human Responsibility, Robot Mind: Conceptual Design Constraints for Social Robots

Next month I’m giving a keynote address at the 2nd Joint UAE Symposium on Social Robotics, entitled: “Human Responsibility, Robot Mind: Conceptual Design Constraints for Social Robots”

Abstract: Advances in social robot design will be achieved hand-in-hand with increased clarity in our concepts of responsibility, folk psychology, and (machine) consciousness. 1) Since robots will not, in the near future, be responsible agents, avoiding some moral hazards (e.g., that of abdication of responsibility) will require designs that assist in tracing complex lines of responsibility backwards from outcomes, through the robot, and back to the appropriate humans and/or social institutions. 2) An intuitive understanding by human users of the (possibly quite alien) perceptual and cognitive predicament of robots will be essential to improving cooperation with them, as well as assisting diagnosis, robot training, and the design process itself. Synthetic phenomenology is the attempt to combine robot designs with assistive technologies such as virtual reality to make the experience-like states of cognitive robots understandable to users. 3) Making robot minds more like our own would be facilitated by finding designs that make robots susceptible to the same (mis-)conceptions concerning perception, experience and consciousness that humans have. Making a conscious-like robot will thus involve making robots that find it natural to believe that their inner states are private and non-material. In all three cases, improving robot-human interaction will be as much about an increased understanding of human responsibility, folk psychology and consciousness as it will be about technological breakthroughs in robot hardware and architecture.

Robot crime?

Yesterday I was interviewed by Radio Sputnik to comment on some recent claims about robot/AI crime. They have made a transcription and recording of the interview available here.

Some highlights:

“We need to be worried about criminals using AI in three different ways. One is to evade detection: if one has some artificial intelligence technology, one might be able, for instance, to engage in certain kinds of financial crimes in a way that can be randomized in a particular way that avoids standard methods of crime detection. Or criminals could use computer programs to notice patterns in security systems that a human couldn’t notice, and find weaknesses that a human would find very hard to identify… And then finally a more common use might be of AI to just crack passwords and codes, and access accounts and data that people previously could leave secure. So these are just three examples of how AI would be a serious threat to security of people in general if it were in the hands of the wrong people.”

“I think it would be a tragedy if we let fear of remote possibilities of AI systems committing crimes, if that fear stopped us from investigating artificial intelligence as a positive technology that might help us solve some of the problems our world is facing now. I’m an optimist in that I think that AI as a technology can very well be used for good, and if we’re careful, can be of much more benefit than disadvantage.”

“I think that as long as legislators and law enforcement agencies understand what the possibilities are, and understand that the threat is humans committing crimes with AI rather than robots committing crimes, then I think we can head off any potential worries with the appropriate kinds of regulations and updating of our laws.”