Robot Opera Mini-Symposium Video


Last June I participated in the Robot Opera Mini-Symposium organised by the Centre for Research in Opera and Music Theatre (CROMT) at Sussex.  A video of all the talks, and of the robot opera performances themselves, is available below.  My 17-minute talk begins at 08:40.

Machine Messiah: Lessons for AI in “Destination: Void”

Tomorrow is the first day of a two-day conference to be held at Jesus College, Cambridge on the topic: “Who’s afraid of the Super-Machine?  AI in Sci-Fi Film and Literature” (https://science-human.org/upcoming/), hosted by the Science & Human Dimension division of the AI & The Future of Humanity Project.

cropped-forbidden-planet-shdp-conference

I’m speaking on Friday:

Machine Messiah: Lessons for AI in Destination: Void

In Destination: Void (1965), Frank Herbert anticipates many current and future ethical, social and philosophical issues arising from humanity’s ambitions to create artificial consciousness.  Unlike his well-known Dune milieu, which explicitly sidesteps such questions by positing a universally respected taboo against artificial intelligence, Destination: Void features moon-based scientist protagonists who explicitly aim to create artificial consciousness, despite previous disastrous attempts.  A key aspect of their strategy is to relinquish direct control of the process of creation, instead designing combinations of resources (a cloned spaceship crew of scientists, engineers, psychiatrists and chaplains with interlocking personality types) and catalytic situations (a colonising space mission that is, unknown to the clone crew, doomed, with scheduled crises) that the moon-based scientists hope will impel the crew members to bring about, if not explicitly design, an artificial consciousness based in the ship’s computer.  As with Herbert’s other works, there is a strong emphasis on the messianic and the divine, but here it is in the context of a superhuman machine, and the ethics of building one.  I will aim to extract from Herbert’s incredibly prescient story several lessons, ranging from the practical to the theological, concerning artificial consciousness, including: the engineering of emergence and conceptual change; intelligent design and “Adam as the first AI”; the naturalisation of spiritual discourse; and the doctrine of the Imago Dei as a theological injunction to engage in artificial consciousness research.

Revisionism about Qualia: Prospects for a Reconciliation Between Physicalism and Qualia

On January 30th I’ll be presenting joint work with Aaron Sloman (“Revisionism about Qualia: Prospects for a Reconciliation Between Physicalism and Qualia”) at a conference in Antwerp on Dan Dennett’s work in philosophy of mind (sponsored by the Centre for Philosophical Psychology and the European Network for Sensory Research).  Both Aaron and Dan will be in attendance.  I don’t have an abstract of our talk, but it will be based on a slimmed-down version of our 2016 paper (with some additions, hopefully taking into account some recent developments in Dan’s position on qualia).

The official deadline for registration has passed, but if you are interested in attending perhaps Bence Nanay, the organiser, can still accommodate you?  Below please find the list of speakers and original calls for registration and papers.

Centre for Philosophical Psychology and European Network for Sensory Research

Call for registration 

Conference with Daniel Dennett on his work in philosophy of mind. January 30, 2018. 

Speakers:

  • Daniel Dennett (Tufts)
  • Elliot Carter (Toronto)
  • Ron Chrisley and Aaron Sloman (Sussex)
  • Krzysztof Dolega (Bochum)
  • Markus Eronen (Leuven)
  • Csaba Pleh (CEU)
  • Anna Strasser (Berlin)

This conference accompanies Dennett’s delivery of the 7th Annual Marc Jeannerod Lecture (attendance at this public lecture is free). 

Registration (for the conference, not the public lecture): 100 Euros (including conference dinner – negotiable if you don’t want the conference dinner). Send an email to Nicolas Alzetta (nalzetta@yahoo.com) to register. Please register by December 21. 

Workshop with Daniel Dennett, January 30, 2018

Call for papers!

Daniel Dennett will give the Seventh Annual Marc Jeannerod Lecture (on empirically grounded philosophy of mind) in January 2018. To accompany this lecture, the University of Antwerp organizes a workshop on  Dennett’s philosophy of mind on January 30, 2018, where he will be present.

There are no parallel sessions. Only blinded submissions are accepted.

Length: 3000 words. Single spaced!

Deadline: October 15, 2017. Papers should be sent to nanay@berkeley.edu

Machine consciousness at the Brighton Digital Festival


Next Tuesday I’ll be giving a brief talk on machine consciousness prior to a screening of the film Ex Machina, as part of the Brighton Digital Festival. Sackler colleagues Keisuke Suzuki and David Schwartzman will be giving consciousness-illuminating VR demos involving our Nao robots as well. The event is being organised in conjunction with the British Science Association.  More info at http://theoldmarket.com/shows/toms-film-club-ex_machina-2015/


Update, 4 October 2017:

Here are some photos of the event, courtesy of Amber John.  As you can see, the title I settled on was “Turing Tests and Machine Consciousness”.


Self-listening for music generation

Next week Sussex will host the third and last workshop of the AHRC Network “Humanising Algorithmic Listening”.  At the end of the first day a few of us with some common interests will be speaking about our recent small project proposals, with the hope of finding some common ground.  Here’s what I’ll be talking about:

Self-listening for music generation

Although it may seem obvious that in order to create interesting music one must be capable of listening to music as music, the ability to listen is often omitted in the design of musical generative systems.  And for those few systems that can listen, the emphasis is almost exclusively on listening to others, e.g., for the purposes of interactive improvisation.  I’ll describe a project that aims to explore the role that a system’s listening to, and evaluating, that system’s own musical performance (as its own musical performance) can play in musical generative systems.  What kinds of aesthetic and creative possibilities are afforded by such a design? How does the role of self-listening change at different timescales? Can self-listening generative systems shed light on neglected aspects of human performance?  A three-component architecture for answering questions such as these will be presented.
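For concreteness, here is a minimal, purely illustrative sketch of the kind of closed loop I have in mind: a generator whose output is fed back through the system’s own “listening” and evaluation, with the evaluation biasing what gets generated next.  The component names used here (generate, listen, evaluate) are just placeholders for this post, not the three-component architecture the talk will present.

```python
import random

def generate(state):
    """Produce the next musical event, biased by the last self-evaluation."""
    centre = 60 + int(state.get("last_reward", 0))
    return {"pitch": random.randint(centre - 7, centre + 7),
            "duration": random.choice([0.25, 0.5, 1.0])}

def listen(event, history):
    """Re-perceive the system's own output as music, e.g. as a melodic interval."""
    previous = history[-1]["pitch"] if history else event["pitch"]
    return {"interval": event["pitch"] - previous}

def evaluate(percept):
    """Score the self-heard output; here, crudely prefer small melodic steps."""
    return -abs(percept["interval"])

history, state = [], {}
for _ in range(16):
    event = generate(state)
    percept = listen(event, history)           # the system listens to itself...
    state["last_reward"] = evaluate(percept)   # ...and its self-evaluation shapes what comes next
    history.append(event)

print(history)
```

Even in a toy like this, the interesting design questions show up: over what timescale should the self-evaluation accumulate, and what would it take for the system to hear its output as its own performance rather than as arbitrary input?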

The talk immediately before mine, Nicholas Ward & Tom Davis’s “A sense of being ‘listened to’”, focusses on an aspect of performance that my thinking on these issues has neglected: specifically, the role that X’s perception of Y’s responses to X’s output can/should play in regulating X’s performance, both in real time and over longer timescales.  An important component of X perceiving Y’s responses as responses to X is X’s determining whether or not, in the case of auditory/musical output, Y is even listening to X in the first place.  When I say component, I only mean that in the most abstract sense — it need not be a separate explicit module or step, independent of the processing of others’ responses in general.  And many cases of auditory production are ecologically constrained to make a given auditory source salient/dominant, so that questions like “what (auditory) source is that person responding to?” need not be asked.  But the more general point remains: that responding to the responses of others should be a key component (even) in a robust self-listening system.

Negotiating Computation & Enaction: Rules of Engagement

In July, PAICSers Adrian Downey and Jonny Lee (with Joe Dewhurst) organised an international conference at Sussex entitled “Computation & Representation in Cognitive Science: Enactivism, Ecological Psychology & Cybernetics”.  It was an excellent meeting, with boundary-pushing talks from:

  • Anthony Chemero (University of Cincinnati)
  • Ron Chrisley (University of Sussex)
  • Sabrina Golonka (Leeds Beckett University)
  • Alistair Isaac (University of Edinburgh)
  • Adam Linson (University of Dundee)
  • Marcin Miłkowski (Polish Academy of Sciences)
  • Nico Orlandi (UC Santa Cruz)
  • Mario Villalobos (Universidad de Tarapaca)

I never wrote an abstract for my talk, so below I include the handout instead.  But it probably only makes sense for those who heard the talk (and even then…).

Negotiating Computation & Enaction: Rules of Engagement

Ron Chrisley, University of Sussex, July 10th 2017

Disputes concerning computation and enaction typically centre on (rejection of) computationalism with respect to cognition:  the claim that cognition is computation.

I: Rules of engagement

I propose the following “rules of engagement” (numbered items, below) as a way to prevent talking past one another, arguing against straw men, etc. and instead make progress on the issues that matter. No doubt they fail to be theory neutral, strictly speaking.  And there are of course other principles that are at least as important to adhere to.  But it’s a start.

The proposals can be seen as specific instances of the general admonition: clarify the computationalist claim at issue (cf. the end of Grush’s review of Ramsey’s Representation Reconsidered).

But even more so: there are many different versions of the claim that “cognition is computation”, depending on the choices made for the following variables:

Computationalism schema: {Some|all} cognition {in humans|in others} is {actually|necessarily|possibly} {wholly|partly} {computation|best explained in terms of computation}

So we need to clarify A) the relation between them, as well as B) cognition and C) computation themselves

A: Relation?

The instantiation of this that I consider most interesting/likely to be true is given below, but for now, start with this:

  1. Computationalism is best understood as an epistemological claim

That is, I plump for the “is best explained in terms of” rather than the simple “is” version of the schema above, as it is the version with the most direct methodological, experimental, even theoretical import (we’ll see the upshot of this later).

B: Cognition?

Given my background in AI and Philosophy (rather than Psychology or Neuroscience), I am interested in cognition in its most general sense: cognition as it could be, not (just) cognition as it is.  Thus:

  2. Avoid inferences such as: “X is involved in (all cases of) human cognition, therefore X is essential to cognition”

Compare flight and feathers.

An interesting case is when computation is not the best account for some particular kind of cognition, yet only it can account for that and some other kind of cognition.

C: Computation?

  3. We should demand no higher rigour/objectivity for computational concepts than we do for other, accepted scientific concepts
  4. Avoid the reductionism Catch-22 (mainly for representation)

That is, some authors seem to insist both that:

  • A notion of representation must be reducible to something else (preferably non-intentional) to be naturalistically acceptable
  • Any notion of representation that is reducible to something else can be replaced by that something else, and is therefore surplus to requirements.
  5. Be aware that there are distinct construals (inter-theoretic) and varieties (intra-theoretic) of computation
  • Construals: computation as running some program, operation of a Turing machine, mechanistic account (Milkowski, Piccinini), semantic account, etc.
  • Varieties: digital, analogue, quantum, connectionist, etc.

E.g., if you are arguing against a view in which either everything is computational, or there is only one kind of computation, you are unlikely to persuade a nuanced computationalist.

  6. There are computers, and computers are best explained computationally

Does a given account have the implication that computers are not computational? Or that there are no computers?  Prima facie, these should count as points against that account.

And even so, what is to stop someone from claiming that whatever concepts provide the best account of the things we intuitively (but incorrectly?) call computers also play a role in the best account of mind?  Cf. transparent computationalism (Chrisley)

On the other hand, do not be so inclusive as to trivialise the notion of computation: pan-computationalism?

  • Actually that’s not the problem with pan-computationalism
  • The real problem is that it has difficulty explaining what is specifically computational about computers (beyond universality)

Computationalism (C): the best explanation (of at least some cases) of cognition will involve (among others) computational concepts (that is, concepts that play a key role in our best explanation of (some) computers, qua computers).

  7. So even if computation is only necessary for the explanation of some kinds of cognition, C is still vindicated.

II: Examples

Consider two kinds of arguments against computationalism: those that rely on enactivism, and those that do not (but which some enactivists rely on).

These summaries are probably unfair, and likely violate corresponding “rules of engagement” concerning enactivism, etc.

Enactivist arguments

  • Enactivism 1:  Self-maintenance
    • E: The operation of a computer/robot, no matter how behaviourally/functionally similar to a human, would not be sufficient for cognition, because it is not alive/self-maintenant
      • E has the same skeptical problems as Zombie/Swampman positions
      • Note: similarity to humans is misleading – it may be that, given its materiality, a computer would have to be very behaviourally/functionally different from a human in order to cognise (2)
    • Why believe E? Because:
      • a) meaning for computers is not intrinsic; and
      • b) possession of intrinsic meaning is necessary for (explaining?) cognition
    • Why believe b)?
      • Much of our meaning is imposed from outside/others?
      • Even if one accepts b), it only follows that computation can’t explain *all* cognition? (7)
      • Even if human cognition has an “intrinsic meaning” core, does that rule out the possibility of cognition that does not? (2)
    • Why believe a)?
      • Reason 1: Because any meaning in computers is imposed from the outside
        • But why should that preclude the system having, partially by virtue of its computational properties, (either distinct or coinciding) intrinsic meaning in addition? (5)
        • Might living systems be examples of such?
      • Reason 2: Because:
        • c) intrinsic meaning (only) comes from being alive; and
        • d) computers are not alive
    • Why believe d)? (given behavioural/functional identity with living systems)
      • Because computers are made of non-living material: they don’t, e.g., metabolise
        • By definition?  Could they? (5)
        • But so are cells: the parts of cells don’t metabolise
      • Because computers are not hierarchies of living systems
        • So they have meaning, just not human meaning? (2, 7)
        • What if we hierarchically arrange them?  Why would their computational nature cease to be explanatorily relevant?
    • Why believe c)?
      • Enactivist meaning is imposed by the theorist’s valuing of self-maintenance (3)
    • In any event: E is not enough to defeat C — need to show computation is not necessary
  • Enactivism 2a: Basic Minds (Hutto and Myin)
    • Computation involves representation
      • Contentious (e.g., mechanistic account) (5)
    • Representation requires content
    • There is a level of basic mind that does not involve content
    • Therefore computationalism is false
      • At most this only shows that there must be a non-computational explanation of the most basic forms of cognition (7)
      • But actually one can have a non-contentful, yet intentional, notion of computation:  robust correspondence (5)
  • Enactivism 2b: Content (Hutto and Myin)
    • Computation involves representation
      • As above
    • Representation requires content
    • There is a “hard problem of content” (HPC):  no naturalistically acceptable theory of sub-personal content
    • Therefore computationalism is false
      • Impatience: even if true, the lack of a theory is not decisive (3)
      • Some argue the “hard” problem was solved long ago (Milkowski)
  • Enactivism(?) 3:  Fictionalism (after Downey, with apologies)
    • Computation involves representation
    • Although representation is useful for explaining cognition, utility doesn’t imply metaphysical truth
    • Further, considerations like the HPC argue against representation, and therefore against computation
    • So computationalism is (metaphysically) false
      • Relies on the Enactivism 2b argument – see the rejoinders above
      • Only tells against computationalism construed as a metaphysical claim — not a problem for C
      • Yet C, being epistemic/pragmatic, is the one that matters (1)

Non-enactivist arguments against computationalism

(to which enactivists sometimes appeal):

  • Chinese room (Searle)
    • Many objections
    • Against Strong AI (metaphysics), not against C (1)
    • Against sufficiency, not against C (7)
    • Enactivist irony: emphasising the fundamental differences between living/conscious systems and those that are not (such as Macs and PCs) allows one to question the idea that a human (Searle) can perform the same computations as those machines
      • Proposal: Human has different, intentionality-sensitive counterfactuals that “dead” silicon does not (5)
      • Upshot: Non-living nature of machines is a feature, not a bug — immunizes machine computation to Chinese room critique
  • Diagonalisation (e.g., Penrose)
    • G: Human mathematicians are not computing a (knowably sound) Turing-computable function when ascertaining the truth of certain mathematical propositions
    • Many objections
    • But even if the argument works, it does not impact on C, since X does not need to compute the same functions as Y for X to explain Y
    • That is, C is epistemic, not metaphysical (1)
  • Non-objectivity of computation (Putnam, Searle)
    • Anything can be seen as implementing any Turing machine
    • On some accounts, not all TM instantiations are computers (need intentionality) (5)
    • But the argument fails, even for TMs:  counterfactuals
    • More recently, some (Maudlin, Bishop) have argued that to be explanatory, computational states can only supervene on occurrent physical states, not counterfactual ones.
      • But some occurrent states are individuated by their counterfactual properties
      • Counterfactual properties supervene on occurrent states
      • Also: seems to imply computational concepts are not suitable for explaining computers (6)
  • Phenomenology (Dreyfus)
    • E.g., experts don’t engage in search algorithms when, e.g., playing chess – they just see the right moves directly.
    • Makes unfounded assumptions about what it feels like to be this or that kind of physical (computational) system
    • E.g., a (sub-personal) computation that involves millions of steps may realise an experience with no such complex structure, even skilled coping
    • But even if Dreyfus is right, this does not refute C (7)

The limits of the limits of computation

I’m very pleased to have been invited to participate in an exciting international workshop being held at Sussex later this month.


My very brief contribution has the title: “The limits of the limits of computation”.  Here’s the abstract:

The limits of the limits of computation

The most salient results concerning the limits of computation have proceeded by establishing the limits of formal systems. These findings are less damning than they may appear.  First, they only tell against real-world (or “physical”) computation (i.e., what it is that my laptop does that makes it so useful) to the extent that real-world computation is best viewed as being formal, yet (at least some forms of) real-world computation are as much embodied, world-involving, dynamics-exploiting phenomena as recent cognitive science takes mind to be.  Second, the incomputability results state that formal methods are in some sense outstripped by extra-formal reality, while themselves being formal methods attempting to capture an extra-formal reality (real-world computation) — an ironic pronouncement that would make Epimenides blush.  One ignores these limits on the incomputability results at one’s peril; a good example is the diagonal argument against artificial intelligence.
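For readers who want a reminder of the kind of formal result the abstract gestures at, the canonical example is the diagonal construction behind the halting problem.  The sketch below is deliberately non-executable Python: halts is a hypothetical oracle, and the whole point is that no real program can play its role.

```python
def halts(program, argument):
    """Hypothetical oracle: True iff program(argument) eventually halts.
    No such total, computable function exists; it stands in for the
    formal system whose limits are at issue."""
    raise NotImplementedError

def diagonal(program):
    """Do the opposite of whatever the oracle predicts program does on itself."""
    if halts(program, program):
        while True:
            pass   # loop forever
    return         # otherwise, halt immediately

# Feeding diagonal to itself yields a contradiction either way:
# if halts(diagonal, diagonal) were True, diagonal(diagonal) would loop forever;
# if it were False, diagonal(diagonal) would halt.  So halts cannot exist.
# That is a limit established for formal systems, not (directly) for the
# embodied, world-involving, real-world computation contrasted with them above.
```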


What philosophy can offer AI


My piece on “What philosophy can offer AI” is now up at AI firm LoopMe’s blog. This is part of the run-up to my speaking at their event, “Artificial Intelligence: The Future of Us”, to be held at the British Museum next month.  Here’s what I wrote (the final gag is shamelessly stolen from Peter Sagal of NPR’s “Wait Wait… Don’t Tell Me!”):

Despite what you may have heard, philosophy at its best consists in rigorous thinking about important issues, and careful examination of the concepts we use to think about those issues.  Sometimes this analysis proceeds by considering potential exotic instances of an otherwise everyday concept, and asking whether the concept does indeed apply to that novel case — and if so, how.

In this respect, artificial intelligence (AI), of the actual or sci-fi/thought experiment variety, has given philosophers a lot to chew on, providing a wide range of detailed, fascinating instances to challenge some of our most dearly-held concepts:  not just “intelligence”, “mind”, and “knowledge”, but also “responsibility”, “emotion”, “consciousness”, and, ultimately, “human”.

But it’s a two-way street: Philosophy has a lot to offer AI too.

Examining these concepts allows the philosopher to notice inconsistency, inadequacy or incoherence in our thinking about mind, and the undesirable effects this can have on AI design.  Once the conceptual malady is diagnosed, the philosopher and AI designer can work together (they are sometimes the same person) to recommend revisions to our thinking and designs that remove the conceptual roadblocks to better performance.

This symbiosis is most clearly observed in the case of artificial general intelligence (AGI), the attempt to produce an artificial agent that is, like humans, capable of behaving intelligently in an unbounded number of domains and contexts.

The clearest example of the requirement of philosophical expertise when doing AGI concerns machine consciousness and machine ethics: at what point does an AGI’s claim to mentality become real enough that we incur moral obligations toward it?  Is it at the same time as, or before, it reaches the point at which we would say it is conscious?  Or when it has moral obligations of its own? And is it moral for us to get to the point where we have moral obligations to machines?  Should that even be AI’s goal?

These are important questions, and it is good that they are being discussed more even though the possibilities they consider aren’t really on the horizon.  

Less well-known is that philosophical sub-disciplines other than ethics have been, and will continue to be, crucial to progress in AGI.  

It’s not just the philosophers that say so; quantum computation pioneer and Oxford physicist David Deutsch agrees: “The whole problem of developing AGIs is a matter of philosophy, not computer science or neurophysiology”.  That “not” might overstate things a bit (I would soften it to “not only”), but it’s clear that Deutsch’s vision of philosophy’s role in AI will not be limited to being a kind of ethics panel that assesses the “real work” done by others.

What’s more, philosophy’s relevance doesn’t just kick in once one starts working on AGI — which substantially increases its market share.  It’s an understatement to say that AGI is a subset of AI in general.  Nearly all of the AI that is at work now providing relevant search results, classifying images, driving cars, and so on is not domain-independent AGI – it is technological, practical AI that exploits the particularities of its domain and relies on human support to compensate for its lack of autonomy in producing a working system. But philosophical expertise can be of use even to this more practical, less Hollywood, kind of AI design.

The clearest point of connection is machine ethics.  

But here the questions are not the hypothetical ones about whether a (far-future) AI has moral obligations to us, or we to it.  Rather the questions will be more like this: 

– How should we trace our ethical obligations to each other when the causal link between us and some undesirable outcome for another is mediated by a highly complex information process that involves machine learning and apparently autonomous decision-making?  

– Do our previous ethical intuitions about, e.g., product liability apply without modification, or do we need some new concepts to handle these novel levels of complexity and (at least apparent) technological autonomy?

As with AGI, the connection between philosophy and technological, practical AI is not limited to ethics.  For example, different philosophical conceptions of what it is to be intelligent suggest different kinds of designs for driverless cars.  Is intelligence a disembodied ability to process symbols?  Is it merely an ability to behave appropriately?  Or is it, at least in part, a skill or capacity to anticipate how one’s embodied sensations will be transformed by the actions one takes?  

Contemporary, sometimes technical, philosophical theories of cognition are a good place to start when considering what way of conceptualising the problem and solution will be best for a given AI system, especially in the case of design that has to be truly ground breaking to be competitive.

Of course, it’s not all sweetness and light. It is true that there has been some philosophical work that has obfuscated the issues around AI, thereby unnecessarily hindering progress. So, to my recommendation that philosophy play a key role in artificial intelligence, terms and conditions apply.  But don’t they always?

Russell, Russell: A Metaphysics emerges from the undergrowth

The final E-Intentionality seminar of 2016 will be led by Simon Bowes this Thursday, December 15th at 13:00 in Freeman G22.

Russell, Russell: A Metaphysics emerges from the undergrowth

I will be examining recent arguments reviving Russellian monism, so-called neo-Russellian physicalism.  I will be asking whether it is viable both as a kind of physicalism and as a way of accounting for experiential properties in a material world.

The existence of qualia does not entail dualism

Our next E-Intentionality seminar is this Thursday, December 1st, at 13:00 in Freeman G22.  This will be a dry run of a talk I’ll be giving as part of EUCognition2016, entitled “Architectural Requirements for Consciousness”.  You can read the abstract here, along with an extended clarificatory discussion prompted by David Booth’s comments.