Robot crime?

Yesterday I was interviewed by Radio Sputnik to comment on some recent claims about robot/AI crime.  They have made a transcription and recording of the interview available here.

Some highlights:

“We need to be worried about criminals using AI in three different ways. One is to evade detection: if one has some artificial intelligence technology, one might be able, for instance, to engage in certain kinds of financial crimes in a way that can be randomized so as to avoid standard methods of crime detection. Or criminals could use computer programs to notice patterns in security systems that a human couldn’t notice, and find weaknesses that a human would find very hard to identify… And then finally a more common use of AI might be simply to crack passwords and codes, and access accounts and data that people previously could leave secure. So these are just three examples of how AI would be a serious threat to the security of people in general if it were in the hands of the wrong people.”

“I think it would be a tragedy if we let fear of remote possibilities of AI systems committing crimes, if that fear stopped us from investigating artificial intelligence as a positive technology that might help us solve some of the problems our world is facing now. I’m an optimist in that I think that AI as a technology can very well be used for good, and if we’re careful, can be of much more benefit than disadvantage.”

“I think that as long as legislators and law enforcement agencies understand what the possibilities are, and understand that the threat is humans committing crimes with AI rather than robots committing crimes, then I think we can head off any potential worries with the appropriate kinds of regulations and updating of our laws.”

Machine consciousness: Moving beyond “Is it possible?”

The next E-Intentionality seminar will be held Monday, June 20th from 13:00 to 14:50 in Fulton 102.  Ron Chrisley will speak on “Machine consciousness: Moving beyond ‘Is it possible?’” as a dry run of his talk at the “Mind, Selves & Technology” workshop later that week in Lisbon:

Philosophical contributions to the field of machine consciousness have been preoccupied with questions such as: Could a machine be conscious? Could a computer be conscious solely by virtue of running the right program?  How would we know if we achieved machine consciousness? etc.  I propose that this preoccupation constitutes a dereliction of philosophical duty. Philosophers do better at helping solve conceptual problems in machine consciousness (and do better at exploiting insights from machine consciousness to help solve conceptual problems in consciousness studies in general) once they replace those general questions, as fascinating as they are, with ones that a) reflect a broader understanding of what machine consciousness is or could be; and b) are better grounded in empirical machine consciousness research.

The Embodied Nature of Computation


The next E-Intentionality seminar will be held Wednesday, June 8th from 13:00 to 14:50 in Pevensey 1 1A3.  Ron Chrisley will speak on “The Embodied Nature of Computation” as a dry run of his talk at a symposium (“Embodied Cognition: Constructivist and Computationalist Perspectives”) at IACAP 2016 next week:


Although embodiment-based critiques of computation’s role in explaining mind have at times been overstated, there are important lessons from embodiment which computationalists would do well to learn. For example, orthodox schemes for individuating computations are individualist, atemporal, and anti-semantical (formal), but considering the role of the body in cognition suggests by analogy that — even to explain extant information-processing systems outside cognitive science and artificial intelligence contexts — computations should instead be characterised in terms that are world-involving, dynamical and intentional/meaningful. Further, the counterfactual-involving nature of computational state individuation implies that sameness of computation is not in general preserved when one substitutes a non-living computational component with a living, autonomous, free organism that merely intends to realise the same functional profile as the component being replaced. Thus, contra computational orthodoxy, there is no sharp divide between the computational facts and what is usually thought of as the implementational facts, even for unambiguously computational systems. The implications of this point for some famous disputes concerning group minds and strong AI will be identified.

Image from digitalmediatheory.files.wordpress.com

The physical mandate for folk psychology

The next E-Intentionality seminar will be held Friday, April 29th from 12:00 to 12:50 in Pevensey 1 1B8 (please note change of venue).  Simon McGregor will speak on “The physical mandate for folk psychology”; abstract:

I describe a heuristic argument for understanding certain physical systems in terms of properties that resemble the beliefs and desires of folk psychology. The core of the argument is that predictions about certain events can legitimately be based on assumptions about later events, resembling Aristotelian ‘final causation’; however, more nuanced causal entities (resembling internally supervenient beliefs) must be introduced into these types of explanation in order for them to remain consistent with a causally local universe.

Glocalism: Think Global, Act Local

The next E-Intentionality seminar will be held on April 22nd from 12:00 to 12:50 in Bramber House BH-253 (please note change of venue).  Simon Bowes will speak on “Glocalism: Think Global, Act Local”; abstract:

This talk will be about the much-discussed tension between local and global properties of mental states. In particular, it will investigate whether I can have my argumentative cake and eat it too: relying on local properties to solve the new riddle of induction, while appealing to global properties in arguing against reductionism in the mental causation debate.


Competition for a complete study in city planning for a fictive American city of 500,000 inhabitants, organised by the NCCP in spring 1913.  Entry no. 7 (F.A. Bourne, A.C. Comey, B.A. Haldeman and J. Nolan), in “Proceedings of the Fifth National Conference on City Planning. Chicago, Illinois, May 5-7, 1913” (Boston, MA, 1913), 212.

Audio (.mp3, 15MB)

Radical Sensorimotor Enactivism: A Rapprochement of Cognitive and E-Approaches to Conscious Perception (via Predictive Processing)

The next E-Intentionality meeting will be April 15th in Pevensey 2A2 from 12:00-12:50. The speaker will be Adrian Downey, on the topic: “Radical Sensorimotor Enactivism: A Rapprochement of Cognitive and E-Approaches to Conscious Perception (via Predictive Processing)”. Abstract:

Where conscious perception is concerned, enactive and ecological approaches (E-approaches) are considered to be dichotomous with cognitivism. I argue that my own theory of conscious perception, which I label Radical Sensorimotor Enactivism (RSE), has the conceptual and empirical resources to combine these traditionally opposed views into a unified framework for the study of conscious perception. In this paper I explain how, and why, the cognitivist theory of Predictive Processing (PP) plays an essential role in this unification. Although PP is often taken to provide an overall conceptual framework for the study of mind, I argue that RSE (not PP) provides such a framework, whilst noting that PP forms an important sub-set of the RSE approach.

RSE is an anti-representational version of the sensorimotor enactive theory of perceptual consciousness. Sensorimotor enactivism takes organisms to come into direct perceptual contact with the environment when they possess sensorimotor knowledge. Organisms become conscious of these perceptual states when they attend to them, because attention is taken to be both necessary and sufficient for consciousness. I argue that attention should be construed adverbially [Mole, 2011]. Adverbial theories of attention are generally thought to cohere with empirical theories of attention known as ‘biased competition’. I explain that RSE fits best with the ‘biased affordance competition’ framework [Anderson, 2014] because this framework (unlike ‘biased competition’) does not require representation. Having provided a conceptual clarification of attention which does not require representation, we thus arrive at RSE.

PP explains perception as constituted by expectancies as to how sensory stimulation will be modified with movement, and these expectancies are taken to be brain-based. It thus matches exactly the description of sensorimotor knowledge, and so should be taken to provide an operationalisation of it [Seth, 2014]. Furthermore, PP can be used to explain attention (and so, consciousness) because PP is compatible with Anderson’s framework [Clark, 2015], and it is mathematically compatible with competition theories [Spratling, 2008]. Therefore, PP can be used to study and explain the brain’s role in conscious perception within the RSE framework. PP does not, however, on its own explain consciousness, because RSE takes brain, body, and environment to be constitutive of conscious perception.

The biggest benefit of endorsing RSE is that it combines the key aspects of both ecological and cognitivist approaches to conscious perception. E-approaches are generally considered to provide good phenomenological accounts of conscious perception which respect the fact that conscious organisms are embodied and embedded in a world. They are, however, thought to provide only descriptive (as opposed to mechanistic) explanations of consciousness, and ones which ignore the undoubtedly key role played by the brain. Cognitivist theories are thought to provide such explanations, but they do so at the expense of phenomenological adequacy, and they largely ignore embodiment and embeddedness. In combining both approaches, RSE draws on the strengths of each, and so provides a promising overall conceptual framework for the study of conscious perception.

Audio (.mp3, 6.6MB, discussion only)

Upcoming dates:

  • April 15: Pevensey 2A2 (Adrian Downey)
  • April 29: Pevensey 2A2 (Simon Bowes TBC)

A role for introspection in developing Anthropic AI


The next E-Intentionality meeting will be April 8th (not April 1st – April Fool!), in Fulton 112 from 12:00-12:50.  The speaker will be Sam Freed, on the topic: “A role for introspection in developing Anthropic AI”.  Abstract:

AI as a technology is distinct from cognitive science in terms of methodology and requirements. Human-like AI is distinct from idealised/rational AI. Anthropic AI is defined as the part of human-like AI that deals with pre-cultural intelligence. Subjectivity is discussed as an intuitive gateway to building such AI. Introspection is defended from Watson and Simon’s attacks, and shown to be in widespread and reliable use in all human cultures. This is tied back to pragmatic AI development.

Audio (.mp3, 11MB)

Upcoming dates:

  • April 01: No meeting (Easter break)
  • April 08: Fulton 112 (Sam Freed)
  • April 15: Pevensey 2A2 (Adrian Downey)
  • April 29: Pevensey 2A2 (Simon Bowes TBC)