I’m writing this from Zürich airport, on my way back to England after an excellent sojourn at the Dharma Sangha Zen Centre (www.dharma-sangha.de) on the German/Swiss frontier. I was there for a cosy meeting of the Society for Mind-Matter Research (www.mindmatter.de) on the topic of embodiment. My talk gave a brief overview of six ways in which my research has investigated the role of embodiment in mind and computation. You can view my slides here: prezi.com/view/TLzIVu5YT
This video interview is a good summary of my take on what we’re trying to do in the European HumanE AI project (humane-ai.eu) – and thus also what can/should be done at Stanford HAI (hai.stanford.edu). Of course I meant to say “overestimate” not “underestimate” near the end!
Next week Sussex will host the third and last workshop of the AHRC Network “Humanising Algorithmic Listening”. At the end of the first day a few of us with some common interests will be speaking about our recent small project proposals, with the hope of finding some common ground. Here’s what I’ll be talking about:
Self-listening for music generation
Although it may seem obvious that in order to create interesting music one must be capable of listening to music as music, the ability to listen is often omitted from the design of musical generative systems. And in the few systems that can listen, the emphasis is almost exclusively on listening to others, e.g., for the purposes of interactive improvisation. I’ll describe a project that aims to explore the role that a system’s listening to, and evaluating, its own musical performance (as its own musical performance) can play in musical generative systems. What kinds of aesthetic and creative possibilities are afforded by such a design? How does the role of self-listening change at different timescales? Can self-listening generative systems shed light on neglected aspects of human performance? A three-component architecture for answering questions such as these will be presented.
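To make the idea of self-listening concrete, here is a minimal illustrative sketch of a generate–listen–evaluate loop. The three components (generator, listener, evaluator) and every function name below are my own hypothetical stand-ins; the abstract does not specify the actual architecture, and a real listener would operate on audio or rich musical features rather than the toy interval measure used here.

```python
import random

def generate(seed_pitches, n=8):
    """Generator (hypothetical): propose a short pitch sequence
    by perturbing the seed material."""
    return [random.choice(seed_pitches) + random.choice([-2, 0, 2])
            for _ in range(n)]

def listen(pitches):
    """Listener (hypothetical): re-perceive the system's own output,
    here crudely reduced to its sequence of melodic intervals."""
    return [b - a for a, b in zip(pitches, pitches[1:])]

def evaluate(intervals):
    """Evaluator (hypothetical): score the heard performance;
    this toy aesthetic prefers stepwise motion."""
    if not intervals:
        return 0.0
    return sum(1 for i in intervals if abs(i) <= 2) / len(intervals)

def self_listening_generate(seed, rounds=20):
    """Loop: generate, listen to one's own output as music,
    and keep the best-rated attempt."""
    best, best_score = None, -1.0
    for _ in range(rounds):
        phrase = generate(seed)
        score = evaluate(listen(phrase))
        if score > best_score:
            best, best_score = phrase, score
    return best, best_score

phrase, score = self_listening_generate([60, 62, 64, 65, 67])
```

The point of the sketch is only the shape of the loop: the evaluator judges what the listener hears, not what the generator intended, which is what distinguishes self-listening from simply filtering generator output by rule.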
The talk immediately before mine, Nicholas Ward & Tom Davis’s “A sense of being ‘listened to’”, focusses on an aspect of performance that my thinking on these issues has neglected. Specifically, the role that X’s perception of Y’s responses to X’s output can/should play in regulating X’s performance, both in real-time and over longer time scales. An important component of X perceiving Y’s responses as responses to X is X’s determining whether or not, in the case of auditory/musical output, Y is even listening to X in the first place. When I say component, I only mean that in the most abstract sense — it need not be a separate explicit module or step, independent of processing others’ responses in general. And many cases of auditory production are ecologically constrained to make a given auditory source salient/dominant, so that questions like “what (auditory) source is that person responding to?” need not be asked. But the more general point remains: that responding to the responses of others should be a key component (even) in a robust self-listening system.