Self-listening for music generation

Next week Sussex will host the third and last workshop of the AHRC Network “Humanising Algorithmic Listening”.  At the end of the first day a few of us with shared interests will be speaking about our recent small project proposals, with the hope of finding some common ground.  Here’s what I’ll be talking about:

Self-listening for music generation

Although it may seem obvious that in order to create interesting music one must be capable of listening to music as music, the ability to listen is often omitted from the design of musical generative systems.  And for those few systems that can listen, the emphasis is almost exclusively on listening to others, e.g., for the purposes of interactive improvisation.  I’ll describe a project that aims to explore the role that a system’s listening to, and evaluating, its own musical performance (as its own musical performance) can play in musical generative systems.  What kinds of aesthetic and creative possibilities are afforded by such a design?  How does the role of self-listening change at different timescales?  Can self-listening generative systems shed light on neglected aspects of human performance?  A three-component architecture for answering questions such as these will be presented.
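The abstract doesn’t spell out the three components, but to make the idea concrete, here is a minimal sketch of what a self-listening loop could look like.  Everything in it — the generator/listener/evaluator split, the function names, and the toy “penalise drift” aesthetic — is my own illustrative assumption, not the architecture the talk proposes:

```python
import random

def generate(state):
    """Hypothetical generator: produce the next MIDI pitch, biased by current state."""
    return max(0, min(127, state["centre"] + random.randint(-7, 7)))

def listen(history, pitch):
    """Hypothetical listener: the system 'hears' its own output by recording it."""
    history.append(pitch)
    return history

def evaluate(history):
    """Hypothetical evaluator: a toy aesthetic that penalises drifting
    away from the opening pitch."""
    if len(history) < 2:
        return 0.0
    return -abs(history[-1] - history[0])

def perform(steps=16, seed=0):
    """Run the generate -> listen -> evaluate loop, letting the evaluation
    of the system's own output feed back into generation."""
    random.seed(seed)
    state = {"centre": 60}  # start near middle C (assumption)
    history = []
    for _ in range(steps):
        pitch = generate(state)
        listen(history, pitch)
        score = evaluate(history)
        # Self-regulation: when the self-evaluation worsens, nudge
        # generation back toward the opening pitch.
        if score < -5:
            state["centre"] += (history[0] - history[-1]) // 2
    return history

notes = perform()
```

The point of the sketch is only the feedback topology: the evaluator judges the system’s own output *as its own output*, and that judgement shapes subsequent generation — something a generate-only pipeline, or one that only listens to other players, never does.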

The talk immediately before mine, Nicholas Ward & Tom Davis’s “A sense of being ‘listened to’”, focusses on an aspect of performance that my thinking on these issues has neglected.  Specifically, the role that X’s perception of Y’s responses to X’s output can/should play in regulating X’s performance, both in real time and over longer timescales.  An important component of X perceiving Y’s responses as responses to X is X’s determining whether or not, in the case of auditory/musical output, Y is even listening to X in the first place.  When I say component, I mean that only in the most abstract sense — it need not be a separate explicit module or step, independent of processing others’ responses in general.  And many cases of auditory production are ecologically constrained to make a given auditory source salient/dominant, so that questions like “what (auditory) source is that person responding to?” need not be asked.  But the more general point remains: responding to the responses of others should be a key component (even) in a robust self-listening system.
