There is a saying in improvisation that there are no mistakes. We can take it as an acceptable starting point, because it’s clear that there’s no saying what the rules might be, or how they might change.

This tends to get a lot of traction in improvisation circles, especially when it addresses the rather dangerously prescriptive traditions of classical music, and to some extent those of jazz (which may well be at its most doctrinaire when it is at its least pre-determined).

There is the matter, though, of playing as though you are not listening. This may in fact be a mistake.

It is not a mistake because of what gets played under these conditions, though it does have a particular sound. It has more to do with atmosphere: even when music is at its most traditional, its most familiar, its least mindful, an audience doesn’t just hear what’s played, it hears what could be played, what is probably-to-be-expected… that is, the hearers hear you listening forward, and the composer listening forward, each making choices in a field of possibilities.

So how would you make a machine seem to listen? How would you get it to codify these parameters of calculation and search, and desire for resonance?

There is a problem with having audio in and audio out in the same room.

As we know, the amplification takes from the microphone, the microphone takes from the amplification, and the world collapses. I recently read about what exactly makes that ‘feedback’ sound: the Larsen effect, whereby any frequency at which the gain around the microphone-to-speaker loop exceeds one reinforces itself on every pass, growing until the amplifier saturates. It is a bad sound.
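
A minimal sketch of that runaway, in Python, with every number invented for illustration:

```python
# One spectral component passing repeatedly around the
# mic -> amp -> speaker -> mic loop.

def round_trips(initial_amplitude, loop_gain, trips, clip=1.0):
    """Amplitude of the component after each pass; the amp saturates at `clip`."""
    amp = initial_amplitude
    history = []
    for _ in range(trips):
        amp = min(amp * loop_gain, clip)
        history.append(round(amp, 4))
    return history

# A component whose loop gain exceeds 1 grows until it clips (the howl);
# one whose gain stays below 1 dies back into the room.
print(round_trips(0.001, loop_gain=1.6, trips=20))
print(round_trips(0.001, loop_gain=0.8, trips=20))
```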

And of course if you’re devising an environment-aware music system, there’s the problem that the machine will play along with itself, creating its own environment. It’s like the opposite of garbage in the system: there’s no difference between the garbage and the food. The recent past becomes the near future becomes the recent past, and they have to be different in order to move… forward.
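
If the system knows exactly what it just played, one hedge is to subtract its own recent output from the incoming signal before analysing it, on the model of acoustic echo cancellation. A rough sketch, assuming a single fixed acoustic path with a known delay and gain (a real echo canceller adapts this filter instead):

```python
import numpy as np

def remove_own_voice(mic_input, own_output, delay_samples, gain):
    """Subtract a delayed, scaled copy of the machine's own output from the
    microphone buffer, so the 'listener' hears mostly what it did not play.
    Both buffers have equal length; one fixed acoustic path is assumed."""
    echoed = np.zeros_like(mic_input)
    n = len(mic_input) - delay_samples
    echoed[delay_samples:] = gain * own_output[:n]
    return mic_input - echoed

# Hypothetical usage: the machine played `played`; the room returns it,
# delayed and attenuated, mixed with a human partner's sound.
rng = np.random.default_rng(0)
partner = rng.normal(0.0, 0.1, 1024)
played = np.sin(np.linspace(0, 40 * np.pi, 1024))
room = partner.copy()
room[64:] += 0.5 * played[:1024 - 64]
residue = remove_own_voice(room, played, delay_samples=64, gain=0.5)
print(float(np.abs(residue - partner).max()))  # ~0: its own voice is gone
```

The point is less the arithmetic than the bookkeeping: the garbage and the food become distinguishable only because the system keeps a record of what it contributed.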

And if you’re playing with a person… how is that person understanding his separateness?

The person is not making an echo, and not playing the same ‘thing’ as you, which helps. But he’s not depending only on sound, either. And he’s deciding what features to respond to — not to your sound, necessarily, but to you, through some combination of features and imitations.

Making a machine respond musically to a gesture is not in itself so problematic… but picking out something to play with? In an engaging way? The question is open.

When we listen to each other, we do not necessarily respond to the sound we hear, at least not as a physical sensation. We can store the sound as an idea or shape, remember it, draw from it, vary it, put it on our own terms, and then respond to it (or not). We actually require ourselves to do something different. We separate ourselves in order to establish our own selves.
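
If I were to prototype that separation, one naive move is to store the heard phrase not as audio but as a shape (its interval contour), and to answer with a deliberate variation rather than an echo. A toy sketch; the representation and the variation rule are both invented for illustration:

```python
import random

def contour(pitches):
    """Reduce a heard phrase to its interval shape: the 'idea' that gets stored."""
    return [b - a for a, b in zip(pitches, pitches[1:])]

def respond(pitches):
    """Answer on our own terms: keep the remembered shape, but invert it
    and enter from a displaced pitch, so the reply relates without echoing."""
    shape = contour(pitches)
    start = pitches[0] + random.choice([-7, -5, 5, 7])
    out = [start]
    for step in shape:
        out.append(out[-1] - step)  # inversion: deliberately 'something different'
    return out

heard = [60, 62, 65, 64]  # a phrase, as MIDI note numbers
print(respond(heard))     # the same shape, inverted, somewhere else
```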

And what would you do to have a machine remember itself — its own sound — as different from yours? Well, first it would have to have ‘a sound’. It’s not clear that it does, if it doesn’t physically resonate across space. But of course, synthesis is plenty successful on the large scale, so why exclude it?
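
One candidate, purely as a sketch: give the machine a running fingerprint of its own output, and let ‘not me’ be whatever comes back that the fingerprint doesn’t explain. The feature and the threshold below are invented stand-ins:

```python
import numpy as np

def fingerprint(frame, sample_rate=44100):
    """A crude stand-in for 'a sound': the spectral centroid of one frame."""
    windowed = frame * np.hanning(len(frame))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    total = spectrum.sum()
    return float((freqs * spectrum).sum() / total) if total > 0 else 0.0

class SelfMemory:
    """Keep a short memory of what 'my sound' has been, and judge novelty."""
    def __init__(self, tolerance_hz=300.0, span=50):
        self.own = []  # fingerprints of frames the machine actually played
        self.tolerance_hz = tolerance_hz
        self.span = span

    def i_played(self, frame):
        self.own.append(fingerprint(frame))

    def sounds_like_me(self, frame):
        c = fingerprint(frame)
        return any(abs(c - mine) < self.tolerance_hz
                   for mine in self.own[-self.span:])

# Hypothetical usage:
memory = SelfMemory()
t = np.arange(2048) / 44100.0
memory.i_played(np.sin(2 * np.pi * 440 * t))                # it played an A
print(memory.sounds_like_me(np.sin(2 * np.pi * 440 * t)))   # True
print(memory.sounds_like_me(np.sin(2 * np.pi * 3000 * t)))  # False
```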

In pursuing some machine learning studies of harmony, I am struck that the inputs and the outputs often stand too far from one another.

Especially: it may not always be clear what the input/output relationships are. But, perhaps more importantly, it seems excessively difficult to make the inputs change their relationships to one another.

Parameters that seem to be working independently should be able to suddenly seem related: an increase in harmonic tension might suddenly and palpably correspond to an increase in dynamic or tempo or some other identifiable parameter. The parameters might not carry much meaning on their own (like a single note, which is fairly arbitrary). But the relatedness of the parameters to each other is something that can make a strong musical impression (like a sempre pp in Beethoven, which seems to suppress another compositional development, until it can be suppressed no longer).
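
If I were to prototype that, one crude mechanism is a coupling switch: parameter streams that random-walk independently until, at a chosen moment, one is slaved to a simple function of another, and the relation becomes audible. The names, ranges, and the mapping are all invented for illustration:

```python
import random

def drift(value, step=0.05):
    """Random-walk one normalised parameter, clamped to [0, 1]."""
    return max(0.0, min(1.0, value + random.uniform(-step, step)))

tension, tempo = 0.2, 0.5   # harmonic tension and tempo, both normalised
coupled = False             # the switch: have the two streams become related?

for beat in range(64):
    tension = drift(tension)
    if coupled:
        tempo = 0.3 + 0.7 * tension  # tempo now palpably tracks tension
    else:
        tempo = drift(tempo)         # independent until the switch is thrown
    if beat == 31:
        coupled = True               # the moment the relation appears
    print(f"beat {beat:2d}  tension {tension:.2f}  tempo {tempo:.2f}")
```

The print loop is a stand-in; in practice the two streams would drive synthesis, and the switch would be a musical decision rather than a fixed beat number.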