Performance and Improvisation in the Age of Digital Abundance

Today at the Decibel Festival, Robert Henke (aka Monolake) delivered a spectacular talk titled “Composition in the Age of Digital Abundance.”

Henke began by exploring the growth of the compositional state space over time as new technologies — analog hardware, digital hardware, and finally computer software — became available and then affordable. He then gave several deeply insightful suggestions on how composers can take productive advantage of the overwhelming choice they are faced with.[1]

In the final few minutes, Monolake transitioned to talking about how musical performance is affected by our modern era of abundance. This is a subject near and dear to my heart; it was exciting to hear Henke’s thoughts.

Monolake focused on how the traditional notion of the instrument has not yet evolved into the digital realm. From his perspective, there are several stumbling blocks:

  1. Traditional instruments are specific rather than generic; they are designed to manipulate a small set of parameters. Historically, those parameters are pitch along with whatever timbral possibilities the physicality of the instrument affords. The number of parameters available to electronic composers is effectively unbounded, so figuring out which parameters to expose in a physical interface becomes very important (as does the associated task of making it easy to expose them). But even this may not be enough; what may really matter is the percentage of all parameters made physically available to the performer. That percentage has gone down over time, because physical interfaces have not kept up with software’s potential. User-defined physical interfaces on hardware like the iPad may lead the way forward. (A rough sketch of this mapping problem follows the list.)

  2. Traditional instruments cannot be redefined, and they must be practiced. Of necessity, modern interfaces support a generic mapping — but as that mapping can be changed at will, the possibility of mastering any one mapping decreases.

  3. Traditional instruments provide haptic feedback unmatched by modern interfaces.
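
To make the first two stumbling blocks concrete, here is a minimal sketch, in Python, of the kind of mapping layer an electronic performer ends up building. Everything in it is hypothetical: the parameter names, the ranges, and the eight-fader controller are assumptions for illustration only, with the 0–127 value range mirroring what a typical MIDI fader sends.

    # A minimal, hypothetical sketch of the mapping problem from points 1 and 2.
    # All parameter names and ranges are invented for illustration.

    # The software side: an effectively unbounded parameter space.
    # A real synth or live set exposes hundreds of these; a handful stand in here.
    SOFTWARE_PARAMETERS = {
        "filter_cutoff":    (20.0, 20000.0),  # Hz
        "filter_resonance": (0.0, 1.0),
        "reverb_size":      (0.0, 1.0),
        "delay_feedback":   (0.0, 0.95),
        "grain_density":    (1.0, 200.0),     # grains per second
        "master_volume":    (0.0, 1.0),
        # ...and hundreds more that never get a physical control
    }

    # The hardware side: eight physical faders. The performer must decide which
    # tiny subset of the parameter space each fader reaches (point 1 above).
    FADER_MAPPING = {
        0: "filter_cutoff",
        1: "filter_resonance",
        2: "reverb_size",
        3: "delay_feedback",
        4: "grain_density",
        5: "master_volume",
        # faders 6 and 7 left unmapped in this hypothetical piece
    }

    def apply_fader(fader_index, raw_value):
        """Scale a raw 0-127 fader value into the mapped parameter's range."""
        name = FADER_MAPPING.get(fader_index)
        if name is None:
            return None
        low, high = SOFTWARE_PARAMETERS[name]
        return name, low + (raw_value / 127.0) * (high - low)

    if __name__ == "__main__":
        # Fader 0 pushed to three-quarters of its travel.
        print(apply_fader(0, 96))  # ('filter_cutoff', <cutoff scaled into Hz>)

Swapping out FADER_MAPPING is exactly the redefinition described in point 2: it costs nothing to do, and it instantly invalidates whatever muscle memory was built on the old mapping.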

These are wonderful starting insights, but I’m not convinced that they go far enough.

Haptic feedback is important, but it is not the only quality of traditional instruments that matters. Traditional instruments expose mastery interfaces that require years — a lifetime, even — to conquer. A beginning user of a mastery interface, like a beginning user of a musical instrument, can produce essentially nothing of worth. The software world has many examples of mastery interfaces, most notably in the land of text editors, where vi and emacs reign supreme. Mastery interfaces engage parts of the brain that learn from repetition; more often than not, they engage the cerebellum, the part of the brain thought to be responsible for learning and coordinating complex motor functions. Most mastery interfaces work by building on modular concepts. Traditional instruments build on pitch and sometimes timbre, both wonderfully modular concepts to the human ear. Text editors like vi, and the modular tools of the Unix command line, expose what might best be described as a grammar, one that allows users to be highly expressive with an economical amount of keyboard motion.

Improvisation is particularly difficult. Despite the abundance of possibility embodied by computer software, when I perform electronic music live (usually using Ableton) I feel constrained by a structural rigidity that no amount of pre-performance work in Ableton can overcome. On the other hand, when I sit down at the (limited) piano to play jazz music, I feel that there is a fluid world of possibility open to me.

To me, the “bar” for improvisation isn’t strictly related to parameters and their mapping. Instead, the “bar” is related to the (admittedly fuzzy) concept of linguistic expression.[2] I know I’m soloing well on the piano when the act of soloing feels the same as the act of stringing together these words. Part of this effortlessness is certainly due to training — to mastering the piano’s interface — but another part is unrelated to training: it is instantaneous expression. I have never once encountered this same level of “linguistic expressiveness” while performing live electronic music.

During the Q&A, I tried — unsuccessfully, I think — to hint at my thoughts along these lines. Monolake responded by describing an ambient work he’s going to perform tomorrow night. The entire work is controlled by eight physical faders. Those faders are his specific instrument; they are designed for the performance of that one piece. Monolake has practiced the piece and feels he has deeply expressive control over its performance; in fact, if anything, he feels expressively limited by the low resolution of his faders. The skeptic in me wonders just how deep this control truly is, and how linguistic the expressiveness truly is. And I’d further suggest that there are different levels of specificity possible in instruments. A traditional instrument is specific in that it is limited to a small number of parameters, but traditional instruments like the piano are capable of supporting widely different compositional thoughts: Chopin is nothing like Herbie Hancock! Monolake’s eight faders are truly only good for one piece; another piece would of necessity map those faders differently and therefore create a different instrument. There appears to be something inherently primitive about pitch and timbre that simply has no analog in the world of audio software parameters.
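
A side note on Henke’s resolution complaint, assuming those faders are standard MIDI controllers (the talk didn’t say): a MIDI control change carries a 7-bit value, so a single fader divides whatever it controls into at most 128 steps. The arithmetic below, with a hypothetical cutoff range scaled linearly purely for illustration, shows why that can feel coarse, and sketches the common workaround of pairing a coarse and a fine controller into one 14-bit value.

    # Rough arithmetic behind the "low resolution" complaint. The frequency
    # range below is hypothetical and linearly scaled purely for illustration.

    def step_size(low, high, bits):
        """Smallest change a fader with the given bit depth can make."""
        steps = (1 << bits) - 1           # 127 for 7-bit, 16383 for 14-bit
        return (high - low) / steps

    # One fader sweeping a filter cutoff from 20 Hz to 20 kHz:
    print(step_size(20.0, 20000.0, 7))    # ~157 Hz per step: audible jumps
    print(step_size(20.0, 20000.0, 14))   # ~1.2 Hz per step: effectively smooth

    # The usual workaround combines two 7-bit control-change values
    # (a coarse controller and a fine controller) into one 14-bit value:
    def combine_14bit(msb, lsb):
        return (msb << 7) | lsb           # 0..16383

    print(combine_14bit(96, 42))          # 12330

This only quantifies the ceiling Henke described; it says nothing about whether more resolution would make the expression any more linguistic.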

[1] Monolake’s talk should be available on the Decibel Festival website in a few days’ time. Watch it.

[2] As an engineer, I tend to eschew “fuzzy” concepts such as this; I got the feeling from his response that Monolake has an even stronger distaste for them. But I’m not sure how else to capture the idea?