exploring how we connect to and utilize technology

Monomes: Changing How We Make and Listen to Music

Imagine making and producing music a few decades ago… A band or artist would compose the score, perhaps write lyrics, perform the song, record it, and then distribute it through some tangible medium like a cassette tape or CD. I should mention that I am only nineteen years old, yet I have vivid memories of my own cassette and CD collections, which shows how quickly music technology has advanced. Nowadays, I can search for a song on my computer and, with a few clicks, download it onto my phone. And the song isn’t necessarily made with physical instruments. The bottom line: making and distributing music is changing rapidly, thanks to the development of technology.

I read an article which discussed the work of Matthew Davidson, who goes by “stretta” in his performance work. Davidson created a “maxforlive monome suite,” a software package for the monome device consisting of several different tools for creating more complex electronic music.

Davidson and a monome

Though I read the article, I honestly understood very little of it due to its technical language and vocabulary. Feeling lost in the jargon, I took a look at the video at the bottom of the page and found more clarity. The system offers new ways to compile rhythms and beats to create a musical score, reminding me of Apple’s GarageBand. My first thought was, “okay, but anyone can press a few buttons.” The article even pointed out, “it is pretty much impossible to produce a ‘bad’ note.” How does Davidson compare to a traditional musician? More importantly, does his work require skill? Do I care?

In search of answers, or perhaps just more questions, I found and listened to a podcast interview with Davidson and Darwin Grosse, creator of the podcast Art+Music+Technology. After listening, I came away with several insights, most of which did not help me answer my question, but all of which made me more curious about electronic music production. First, Davidson talked about how he likes composing music on a computer because computers “don’t judge.” With traditional music, which I define as music created with tangible instruments, the musician works with other people to write the score, and the songs are often performed in front of live audiences. With music production on a monome device, no collaboration takes place, so there is no one to provide constructive criticism or judgment. But isn’t there value in that criticism? Maybe. When you collaborate with others, more creativity can be exchanged and explored, so that the final product is esteemed by more than one person. In Davidson’s work, each song released is inspired and conceived by one individual. This shift, neither good nor bad, shows how the music industry has changed, and how it has become a more individual, rather than group, experience.

Another thing that struck me was the conversation between Davidson and Grosse about music originality and the true master of the creation. When making electronic music, who deserves the credit, the human or the computer? Perhaps this is where the collaboration happens: between person and machine instead of person and person. As for originality, how can electronic music achieve it? Since this type of electronic music is made purely by pressing and adjusting buttons, how is one musician distinct from another? Can one person press a button and sound different from another person pressing the same button? Davidson compared it to two people playing the guitar: any person playing the same chord should sound the same…right? In theory this may be true, but not in practice, for numerous factors such as accent, tempo, and duration of sound make it nearly impossible for two people to play identical sounds on traditional instruments. These factors work very differently in the electronic music world. Even Grosse pointed out how early electric guitar music all sounded the same.

So what does this all mean? I think it comes down to this: do we, as consumers of this new electronic music, care about the ideas outlined above? Does it bother us that advances in technology allow more music to be distributed to more people in less time? Are we fazed by the fact that the dynamics of listening to a concert will change? Is it annoying that electronic music lacks the richness of physical human touch? Ultimately, I think the answer is no. To revisit my original question about skill: I think electronic music requires less, or at least the skill requirement is different. No longer must musicians be skilled in muscle memory and rhythmic precision; now they must be skilled in technical layering, or really, digital coding. Does this make music less enjoyable? Again, I don’t think so. We live in a dynamic, constantly advancing technical age, and I think what it comes down to is taste: if our ears are happy, then so are we.