The MusEDLab will soon be launching a revamped version of the aQWERTYon with some enhancements to its visual design, including a new scale picker. Beyond our desire to make our stuff look cooler, the scale picker represents a challenge that we’ve struggled with since the earliest days of aQW development. On the one hand, we want to offer users a wide variety of intriguing and exotic scales to play with. On the other hand, our audience of beginner and intermediate musicians is likely to be horrified by a list of terms like “Lydian dominant mode.” I recently had the idea to represent all the scales as colorful icons, like so:

Musical pitches rise and fall linearly, but pitch class is circular. When you go up or down the chromatic scale, the note names “wrap around” every twelve notes. This naming convention reflects the fact that we hear notes an octave apart as being “the same”, probably because they share so many overtones. (Non-human primates hear octaves as being equivalent too.)
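If you like to think in code, the wrap-around is just arithmetic mod twelve. Here is a minimal TypeScript sketch using MIDI note numbers (my framing for illustration, not anything the aQW requires):

```typescript
// Pitch class is the MIDI note number mod 12: notes an octave apart
// (60, 72, 84...) all reduce to the same pitch class, "C".
const NOTE_NAMES = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"];

function pitchClass(midiNote: number): number {
  return midiNote % 12;
}

console.log(NOTE_NAMES[pitchClass(60)]); // "C"  (middle C)
console.log(NOTE_NAMES[pitchClass(72)]); // "C"  (an octave up, "the same" note)
console.log(NOTE_NAMES[pitchClass(63)]); // "Eb"
```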

The note names and numbers are all based on the C major scale, which is Western music’s “default setting.” The scale notes C, D, E, F, G, A and B (the white keys on the piano) are the “normal” notes. (Why do they start on C and not A? I have no idea.) You get D-flat, E-flat, G-flat, A-flat and B-flat (the black keys on the piano) by lowering (flatting) their corresponding white-key notes. Alternatively, you can get the black-key notes by raising (sharping) the white-key notes, in which case they’ll be called C-sharp, D-sharp, F-sharp, G-sharp, and A-sharp. (Let’s just briefly acknowledge that the imagery of the “normal” white and “deviant” black keys is just one of many ways that Western musical culture is super racist, and move on.)

You can represent any scale on the chromatic circle just by “switching” notes on and off. For example, if you activate the notes C, D, E-flat, F, G, A-flat and B, you get C harmonic minor. (Alternatively, you could just deactivate D-flat, E, G-flat, A, and B-flat.) Here’s how the scale looks when you write it this way:
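In code, this on/off idea maps naturally onto a twelve-slot array of booleans, one per pitch class. A minimal sketch (my own representation for illustration, not the aQW’s internals):

```typescript
// A scale as a pattern of on/off switches: twelve slots, one per pitch
// class, true if the note is activated.
type Scale = boolean[];

// C harmonic minor: C, D, Eb, F, G, Ab, B -> pitch classes 0, 2, 3, 5, 7, 8, 11
const C_HARMONIC_MINOR: Scale = Array.from(
  { length: 12 },
  (_, pc) => [0, 2, 3, 5, 7, 8, 11].includes(pc)
);
```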

This is how I conceive scales in my head, as a pattern of activated and deactivated chromatic scale notes. As a guitarist, it’s the most intuitive way to think about them, because each box on the circular grid corresponds to a fret, so you can read the fingering pattern right off the circle. When I think “harmonic minor,” I don’t think of note names, I think “pattern of notes and gaps with one unusually wide gap.”

Another beauty of the circle view is that you can get the other eleven harmonic minor scales just by rotating the note names while keeping the pattern of activated/deactivated notes the same. If I want E-flat harmonic minor, I just have to grab the outer ring and rotate it counterclockwise a few notches:
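Continuing the sketch above, transposition really is rotation: a pitch class is switched on in the transposed scale exactly when the pitch class that many semitones below it was switched on in the original.

```typescript
type Scale = boolean[]; // twelve slots indexed by pitch class, as in the sketch above

// Transposing = rotating the on/off pattern around the chromatic circle.
function transpose(scale: Scale, semitones: number): Scale {
  return scale.map((_, pc) => scale[((pc - semitones) % 12 + 12) % 12]);
}

// C harmonic minor rotated up three semitones is Eb harmonic minor:
// same gap pattern, new starting point.
const cHarmonicMinor: Scale = Array.from(
  { length: 12 },
  (_, pc) => [0, 2, 3, 5, 7, 8, 11].includes(pc)
);
const ebHarmonicMinor = transpose(cHarmonicMinor, 3); // on at 2, 3, 5, 6, 8, 10, 11
```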

My next thought was to color-code the scale tones to give an indication of their sound and function:

Here’s how the color scheme works:

Green – major, natural, sharp, augmented

Blue – minor, flat, diminished

Purple – perfect (neither major nor minor)

Grey – not in the scale

Scales with more green in them sound “happier” or brighter. Scales with more blue sound “sadder” or darker. Scales with a mixture of blue and green (like harmonic minor) will have a more complex and ambiguous feeling.

My ambition with the pitch wheels is not just to make the aQWERTYon’s scale menu more visually appealing. I’d eventually like it to be an interactive way to visualize chords too. Followers of this blog will notice a strong similarity between the circular scale and the rhythm necklaces that inspired the Groove Pizza. Just as symmetries and patterns on the rhythm necklace can tell you a lot about how beats work, so too can symmetries and patterns on the scale necklace tell you how harmony works. So here’s my dream for the aQWERTYon’s future theory visualization interface. If you load the app and set it to C harmonic minor, here’s how it would look. To the right is a staff notation view with the appropriate key signature.

When you play a note, it would change color on the keyboard and the wheel, and appear on the staff. The app would also tell you which scale degree it is (in this case, seven.)

If you play two notes simultaneously, in this case the third and seventh notes in C Mixolydian mode, the app would draw a line between the two notes on the circle:

If you play three notes at a time, like the first, fourth and fifth notes in C Lydian, you’d get a triangle.

If your three notes spell out a chord, like the second, fourth and sixth notes in C Phrygian mode, the app would recognize it and show the chord symbol on the staff.

The pattern continues if you play four notes at a time:

Or five notes at a time:
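For the technically curious, here is roughly how a drawing layer might compute those lines, triangles and polygons. Putting C at twelve o’clock is my own assumption, and none of this is actual aQW code:

```typescript
// Each pitch class gets a fixed position on the chromatic circle, with C
// (pitch class 0) at the top. Played notes become polygon vertices: two
// notes draw a line, three a triangle, four a quadrilateral, and so on.
function circlePosition(pc: number, radius: number): { x: number; y: number } {
  const angle = (pc / 12) * 2 * Math.PI - Math.PI / 2; // -90 degrees puts 0 at the top
  return { x: radius * Math.cos(angle), y: radius * Math.sin(angle) };
}

function chordPolygon(pitchClasses: number[], radius = 100): { x: number; y: number }[] {
  return pitchClasses.map((pc) => circlePosition(pc, radius));
}

console.log(chordPolygon([0, 4, 7])); // three vertices: a C major triangle
```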

By rotating the outer ring of the pitch wheel, you could change the root of the scale, like I showed above with C harmonic minor. And if you rotated the inner ring, showing the scale degrees, you could get different modes of the scale. Modes are one of the most difficult concepts in music theory. That is, they’re difficult until you learn to imagine them as rotations of the scale necklace, at which point they become nothing harder than a memorization exercise.
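Here is the rotation idea as code, assuming scales are stored as pitch classes relative to their root (the same caveat applies: a sketch, not the app’s source):

```typescript
// A mode is a rotation of the scale's interval pattern: the same circle
// of notes, started from a different scale degree.
function mode(degrees: number[], n: number): number[] {
  const root = degrees[n]; // the degree that becomes the new root
  return degrees
    .map((d) => ((d - root) % 12 + 12) % 12)
    .sort((a, b) => a - b);
}

const MAJOR = [0, 2, 4, 5, 7, 9, 11]; // Ionian
console.log(mode(MAJOR, 5)); // [0, 2, 3, 5, 7, 8, 10], Aeolian (natural minor)
console.log(mode(MAJOR, 4)); // [0, 2, 4, 5, 7, 9, 10], Mixolydian
```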

I’m designing this system to be used with the aQWERTYon, but there’s no reason it couldn’t take ordinary MIDI input as well. Wouldn’t it be nice to have this in a window in your DAW or notation program?

Music theory is hard. There’s a whole Twitter account devoted to retweeting students’ complaints about it. Some of this difficulty is due to the intrinsic complexity of modern harmony. But a lot of it is due to terminology and notation. Our naming system for notes and chords is a set of historically contingent kludges. No rational person would design it this way from the ground up. Thanks to path dependency, we’re stuck with it, much like we’re stuck with English grammar and the QWERTY keyboard layout. Fortunately, technology gives us a lot of new ways to make all the arcana more accessible, by showing multiple representations simultaneously and by making those representations discoverable through playful tinkering.

Do you find this idea exciting? Would you like it to be functioning software, and not just a bunch of flat images I laboriously made by hand? Help the MusEDLab find a partner to fund the developer and designer time. A grant or gift would work, and we’d also be open to exploring a commercial partnership. The aQW has been a labor of volunteer love for the lab so far, and it’s already one of the best music theory pedagogy tools on the internet. But development would go a lot faster if we could fund it properly. If you have ideas, please be in touch!

User experience design is easy in situations where there’s only one thing that the user can possibly do. But as the possibilities multiply, so do the challenges. We can deal with new things using information from our prior experiences, or by being instructed. The best-designed things include the instructions for their own use, like video games whose first levels act as tutorials, or doors with handles that communicate how you should operate them by their shape and placement.

We use affordances and constraints to learn how things work. Affordances suggest the range of possibilities, and constraints limit the alternatives. Constraints include:

Physical limitations. Door keys can only be inserted into keyholes vertically, but you can still insert the key upside down. Car keys work in both orientations.

Semantic constraints. We know that red lights mean stop and green lights mean go, so we infer that a red light means a device is off or inoperative, and a green light means it’s on or ready to function. We have a slow cooker that uses lights in the opposite way and it screws me up every time.

Cultural constraints. Otherwise known as conventions. (Not sure how these are different from semantic constraints.) Somehow we all know without being told that we’re supposed to face forward in the elevator. Google Glass was an epic failure because its early adopters ran into the cultural constraint of people not liking to be photographed without consent.

Logical constraints. The arrangement of knobs controlling your stove burners should match the arrangement of the burners themselves.

The absence of constraints makes things confusing. Norman gives examples of how much designers love rows of identical switches which give no clues as to their function. Distinguishing the switches by shape, size, or grouping might not look as elegant, but would make it easier to remember which one does what thing.

Helpful designs use visibility (making the relevant parts visible) and feedback (giving actions an immediate and obvious effect.) Everyone hates the power buttons on iMacs because they’re hidden on the back, flush with the case. Feedback is an important way to help us distinguish the functional parts from the decorative ones. Propellerheads Reason is an annoying program because its skeuomorphic design puts as many decorative elements on the screen as functional ones. Ableton Live is easier to use because everything on the screen is functional.

When you can’t make things visible, you can give feedback via sound. Pressing a Mac’s power button doesn’t immediately cause the screen to light up, but that’s okay, because it plays the famous startup sound. Norman’s examples of low-tech sound feedback include the “zzz” sound of a functioning zipper, a tea kettle’s whistle, and the various sounds that machines make when they have mechanical problems. The problem with sound as feedback is that it can be intrusive and annoying.

The term “affordance” is the source for a lot of confusion. Norman tries to clarify it in his article “Affordance, Conventions and Design.” He makes a distinction between real and perceived affordances. Anything that appears on a computer screen is a perceived affordance. The real affordances of a computer are its physical components: the screen itself, the keyboard, the trackpad. The MusEDLab was motivated to create the aQWERTYon by considering the computer’s real affordances for music making. Most software design ignores the real affordances and only considers the perceived ones.

Designers of graphical user interfaces rely entirely on conceptual models and cultural conventions. (Consider how many programs use a graphic of a floppy disk as a Save icon, and now compare to the last time you saw an actual floppy disk.) For Norman, graphics are perceived affordances by definition.

Joanna McGrenere and Wayne Ho try to nail the concept down harder in “Affordances: Clarifying and Evolving a Concept.” The term was coined by the perceptual psychologist James J. Gibson in his book The Ecological Approach to Visual Perception. For Gibson, affordances exist independent of the actor’s ability to perceive them, and don’t depend on the actor’s experiences and culture. For Norman, affordances can include both perceived and actual properties, which to me makes more sense. If you can’t figure out that an affordance exists, then what does it matter if it exists or not?

Norman collapses two distinct aspects of design: an object’s utility and the way that users learn or discover that utility. But are designing affordances and designing the information about the affordances the same thing? McGrenere and Ho say no: it’s the difference between usefulness and usability. They complain that the HCI community has focused on usability at the expense of usefulness. Norman says that a scrollbar is a learned convention, not a real affordance. McGrenere and Ho disagree, because the scrollbar affords scrolling in a way that’s built into the software, making it every bit as much a real affordance as if it were a physical thing. The learned convention is the visual representation of the scrollbar, not the basic fact of it.

The best reason to distinguish affordances from their communication or representation is that sometimes the communication gets in the way of the affordance itself. For example, novice software users need graphical user interfaces, while advanced users prefer text commands and keyboard shortcuts. A beginner needs to see all the available commands, while a pro prefers to keep the screen free of unnecessary clutter. Ableton Live is a notoriously beginner-unfriendly program because it prioritizes visual economy and minimalism over user handholding. A number of basic functions are either invisible or so tiny as to be effectively invisible. Apple’s GarageBand welcomes beginners with photorealistic depictions of everything, but its lack of keyboard shortcuts makes expert users feel like they’re wearing oven mitts. For McGrenere and Ho, the same feature of one of these programs can be an affordance or anti-affordance depending on the user.

This post documents my final project for User Experience Design with June Ahn.

Overview of the problem

The aQWERTYon is a web-based music performance and theory learning interface designed by the NYU Music Experience Design Lab. The name is a play on “QWERTY accordion.” The aQWERTYon invites novices to improvise and compose using a variety of scales and chords normally available only to advanced musicians. Notes map onto the computer keyboard such that the rows play scales and the columns play chords. The user cannot play any wrong notes, which encourages free and playful exploration. The aQWERTYon has a variety of instrument sounds to choose from, and it can also act as a standard MIDI controller for digital audio workstations (DAWs) like GarageBand, Logic, and Ableton Live. As of this writing, there have been 32,000 aQWERTYon sessions.

One of our core design principles is to work within our users’ real-world technological limitations. We build tools in the browser so they will be platform-independent and accessible anywhere there is internet access. Our aim with the aQWERTYon was to find the musical possibilities in a typical computer with no additional software or hardware. That question led us to investigate ways of turning the standard QWERTY keyboard into a beginner-friendly instrument.

While the aQWERTYon has been an effective tool in classrooms and online, it has some design deficiencies as well. It is difficult for unassisted users to figure out what the app is for. While its functionality is easily discovered through trial and error, its musical applications are less self-explanatory. Some of this is due to the intrinsic complexity of music theory and all the daunting terminology that comes with it. But some of it is due to the lack of context and guidance we provide to new users.

The conjecture

This assignment coincided with discussions already taking place in the lab around redesigning the aQW. Many of those focused on a particular element of the user interface, the scale picker.

The user has a variety of scales to choose from, ranging from the familiar to the exotic. However, these scales all have impenetrable names. How are music theory novices supposed to make sense of names like harmonic minor or Lydian mode? How would they know to choose one scale or another? We debated the least off-putting way of presenting these choices: should we represent them graphically? Associate each one with a well-known piece of music? Or just list them alphabetically? I proposed a system of graphical icons showing the notes comprising each scale. While novices will find the icons no more intelligible than the names, the hope is that they will be sufficiently visually appealing to invite users to explore the scales by ear.

Conversations with June helped me understand that there are some broader and more profound user experience problems to solve before users ever get to the scale picker. What is the experience of simply landing on the app for the first time? How do people know what to do? From this conversation came the germ of a new idea, a landing page offering a tutorial or introduction. We want users to have a feeling of discovery, a musical “aha moment”, the chance to be a musical insider. The best way to do that seemed to be to give users a playlist of preset songs to jam with.

User characteristics and personas

There are three major user groups for the aQWERTYon, who I will describe as students, teachers, and explorers.

Students and teachers

Students use the aQW in a guided and structured setting: a classroom, a private lesson, or an online tutorial. There are several distinct user personas: elementary, middle and high school students, both mainstream and with special needs; college students; and online learners, mostly adults. Each student persona has its corresponding teacher persona. For example, I use the aQW with my music technology students at Montclair State University and NYU, and with some private students.

The aQW’s biggest fan is MusEDLab partner Matt McLean, who teaches at the Little Red Schoolhouse and runs a nonprofit organization called the Young Composers and Improvisors Workshop. Matt uses the aQW to teach composition in both settings, in person and online. He has documented his students’ use of the aQW extensively. Some examples:

Explorers

I use the term explorers to describe people who use the aQW without any outside guidance. Explorers do not fit into specific demographic groups, but they center around two broad, overlapping personas: bedroom producers and music theory autodidacts. Explorers may find the aQW via a link, a social media posting, or a Google search. We know little about these users beyond what is captured by Google Analytics. However, we can make some assumptions based on our known referral sources. For example, this blog is a significant driver of traffic to the aQW. I have numerous posts on music theory and composition that link to the aQW so that readers can explore the concepts for themselves. My blog readership includes other music educators and some professional musicians, but the majority are amateur musicians and very enthusiastic listeners. These are exactly the users we are trying to serve: people who want to learn about music independently, either for creative purposes or to simply satisfy curiosity.

While I am a music educator, I have spent most of my life as a self-taught bedroom producer, so I identify naturally with the explorers. I have created several original pieces of music with the aQW, both for user testing purposes and to show its creative potential. While I have an extensive music theory background, I am a rudimentary keyboard player at best. This has limited my electronic music creation to drawing in the MIDI piano roll with the mouse pointer, since I cannot perform my ideas on a piano-style controller. The aQW suits my needs perfectly, since I can set it to any scale I want and shred fearlessly. Here is an unedited improvisation I performed using a synthesizer instrument I created in Ableton Live:

My hope is that more would-be explorers feel invited to use the aQW for similar creative purposes in their own performance and composition.

Tasks and Scenarios

It is possible to configure the aQWERTYon via URL parameters to set the key and scale, and to hide components of the user interface. When teachers create exercises or assignments, they can link or embed the aQW with its settings locked to keep students from getting lost or confused. However, this does not necessarily invite the user to explore or experiment. Here is an example of an aQW preset to accompany a Beyoncé song. This preset might be used for a variety of pedagogical tasks, including learning some or all of the melody, creating a new countermelody, or improvising a solo. The harmonic major scale is not one that is usually taught, but it is a useful way to blend major and minor tonalities. Students might try using more standard scales like major or harmonic minor, and listen for ways that they clash with Beyoncé’s song.
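For illustration, a locked-down preset link might look something like the sketch below. The parameter names here are invented for this example; they are not the aQW’s actual URL schema:

```typescript
// Hypothetical illustration only: these parameter names are made up for
// this sketch, not the aQW's real API.
const presetUrl =
  "https://apps.musedlab.org/aqwertyon/" +
  "?video=YOUTUBE_VIDEO_ID" +   // backing track to play along with
  "&root=Db" +                  // root note of the scale
  "&scale=harmonic_major" +     // scale choice, locked for the assignment
  "&hideControls=true";         // hide the pickers so students stay on task
```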

Tasks and scenarios for explorers might include creating a melody, bassline or chords for an original piece of music. For example, a self-taught dance music producer might feel limited by the scales that are easiest to play on a piano-style keyboard (major, natural minor, pentatonics) and be in search of richer and more exotic sounds. This producer might play their track in progress and improvise on top using different scale settings.

One of the users I tested with suggested an alternative explorer use case. He is an enthusiastic amateur composer and arranger, who is trying to arrange choral versions of pop and rock songs. He is a guitarist who has little formal music theory knowledge. He might use the aQW to try out harmonic ideas by ear, write down note names that form pleasing combinations, and then transfer them to the guitar or piano-based MIDI controller.

Understanding the problem

In the age of the computer and the internet, many aspects of music performance, composition and production are easy to self-teach. However, music theory remains an obstacle for many bedroom producers and pop musicians (not to mention schooled musicians!) There are so many chords and scales and rules and technical vocabulary, all of which have to be applied in all twelve keys. To make matters worse, terminology hangs around long after its historical context has disappeared. We no longer know what the Greek modes sound like, but we use their names to describe modern scales. C-sharp and D-flat were different pitches in historical tuning systems, but now both names describe the same pitch. The harmonic and melodic minor scales are named after a stylistic rule for writing melodies that was abandoned hundreds of years ago. And so on.

Most existing theory resources draw on the Western classical tradition, using examples and conventions from a repertoire most contemporary musicians and listeners find unfamiliar. Furthermore, these resources presume the ability to read standard music notation. Web resources that do address popular music are usually confusing and riddled with errors. I have worked with Soundfly to fill this vacuum by creating high-quality online courses aimed at popular musicians. Even with the best teaching resources, though, theory remains daunting. Exploring different chords and scales on an instrument requires significant technical mastery, and many musicians give up before ever reaching that point.

The aQW is intended to ease music theory learning by making scales and chords easy to discover even by complete novices. Our expectation is that after explorers are able to try theory ideas out in a low-pressure and creative setting, they will be motivated to put them to work playing instruments, composing or producing. Alternatively, users can simply perform and compose directly with the aQW itself.

Social and technical context

Most computer-based melody input systems are modeled on the piano. This is most obvious for hardware, since nearly all MIDI controllers take the form of literal piano keyboards. It is also true for software, which takes the piano keyboard as the primary visualization scheme for pitch. For example, the MIDI editor in every DAW displays pitches on a “piano roll”.

Some DAWs include a “musical typing” feature that maps the piano layout onto the QWERTY keyboard, as an expedient for users who either lack MIDI hardware controllers or do not have them on hand. Apple’s GarageBand uses the ASDFG row of the keyboard for the white keys and the QWERTY row for the black keys. The remaining rows provide useful controls such as pitch bend, modulation, sustain, octave shifting and simple velocity control.

Useful and expedient though it is, Musical Typing has some grave shortcomings as a user interface. It presumes familiarity with the piano keyboard, but is not very playable even for users who do possess that familiarity. The piano layout is a poor fit for the grid of computer keys. For example, there is no black key on the piano between the notes E and F, but the QWERTY keyboard gives no visual reminder of that fact, so it is necessary to simply remember it. Unfortunately, the “missing” black key between E and F happens to be the letter R, which is GarageBand’s keyboard shortcut for recording. While hunting for E-flat or F-sharp, users are prone to accidentally start recording over their work. I have been using GarageBand for seven years and still do this routinely.

Ableton’s Push controller represents an interesting break with MIDI controller orthodoxy. It is a grid of 64 touch pads surrounded by various buttons, knobs and sliders.

The pads were designed to trigger samples and loops like a typical drum machine, but Ableton also includes a melody mode for the Push. By default, it maps notes to the grid in rows staggered by fourths, which makes the layout identical to the bottom four strings of the guitar. This is quite a gift for guitarists like me, since I can use my familiar chord and scale fingerings, rather than hunting and pecking for them on the piano. Furthermore, the Push can be set so that the pads play only the notes within a particular scale, giving a “no wrong notes” experience similar to the aQWERTYon. Delightful though this mode is, however, it is imperfect. Root notes of the scale are colored blue, and other notes are colored white. While this makes the roots easy to distinguish, it is not so easy to visually differentiate the other pitches.
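The fourths-staggered layout is easy to express in code. A sketch of the chromatic case (in the scale-locked “no wrong notes” modes, rows step through scale degrees instead of semitones):

```typescript
// The Push's default chromatic layout staggers each row by a perfect
// fourth (five semitones), matching the bottom four strings of a guitar.
function padToMidiNote(row: number, col: number, bottomLeftNote = 36): number {
  return bottomLeftNote + row * 5 + col; // up a row = +5 semitones, right a column = +1
}

console.log(padToMidiNote(0, 0)); // 36 (C1), the bottom-left pad
console.log(padToMidiNote(1, 0)); // 41 (F1), a fourth higher, like the next guitar string
```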

Touchscreen devices like the iPhone and iPad open up additional new possibilities for melodic interfaces. Many mobile apps continue to use the piano keyboard for note input, but some take advantage of the touchscreen’s unique affordances. One such is Thumbjam, which enables the user to divide the screen into slices of arbitrary thickness that can map to any arbitrary combination of notes.

The app offers hundreds of preset scales to choose from. The user may have a small range of notes, each of which is large and easy to distinguish, or a huge range of notes, each of which occupies a narrow strip of screen area. Furthermore, the screen can be split to hold four different scales, played from four different instruments. While all of this configurability is liberating, it is also overwhelming. Also, the scales are one-dimensional lines; there is no easy way to play chords and arpeggios.

Evaluation criteria

Is the aQW’s potential obvious enough to draw in explorers and educators? Will it be adopted as a tool for self-teaching? Does it invite playful exploration and experimentation? Is it satisfying for real-world musical usage?

Is the UI self-explanatory, or at least discoverable? Is the music theory content discoverable?

Have we identified the right user persona(s)? Is the aQW really a tool for beginners, an intermediate music theory learning tool, or an advanced composition tool?

Is the approach of a “playlist” of example songs the right one? Which songs, artists and genres should we include on the landing page? How many presets should we include: a few curated choices, or a large, searchable database? And how do we deal with the fact that many songs require multiple scales to play?

Proposed solution

I tested several interactive wireframes of this landing page concept. Click the image to try it yourself:

The first wireframe had nine preset songs. I wanted to offer reasonable musical diversity without overwhelming the user. The tenth slot linked to the “classic” aQW, where users are free to select their own video, scale, root, and so on. I chose songs that appealed to me (and presumably other adult explorers), along with some current pop songs familiar to younger users. I wanted to balance the choices by race, gender, era, and genre. I was also bound by a musical constraint: all songs need to be playable using a single scale in a single key. The initial preset list was:

Adele – “Send My Love (To Your New Lover)”

Mary J Blige – “Family Affair”

Miles Davis – “Sssh/Peaceful”

Missy Elliott – “Get Ur Freak On”

Björk – “All Is Full Of Love”

Michael Jackson – “Don’t Stop ’Til You Get Enough”

Katy Perry – “Teenage Dream”

AC/DC – “Back In Black”

Daft Punk – “Get Lucky”

After a few test sessions, it became apparent that no one was clicking the Mary J Blige song. Also, the list did not include any current hip-hop. I therefore replaced her with Chance The Rapper. I initially offered a few sentences of instruction, but feedback from my MusEDLab colleagues encouraged me to reduce the prompt to just a few words: “Pick a song, type, jam.”

Further testing showed that while adults are willing to try out any song, familiar or not, children and teens are much choosier. Therefore, I added two more presets, “Hotline Bling” by Drake and “Formation” by Beyoncé. The latter song proved problematic, however, because its instrumental backing is so sparse and minimal that it is difficult to hear how other notes might fit into it. I ultimately swapped it for “Single Ladies.” I had rejected this song initially because it uses the idiosyncratic harmonic major scale. However, I came to see this quirk as a bonus: since one of our goals is to encourage users to explore new sounds and concepts, a well-known and well-loved song that uses an unusual scale is a rare gift.

User testing protocol

I used a think-aloud protocol, asking testers to narrate their thought processes as they explored the app. I recorded the one-on-one sessions using Screenflow. When testing with groups of kids, this was impractical, so instead I took notes during and after each session. For each user, I opened the interactive wireframe, and told them, “This is a web based application for playing music with your computer keyboard. I’m going to ask you to tell me what you see on the screen, what you think it does, and what you think will happen when you click things.” I did not offer any other explanation or context, because I wanted to see whether the landing page was self-explanatory and discoverable. I conducted informal interviews with users during and after the sessions as well.

User testing results

I tested with ten adults and around forty kids. The adults ranged in age from early twenties to fifties. All were musicians, at varying levels of ability and training, mostly enthusiastic amateurs. Sessions lasted for twenty or thirty minutes. There were two groups of kids: a small group of eighth graders at the Little Red Schoolhouse, and a large group of fourth graders from PS 3 who were visiting NYU. These testing sessions were shorter, ten to fifteen minutes each.

Discovering melodies

It is possible to play the aQW by clicking the notes onscreen using the mouse, though this method is slow and difficult. Nevertheless, a number of the younger testers did this, even after I suggested that it would be easier on the keyboard.

An adult tester with some keyboard and guitar experience told me, “This is great, it’s making me play patterns that I normally don’t play.” He was playing on top of the Miles Davis track, and he was quickly able to figure out a few riffs from Miles’ trumpet solo.

Discovering chords

Several testers systematically identified chords by playing alternating notes within a row, while others discovered them by holding down random groups of keys. None of the testers discovered that they could easily play chords using columns of keys until I prompted them to do so. One even asked, “Is there a relationship between keys if I play them vertically? I don’t know enough about music to know that.” After I suggested he try the columns, he said, “If I didn’t know [by ear] how chords worked, I’d miss the beauty of this.” He compared the aQW to GarageBand’s musical typing: “This is not that. This is a whole new thing. This is chord oriented. As a guitarist, I appreciate that.” The message is clear: we need to make the chords more obvious, or more actively assist users in finding them.

Other theory issues

For the most part, testers were content to play the scales they were given, though some of the more expert musicians changed the scales before even listening to the presets. However, not everyone realized that the presets were set to match the song. A few asked me: “How do I know what key this song is in?” We could probably state explicitly that the presets line up automatically.

In general, adult testers found the value of the aQW as a theory learning tool to be immediately apparent. One told me: “If I had this when I was a kid, I would have studied music a lot. I used to hate music theory. I learned a lot of stuff, but the learning process was awful… Your kids’ generation will learn music like this (snaps fingers).”

Sounds

The aQW comes with a large collection of SoundFonts, and users of all ages enjoyed auditioning them, sometimes for long periods of time. Sometimes they apologized for how fascinating they found the sounds to be. But it is remarkable to have access to so many instrument timbres so effortlessly. Computers turn us all into potential orchestrators, arrangers, and sound designers.

Screen layout

The more design-oriented testers appreciated the sparseness and minimalism of the graphics, finding them calming and easy to understand.

Several testers complained that the video window takes up too much screen real estate and is placed too prominently. Two commented that videos showing live performers, like “Back In Black,” were valuable because they helped with timekeeping and inspiration. Otherwise, however, testers found the videos either of little value or actively distracting. One suggested having the videos hidden or minimized by default, with the option to click to expand them. Others requested that the video sit below the keyboard and other crucial controls. Also, the eighth graders reported that some of the videos were distracting because of their content, for example the partying shown in “Teenage Dream.” Unsuitable content will be an ongoing issue with many of the pop songs that kids like.

Technical browser issues

Having the aQWERTYon run in the browser has significant benefits, but a few limitations as well. Because the URL updates every time the parameters change, clicking the browser’s Back button does not produce the expected behavior: it might take ten or fifteen clicks to actually return to a previous page. In later versions I changed the links so that each one opens the aQW in a new tab, keeping the landing page always available. However, web audio is very memory-intensive, and the aQW will function slowly or not at all if it is open in more than one tab simultaneously.
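For what it’s worth, the conventional web remedy for the Back button problem is to update the URL without piling up history entries. A sketch using the browser’s History API, not the aQW’s actual code:

```typescript
// Write parameter changes into the current history entry with
// replaceState, instead of pushing a new entry for every tweak.
function setUrlParams(params: Record<string, string>): void {
  const url = new URL(window.location.href);
  for (const [key, value] of Object.entries(params)) {
    url.searchParams.set(key, value);
  }
  // The address bar updates, but the Back button still returns to the
  // previous page in a single click.
  history.replaceState(null, "", url.toString());
}
```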

Song choices

The best mix of presets is always going to depend on the specific demographics of any given group of users. However, the assortment I arrived at was satisfying enough for the groups I tested with. Miles Davis and Björk do not have the wide appeal of Daft Punk or Michael Jackson, but their presence was very gratifying for the more hipster-ish testers. I was extremely impressed that an eighth grader selected the Miles song, though this kid turns out to be the son of a Very Famous Musician and is not typical.

Recording functionality

Testers repeatedly requested the ability to record their playing. The aQW did start out with a very primitive recording feature, but it will require some development to make it usable. The question is always: how much functionality is enough? Should users be able to overdub? If so, how many tracks? Is simple recording enough, or would users need to be able to mix, edit, and select takes?

One reason that recording has been a low development priority is that users can easily record their performances via MIDI into any DAW or notation program. The aQW behaves as if it were a standard MIDI controller plugged into the computer. With so many excellent DAWs in the world, it seems less urgent for us to replicate their functionality. However, there is one major limitation to recording this way: it captures the notes being played, but not the sounds. Instead, the DAW plays back the MIDI using whatever software instruments it has available. Users who are attached to a specific SoundFont cannot record it unless they use a workaround like Soundflower. This issue will require more discussion and design work.

New conjectures and future work

One of my most significant user testers for the landing page wireframe was Kevin Irlen, the MusEDLab’s chief software architect and main developer of the aQW itself. He found the landing page concept sufficiently inspiring that he created a more sophisticated version of it, the app sequencer:

We can add presets to the app sequencer using a simple web form, which is a significant improvement over the tedious process of creating my wireframes by hand. The sequencer pulls images automatically from YouTube, another major labor-saver. Kevin also added a comment field, which gives additional opportunity to give prompts and instructions. Each sequencer preset generates a unique URL, making it possible to generate any number of different landing pages. We will be able to create custom landing pages focusing on different artists, genres or themes.

Songs beyond the presets

Testing with the fourth graders showed that we will need to design a better system for users who want to play over songs that we do not include among the presets. A tutorial would need to show users how to locate YouTube URLs and, more dauntingly, how to identify keys and scales. I propose an overlay or popup:

Keyfinding

Testing with fourth graders also showed that helping novice users with keyfinding may not be as challenging as I had feared. The aQW defaults to the D minor pentatonic scale, and that scale turns out to fit fairly well over most current pop songs. If it doesn’t, some other minor pentatonic scale is very likely to work. This is due to a music-theoretical quirk of the pentatonic scale: it happens to share pitches with many other commonly-used scales and chords. As long as the root is somewhere within the key, the minor pentatonic will sound fine. For example, in C major (see the code sketch after this list):

C minor pentatonic sounds like C blues

D minor pentatonic sounds like Csus4

E minor pentatonic sounds like Cmaj7

F minor pentatonic sounds like C natural minor

G minor pentatonic sounds like C7sus4

A minor pentatonic is the same as C major pentatonic

B minor pentatonic sounds like C Lydian mode
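These relationships are easy to verify mechanically. A sketch that checks each minor pentatonic rooted within C major for notes outside the key:

```typescript
// D, E and A minor pentatonic are pure subsets of C major; the others
// borrow the single "colorful" outside notes that produce the flavors
// listed above.
const C_MAJOR = new Set([0, 2, 4, 5, 7, 9, 11]);
const NAMES = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"];

function minorPentatonic(root: number): number[] {
  return [0, 3, 5, 7, 10].map((i) => (root + i) % 12); // 1, b3, 4, 5, b7
}

for (const root of [0, 2, 4, 5, 7, 9, 11]) {
  const outside = minorPentatonic(root)
    .filter((pc) => !C_MAJOR.has(pc))
    .map((pc) => NAMES[pc]);
  console.log(`${NAMES[root]} minor pentatonic: ${outside.join(", ") || "all in key"}`);
}
// C: Eb, Bb (blues)   F: Ab, Bb, Eb (natural minor)   G: Bb (7sus4)
// B: Gb (Lydian's raised 4th)   D, E, A: all in key
```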

We are planning to revamp the root picker to show both a larger piano keyboard and a pitch wheel. We also plan to add more dynamic visualization options for notes as they are played, including a staff notation view, the chromatic circle, and the circle of fifths. The aQW leaves several keys on the keyboard unused, and we could use them for additional controls. For example, we might use the Control key to make note velocities louder, and Option to make them quieter. The arrow keys might be used to cycle through the scale menu and to shift the root.
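Here is a sketch of how those extra controls might hang together; every key assignment below is a proposal, not shipped behavior:

```typescript
// Proposed controls for the aQW's unused keys: velocity nudges on
// Control/Option, arrow keys to cycle the scale menu and shift the root.
let velocity = 96;   // MIDI velocity, 1-127
let rootShift = 0;   // semitone offset applied to the current root
let scaleIndex = 0;  // position in the scale menu

window.addEventListener("keydown", (e: KeyboardEvent) => {
  switch (e.key) {
    case "Control":    velocity = Math.min(127, velocity + 8); break; // louder
    case "Alt":        velocity = Math.max(1, velocity - 8);   break; // softer (Option)
    case "ArrowUp":    scaleIndex += 1; break;                        // next scale
    case "ArrowDown":  scaleIndex -= 1; break;                        // previous scale
    case "ArrowRight": rootShift = (rootShift + 1) % 12; break;       // root up a semitone
    case "ArrowLeft":  rootShift = (rootShift + 11) % 12; break;      // root down a semitone
  }
});
```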

Built-in theory pedagogy

There is a great deal of opportunity to build more theory pedagogy on top of the aQW, and to include more of it within the app itself. We might encourage chord playing by automatically showing chord labels at the top of each column. We might include popups or links next to each scale giving some explanation of why they sound the way they do, and to give some suggested musical uses. One user proposes a game mode for more advanced users, where the scale is set to chromatic and players must identify the “wrong” or outside notes. Another proposes a mode similar to Hooktheory, where users could sequence chord progressions to play on top of.

Rhythmic assistance

A few testers requested some kind of help or guidance with timekeeping. One suggested a graphical score in the style of Guitar Hero, or a “follow the bouncing ball” rhythm visualization. Another pointed out that an obvious solution would be to incorporate the Groove Pizza, perhaps in miniature form in a corner of the screen. Synchronizing all of this to YouTube videos would need to be done by hand, so far as I know, but perhaps an automated solution exists. Beat detection is certainly an easier MIR challenge than key or chord detection. If we were able to automatically sync to the tempo of a song, we could add the DJ functionality requested by one tester, letting users add cue points, loop certain sections, and slow them down.

Odds and ends

One eighth grader suggested that we make aQW accounts with “musical passwords.”

An adult tester referred to the landing page as the “Choose Your Own Adventure screen.” The idea of musical adventure is exactly the feeling I was hoping for.

In addition to notes on the staff, one tester requested a spectrum visualizer. This is perhaps an esoteric request, but real-time spectrograms are quite intuitive and might be useful.

Finally, one tester made a comment that was striking in its broader implications for music education: “I’m not very musical, I don’t really play an instrument, so these kinds of tricks are helpful for me. It didn’t take me long to figure out how the notes are arranged.” This person is a highly expert producer, beatmaker and live performer using Ableton Live. I asked how he came to this expertise, and he said he felt compelled to learn it to compensate for his lack of “musicianship”. It makes me sad that such a sophisticated musician does not realize that his skills “count”. In empowering music learners with the aQW, I also hope we are able to help computer musicians value themselves.

Since its launch, you’ve been able to export your Groove Pizza beats as WAV files, or continue working on them in Soundtrap. But now, thanks to MusEDLab developer Jordana Bombi, you can save your beats as MIDI files as well.

You can bring these MIDI files into your music production software tool of choice: Ableton Live, Logic, Pro Tools, whatever. How cool is that?

There are a few limitations at the moment: your beats will be rendered in 4/4 time, regardless of how many slices your pizza has. You can always set the right time signature after you bring the MIDI into your production software. Also, your grooves will export with no swing–you’ll need to reinstate that in your software as well.

We have some more enhancements in the pipeline, aside from fixing the limitations just mentioned. We’re working on a “continue in Noteflight” feature, real-time MIDI input and output, and live performance using the QWERTY keyboard. I’ll keep you posted.

Ableton recently launched a delightful web site that teaches the basics of beatmaking, production and music theory using elegant interactives. If you’re interested in music education, creation, or user experience design, you owe it to yourself to try it out.

If you’ve been following the work of the NYU Music Experience Design Lab, you might notice some strong similarities between Ableton’s site and our tools. That’s no coincidence. Dennis and I have been having an informal back and forth on the role of technology in music education for a few years now. It’s a relationship that’s going to get a step more formal this fall at the 2017 Loop Conference – more details on that as it develops.

Meanwhile, Peter Kirn’s review of the Learning Music site raises some probing questions about why Ableton might be getting involved in education in the first place. But first, he makes some broad statements about the state of the musical world that are worth repeating in full.

I think there’s a common myth that music production tools somehow take away from the need to understand music theory. I’d say exactly the opposite: they’re more demanding.

Every musician is now in the position of composer. You have an opportunity to arrange new sounds in new ways without any clear frame from the past. You’re now part of a community of listeners who have more access to traditions across geography and essentially from the dawn of time. In other words, there’s almost no choice too obvious.

The music education world has been slow to react to these new realities. We still think of composition as an elite and esoteric skill, one reserved only for a small class of highly trained specialists. Before computers, this was a reasonable enough attitude to have, because it was mostly true. Not many of us can learn an instrument well enough to compose with it, then learn to notate our ideas. Even fewer of us will be able to find musicians to perform those compositions. But anyone with an iPhone and twenty dollars’ worth of apps can make original music using an infinite variety of sounds, and share that music online with anyone willing to listen. My kids started playing with iOS music apps when they were one year old. With the technical barriers to musical creativity falling away, the remaining challenge is gaining an understanding of music itself, how it works, why some things sound good and others don’t. This is the challenge that we as music educators are suddenly free to take up.

There’s an important question to ask here, though: why Ableton?

To me, the answer to this is self-evident. Ableton has been in the music education business since its founding. Like Adam Bell says, every piece of music creation software is a de facto education experience. Designers of DAWs might even be the most culturally impactful music educators of our time. Most popular music is made by self-taught producers, and a lot of that self-teaching consists of exploring DAWs like Ableton Live. The presets, factory sounds and affordances of your DAW powerfully inform your understanding of musical possibility. If DAW makers are going to be teaching the world’s producers, I’d prefer if they do it intentionally.

So far, there has been a divide between “serious” music making tools like Ableton Live and the toy-like iOS and web apps that my kids use. If you’re sufficiently motivated, you can integrate them all together, but it takes some skill. One of the most interesting features of Ableton’s web site, then, is that each interactive tool includes a link that will open up your little creation in a Live session. Peter Kirn shares my excitement about this feature.

There are plenty of interactive learning examples online, but I think that “export” feature – the ability to integrate with serious desktop features – represents a kind of breakthrough.

Ableton Live is a superb creation tool, but I’ve been hesitant to recommend it to beginner producers. The web site could change my mind about that.

So, this is all wonderful. But Kirn points out a dark side.

The richness of music knowledge is something we’ve received because of healthy music communities and music institutions, because of a network of overlapping ecosystems. And it’s important that many of these are independent. I think it’s great that software companies are getting into the action, and I hope they continue to do so. In fact, I think that’s one healthy part of the present ecosystem.

It’s the rest of the ecosystem that’s worrying – the one outside individual brands and what they support. Public music education is getting squeezed in different ways all around the world. Independent content production is, too, even in advertising-supported publications like this one, but more so in other spheres. Worse, I think education around music technology hasn’t even begun to be reconciled with traditional music education – in the sense that people with specialties in one field tend not to have any understanding of the other. And right now, we need both – and both are getting their resources squeezed.

This might feel like I’m going on a tangent, but if your DAW has to teach you how harmony works, it’s worth asking the question – did some other part of the system break down?

Yes it did! Sure, you can learn the fundamentals of rhythm, harmony, and form from any of a thousand schools, courses, or books. But there aren’t many places you can go to learn about them in the context of Beyoncé, Daft Punk, or A Tribe Called Quest. Not many educators are hip enough to include the Sleng Teng riddim as one of the fundamentals. I’m doing my best to rectify this imbalance; that’s what my Soundfly courses are for. But I join Peter Kirn in wondering why it’s left to private companies to do this work. Why isn’t school music more culturally relevant? Why do so many educators insist that kids like the wrong music? Why is it so common to get a music degree without ever writing a song? Why is the chasm between the culture of school music and music generally so wide?

Like Kirn, I’m distressed that school music programs are getting their budgets cut. But there’s a reason that’s happening, and it isn’t that politicians and school boards are philistines. Enrollment in school music is declining in places where the budgets aren’t being cut, and even where schools are offering free instruments. We need to look at the content of school music itself to see why it’s driving kids away. Both the content of school music programs and the people teaching them are whiter than the student population. Even white kids are likely to be alienated from a Eurocentric curriculum that doesn’t reflect America’s increasingly Afrocentric musical culture. The large ensemble model that we imported from European conservatories is incompatible with the riot of polyglot individualism in the kids’ earbuds.

While music therapists have been teaching songwriting for years, it’s rare to find it in school music curricula. Production and beatmaking are even more rare. Not many adults can play oboe in an orchestra, but anyone with a guitar or keyboard or smartphone can write and perform songs. Music performance is a wonderful experience, one I wish were available to everyone, but music creation is on another level of emotional meaning entirely. It’s like the difference between watching basketball on TV and playing it yourself. It’s a way to understand your own innermost experiences and the innermost experiences of others. It changes the way you listen to music, and the way you approach any kind of art for that matter. It’s a tool that anyone should be able to have in their kit. Ableton is doing the music education world an invaluable service; I hope more of us follow their example.

Definition

I propose a new web-based accessible rhythm instrument called QWERTYBeats. Traditional instruments are highly accessible to blind and low-vision musicians; electronic music production tools are not. I look at the history of accessible instruments and software interfaces, give an overview of current electronic music hardware and software, and discuss the design considerations underlying my project.

Historical overview

Acoustic instruments give rich auditory and haptic feedback, and pose little obstacle to blind musicians. We need look no further for proof than the long history of iconic blind musicians like Ray Charles and Stevie Wonder. Even sighted instrumentalists rarely look at their instruments once they have attained a sufficient level of proficiency. Printed music notation is not accessible, but Braille music notation has existed since the writing system’s inception. Also, a great many musicians, both blind and sighted, play entirely by ear anyway.

Electronic instruments pose some new accessibility challenges. They may use graphical interfaces with nested menus, complex banks of knobs and patch cables, and other visual control surfaces. Feedback may be given entirely with LED lights and small text labels. Nevertheless, blind users can master these devices with sufficient practice, memorization and assistance. For example, Stevie Wonder has incorporated synthesizers and drum machines in most of his best-known recordings.

Most electronic music creation is currently done not with instruments, but rather using specialized software applications called digital audio workstations (DAWs). Keyboards and other controllers are mostly used to access features of the software, rather than as standalone instruments. The most commonly-used DAWs include Avid Pro Tools, Apple Logic, Ableton Live, and Steinberg Cubase. Mobile DAWs are more limited than their desktop counterparts, but are nevertheless becoming robust music creation tools in their own right. Examples include Apple GarageBand and Steinberg Cubasis. Notated music is commonly composed using score editing software like Sibelius and Finale, whose functionality increasingly overlaps with DAWs, especially in regard to MIDI sequencing.

DAWs and notation editors pose steep accessibility challenges due to their graphical and spatial interfaces, not to mention their sheer complexity. In class, we were given a presentation by Leona Godin, a blind musician who records and edits audio using Pro Tools by means of VoiceOver. While it must have taken a heroic effort on her part to learn the program, Leona demonstrates that it is possible. However, some DAWs pose insurmountable problems even to very determined blind users because they do not use standard operating system elements, making them inaccessible via screen readers.

Technological interventions

There are no mass-market electronic music interfaces specifically geared toward blind or low-vision users. In this section, I discuss one product frequently hailed for its “accessibility” in the colloquial rather than blindness-specific sense, along with some more experimental and academic designs.

Ableton Live has become the DAW of choice for electronic music producers. Low-vision users can zoom in to the interface and modify the color scheme. However, Live is inaccessible via screen readers.

In recent years, Ableton has introduced a hardware controller, the Push, which is designed to make the software experience more tactile and instrument-like. The Push combines an eight by eight grid of LED-lit touch pads with banks of knobs, buttons and touch strips. It makes it possible to create, perform and record a piece of music from scratch without looking at the computer screen. In addition to drum programming and sampler performance, the Push also has an innovative melodic mode which maps scales onto the grid in such a way that users cannot play a wrong note. Other comparable products exist; see, for example, the Native Instruments Maschine.

There are many pad-based drum machines and samplers. Live’s main differentiator is its Session view, where the pads launch clips: segments of audio or MIDI that can vary in length from a single drum hit to the length of an entire song. Clip launching is tempo-synced, so when you trigger a clip, playback is delayed until the start of the next measure (or whatever the quantization interval is.) Clip launching is a forgiving and beginner-friendly performance method, because it removes the possibility of playing something out of rhythm. Like other DAWs, Live also gives rhythmic scaffolding in its software instruments by means of arpeggiators, delay and other tempo-synced features.
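The quantization logic itself is tiny. Here is the essence of the idea as a sketch, not Ableton’s implementation:

```typescript
// A triggered clip starts at the next quantization boundary, not
// immediately, so it can never come in out of rhythm.
function nextLaunchTime(nowSeconds: number, bpm: number, quantizeBeats = 4): number {
  const secondsPerBeat = 60 / bpm;
  const interval = secondsPerBeat * quantizeBeats; // e.g. one 4/4 bar
  return Math.ceil(nowSeconds / interval) * interval; // snap forward to the boundary
}

// At 120 bpm a bar lasts two seconds: a clip triggered at t = 3.2s is
// held until t = 4.0s.
console.log(nextLaunchTime(3.2, 120)); // 4
```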

The Push is a remarkable interface, but it has some shortcomings for blind users. First of all, it is expensive, $800 for the entry-level version and $1400 for the full-featured software suite. Much of its feedback is visual, in the form of LED screens and color-coded lighting on the pads. It switches between multiple modes which can be challenging to distinguish even for sighted users. And, like the software it accompanies, the Push is highly complex, with a steep learning curve unsuited to novice users, blind or sighted.

Most DAWs enable users to perform MIDI instruments on the QWERTY keyboard. The most familiar example is the Musical Typing feature in Apple GarageBand.

Musical Typing makes it possible to play software instruments without an external MIDI controller, which is convenient and useful. However, its layout counterintuitively follows the piano keyboard, which is an awkward fit for the computer keyboard. There is no easy way to distinguish the black and white keys, and even expert users find themselves inadvertently hitting the keyboard shortcut for recording while hunting for F-sharp.

The aQWERTYon is a web interface developed by the NYU Music Experience Design Lab specifically intended to address the shortcomings of Musical Typing.

Rather than emulating the piano keyboard, the aQWERTYon draws its inspiration from the chord buttons of an accordion. It fills the entire keyboard with harmonically related notes in a way that supports discovery by naive users. Specifically, it maps scales across the rows of keys, staggered by intervals such that each column forms a chord within the scale. Root notes and scales can be set from pulldown menus within the interface, or preset using URL parameters. It can be played as a standalone instrument, or as a MIDI controller in conjunction with a DAW. Here is a playlist of music I created using the aQWERTYon and GarageBand or Ableton Live:
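To make the row-and-column mapping concrete, here is a concept sketch of the accordion logic. The two-scale-step stagger below reproduces the stacked-third columns described above, though the aQW’s actual source surely differs in its details:

```typescript
// Rows walk up the scale; each row is staggered by two scale steps, so
// the same column in adjacent rows is a third apart and columns stack
// into chords.
const ROWS = ["zxcvbnm,./", "asdfghjkl;", "qwertyuiop", "1234567890"];

function keyToMidiNote(key: string, scale: number[], rootMidi = 48): number | null {
  for (let row = 0; row < ROWS.length; row++) {
    const col = ROWS[row].indexOf(key);
    if (col === -1) continue;
    const step = col + row * 2; // the two-step stagger
    const octave = Math.floor(step / scale.length);
    return rootMidi + 12 * octave + scale[step % scale.length];
  }
  return null; // key is not part of the instrument
}

// With the C major scale, the column z-a-q-1 yields C-E-G-B: a Cmaj7
// chord in stacked thirds.
const MAJOR = [0, 2, 4, 5, 7, 9, 11];
console.log(["z", "a", "q", "1"].map((k) => keyToMidiNote(k, MAJOR))); // [48, 52, 55, 59]
```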

The aQWERTYon is a completely tactile experience. Sighted users can carefully match keys to note names using the screen, but more typically approach the instrument by feel, seeking out patterns on the keyboard by ear. A blind user would need assistance loading the aQWERTYon initially and setting the scale and root note parameters, but otherwise, it is perfectly accessible. The present project was motivated in large part by a desire to make exploration of rhythm as playful and intuitive as the aQWERTYon makes exploring chords and scales.

Soundplant

The QWERTY keyboard can be turned into a simple drum machine quite easily using a free program called Soundplant. The user simply drags an audio file onto a graphical key, and the sound is then triggered by the corresponding physical key. I was able to create a TR-808 kit in a matter of minutes:

Once it is set up and configured, Soundplant can be as effortlessly accessible as the aQWERTYon. However, it does not give the user any rhythmic assistance. Drumming in perfect time is an advanced musical skill, and playing drum machine samples out of time is not much more satisfying than banging arrhythmically on a metal bowl with a spoon. An ideal drum interface would offer beginners some of the rhythmic scaffolding and support that Ableton provides via Session view, arpeggiators, and the like.

Drum machines and their software counterparts offer an alternative form of rhythmic scaffolding. The user sequences patterns in a time-unit box system or piano roll, and the computer performs those patterns flawlessly. The MusEDLab’s Groove Pizza app is a web-based drum sequencer that wraps the time-unit box system into a circle.

The Groove Pizza was designed to make drum programming more intuitive by visualizing the symmetries and patterns inherent in musical-sounding rhythms. However, it is totally unsuitable for blind or low-vision users. Interaction is only possible through the mouse pointer or touch, and there are no standard user interface elements that can be parsed by screen readers.

Before ever considering designing for the blind, the MusEDLab had already considered the Groove Pizza’s limitations for younger children and users with special needs: there is no “live performance” mode, and there is always some delay in feedback between making a change in the drum pattern and hearing the result. We have been considering ways to make a rhythm interface that is more immediate, performance-oriented and tactile. One possible direction would be to create a hardware version of the Groove Pizza; indeed, one of the earliest prototypes was a hardware version built by Adam November out of a pizza box. However, hardware design is vastly more complex and difficult than software, so for the time being, software promises more immediate results.

The authors create a new mode for a standard MIDI keyboard that maps piano keys to DAW functions like playback, quantization, track selection, and so on. They also add “earcons” (auditory icons) to give sonic feedback when functions that normally provide only graphical feedback are activated. For example, one earcon sounds when recording is enabled; another sounds for regular playback. This interface sounds promising, but there are significant obstacles to its adoption. While the authors have released the source code as a free download, a would-be user must be able to compile and run it, and that presumes they could access the code in the first place: the download link given in the paper is inactive. It is an all-too-common fate of academic projects to never see widespread use. By posting our projects on the web, the MusEDLab hopes to avoid this outcome.

Statement

Music education philosophy

My project is animated by a constructivist philosophy of music education, which rests on the following axiomatic assumptions:

Learning by doing is better than learning by being told.

Learning is not something done to you, but rather something done by you.

You do not get ideas; you make ideas. You are not a container that gets filled with knowledge and new ideas by the world around you; rather, you actively construct knowledge and ideas out of the materials at hand, building on top of your existing mental structures and models.

The most effective learning experiences grow out of the active construction of all types of things, particularly things that are personally or socially meaningful, that you develop through interactions with others, and that support thinking about your own thinking.

If an activity’s challenge level is beyond your ability, you experience anxiety. If your ability at the activity far exceeds the challenge, the result is boredom. Flow happens when challenge and ability are well-balanced, as seen in this diagram adapted from Csikszentmihalyi.

Music students face significant obstacles to flow at the left side of the Ability axis. Most instruments require extensive practice before it is possible to make anything that resembles “real” music. Electronic music presents an opportunity here, because even a complete novice can quickly produce music with a high degree of polish. It is empowering to use technologies that make it impossible to do anything wrong; they free you to begin exploring what you find to sound right. Beginners can be scaffolded in their pitch explorations with MIDI scale filters, Auto-Tune, and the configurable software keyboards in apps like Thumbjam and Animoog. Rhythmic scaffolding is rarer, but it can be had via Ableton’s quantized clip launcher, MIDI arpeggiators, and the Note Repeat feature on many drum machines.

QWERTYBeats proposal

My project takes drum machine Note Repeat as its jumping-off point. When Note Repeat is activated, holding down a drum pad triggers the corresponding sound at a particular rhythmic interval: quarter notes, eighth notes, and so on. On the Ableton Push, Note Repeat automatically syncs to the global tempo, making it effortless to produce musically satisfying rhythms. However, this mode has a major shortcoming: it applies globally to all of the drum pads. To my knowledge, no drum machine makes it possible to have, say, the snare drum playing every dotted eighth note while the hi-hat plays every sixteenth note.

I propose a web application called QWERTYBeats that maps drums to the computer keyboard as follows:

Each row of the keyboard triggers a different drum/beatbox sound (e.g. kick, snare, closed hi-hat, open hi-hat).

Each column retriggers the sample at a different rhythmic interval (e.g. quarter note, dotted eighth note).

Circles dynamically divide into “pie slices” to show rhythmic values.

The rhythmic values are listed below by column, each description followed by its duration as a fraction of one beat.

quarter note (1)

dotted eighth note (3/4)

quarter note triplet (2/3)

eighth note (1/2)

dotted sixteenth note (3/8)

eighth note triplet (1/3)

sixteenth note (1/4)

dotted thirty-second note (3/16)

sixteenth note triplet (1/6)

thirty-second note (1/8)

By simply holding down different combinations of keys, users can produce complex syncopations and polyrhythms. If the app is synced to the tempo of a DAW or music playback, the user can perform good-sounding rhythms over any song that is personally meaningful to them.
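Here is a minimal sketch of that retrigger mechanism in the browser, using the Web Audio API (my own illustration with hypothetical names; a production app would use a look-ahead scheduler, since setInterval drifts):

```typescript
// Each column's rhythmic value as a fraction of one beat, matching the
// list above: quarter, dotted eighth, quarter triplet, ..., thirty-second.
const COLUMN_BEATS = [1, 3/4, 2/3, 1/2, 3/8, 1/3, 1/4, 3/16, 1/6, 1/8];

const ctx = new AudioContext();

function startRetrigger(drum: AudioBuffer, column: number, bpm: number): number {
  const intervalMs = (60_000 / bpm) * COLUMN_BEATS[column];
  const play = () => {
    const src = ctx.createBufferSource(); // buffer sources are one-shot
    src.buffer = drum;
    src.connect(ctx.destination);
    src.start();
  };
  play(); // sound immediately on key-down...
  return window.setInterval(play, intervalMs); // ...then at each interval
}

// Holding two keys yields a polyrhythm: e.g. snare on dotted eighths
// (column 1) against hi-hat on sixteenths (column 6).
// On key-up: clearInterval(handle).
```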

The column layout leaves some unused keys in the upper right corner of the keyboard: “-”, “=”, “[”, “]”, “\”, etc. These can be reserved for setting the tempo and other UI controls.

The app defaults to Perform Mode, but clicking Make New Kit opens Sampler Mode, where users can import or record their own drum sounds:

Keyboard shortcuts enable the user to select a sound, audition it, record, set its start and end points, and set its volume level.

A login/password system enables users to save kits to the cloud where they can be accessed from any computer. Kits get unique URL identifiers, so users can also share them via email or social media.
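A minimal sketch of how such shareable kit identifiers might work (the URL and function name are hypothetical):

```typescript
// Hypothetical sketch of shareable kit IDs: a random UUID keys the kit
// in cloud storage and doubles as its share link.
function makeKitUrl(): string {
  const id = crypto.randomUUID(); // unique identifier for the saved kit
  return `https://qwertybeats.example.com/kits/${id}`;
}
```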

It is my goal to make the app accessible to users with the widest possible diversity of abilities.

The entire layout will use plain text, CSS and JavaScript to support screen readers.

All user interface elements can be accessed via the keyboard: Tab to change the keyboard focus, the up and down arrows for menu selections and parameter changes, and so on, as sketched below.
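As a sketch of that approach (illustrative, not the app’s actual markup), a labeled native control already gives keyboard focus, arrow-key adjustment, and screen-reader announcements for free:

```typescript
// Native HTML controls are focusable with Tab and adjustable with the
// arrow keys out of the box; they just need accessible labels.
const tempo = document.createElement('input');
tempo.type = 'range';
tempo.min = '40';
tempo.max = '240';
tempo.value = '120';
tempo.setAttribute('aria-label', 'Tempo in beats per minute');
tempo.addEventListener('input', () => {
  // A screen reader announces the new value as the arrows change it.
  console.log(`Tempo: ${tempo.value} bpm`);
});
document.body.appendChild(tempo);
```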

Perform Mode:

Sampler Mode:

Mobile version

The current plan is to divide the screen into a grid mirroring the layout of the QWERTY keyboard. User testing will determine whether this produces a satisfying experience.

Prototype

I created a prototype of the app using Ableton Live’s Session View.

Here is a sample performance:

There is not much literature examining the impact of drum programming and other electronic rhythm sequencing on students’ subsequent ability to play acoustic drums, or to keep time more accurately in general. I can report anecdotally that my own time spent sequencing and programming drums improved my drumming and timekeeping enormously (and mostly inadvertently). I will continue to seek support for the hypothesis that electronically assisted rhythm creation builds unassisted rhythmic ability. In the meantime, I am eager to prototype and test QWERTYBeats.

The hippest music teachers help their students create original music. But what exactly does that mean? What even is composition? In this post, I take a look at two innovators in music education and try to arrive at an answer.

Participating students in YCIW, as well as my own students at LREI, have been using Noteflight for over six years to compose music for chamber orchestras, symphony orchestras, jazz ensembles, movie soundtracks, video games, school band and more – hundreds of compositions.

Before the advent of the aQWERTYon, students needed to enter music into Noteflight either by clicking with the mouse or by playing notes in with a MIDI keyboard. The former method is accessible but slow; the latter method is fast but requires some keyboard technique. The aQWERTYon combines the accessibility of the mouse with the immediacy of the piano keyboard.

For the first time there is a viable way for every student to generate and notate her ideas in a tactile manner with an instrument that can be played by all. We founded Young Composers & Improvisors Workshop so that every student can have the experience of composing original music. Much of my time has been spent exploring ways to emphasize the “experiencing” part of this endeavor. Students had previously learned parts of their composition on instruments after their piece was completed. Also, students with piano or guitar skills could work out their ideas prior to notating them. But efforts to incorporate MIDI keyboards or other interfaces with Noteflight in order to give students a way to perform their ideas into notation always fell short.

The aQWERTYon lets novices try out ideas the way that more experienced musicians do: by improvising with an instrument and reacting to the sounds intuitively. It’s possible to compose without using an instrument at all, using a kind of sudoku-solving method, but it’s not likely to yield good results. Your analytical consciousness, the part of your mind that can write notation, is also its slowest and dumbest part. You really need your emotions, your ear, and your motor cortex involved. Before computers, you needed considerable technical expertise to be able to improvise musical ideas, and remember them long enough to write them down. The advent of recording and MIDI removed a lot of the friction from the notation step, because you could preserve your ideas just by playing them. With the aQWERTYon and interfaces like it, you can do your improvisation before learning any instrumental technique at all.

Student feedback suggests that kids like being able to play along to previously notated parts as a way to find new parts to add to their composition. As a teacher I am curious to measure the effect of students being able to practice their ideas at home using aQWERTYon and then sharing their performances before using their idea in their composition. It is likely that this will create a stronger connection between the composer and her musical idea than if she had only notated it first.

Those of us who have been making original music in DAWs are familiar with the pleasures of creating ideas through playful jamming. It feels like a major advance to put that experience in the hands of elementary school students.

Matt uses progressive methods to teach a traditional kind of musical expression: writing notated scores that will then be performed live by instrumentalists. Matt’s kids are using futuristic tools, but the model for their compositional technique is the one established in the era of Beethoven.

(I just now noticed that the manuscript Beethoven is holding in this painting is in the key of D-sharp. That’s a tough key to read!)

Other models of composition exist. There’s the Lennon and McCartney method, which doesn’t involve any music notation. Like most untrained rock musicians, the Beatles worked from lyric sheets with chords written on them as a mnemonic. The “lyrics plus chords” method continues to be the standard for rock, folk and country musicians. It’s a notation system that’s only really useful if you already have a good idea of how the song is supposed to sound.

Lennon and McCartney originally wrote their songs to be performed live for an audience. They played in clubs for several years before ever entering a recording studio. As their career progressed, however, the Beatles stopped performing live, and began writing with the specific goal of creating studio recordings. Some of those later Beatles tunes would be difficult or impossible to perform live. Contemporary artists like Missy Elliott and Pharrell Williams have pushed the Beatles’ idea to its logical extreme: songs existing entirely within the computer as sequences of samples and software synths, with improvised vocals arranged into shape after being recorded. For Missy and Pharrell, creating the score and the finished recording are one and the same act.

Is it possible to teach the Missy and Pharrell method in the classroom? Alex Ruthmann, MusEDLab founder and my soon-to-be PhD advisor, documented his method for doing so in 2007.

As a middle school general music teacher, I’ve often wrestled with how to engage my students in meaningful composing experiences. Many of the approaches I’d read about seemed disconnected from the real-world musicality I saw daily in the music my students created at home and what they did in my classes. This disconnect prompted me to look for ways of “bridging the gap” between the students’ musical world outside music class and their in-class composing experiences.

It’s an axiom of constructivist music education that students will be most motivated to learn music that’s personally meaningful to them. There are kids out there for whom notated music performed on instruments is personally meaningful. But the musical world outside music class usually follows the Missy and Pharrell method.

[T]he majority of approaches to teaching music with technology center around notating musical ideas and are often rooted in European classical notions of composing (for example, creating ABA pieces, or restricting composing tasks to predetermined rhythmic values). These approaches require students to have a fairly sophisticated knowledge of standard music notation and a fluency working with rhythms and pitches before being able to explore and express their musical ideas through broader musical dimensions like form, texture, mood, and style.

Noteflight imposes some limitations on these musical dimensions. Some forms, textures, moods and styles are difficult to capture in standard notation. Some are impossible. If you want to specify a particular drum machine sound combined with a sampled breakbeat, or an ambient synth pad, or a particular stereo image, standard notation is not the right tool for the job.

Common approaches to organizing composing experiences with synthesizers and software often focus on simplified classical forms without regard to whether these forms are authentic to the genre or to technologies chosen as a medium for creation.

There is nothing wrong with teaching classical forms. But when making music with computers, the best results come from making the music that’s idiomatic to computers. Matt McLean goes to extraordinary lengths to have student compositions performed by professional musicians, but most kids will be confined to the sounds made by the computer itself. Classical forms and idioms sound awkward at best when played by the computer, but electronic music sounds terrific.

The middle school students enrolled in these classes came without much interest in performing, working with notation, or studying the classical music canon. Many saw themselves as “failed” musicians, placed in a general music class because they had not succeeded in or desired to continue with traditional performance-based music classes. Though they no longer had the desire to perform in traditional school ensembles, they were excited about having the opportunity to create music that might be personally meaningful to them.

Here it is, the story of my life as a music student. Too bad I didn’t go to Alex’s school.

How could I teach so that composing for personal expression could be a transformative experience for students? How could I let the voices and needs of the students guide lessons for the composition process? How could I draw on the deep, complex musical understandings that these students brought to class to help them develop as musicians and composers? What tools could I use to quickly engage them in organizing sound in musical and meaningful ways?

Alex draws parallels between writing music and writing English. Both are usually done alone at a computer, and both pose a combination of technical and creative challenges.

Musical thinking (thinking in sound) and linguistic thinking (thinking using language phrases and ideas) are personal creative processes, yet both occur within social and cultural contexts. Noting these parallels, I began to think about connections between the whole-language approach to writing used by language arts teachers in my school and approaches I might take in my music classroom.

In the whole-language approach to writing, students work individually as they learn to write, yet are supported through collaborative scaffolding: support from their peers and the teacher. At the earliest stages, students tell their stories and attempt to write them down using pictures, drawings, and invented notation. Students write about topics that are personally meaningful to them, learning from their own writing and from the writing of their peers, their teacher, and their families. They also study literature of published authors. Classes that take this approach to teaching writing are often referred to as “writers’ workshops”… The teacher facilitates [students’] growth as writers through minilessons, share sessions, and conferring sessions tailored to meet the needs that emerge as the writers progress in their work. Students’ original ideas and writings often become an important component of the curriculum. However, students in these settings do not spend their entire class time “freewriting.” There are also opportunities for students to share writing in progress and get feedback and support from teacher and peers. Revision and extension of students’ writing occur throughout the process. Lessons are not organized by uniform, prescriptive assignments, but rather are tailored to the students’ interests and needs. In this way, the direction of the curriculum and successive projects are informed by the students’ needs as developing writers.

Alex set about creating an equivalent “composers’ workshop,” combining composition, improvisation, and performing with analytical listening and genre studies.

The broad curricular goal of the composers’ workshop is to engage students collaboratively in:

Organizing and expressing musical ideas and feelings through sound with real-world, authentic reasons for and means of composing

Listening to and analyzing musical works appropriate to students’ interests and experiences, drawn from a broad spectrum of sources

Studying processes of experienced music creators through listening to, performing, and analyzing their music, as well as being informed by accounts of the composition process written by these creators.

Alex recommends production software with strong loop libraries so students can make high-level musical decisions with “real” sounds immediately.

While students do not initially work directly with rhythms and pitch, working with loops enables students to begin composing through working with several broad musical dimensions, including texture, form, mood, and affect. As our semester progresses, students begin to add their own original melodies and musical ideas to their loop-based compositions through work with synthesizers and voices.

As they listen to musical exemplars, I try to have students listen for the musical decisions and understand the processes that artists, sound engineers, and producers make when crafting their pieces. These listening experiences often open the door to further dialogue on and study of the multiplicity of “musical roles” that are a part of creating today’s popular music. Having students read accounts of the steps that audio engineers, producers, songwriters, film-score composers, and studio musicians go through when creating music has proven to be informative and has helped students learn the skills for more accurately expressing the musical ideas they have in their heads.

Alex shares my belief in project-based music technology teaching. Rather than walking through the software feature-by-feature, he plunges students directly into a creative challenge, trusting them to pick up the necessary software functionality as they go. Rather than tightly prescribe creative approaches, Alex observes the students’ explorations and uses them as opportunities to ask questions.

I often ask students about their composing and their musical intentions to better understand how they create and what meanings they’re constructing and expressing through their compositions. Insights drawn from these initial dialogues help me identify strategies I can use to guide their future composing and also help me identify listening experiences that might support their work or techniques they might use to achieve their musical ideas.

Some musical challenges are more structured: Alex does “genre studies” where students have to pick out the qualities that define techno or rock or film scores, and then create using those idioms. This is especially useful for younger students who may not have much experience listening closely to a wide range of music.

Rather than devoting entire classes to demonstrations or lectures, Alex prefers to devote the bulk of classroom time to working on the projects, offering “minilessons” to smaller groups or individuals as the need arises.

Teaching through minilessons targeted to individuals or small groups of students has helped to maintain the musical flow of students’ compositional work. As a result, I can provide more individual feedback and support to students as they compose. The students themselves also offer minilessons to peers when they want to teach more about advanced features of the software, such as how to record a vocal track, add a fade-in or fade-out, or copy their musical material. These technology skills are taught directly to a few students, who then become the experts in that skill, responsible for teaching other students in the class who need the skill.

Not only does the peer-to-peer learning help with cultural authenticity, but it also gives students invaluable experience with the role of teacher.

One of my first questions is usually, “Is there anything that you would like me to listen for or know about before I listen?” This provides an opportunity for students to seek my help with particular aspects of their composing process. After listening to their compositions, I share my impressions of what I hear and offer my perspective on how to solve their musical problems. If students choose not to accept my ideas, that’s fine; after all, it’s their composition and personal expression… Use of conferring by both teacher and students fosters a culture of collaboration and helps students develop skills in peer scaffolding.

Alex recommends creating an online gallery of class compositions. This has become easier to implement since 2007 with the explosion of blog platforms like Tumblr, audio hosting tools like SoundCloud, and video hosts like YouTube. There are always going to be privacy considerations with such platforms, but there is no shortage of options to choose from.

Once a work is online, students can listen to and comment on these compositions at home outside of class time. Sometimes students post pieces in progress, but for the most part, works are posted when deemed “finished” by the composer. The online gallery can also be set up so students can hear works written by participants in other classes. Students are encouraged to listen to pieces published online for ideas to further their own work, to make comments, and to share these works with their friends and family. The real-world publishing of students’ music on the Internet seems to contribute to their motivation.

Assessing creative work is always going to be a challenge, since there’s no objective basis to assess it on. Alex looks at how well a student composer has met the goal of the assignment, and how well they have achieved their own compositional intent.

The word “composition” is problematic in the context of contemporary computer-based production. It carries the cultural baggage of Western Europe, the idea of music as having a sole identifiable author (or authors). The sampling and remixing ethos of hip-hop and electronica is closer to the traditions of non-European cultures, where music may be owned by everyone and no one. I’ve had good results bringing remixing into the classroom: having students rework each others’ tracks, beginning with a shared pool of audio samples, or doing more complex collaborative activities like musical shares. Remixes are a way of talking about music via the medium of music, and remixes of remixes can make for some rich and deep conversation. The word “composition” makes less sense in this context. I prefer the broader term “production”, which includes both the creation of new musical ideas and the realization of those ideas in sound.

So far in this post, I’ve presented notation-based composition and loop-based production as if they’re diametrical opposites. In reality, the two overlap, and can be easily combined. A student can create a part as a MIDI sequence and then convert it to notation, or vice versa. The school band or choir can perform alongside recorded or sequenced tracks. Instrumental or vocal performances can be recorded, sampled, and turned into new works. Electronic productions can be arranged for live instruments, and acoustic pieces can be reconceived as electronica. If a hip-hop track can incorporate a sample of Duke Ellington, there’s no reason that sample couldn’t be performed by a high school jazz band. The possibilities are endless.

The Ed Sullivan Fellows program is an initiative by the NYU MusEDLab connecting up-and-coming hip-hop musicians to mentors, studio time, and creative and technical guidance. Our session this past Saturday got off to an intense start, talking about the role of young musicians of color in a world of police brutality and Black Lives Matter. The Fellows are looking to Kendrick Lamar and Chance The Rapper to speak social and emotional truths through music. It’s a brave and difficult job they’ve taken on.

Eventually, we moved from heavy conversation into working on the Fellows’ projects, which this week involved branding and image. I was at kind of a loose end in this context, so I set up the MusEDLab’s Push controller and started playing around with it. Rohan, one of the Fellows, immediately gravitated to it, and understandably so.

Rohan tried out a few drum sounds, then some synths. He quickly discovered a four-bar synth loop that he wanted to build a track around. He didn’t have any Ableton experience, however, so I volunteered to be his co-producer and operate the software for him.

We worked out some drum parts, first with a hi-hat and snare from the Amen break, and then a kick, clap and more hi-hats from Ableton’s C78 factory instrument. For bass, Rohan wanted that classic booming hip-hop sound you hear coming from car stereos in Brooklyn. He spotted the Hip-Hop Sub among the presets. We fiddled with it and he continued to be unsatisfied until I finally just put a brutal compressor on it, and then we got the sound he was hearing in his head.

While we were working, I had my computer connected to a Bluetooth speaker that was causing some weird and annoying system behavior. At one point, iTunes launched itself and started playing a random song under Rohan’s track, “I Can’t Realize You Love Me” by Duke Ellington and His Orchestra, featuring The Harlem Footwarmers and Sid Garry.

Rohan liked the combination of his beat and the Ellington song, so I sampled the opening four bars and added them to the mix. It took me several tries to match the keys, and I still don’t think I really nailed it, but the hip-hop kids have broad tolerance for chord clash, and Rohan was undisturbed.

Once we had the loops assembled, we started figuring out an arrangement. It took me a minute to figure out that when Rohan refers to a “bar,” he means a four-measure phrase. He’s essentially conflating hypermeasures with measures. I posted about it on Twitter later and got some interesting responses.

@ethanhein possibly an artifact of working with grid based music software and loopable chunks of music.

In a Direct Message, Latinfiddler also pointed out that Latin music calls two measures a “bar” because that’s the length of one cycle of the clave.

Thinking about it further, there’s yet another reason to conflate measures with hypermeasures, which is the broader cut-time shift taking place in hip-hop. All of the young hip-hop beatmakers I’ve observed lately work at half the base tempo of their DAW session. Rohan, being no exception, had the session tempo set to 125 bpm, but programmed a beat with an implied tempo of 62.5 bpm. He and his cohort put their backbeats on beat three, not beats two and four, so they have a base grid of thirty-second notes rather than sixteenth notes. A similar shift took place in the early 1960s when the swung eighth notes of jazz rhythm gave way to the swung sixteenth notes of funk.

Here’s Rohan’s track as of the end of our session:

By the time we were done working, the rest of the Fellows had gathered around and started freestyling. The next step is to record them rapping and singing on top. We also need to find someone to mix it properly. I understand aspects of hip-hop very well, but I mix amateurishly at best.

All the way around, I feel like I learn a ton about music whenever I work with young hip-hop musicians. They approach the placement of sounds in the meter in ways that would never occur to me. I’m delighted to be able to support them technically in realizing their ideas; it’s a privilege for me.

My youngest private music production student is a kid named Ilan. He makes moody trip-hop and deep house using Ableton Live. For our session today, Ilan came in with a downtempo, jazzy hip-hop instrumental. I helped him refine and polish it, and then we talked about his ideas for what kind of vocal might work on top. He wanted an emcee to flow over it, so I gave him my folder of hip-hop acapellas I’ve collected. The first one he tried was “Fu-Gee-La [Refugee Camp Remix]” by the Fugees.

I had it all warped out already, so all he had to do was drag and drop it into his session and press play. It sounded great, so he ran with it. Here’s what he ended up with:

At this point, let me clarify something. To his knowledge, Ilan had never heard “Fu-Gee-La” before using it in his track. His first exposure was the acapella over his own instrumental. His track is quite a bit faster than the original (well, technically, it’s slower, but the kids these days like their rapping double-time). Also, we needed to pitch the acapella down a minor third to match the key of Ilan’s instrumental. As of this writing, he has heard his remix about a thousand more times than the original.

Hip-hop’s sampling culture was still radical back in the 90s when “Fu-Gee-La” was released, but it has since been absorbed into mainstream sensibilities. Ilan is ambitious and talented, but his sensibilities are well in keeping with those of most of his millennial peers. So it’s worth looking into his norms and values around authorship and ownership. During our session, he was interested in the Fugees song simply as raw material for his own creativity, not as a self-contained work that needed to be “appreciated” first (or ever). Ilan’s concerns about where he sources his sounds come down one hundred percent to expediency. He buys sounds from the Ableton web site because that’s easy. The same goes for buying tracks from iTunes, if they surface with a quick search. Otherwise, Ilan just does YouTube-to-mp3 conversion. I’ve never heard him voice any concern about the idea of intellectual property, or any desire to seek anyone’s permission.

So here we have a young musician who created an original track, and then after the fact layered in a commercially released hip-hop vocal track on a whim. If that one hadn’t worked, he would have just dropped in another one chosen more or less at random. This kind of effortless drag-and-drop remixing requires some facility with Ableton Live, which is expensive and has a learning curve. But the practice is easier than it was five years ago, and it is only going to get easier. Music educators: are we ready for a world where this kind of creativity is so accessible? Rights holders: do you know just how little the kids know or care about the concept of musical intellectual property? And musicians: have you experienced the pleasure and inspiration of freely mixing your ideas with everyone else’s? This is a crazy time we live in.

I’m currently working with the Ed Sullivan Fellows program, an initiative of the NYU MusEDLab where we mentor up-and-coming rappers and producers. Many of them are working with beats they got from YouTube or SoundCloud. That’s fine for working out ideas, but to get to the next level, the Fellows need to be making their own beats. This is partially for intellectual property reasons, and partially because the quality of the mp3s you get from YouTube is not so good. Here’s a collection of resources and ideas I collected for them, which you might find useful too.

What should you use?

There are a lot of digital audio workstations (DAWs) out there. All of them have the same basic set of functions: a way to record and edit audio, a MIDI sequencer, and a set of samples and software instruments. My DAW of choice is Ableton Live. Most of the Sullivan Fellows favor FL Studio. Mac users naturally lean toward GarageBand and Logic. Other common tools for hip-hop producers include Reason, Pro Tools, Maschine, and in Europe, Cubase.

Traditional DAWs are not the only option. Soundtrap is a browser-based DAW that’s similar to GarageBand, but with the enormous advantage that it runs entirely in the web browser. It also offers some nifty features like built-in Auto-Tune at a fraction of the usual price. The MusEDLab’s own Groove Pizza is an accessible browser-based drum sequencer. Looplabs is another intriguing browser tool.

Mobile apps are not as robust or full-featured as desktop DAWs yet, but some of them are getting there. The iOS version of GarageBand is especially tasty. Figure makes great techno loops, though you’ll need to assemble them into songs using another tool. The Launchpad app is a remarkably easy and intuitive one. See my full list of recommendations.

Where do you get sounds?

DAW factory sounds

Every DAW comes with a sample library and a set of software instruments. Pros: they’re royalty-free. Cons: they tend to be generic-sounding and overused. Be sure to tweak the presets.

Sample libraries and instrument packs

The internet is full of third-party sound libraries. They range widely in price and quality. Pros: like DAW factory sounds, library sounds are royalty-free, with a much wider variety available. Cons: the best libraries are expensive.

Humans playing instruments

You could record music the way it was played from the Stone Age through about 1980. Pros: you get human feel, creativity, improvisation, and distinctive instrumental timbres and techniques. Cons: humans are expensive and impractical to record well.

Your record collection

Using more DJ-oriented tools like Ableton, it’s perfectly effortless to pull sounds out of any existing recording. Pros: bottomless inspiration, and the ability to connect emotionally to your listener through sounds that are familiar and meaningful to them. Cons: if you want to charge money, you will probably need permission from the copyright holders, and that can be difficult and expensive. Even giving tracks away on the internet can be problematic. I’ve been using unauthorized samples for years and have never been in any trouble, but I’ve had a few SoundCloud takedowns.

What sounds do you need?

Drums

Most hip-hop beats revolve around the components of the standard drum kit: kicks, snares, hi-hats (open and closed), crash cymbals, ride cymbals, and toms. Handclaps and finger snaps have become part of the standard drum palette as well. There are two kinds of drum sounds: synthetic (“fake”) and acoustic (“real”).

Synthetic drums are the heart and soul of hip-hop (and most other pop and dance music at this point). There are tons of software and hardware drum machines out there, but there are three in particular you should be aware of.

Roland TR-808: If you could only have one drum machine for hip-hop creation, this would be the one. Every DAW contains sampled or simulated 808 sounds, sometimes labeled “old-skool” or something similar. It’s an iconic sound for good reason.

Roland TR-909: A cousin of the 808 that’s traditionally used more for techno. Still, you can get great hip-hop sounds out of it too. Your DAW is certain to contain some 909 sounds, often labeled with some kind of dance music terminology.

LinnDrum: The sound of the 80s. Think Prince, or Hall & Oates. Not as ubiquitous in DAWs as the 808 and 909, but pretty common.

Acoustic drums are less common in hip-hop, though not unheard of; just ask Questlove.

Some hip-hop producers use live drummers, but it’s much easier to use sampled acoustic drums. Samples are also a good source of Afro-Cuban percussion sounds like bongos, congas, timbales, cowbells, and so on. Also consider using “non-musical” percussion sounds: trash can lids, pots and pans, basketballs bouncing, stomping on the floor, and so on.

And how do you learn where to place these drum sounds? Try the specials on the Groove Pizza. Here’s an additional hip-hop classic to experiment with: the beat from “Nas Is Like” by Nas.

Bass

Hip-hop uses synth bass the vast majority of the time. Your DAW comes with a variety of synth bass sounds, including the simple sine wave sub, the P-Funk Moog bass, dubstep wobbles, and many others. For more unusual bass sounds, try very low-pitched piano or organ. Bass guitar isn’t extremely common in current hip-hop, but it’s worth a try. If you want a 90s Tribe Called Quest vibe, try upright bass.

In the past decade, some hip-hop producers have followed Kanye West’s example and used tuned 808 kick drums to play their basslines. Kanye has used it on all of his albums since 808s and Heartbreak. It’s an amazing solution; those 808 kicks are huge, and if they’re carrying the bassline too, then your low end can be nice and open. Another interesting alternative is to have no bassline at all. It worked for Prince!

And what notes should your bass be playing? If you have chords, the obvious thing is to have the bass playing the roots. You can also have the bass play complicated countermelodies. We made a free online course called Theory for Producers to help you figure these things out.

Chords

Usually your chords are played on some combination of piano, electric piano, organ, synth, strings, guitar, or horns. Vocal choirs are nice too. Once again, consult Theory for Producers for inspiration. Be sure to try out chords with the aQWERTYon, which was specifically designed for this very purpose.

Leads

The same instruments that you use for chords also work fine for melodies. In fact, you can think of melodies as chords stretched out horizontally, and conversely, you can think of chords as melodies stacked up vertically.