
Ableton recently launched a delightful web site that teaches the basics of beatmaking, production and music theory using elegant interactives. If you’re interested in music education, creation, or user experience design, you owe it to yourself to try it out.

If you’ve been following the work of the NYU Music Experience Design Lab, you might notice some strong similarities between Ableton’s site and our tools. That’s no coincidence. Dennis and I have been having an informal back and forth on the role of technology in music education for a few years now. It’s a relationship that’s going to get a step more formal this fall at the 2017 Loop Conference – more details on that as it develops.

Meanwhile, Peter Kirn’s review of the Learning Music site raises some probing questions about why Ableton might be getting involved in education in the first place. But first, he makes some broad statements about the state of the musical world that are worth repeating in full.

I think there’s a common myth that music production tools somehow take away from the need to understand music theory. I’d say exactly the opposite: they’re more demanding.

Every musician is now in the position of composer. You have an opportunity to arrange new sounds in new ways without any clear frame from the past. You’re now part of a community of listeners who have more access to traditions across geography and essentially from the dawn of time. In other words, there’s almost no choice too obvious.

The music education world has been slow to react to these new realities. We still think of composition as an elite and esoteric skill, one reserved for a small class of highly trained specialists. Before computers, this was a reasonable enough attitude to have, because it was mostly true. Not many of us can learn an instrument well enough to compose with it, then learn to notate our ideas. Even fewer of us will be able to find musicians to perform those compositions. But anyone with an iPhone and twenty dollars’ worth of apps can make original music using an infinite variety of sounds, and share that music online with anyone willing to listen. My kids started playing with iOS music apps when they were one year old. With the technical barriers to musical creativity falling away, the remaining challenge is gaining an understanding of music itself: how it works, why some things sound good and others don’t. This is the challenge that we as music educators are suddenly free to take up.

There’s an important question to ask here, though: why Ableton?

To me, the answer to this is self-evident. Ableton has been in the music education business since its founding. Like Adam Bell says, every piece of music creation software is a de facto education experience. Designers of DAWs might even be the most culturally impactful music educators of our time. Most popular music is made by self-taught producers, and a lot of that self-teaching consists of exploring DAWs like Ableton Live. The presets, factory sounds and affordances of your DAW powerfully inform your understanding of musical possibility. If DAW makers are going to be teaching the world’s producers, I’d prefer if they do it intentionally.

So far, there has been a divide between “serious” music making tools like Ableton Live and the toy-like iOS and web apps that my kids use. If you’re sufficiently motivated, you can integrate them all together, but it takes some skill. One of the most interesting features of Ableton’s web site, then, is that each interactive tool includes a link that will open up your little creation in a Live session. Peter Kirn shares my excitement about this feature.

There are plenty of interactive learning examples online, but I think that “export” feature – the ability to integrate with serious desktop features – represents a kind of breakthrough.

Ableton Live is a superb creation tool, but I’ve been hesitant to recommend it to beginner producers. The web site could change my mind about that.

So, this is all wonderful. But Kirn points out a dark side.

The richness of music knowledge is something we’ve received because of healthy music communities and music institutions, because of a network of overlapping ecosystems. And it’s important that many of these are independent. I think it’s great that software companies are getting into the action, and I hope they continue to do so. In fact, I think that’s one healthy part of the present ecosystem.

It’s the rest of the ecosystem that’s worrying – the one outside individual brands and what they support. Public music education is getting squeezed in different ways all around the world. Independent content production is, too, even in advertising-supported publications like this one, but more so in other spheres. Worse, I think education around music technology hasn’t even begun to be reconciled with traditional music education – in the sense that people with specialties in one field tend not to have any understanding of the other. And right now, we need both – and both are getting their resources squeezed.

This might feel like I’m going on a tangent, but if your DAW has to teach you how harmony works, it’s worth asking the question – did some other part of the system break down?

Yes it did! Sure, you can learn the fundamentals of rhythm, harmony, and form from any of a thousand schools, courses, or books. But there aren’t many places you can go to learn about them in the context of Beyoncé, Daft Punk, or A Tribe Called Quest. Not many educators are hip enough to include the Sleng Teng riddim as one of the fundamentals. I’m doing my best to rectify this imbalance–that’s what my courses with Soundfly are for. But I join Peter Kirn in wondering why it’s left to private companies to do this work. Why isn’t school music more culturally relevant? Why do so many educators insist that kids like the wrong music? Why is it so common to get a music degree without ever writing a song? Why is the chasm between the culture of school music and music generally so wide?

Like Kirn, I’m distressed that school music programs are getting their budgets cut. But there’s a reason that’s happening, and it isn’t that politicians and school boards are philistines. Enrollment in school music is declining in places where the budgets aren’t being cut, and even where schools are offering free instruments. We need to look at the content of school music itself to see why it’s driving kids away. Both the content of school music programs and the people teaching them are whiter than the student population. Even white kids are likely to be alienated from a Eurocentric curriculum that doesn’t reflect America’s increasingly Afrocentric musical culture. The large ensemble model that we imported from European conservatories is incompatible with the riot of polyglot individualism in the kids’ earbuds.

While music therapists have been teaching songwriting for years, it’s rare to find it in school music curricula. Production and beatmaking are even more rare. Not many adults can play oboe in an orchestra, but anyone with a guitar or keyboard or smartphone can write and perform songs. Music performance is a wonderful experience, one I wish were available to everyone, but music creation is on another level of emotional meaning entirely. It’s like the difference between watching basketball on TV and playing it yourself. It’s a way to understand your own innermost experiences and the innermost experiences of others. It changes the way you listen to music, and the way you approach any kind of art for that matter. It’s a tool that anyone should be able to have in their kit. Ableton is doing the music education world an invaluable service; I hope more of us follow their example.

I use a project-based approach to teaching music technology. Technical concepts stick with you better if you learn them in the course of making actual music. Here’s the list of projects I assign to my college classes and private students. I’ve arranged them from easiest to hardest. The first five projects are suitable for a beginner-level class using any DAW–my beginners use GarageBand. The last two projects are more advanced and require a DAW with sophisticated editing tools and effects, like Ableton Live. If you’re a teacher, feel free to use these (and let me know if you do). Same goes for all you bedroom producers and self-teachers.

The projects are agnostic as to musical content, style or genre. However, the computer is best suited to making electronic music, and most of these projects work best in the pop/hip-hop/techno sphere. Experimental, ambient or film music approaches also work well. Many of them draw on the Disquiet Junto. Enjoy.

Loops

Assignment: Create a song using only existing loops. You can use these or these, or restrict yourself to the loops included with your DAW. Do not use any additional sounds or instruments.

For beginners, I like to separate this into two assignments. First, create a short (two or four bar) phrase using four to six instrument loops and beats. Then use that set of loops as the basis of a full-length track, by repeating them and by having sounds enter and exit.

Concepts:

Basic DAW functions

Listening like a producer

Musical form and song structures

Intellectual property, copyright and authorship

Hints:

MIDI loops are easier to edit and customize than audio loops.

Try slicing audio loops into smaller segments. Use only the front or back half of the loop. Or rearrange segments into a different order.

MIDI

Assignment: Create a piece of music using MIDI and software instruments. Do not record or import any audio. You can use MIDI from any source, including: playing keyboards, drum pads or other interfaces; drawing in the piano roll; importing scores from notation programs; downloading MIDI files from the internet (for example, from here); or using the Audio To MIDI function in your DAW.

I don’t treat this as a composition exercise (unless students want to make it one.) Feel free to use an existing piece of music. The only requirement is that the end result has to sound good. Simply dragging a classical or pop MIDI into the DAW is likely to sound terrible unless you put some thought into your instrument choices. If you do want to create something original, try these compositional prompts.

Rather than playing back a Bach keyboard piece on piano or harpsichord, set your instrument to drums or percussion, and get ready for joy.

Found sound

Assignment: Record a short environmental sound and incorporate it into a piece of music. You can edit and process your found sound as you see fit. Variation: use existing sounds from Freesound.

Concepts:

Audio recording, editing, and effects

The musical potential of “non-musical” sounds

Hints:

Students usually record their sounds with their phones, and the resulting recording quality is usually bad. Try using EQ, compression, delay, reverb, distortion, and other effects to mitigate or enhance poor sound quality and background noise.

Peer remix

Assignment: Remix a track by one of your classmates (or friends, or a stranger on the internet.) Feel free to incorporate other pieces of music as well. Follow your personal definition of the word “remix.” That might mean small edits and adjustments to the mix and effects, or a radical reworking leading to complete transformation of the source material.

There are endless variations on the peer remix. Try the “metaremix,” where students remix each others’ remixes, to the nth degree as time permits. Also, do group remix activities like Musical Shares or FX Roulette.

Concepts:

Collaboration and authorship

Sampling

Mashups

Evolution of musical ideas

Musical critique using musical language

Hints:

A change in tempo can have dramatic effects on the mood and feel of a track.

Adding sounds is the obvious move, but don’t be afraid to remove things too.

Self remix

Assignment: Remix one of your own projects, using the same guidelines as the peer remix. This is a good project for the end of the semester/term.

Song transformation

Assignment: Take an existing song and turn it into a new song. Don’t use any additional sounds or MIDI.

Concepts:

Advanced audio editing and effects

Musical form and structure

The nature of originality

Hints:

You can transform short segments simply by repeating them out of context. For example, try taking single chords or lyrical phrases and looping them.

Shared sample

Assignment: Take a short audio sample (five seconds or less) and build a complete piece of music out of it. Do not use any other sounds. This is the most difficult assignment here, and the most rewarding one if you can pull it off successfully.

Concepts:

Advanced audio editing and effects

Musical form and structure

The nature of originality

Hints:

Pitch shifting and timestretching are your friends.

Short bursts of noise can be tuned up and down to make drums.

Extreme timestretching produces great ambient textures.

Writing assignments

I like to have students document their process in blog posts. I ask: What sounds and techniques did you use? Why did you use them? Are you happy with the end result? Given unlimited time and expertise, what changes would you make? Do you consider this to be a valid form of musical creativity?

This semester I also asked students to write reviews of each others’ work in the style of their preferred music publication. In the future, I plan to have students write a review of an imaginary track, and then assign other students to try to create the track being described.

Further challenges

The projects above were intended to be used for a one-semester college class. If I were teaching over a longer time span or I needed more assignments, I would draw from the Disquiet Junto, Making Music by Dennis DeSantis, or the Oblique Strategies cards. Let me know in the comments if you have additional recommendations.

I’m currently working with the Ed Sullivan Fellows program, an initiative of the NYU MusEDLab where we mentor up-and-coming rappers and producers. Many of them are working with beats they got from YouTube or SoundCloud. That’s fine for working out ideas, but to get to the next level, the Fellows need to be making their own beats. Partly this is for intellectual property reasons, and partly it’s because the quality of mp3s you get from YouTube is not so good. Here’s a collection of resources and ideas I collected for them, which you might find useful too.

What should you use?

There are a lot of digital audio workstations (DAWs) out there. All of them have the same basic set of functions: a way to record and edit audio, a MIDI sequencer, and a set of samples and software instruments. My DAW of choice is Ableton Live. Most of the Sullivan Fellows favor FL Studio. Mac users naturally lean toward GarageBand and Logic. Other common tools for hip-hop producers include Reason, Pro Tools, Maschine, and in Europe, Cubase.

Traditional DAWs are not the only option. Soundtrap is similar to GarageBand, but with the enormous advantage that it runs entirely in the web browser. It also offers some nifty features like built-in Auto-Tune at a fraction of the usual price. The MusEDLab’s own Groove Pizza is an accessible browser-based drum sequencer. Looplabs is another intriguing browser tool.

Mobile apps are not as robust or full-featured as desktop DAWs yet, but some of them are getting there. The iOS version of GarageBand is especially tasty. Figure makes great techno loops, though you’ll need to assemble them into songs using another tool. The Launchpad app is a remarkably easy and intuitive one. See my full list of recommendations.

Where do you get sounds?

DAW factory sounds

Every DAW comes with a sample library and a set of software instruments. Pros: they’re royalty-free. Cons: they tend to be generic-sounding and overused. Be sure to tweak the presets.

Sample libraries and instrument packs

The internet is full of third-party sound libraries. They range widely in price and quality. Pros: like DAW factory sounds, library sounds are royalty-free, with a far wider variety available. Cons: the best libraries are expensive.

Humans playing instruments

You could record music the way it was played from the Stone Age through about 1980. Pros: you get human feel, creativity, improvisation, and distinctive instrumental timbres and techniques. Cons: humans are expensive and impractical to record well.

Your record collection

With DJ-oriented tools like Ableton Live, it’s effortless to pull sounds out of any existing recording. Pros: bottomless inspiration, and the ability to connect emotionally to your listener through sounds that are familiar and meaningful to them. Cons: if you want to charge money, you will probably need permission from the copyright holders, and that can be difficult and expensive. Even giving tracks away on the internet can be problematic. I’ve been using unauthorized samples for years and have never been in any trouble, but I’ve had a few SoundCloud takedowns.

What sounds do you need?

Drums

Most hip-hop beats revolve around the components of the standard drum kit: kicks, snares, hi-hats (open and closed), crash cymbals, ride cymbals, and toms. Handclaps and finger snaps have become part of the standard drum palette as well. There are two kinds of drum sounds, synthetic (“fake”) and acoustic (“real”).

Synthetic drums are the heart and soul of hip-hop (and most other pop and dance music at this point.) There are tons of software and hardware drum machines out there, but there are three in particular you should be aware of.

Roland TR-808: If you could only have one drum machine for hip-hop creation, this would be the one. Every DAW contains sampled or simulated 808 sounds, sometimes labeled “old-skool” or something similar. It’s an iconic sound for good reason.

Roland TR-909: A cousin of the 808 that’s traditionally used more for techno. Still, you can get great hip-hop sounds out of it too. Your DAW is certain to contain some 909 sounds, often labeled with some kind of dance music terminology.

LinnDrum: The sound of the 80s. Think Prince, or Hall And Oates. Not as ubiquitous in DAWs as the 808 and 909, but pretty common.

Acoustic drums are less common in hip-hop, though not unheard of; just ask Questlove.

Some hip-hop producers use live drummers, but it’s much easier to use sampled acoustic drums. Samples are also a good source of Afro-Cuban percussion sounds like bongos, congas, timbales, cowbells, and so on. Also consider using “non-musical” percussion sounds: trash can lids, pots and pans, basketballs bouncing, stomping on the floor, and so on.

And how do you learn where to place these drum sounds? Try the specials on the Groove Pizza. Here’s an additional hip-hop classic to experiment with: the beat from “Nas Is Like” by Nas.

Bass

Hip-hop uses synth bass the vast majority of the time. Your DAW comes with a variety of synth bass sounds, including the simple sine wave sub, the P-Funk Moog bass, dubstep wobbles, and many others. For more unusual bass sounds, try very low-pitched piano or organ. Bass guitar isn’t especially common in current hip-hop, but it’s worth a try. If you want a 90s Tribe Called Quest vibe, try upright bass.

In the past decade, some hip-hop producers have followed Kanye West’s example and used tuned 808 kick drums to play their basslines. Kanye has used it on all of his albums since 808s and Heartbreak. It’s an amazing solution; those 808 kicks are huge, and if they’re carrying the bassline too, then your low end can be nice and open. Another interesting alternative is to have no bassline at all. It worked for Prince!

And what notes should your bass be playing? If you have chords, the obvious thing is to have the bass playing the roots. You can also have the bass play complicated countermelodies. We made a free online course called Theory for Producers to help you figure these things out.

Chords

Usually your chords are played on some combination of piano, electric piano, organ, synth, strings, guitar, or horns. Vocal choirs are nice too. Once again, consult Theory for Producers for inspiration. Be sure to try out chords with the aQWERTYon, which was designed for exactly this purpose.

Leads

The same instruments that you use for chords also work fine for melodies. In fact, you can think of melodies as chords stretched out horizontally, and conversely, you can think of chords as melodies stacked up vertically.

The first song on Kanye West’s Life Of Pablo album, and my favorite so far, is the beautiful, gospel-saturated “Ultralight Beam.” See Kanye and company perform it live on SNL.

The song uses only four chords, but they’re an interesting four: C minor, E-flat major, A-flat major, and G7. To find out why they sound so good together, let’s do a little music theory.

“Ultralight Beam” is in the key of C minor, and three of the four chords come from the C natural minor scale, shown below. Click the image to play the scale in the aQWERTYon (requires Chrome).

To make a chord, start on any scale degree, then skip two degrees clockwise, and then skip another two, and so on. To make C minor, you start on C, then jump to E-flat, and then to G. To make E-flat major, you start on E-flat, then jump to G, and then to B-flat. And to make A-flat major, you start on A-flat, then jump to C, and then to E-flat. Simple enough so far.
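The skip-two-degrees recipe is easy to sketch in code. Here’s a minimal Python illustration (the scale list and `triad` function are my own labels for this post, not from any music library):

```python
# The C natural minor scale as a circle of seven degrees.
C_NATURAL_MINOR = ["C", "D", "Eb", "F", "G", "Ab", "Bb"]

def triad(scale, root_degree):
    """Build a chord by starting on a scale degree, then skipping
    two degrees at a time, wrapping around the circle."""
    return [scale[(root_degree + step) % len(scale)] for step in (0, 2, 4)]

print(triad(C_NATURAL_MINOR, 0))  # C minor: ['C', 'Eb', 'G']
print(triad(C_NATURAL_MINOR, 2))  # E-flat major: ['Eb', 'G', 'Bb']
print(triad(C_NATURAL_MINOR, 5))  # A-flat major: ['Ab', 'C', 'Eb']
```

The same function gives you all seven diatonic triads of the scale, one for each starting degree.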

The C natural minor scale shares its seven notes with the E-flat major scale:

All we’ve really done here is rotate the circle three slots counterclockwise. All the relationships stay the same, and you can form the same chords in the same way. The two scales are so closely related that if you noodle around on C natural minor long enough, it just starts sounding like E-flat major. Try it!
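You can verify the shared-notes claim numerically. In this little sketch (my own notation, with pitches as semitone numbers where C = 0), C natural minor and E-flat major reduce to exactly the same set of pitch classes:

```python
# Scales as interval patterns in semitones above the root.
MAJOR = [0, 2, 4, 5, 7, 9, 11]
NATURAL_MINOR = [0, 2, 3, 5, 7, 8, 10]

def pitch_classes(root, intervals):
    """All the notes of a scale, reduced to pitch classes 0-11."""
    return sorted((root + i) % 12 for i in intervals)

c_natural_minor = pitch_classes(0, NATURAL_MINOR)  # root C = 0
eb_major = pitch_classes(3, MAJOR)                 # root Eb = 3

print(c_natural_minor == eb_major)  # True: the same seven notes
```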

The last of the four chords in “Ultralight Beam” is G7, and to make it, we need a note that isn’t in C natural minor (or E-flat major): the leading tone, B natural. If you take C natural minor and replace B-flat with B natural, you get a new scale: C harmonic minor.

If you make a chord starting on G from C natural minor, you get G minor (G, B-flat, D). The chord sounds fine, and you could use it with the other three above without offending anyone. But if you make the same chord using C harmonic minor, you get G major (G, B, D). This is a much more dramatic and exciting sound. If you stack one more third on top, you get G7 (G, B, D, F), known as the dominant chord in C minor. In the diagram below, the G7 chord is in blue, and C minor is in green.
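Here’s that construction as a short Python sketch (pitch classes are semitone numbers with C = 0; the scale lists and `chord` function are illustrative, not from any library). Swapping B-flat (10) for B natural (11) is the only difference between the two scales, and it flips the chord on G from minor to major:

```python
C_NATURAL_MINOR = [0, 2, 3, 5, 7, 8, 10]   # seventh degree is Bb (10)
C_HARMONIC_MINOR = [0, 2, 3, 5, 7, 8, 11]  # Bb raised to B natural (11)

def chord(scale, root_degree, size=3):
    """Stack thirds: take every other scale degree, wrapping around."""
    return [scale[(root_degree + 2 * i) % 7] for i in range(size)]

print(chord(C_NATURAL_MINOR, 4))      # G minor: [7, 10, 2] = G, Bb, D
print(chord(C_HARMONIC_MINOR, 4))     # G major: [7, 11, 2] = G, B, D
print(chord(C_HARMONIC_MINOR, 4, 4))  # G7: [7, 11, 2, 5] = G, B, D, F
```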

Hear how much more intensely that B natural pulls to C than the B-flat did? That’s what gives the song its drama, and what puts it unambiguously in C minor rather than E-flat major.

“Ultralight Beam” has a nice chord progression, but that isn’t its most distinctive feature. The thing that jumps out most immediately is the unusual beat. Nearly all hip-hop is in 4/4 time, where each measure is subdivided into four beats, and each of those four beats is subdivided into four sixteenth notes. “Ultralight Beam” uses 12/8 time, which was prevalent in the first half of the twentieth century, but is a rarity now. Each measure still has four beats in it, but each of those beats is subdivided into three eighth notes rather than four sixteenths.

The track states this rhythm very obliquely. The drum track consists almost entirely of silence. The vocals and other instruments skip lightly around the beat. Chance The Rapper’s verse in particular pulls against the meter in all kinds of complex ways.

The song’s structure is unusual too, a wide departure from the standard “verse-hook-verse-hook”.

The intro is six bars long: two bars of ambient voices, then four bars over the chord progression. The song proper begins with just the first half of the chorus (known in hip-hop circles as the hook.) Kanye has an eight-bar verse, followed by the first full chorus. Kelly Price gets the next eight-bar verse. So far, so typical. But then, where you expect the next chorus, The-Dream gets his four-bar verse, followed by Chance The Rapper’s ecstatic sixteen-bar verse. Next is what feels like the last chorus, but that’s followed by Kirk Franklin’s four-bar verse, and then a four-bar outro with just the choir singing haunting single words. It’s strange, but it works. Say what you want about Kanye as a public figure, but as a musician, he is in complete control of his craft.

I’m delighted to announce the launch of a new interactive online music course called Theory for Producers: The Black Keys. It’s a joint effort by Soundfly and the NYU MusEDLab, representing the culmination of several years worth of design and programming. We’re super proud of it.

The course makes the abstractions of music theory concrete by presenting them in the form of actual songs you’re likely to already know. You can play and improvise along with the examples right in the web browser using the aQWERTYon, which turns your computer keyboard into an easily playable instrument. You can also bring the examples into programs like Ableton Live or Logic for further hands-on experimentation. We’ve spiced up the content with videos and animations, along with some entertaining digressions into the Stone Age and the auditory processing abilities of frogs.

So what does it mean that this is music theory for producers? We’re organizing the material in a way that’s easiest and most relevant to people using computers to create the dance music of the African diaspora: techno, hip-hop, and their various pop derivatives. This music carries most of its creative content outside of harmony: in rhythm, timbre, and repetitive structure. The harmony is usually static, sitting on a loop of a few chords or just a single mode. Alongside the standard (Western) major and minor scales, you’re just as likely to encounter more “exotic” (non-Western) sounds.

Music theory classes and textbooks typically begin with the C major scale, because it’s the easiest scale to represent and read in music notation. However, C major is not necessarily the most “basic” or fundamental scale for our intended audience. Instead, we start with E-flat minor pentatonic, otherwise known as the black keys on the piano. The piano metaphor is ubiquitous both in electronic music hardware and software, and pentatonics are even easier to play on piano than diatonic scales. E-flat minor pentatonic is more daunting in notated form than C major, but since dance and hip-hop producers tend not to be able to read music anyway, that’s no obstacle. And if producers want to use keys other than E-flat minor (or G-flat major), they can keep playing the black keys and then transpose the MIDI later.
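That transposition trick is a one-liner on MIDI note numbers. A minimal sketch (note numbers follow the standard MIDI convention where middle C = 60; the function is mine, purely for illustration):

```python
# One octave of E-flat minor pentatonic (the black keys), as MIDI notes:
# Eb4=63, Gb4=66, Ab4=68, Bb4=70, Db5=73.
EB_MINOR_PENTATONIC = [63, 66, 68, 70, 73]

def transpose(notes, semitones):
    """Shift recorded MIDI notes into another key after the fact."""
    return [n + semitones for n in notes]

# Down three semitones gives C minor pentatonic (C, Eb, F, G, Bb).
print(transpose(EB_MINOR_PENTATONIC, -3))  # [60, 63, 65, 67, 70]
```

So you can keep playing the comfortable black-key shapes and land in any key you like with one edit.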

The Black Keys is just the first installment in Theory For Producers. Next, we’ll do The White Keys, otherwise known as the modes of C major. We’re planning to start that course not with C major itself, but with G Mixolydian mode, because it’s a more familiar sound in Afrodiasporic music than straight major. After that, we’ll do a course about chords, and one about rhythm. We hope you sign up!

If you’ve ever wondered what it is that a music producer does exactly, David Bowie’s “Space Oddity” is a crystal clear example. To put it in a nutshell, a producer turns this:

Into this:

It’s also interesting to listen to the first version of the commercial recording, which is better than the demo, but still nowhere near as majestic as the final version. The Austin Powers flute solo is especially silly.

Should we even consider these three recordings to be the same piece of music? On the one hand, they’re all the same melody and chords and lyrics. On the other hand, if the song only existed in its demo form, or in the awkward Austin Powers version, it would never have made the impact that it did. Some of the impact of the final version lies in better recording techniques and equipment, but it’s more than that. The music takes on a different meaning in the final version. It’s bigger, trippier, punchier, tighter, more cinematic, more transporting, and in general about a thousand times more effective.

The producer’s job is to marshal the efforts of songwriters, arrangers, performers and engineers to create a good-sounding recording. (The producer might also be a songwriter, arranger, performer, and/or engineer.) Producers are to songs what directors are to movies, or showrunners are to television.

When you’re thinking about a piece of recorded music, you’re really talking about three different things:

The underlying composition, the part that can be represented on paper. Albin Zak calls this “the song.”

The performance of the song.

The finished recording, after overdubbing, mixing, editing, effects, and all the rest. Albin Zak calls this “the track.”

I had always assumed that Tony Visconti produced “Space Oddity,” since he produced a ton of other Bowie classics. As it turns out, though, Visconti was underwhelmed by the song, so he delegated it to his assistant, Gus Dudgeon. So what is it that Gus Dudgeon did precisely? First let’s separate out what he didn’t do.

You can hear from the demo that the chords, melody and lyrics were all in place before Bowie walked into the studio. They’re the parts reproduced by the subway busker I heard singing “Space Oddity” this morning. The demo includes a vocal arrangement that’s similar to the final one, aside from some minor phrasing changes. The acoustic guitar and Stylophone are in place as well. (I had always thought it was an oboe, but no, that droning sound is a low-tech synth.)

Gus Dudgeon took a song and a partial arrangement, and turned it into a track. He oversaw the addition of electric guitar, bass, drums, strings, woodwinds, and keyboards. He coached Bowie and the various studio musicians through their performances, selected the takes, and decided on effects like echoes and reverb. He supervised the mixing, which not only sets the relative loudness of the various sounds, but also affects their perceived location and significance. In short, he designed the actual sounds that you hear.

If you want to dive deep into the track, you’re in luck, because Bowie officially released the multitrack stems. Some particular points of interest:

The bassist, Herbie Flowers, was a rookie. The “Space Oddity” session was his first. He later went on to create the staggeringly great dual bass part in Lou Reed’s “Walk On The Wild Side.”

The strings were arranged and conducted by the multifaceted Paul Buckmaster, who a few years later would work with Miles Davis on the conception of On The Corner. Buckmaster’s cello harmonics contribute significantly to the psychedelic atmosphere–listen to the end of the stem labeled “Extras 1.”

The live strings are supplemented by Mellotron, played by future Yes keyboardist Rick Wakeman, he of the flamboyant gold cape.

Tony Visconti plays some flute and unspecified woodwinds, including the distinctive saxophone run that leads into the instrumental sections.

The big difference between the sixties and the present is that the track has assumed ever-greater importance relative to the song and the performance. In the age of MIDI and digital audio editing, live performance has become a totally optional component of music. The song is increasingly inseparable from the sounds used to realize it, especially in synth-heavy music like hip-hop and EDM. This shift gives the producer ever-greater importance in the creative process. There is really no such thing as a “demo” anymore, since anyone with a computer can produce finished-sounding tracks in their bedroom. If David Bowie were a kid now, he’d put together “Space Oddity” in GarageBand or FL Studio, with a lavish soundscape as part of the conception from the beginning.

I want my students to understand that the words “producer” and “musician” are becoming synonymous. I want them to know that they can no longer focus solely on composition or performance and wait for someone else to craft a track around them. The techniques used to make “Space Oddity” were esoteric and expensive to realize at the time. Now, they’re easily within reach. But while the technology is more accessible, you still have to have the ideas. This is why it’s so valuable to study great producers like Tony Visconti and Gus Dudgeon: they’re a goldmine of sonic inspiration.