Acoustic Theory For Electronic Musicians

In this class, Kurt Harland Larson of Information Society presents a window into basic acoustics and sound theory in a way relevant to electronic musicians. Synthesizers (and their software equivalents) produce sound directly and algorithmically, which requires the user to think about sounds as raw phenomena rather than through the well-established vocabulary tied to physical devices such as flutes, mechanical sirens, or whoopee cushions. This way of thinking can be greatly enhanced by a basic understanding of the physical and cognitive process we call "sound". In this class we will develop that basic understanding: how sound works, how synthesizers make sound, and how what the synths do interacts with acoustics and your brain.

SYNOPSIS

Topics covered will include:

- A starter overview of the physics of sound

- Harmonics

- The harmonic sequence

- Sonograms of simple waveforms (comparing sine, saw, and square)

- How "upper harmonics" relate to "high frequencies"

- Synth waveforms

- The concept of "analogue": how delta-voltage (voltage changing over time) becomes sound

- How synth waveforms map onto actual sounds

- The intended waveform vs. how the speaker actually behaves

- Psycho-acoustics and related concepts

- How mid-range filter modulation tickles the speech-recognition processing of the brain

- Perceived loudness versus actual loudness

- Signification: literal sound versus significance, or how sounds become a vocabulary
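
The harmonics topics above can be previewed in miniature. The sketch below (my own illustration, not course material; the function name `additive_saw` is made up for this example) builds a sawtooth by additive synthesis, summing sine harmonics at integer multiples of the fundamental with amplitudes falling off as 1/n. This is the standard Fourier-series identity for a saw wave, and it shows concretely how "upper harmonics" and "high frequencies" are the same thing.

```python
import math

SAMPLE_RATE = 44_100  # samples per second (CD quality)

def additive_saw(freq_hz, n_harmonics, num_samples):
    """Approximate a sawtooth by summing its harmonic series:
    sin(2*pi*f*t) + sin(2*pi*2f*t)/2 + sin(2*pi*3f*t)/3 + ...
    Each harmonic sits at an integer multiple of the fundamental,
    with amplitude falling off as 1/n."""
    out = []
    for i in range(num_samples):
        t = i / SAMPLE_RATE
        sample = sum(
            math.sin(2 * math.pi * n * freq_hz * t) / n
            for n in range(1, n_harmonics + 1)
        )
        out.append(sample * (2 / math.pi))  # scale roughly into -1..1
    return out

# More harmonics -> brighter tone (more high-frequency content):
dull = additive_saw(220.0, 3, 512)     # only the lowest 3 harmonics
bright = additive_saw(220.0, 40, 512)  # many upper harmonics
```

Truncating the series to a few harmonics yields a dull, rounded tone; adding upper harmonics sharpens the waveform's edges and adds high-frequency content, which is exactly what a low-pass filter removes when it "darkens" a saw.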

Attendees will come away with an improved understanding of what sound physically is, with some of the more mysterious parts of the synth-to-ear pipeline illuminated, and with a better sense of how people relate to the sounds they hear.