Making music with Csound

Composing with Csound

When Max Mathews created Music V [5], the forerunner of Csound and many other computer music languages, he intended it to be easily and quickly comprehensible to musicians. Many terms are similar to their counterparts in standard instrumental music practice, and the presence of a scoring language is a clear indicator of Dr. Mathews's intentions. Real-time performance was a far-off prospect then – hence, the need for a score facility.

Perhaps thanks to its limitations, Csound's score language is quickly comprehended. A score consists of one or more events – which may or may not be musical notes – added to an event list. Each event contains a series of parameter fields (p-fields) that control the output of the event's specified instrument.

The first three p-fields are predefined and cannot be changed. P1 indicates the number of the instrument to be controlled by the event, p2 is the delta start time of the event, and p3 is its duration. The p-fields from p4 on are user-defined, with their number varying with the particular instrument's requirements. Values for p-fields can be defined by direct entry, reference to a Csound macro, or by evaluating an expression.
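As a sketch, here is how an event for a hypothetical instrument 1 might read, with the p-fields labeled (the meaning of p4 and up is whatever the instrument assigns to them):

```csound
;  p1     p2     p3        p4 and up: user-defined
i  1      0      5         10000  440  1
;  instr  start  duration  (here: amplitude, frequency, function table)
```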

Events in an event list can be ordered in any way; Csound will sort the list into a time-ordered series before running or compiling it. However, if events are arranged in temporal succession, you can use the score language operators for carrying (.), incrementing (+), and ramping (</>) values between events (Csound will interpolate the values between a ramp's boundaries). The score syntax also includes controls for section repeat, mute, skip, advance, and tempo. Csound's score language may be limited, but it has some neat features.
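As a sketch of those shorthand operators at work (the instrument and its p-field layout here are hypothetical):

```csound
i1  0  1  10000  440   ; all p-fields stated explicitly
i1  +  .  .      <     ; +: previous start plus duration; .: carry p3 and p4; <: ramp p5
i1  +  .  .      880   ; ramp endpoint; the middle note's p5 interpolates to 660
```

The ramp is computed against the events' start times, so with evenly spaced notes the interpolated values fall evenly between the two explicit boundary values.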

A Csound event can specify a high degree of detail. Manually editing a score is certainly possible, but producing a lengthy score that way is a formidable task: the same precision that gives the composer fine control also creates more work. Fortunately, help is available, thanks to the developers of Csound's front ends and production environments.

Old-school Csounders may have had occasion to use the Cscore API, a package of C functions designed for warping an existing Csound score. Cscore is still around and is still quite usable – see the latest manual examples – but it requires working knowledge of a C development environment. These days, the functions in Cscore are perhaps better handled directly by a more modern language. Surprise, surprise – Csound includes a family of opcodes that interface directly with Python, allowing expressions and other code in that language to be evaluated within Csound itself.
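To sketch the idea of score generation in a modern language, here is a short standalone Python script (not using the py opcodes) that emits a series of i-statements; the `make_score` helper and its amplitude and frequency choices are my own, not part of any Csound distribution:

```python
# Generate a Csound score fragment: eight notes, one per half-second,
# rising along a harmonic series of the base frequency.

def make_score(base_freq=110, notes=8, dur=0.5, amp=8000, table=1):
    """Return a list of Csound i-statement strings."""
    events = []
    for n in range(notes):
        start = n * dur
        freq = base_freq * (n + 1)   # nth harmonic of the base frequency
        events.append("i1 %g %g %d %g %d" % (start, dur, amp, freq, table))
    return events

if __name__ == "__main__":
    print("\n".join(make_score()))
```

Paste the output into a CSD's score section (or redirect it to a .sco file), and a minute of typing replaces dozens of hand-entered events.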

Input and/or output in Western music notation is not supported directly. However, the Rosegarden sequencer and the Denemo notation program can function as notation-based front ends for Csound. Additionally, at least one user has reported on work integrating MusicXML with CsoundAC, a Csound-based environment for algorithmic music composition. Further possibilities exist with Rick Taube's GRACE/CM, an excellent Lisp-based algorithmic composition system. GRACE/CM supports the FOMUS transcription utility and includes Csound in its target formats, thus generating a real-time performance, a score in standard music notation, and a Csound-ready event list, all at once.

Getting the Message Out

Third-party developers can access Csound's powers through the Csound API. Basic access requires only the inclusion of the csound.h header in your code and the presence of the libcsound.so shared library. The API empowers many third-party projects, including Rory Walsh's Cabbage, Andres Cabrera's CsoundQt, and Jean-Pierre Lemoine's AVSynthesis (described here later).

WinXound (Figure 1), blue (Figure 2), and CsoundQt (Figure 3) are general-purpose development environments for normal users of Csound. These programs offer code completion, syntax highlighting, online help, integrated graphics, and other amenities for sound and music production with Csound.

Figure 1: Stefano Bonetti's WinXound 3.4.0.

Figure 2: Steven Yi's blue 0.120 (screenshot by Steven Yi).

Figure 3: Andres Cabrera's CsoundQt 0.8.0.

CsoundAC, athenaCL, and CM/GRACE are environments specialized for algorithmic composition, generating event lists that can be formatted for targeted synthesis engines such as Csound and SuperCollider. Cecilia (Figure 4) and AVSynthesis are similar Csound-based production environments with more complex GUIs.

Figure 4: Jean Piche's Cecilia 4.1beta.

Rory Walsh's Cabbage (Figure 5) is a unique project. Cabbage's prime directive is the simplified creation of audio applications and plugins based on the Csound API. I've been working with the codebase for a while, and I admit that it's a dream come true to invoke a Csound-powered synthesizer from within the Ardour digital audio workstation.

Figure 5: Cabbage.

Currently, the program creates standalone applications and native Linux VST plugins, but Rory continues to expand Cabbage's capabilities. LV2 plugin creation is coming soon, along with features to make it even easier to roll your own Csound-driven plugins. By the way, you can find links to these and other Csound assistants on the Csound Helpers page [6].

Regarding the outside world, Csound tries to get along with just about everyone. With modern machines, you can easily play a Csound synthesizer with a real or virtual keyboard, or any other controller – hardware or software – that sends comprehensible event messages. MIDI is well supported, as is the less well known but arguably more powerful OSC messaging protocol, and the JACK transport control is available for Linux and OS X Csounders.

Wii controllers and other Bluetooth devices can be used as external controllers, and I recommend checking out the IanniX and GeoSonix programs for experimental work with a very different sort of sequencer. Csound even extends good tidings to other software synthesis environments – users of Max/MSP or Pure Data can access Csound's powers via the csound~ object.

If you're already proficient in C/C++, Java, or Python, you can utilize Csound's bindings to those languages. Write your code in your preferred language and import Csound's services through the specific language interface, and you have Csound in your app. For example, AVSynthesis uses the Java interface (csnd.jar) to access its Csound engine.

Csound6 is the first version of Csound that supports live coding methods. Live coding is an emerging practice that combines algorithmic composition with improvisation (i.e., the programmer is coding in real time in much the same way that an instrumentalist improvises). Check out Rory Walsh's videos [7] for a convincing demonstration of how it's done in Csound.

Simple Csound

My first example is the Csound equivalent of the novice coder's "Hello, world!" program; that is, I'll create a simple instrument and play a note on it. The example is in Csound's CSD file format with inline comments and saved as simple.csd (Listing 1). I wanted to clarify everything in this example, hence the excessive comments. Sorry about that.

Listing 1

Csound Synthesizer

<CsoundSynthesizer>
;;; The CsoundSynthesizer tag initializes the file format.
<CsOptions>
;;; Performance-time options, suppressing graphics and messaging,
;;; and selecting your machine's default device for audio output.
-d -m0 -g -f -odac
;;; Select these non-realtime options for a nice floating-point WAV file.
;;; -d -m0 -g -f -oSimple.wav
</CsOptions>
<CsInstruments>
;;; Now we define an instrument. First we declare some global
;;; values in a header. Csound provides default values for this
;;; section; the numbers here suit my hardware.
sr = 48000
ksmps = 64
nchnls = 2
instr 1 ; Each instrument has a number.
kamp = p4 ; Assign the value found in the score's fourth parameter field to a variable called kamp.
kfreq = p5 ; Assign the value found in the score's fifth p-field to a variable called kfreq.
ifn = p6 ; Assign the value found in the score's sixth p-field to a variable called ifn.
kenv linseg 0,p3*.50,1,p3*.50,0 ; Create a simple "rise and fall" envelope function with the linseg opcode.
asig oscil kamp*kenv,kfreq,ifn ; Assign the values for kamp (scaled by kenv), kfreq, and ifn to the opcode's
; slots for amplitude, frequency, and function table. Name the output asig.
outs asig, asig ; Send the asig value to both stereo output channels.
endin ; This instrument definition is ended.
</CsInstruments>
<CsScore>
;;; The score section provides values to the instrument on a per-event basis.
;;; In this simple score we have one function table and one note-event.
f1 0 8192 10 1 ; A Csound stored function table, GEN10, for a sine wave with 8192 points and a single harmonic.
i1 0 5 10000 440 1 ; Instructions for instrument 1. Starting at delta time 0, for five seconds play a moderately
; loud note with a frequency of 440 Hz and the waveform stored in function table f1.
e ; End score.
</CsScore>
</CsoundSynthesizer>

If you're new to Csound, I recommend using the excellent CsoundQt IDE, but any common text editor can be used to write a Csound program (I also like vi/vim with Luis Jure's helpful extensions for coding Csound). Whatever editor you use, just be sure to save your program in plain text format.

If you're old school, you can compile this example at a command prompt:

$ csound simple.csd

If you're using CsoundQt or another Csound IDE, you can just click on its Run button. If all goes well, Csound will play a tone through your default audio device – or render a WAV file, if you selected the non-realtime options – that sounds exactly as described in the score section comments (i.e., like a sine wave oscillating at 440Hz for five seconds). Over that period, the wave's loudness will start at 0, reach peak (1) midway through the note, and fall back to 0 at the end.

If all does not go well, Csound will issue a (hopefully) helpful error message. In my experience, Csound itself is rarely the problem, although new and experimental features might be less than completely stable. Most initial errors are caused by mistakes in code syntax or a faulty design.

Csound's language syntax is simple, direct, and easy to comprehend. In a typical instrument statement, the named output appears first, followed by the operator or opcode employed and its parameter set. Some opcodes, such as the outs opcode, take no output name, but most will be used as seen in the example here. The number of parameters varies per opcode – some have none, some have dozens. Each parameter can be evaluated directly in the instrument or in its associated p-field in a score-event definition.

For the typical user, Csound's opcodes are the system's real wealth. Each opcode is a "black box" of some type relevant to Csound's objectives. For example, the oscil opcode provides an oscillator with three parameters for frequency (pitch), amplitude (loudness), and wavetable (timbre). Other audio programming environments have their own sets of opcodes, but few – if any – challenge the variety available for Csound. If the built-in opcodes aren't enough for you, yet another variety is available through the user-defined opcode database. Users can roll their own audio and MIDI opcodes, without a deep knowledge of Csound's internal code.
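As a sketch of the user-defined opcode mechanism, here is a minimal two-output panner; the name SimplePan and its design are my own invention, not part of the Csound distribution:

```csound
opcode SimplePan, aa, ak
  asig, kpos xin                 ; kpos: 0 = hard left, 1 = hard right
  aleft  = asig * sqrt(1 - kpos) ; square-root scaling keeps power roughly constant
  aright = asig * sqrt(kpos)
  xout aleft, aright
endop

; used inside an instrument definition:
;   aL, aR SimplePan asig, 0.25
;   outs aL, aR
```

The declaration names the opcode, its output types (aa, two audio signals), and its input types (ak, one audio and one control signal); xin and xout move data across the opcode's boundary.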

A word about those a, k, and i prefixes: They refer to rates of data processing, defined as audio (a), control (k), and init (i) rates. The audio and control rates are defined by the instrument header block, and the init value is a fixed value determined at the initialization stage when Csound processes a score event. These rates can have audible effects on the results of your designs, but I don't have space to get into those intricacies here. See the relevant chapters in the Csound manual for more information.
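One piece of arithmetic is worth knowing, though: the control rate kr is simply sr divided by ksmps. A quick check against Listing 1's header values, sketched in Python:

```python
# Rates implied by the header block in Listing 1.
sr = 48000       # audio rate: samples per second
ksmps = 64       # audio samples per control period
kr = sr / ksmps  # control rate: control-signal updates per second
print(kr)        # 750.0
```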

I've specified the amplitude and frequency values in their raw forms. The kamp amplitude value is taken from a range of 0 to 32768; the kfreq frequency value is stated in hertz (cycles per second) and can be any value within the range of audible frequencies.

Csound does provide conversion opcodes that let musicians enter data in more familiar terms, such as decibels and pitch classes (note names), and if you use a suitable development environment, you can use a piano-roll display to enter events along a graphic timeline.
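For illustration, the math behind two of those conversion opcodes – ampdb (decibels to raw amplitude) and cpspch (octave.pitch-class notation to Hz) – can be sketched in Python; the functions below are my own re-creations, and the constant assumes Csound's conventional middle C (pitch class 8.00) at A440 tuning:

```python
MIDDLE_C = 261.625565  # Hz: Csound's pitch-class 8.00 at A440 tuning

def ampdb(db):
    """Decibels to raw amplitude: 10^(db/20)."""
    return 10 ** (db / 20.0)

def cpspch(pch):
    """Octave.pitch-class (e.g., 8.09 = A above middle C) to Hz."""
    octave = int(pch)
    semitones = round((pch - octave) * 100)  # the digits after the point
    return MIDDLE_C * 2 ** ((octave - 8) + semitones / 12.0)

print(round(cpspch(8.09), 1))  # 440.0
```

In a real orchestra you would simply write ampdb(90) or cpspch(8.09) in an instrument and let Csound do the conversion.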
