I'm using the JACK server for real-time audio I/O and ALSA for real-time MIDI I/O. The -Ma flag opens Csound to MIDI input from all available MIDI ports; the other flags have been explained. The -o and -i flags set the real-time output and input targets, and the --expression-opt flag optimizes the evaluation of expressions in the orchestra code, a worthwhile savings when running in real time.
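For reference, a complete command line with these options might look like the following example; the CSD file name and the -+rtaudio/-+rtmidi module flags are my assumptions, shown only to make the context concrete:

csound -+rtaudio=jack -+rtmidi=alsa -Ma -odac -iadc --expression-opt mysynth.csd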

Next, I'll tweak the instrument and score code. I want to derive my instrument's frequency and amplitude values from the MIDI note and velocity values received from the keyboard, a task perfectly suited to the midinoteoncps opcode. To do this, add the following line at the start of the instrument code,

midinoteoncps p5, p4

and change the assignment for ifn:

ifn = 1

Then, you can redefine the kenv statement with a MIDI envelope type:

kenv madsr 0.5, 0.8, 0.8, 0.5

The instrument is now prepared for incoming MIDI signals. Add the following line to the score:

f0 3600

Comment out the i1 line, and leave the f1 line intact. The f0 event initializes and leaves Csound in a receiving state for one hour, which should be time enough to connect a MIDI keyboard to a free USB port. Because I told Csound to receive on any and all MIDI ports, I can run the modified CSD file as before, but this time I'll hear nothing until I play a note on the keyboard.
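The modified score section might then look like this sketch; the f1 table parameters and the timing values in the commented i1 event are assumptions standing in for your originals:

f1 0 8192 10 1   ; function table 1, a sine wave
f0 3600          ; keep Csound alive and listening for one hour
;i1 0 4 ...      ; original note event, now commented out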

If this all works, you'll hear your notes, although at a low volume. To boost the output, increase kamp's value:

kamp = p4*10

You can apply an envelope to kamp for a gentler start and finish to your notes:

asig oscil kamp*kenv, kfreq, ifn
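Assembled from the steps above, the MIDI-ready instrument might look something like this sketch; the instrument number, the kfreq assignment, and the out statement are assumptions filled in from context:

instr 1
  midinoteoncps p5, p4            ; frequency from the MIDI note, velocity into p4
  ifn   = 1                       ; use function table 1
  kamp  = p4*10                   ; boost the low raw velocity values
  kfreq = p5
  kenv  madsr 0.5, 0.8, 0.8, 0.5  ; MIDI-aware ADSR envelope
  asig  oscil kamp*kenv, kfreq, ifn
        out asig
endin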

The revised instrument, minus excessive tags and comments, is shown in Listing 2.

More complex MIDI opcodes are available; this example merely demonstrates how to switch a deferred-time score to one with MIDI-controlled real-time output. I can extend the example further to include amenities, such as a GUI with MIDI parameter control, but I leave that experiment to the diligent reader.

Documentation

Csound's documentation is rich and varied. Along with the official manual – available online and downloadable in HTML, PDF, and CHM formats – you can access the online FLOSS manual, purchase any or all of the six full-length books about Csound, and download extensive collections of instruments and scores.

Dozens of well-produced videos about Csound can be viewed on YouTube and Vimeo, and many pieces of Csound-made music have been uploaded to csounds.com [1] and SoundCloud [8]. Users and developers communicate over the expected channels, including mailing lists, dedicated websites, and social networking services. Face-to-face encounters are encouraged by the International Csound Conference, the second of which was held in Boston in October 2013. Csounders are also usually present in strength at related meetings, such as the Linux Audio Conference and the International Computer Music Conference.

As lead developer John ffitch emphasizes in the monthly list reminder, participation is open to all, regardless of skill level or experience. The user and development communities overlap, with most developers contributing compositions as well as code.

Musically, Csound is style-agnostic, with a history rich in association with the general development of computer music technologies. Richard Boulanger's "Trapped in Convert" and James Dashow's "In Winter Shine" are notable examples of Csound's capabilities in the 1980s. Today, popular music styles may include Csound in a generative or processing role. Boulanger's students have worked with Trent Reznor and Richard James (Aphex Twin) in projects involving Csound. Boulanger himself has been central to the development of the CsoundForLive series of plugins that bring Csound's powers to users of Ableton Live, one of the most popular music production programs for Windows and the Mac.

Csound can be studied in programs at various universities and colleges around the world. Thanks to the presence of Dr. Boulanger, a full course in Csound is taught at the Berklee College of Music in Boston, where Boulanger has gathered a talented crew of students. Many other schools offer courses either focused on Csound or using it as adjunct software for signal processing studies.

Victor Lazzarini teaches Csound at the National University of Ireland in Maynooth, where he has attracted a very talented team, including Cabbage developer Rory Walsh. Not surprisingly, chief Csound developer John ffitch (primus inter pares) uses Csound in his DSP courses. If you'd like to study Csound at the university level, check the Csound mailing list and ask about currently available courses.

Some splendid compositions have come from composers working with Csound. For example, Art Hunkins and Dave Seidel have been productive with Csound's intonation capabilities; Oeyvind Brandtsegg has designed wonderful installations using Csound for audio processing; Peiman Khosravi's work with sound localization is very inspiring; and the music of Michael Gogins explores the strange spaces opened by chaotic and other probabilistic mathematics. In truth, a lot more Csound-based music is available out there, and it's definitely worth seeking out and listening to.

Personal Notes

My experience with Csound goes back to 1989. By that time, MIDI's limitations were starting to chafe against my compositional needs, and I was looking at other approaches to creating music with the computer. CPU architectures for personal computers were becoming powerful enough to run Csound, so I could test it on capable hardware – at that time a 486 running MS-DOS. I was hooked on Csound after only a brief trial period, and I've stayed hooked. When I started experimenting with Linux, I was pleased to learn that Csound could run on it, too.

For the past five years, I've been working with AVSynthesis (Figure 6), a mixed-media environment designed for creating complex interactions between its audio and graphics lobes. OpenGL handles the 3D graphics animation and transformations, while Csound does the heavy lifting for audio. The two halves can be decoupled, so my work includes graphics-only, audio-only, and mixed-media pieces. AVS includes a fixed number of predesigned, high-quality instruments and signal processors, all with user-definable parameter control ranges. The program operates in real-time and non-real-time modes, but I favor its non-real-time aspect.

Figure 6: AVSynthesis 40_5_19.

Each instrument plays a score defined by the instrument's selected composition mode, and each composition mode presents a unique interface for generating score events. Four modes are currently available: a 64-step sequencer, an analog-style sequencer, a piano-roll score, and a GUI for André Bartetzki's CMask, a program designed for generating score events for audio synthesis environments like Csound. External control is possible with MIDI and OSC protocols, and output includes a sound file, a graphics sequence, and a properly configured Csound CSD file.

AVSynthesis is not a generalized front end for Csound. You can't plug in new instrument and/or processing modules, and there are other purposely fixed aspects to its design. It is an instrument designed for composition and performance, with its audio I/O completely based on Csound. In that regard, for my purposes, I consider it a Csound-based environment, restricted but powerful, and certainly an excellent representation of Csound's capabilities.

My use of Csound includes a background wind sound called "The Spring Of 23" [9], works combining acoustic instruments with Csound (live and pre-recorded), and strictly Csound pieces, such as "Vespers" (played at the 2013 Linux Audio Conference) and "Alba" (played at Csound Conference 2013). In my opinion, Csound can be used to make any kind of music, although it is certainly better suited to some forms than others.

Linux has truly started to compete with Windows and MacOS as a platform for professional sound applications. Linux Multimedia Studio (LMMS) is a Linux sound tool that packs a variety of impressive features into a neat bundle.