Realtime Audio on iOS Tutorial: Making a Mandolin

Update

As the MoMu toolkit has become outdated, I have re-written this tutorial using The Amazing Audio Engine as the audio engine and the latest Synthesis Toolkit in C++. Please visit the updated tutorial here:

—

Wouldn’t it be great to pull an all-powerful musical instrument out of your pocket whenever inspiration strikes? This is what attracts musically-minded programmers to the iPhone, as it has all the computing power needed to become that instrument. We soon find out that getting the thing to produce a sound isn’t difficult: just add an audio file to your project and tell an AVAudioPlayer object to play it. Five lines of code, at most.

But relying on premade audio files as the sound sources for your mobile instrument has serious disadvantages. Firstly, you don’t have the flexibility to control the sound in realtime: you can’t, for instance, change its pitch or add a reverb while it plays. Modifying the audio file in the background takes time, meaning your users will have to put up with latency between their input and the resulting sound. There are also performance issues, as good-quality audio files take up a lot of disk space and memory.

What you want is your app to generate and control its own sounds, and for that you need to be able to process the audio at the sample level. But it turns out that the setup required to access and control audio samples on iOS involves a considerable amount of time (and an in-depth knowledge of the mushrooms Apple engineers were eating at the time they came up with the API for the iPhone’s audio hardware).

This tutorial will show you how to access, generate, and control audio samples on an iOS app, using two freely available open-source libraries, one that will set up our low-latency Audio Session (the MoMu Toolkit), and another that will generate (i.e. synthesize) the sounds (the Synthesis Toolkit in C++).

Setting up the Xcode Project

1) Open up Xcode and create a new project.

2) In the column on the left, under the heading “iOS”, pick “Application”, and then choose “Single View Application”. Click “Next”.

3) In the next screen:

Enter “Mandolin” as the “Product Name”

“Company Identifier” can be your name.

No need to enter anything for “Class Prefix”.

Pick “iPhone” as “Device Family”.

Tick “Use Automatic Reference Counting”.

4) Click “Next” and save the project somewhere dear to you.

5) We need to import iOS’s audio processing frameworks for the MoMu code to work. Click on the new project’s icon (top left), scroll down to “Linked Frameworks and Libraries”, and click the ‘+’ button.

6) Click on AudioToolbox.framework and then “Add”.

7) The MoMu Toolkit is a hybrid of C, C++, and Objective-C (also known as Objective-C++), so we need to tell Xcode that our project’s source files are in that language too. To do this, rename AppDelegate.m to AppDelegate.mm and rename ViewController.m to ViewController.mm.

The AudioData struct will contain all the sound-generating or sound-modifying objects active in the callback function. For now, it contains a Mandolin object that will generate the mandolin sounds.
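The struct’s declaration isn’t shown in this excerpt; here is a minimal sketch of what it could look like. A toy stand-in replaces stk::Mandolin so the sketch compiles on its own — the stand-in’s pluck()/tick() interface mirrors the STK’s, but its internals (a simple decaying amplitude) are invented for illustration:

```cpp
// Toy stand-in for stk::Mandolin so this sketch compiles without the STK.
// pluck() stores an amplitude; tick() returns the next sample, decaying
// towards silence. (The real class does proper physical modelling.)
struct Mandolin {
    explicit Mandolin(float lowestFrequency) : energy(0.0f) { (void)lowestFrequency; }
    void pluck(float amplitude) { energy = amplitude; }
    float tick() { float s = energy; energy *= 0.999f; return s; }
private:
    float energy;
};

// All the sound-generating objects the render callback needs, bundled in
// one struct so a single user-data pointer can carry them into the callback.
struct AudioData {
    Mandolin *myMandolin;
};

AudioData audioData;
```

Bundling everything into one struct is what lets the C-style callback, which only receives a single void pointer, reach every synth object it needs.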

15) Time to set up MoMu. MoMu needs us to define our sample rate, how big the audio processing buffer is, and whether we’re dealing with stereo or not. Go to ViewController.mm and type the following under #import “mo_audio.h”:

#define SRATE 44100
#define FRAMESIZE 128
#define NUMCHANNELS 2

16) And just below that, paste the declaration of our Audio Callback function:

This function is our render callback. It gets called hundreds of times a second and processes, in real time, the frames that contain the samples. The magic happens inside the for loop: we assign the output of the mandolin to the sample that will be sent to the output (our headphones). We’ll thus hear silence when the mandolin isn’t plucked, and the mandolin’s sound when it is.
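The callback’s code isn’t reproduced in this excerpt, so here is a sketch of the shape being described. The mandolin is again a toy stand-in so the sketch compiles on its own, and the exact MoMu callback signature is an assumption — only the interleaved-stereo loop is the point:

```cpp
// Stand-in synth: tick() returns the next mono sample (the real
// stk::Mandolin exposes the same call; its internals are far richer).
struct Mandolin {
    float energy = 0.0f;
    void pluck(float amplitude) { energy = amplitude; }
    float tick() { float s = energy; energy *= 0.999f; return s; }
};

struct AudioData { Mandolin *myMandolin; };

enum { NUMCHANNELS = 2 };

// A MoMu-style render callback: `buffer` holds `framesize` interleaved
// stereo frames. One mandolin sample is written to both channels of
// each frame; when the mandolin is silent, so is the buffer.
void audioCallback(float *buffer, unsigned int framesize, void *userData)
{
    AudioData *data = (AudioData *)userData;
    for (unsigned int i = 0; i < framesize; i++) {
        float sample = data->myMandolin->tick();
        buffer[i * NUMCHANNELS]     = sample;  // left channel
        buffer[i * NUMCHANNELS + 1] = sample;  // right channel
    }
}
```

Writing the same sample to both channels gives a mono instrument in a stereo buffer; a stereo effect would compute the two channels separately.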

19) It’s now time to pluck the mandolin! We’ll strike it by pressing a button we’ll insert in our nib file or storyboard, so let’s define the code for that IBAction:

- (IBAction)pluckMyMandolin {
    audioData.myMandolin->pluck(1);
}

We’re calling the function pluck() on our instance of the STK’s Mandolin. As you can see from its documentation, this method takes at least one parameter; in our case it’s a float defining the amplitude with which we want to pluck the strings. Get familiar with the docs for the STK classes if you want to be a good instrument-maker, son.

20) Now open ViewController.xib (or MainStoryBoard.storyboard), drag a simple Round Rect Button into the view. Ctrl+drag from the button to the File’s Owner, let go, and select the pluckMyMandolin method.

21) Run the app and pluck away.

If you run the app on the Simulator, the console may, on some older versions of Xcode, print out a long and menacing Error loading /System/Library/Extensions/AudioIPCDriver.kext/Contents/Resources/AudioIPCPlugIn.bundle/Contents/MacOS/AudioIPCPlugIn:. This sometimes happens when the Simulator can’t find a framework that’s only included in iOS (in our case, the AudioToolbox framework we imported earlier). The Simulator then falls back on the framework’s Mac-based counterpart, so the mandolin sound should play nonetheless.

I’ve noticed the initial pluck too; it’s probably due to the constructor of the STK’s Instrmnt class. One workaround is to compute data->myMandolin->tick() * someInt, where someInt is initially zero and is set to 1 on the first pluck.
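That gating workaround could be sketched like this — again with a toy stand-in for the STK Mandolin, which here deliberately starts with leftover energy to imitate the constructor’s stray pluck:

```cpp
// Toy stand-in that imitates the problem: "construction" leaves residual
// energy, so tick() would be audible before anyone has plucked.
struct Mandolin {
    float energy = 0.5f;                      // stray energy from construction
    void pluck(float amplitude) { energy = amplitude; }
    float tick() { float s = energy; energy *= 0.999f; return s; }
};

struct AudioData {
    Mandolin *myMandolin;
    int someInt = 0;                          // gate: 0 until the first pluck
};

// In the render loop: multiply by the gate so nothing is heard
// until the user has actually plucked once.
float renderOneSample(AudioData *data)
{
    return data->myMandolin->tick() * data->someInt;
}

void pluckMyMandolin(AudioData *data)
{
    data->someInt = 1;                        // open the gate for good
    data->myMandolin->pluck(1.0f);
}
```

The gate stays at 1 after the first pluck, so it only suppresses the spurious startup sound, not any later silence or decay.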

How would I go about creating a class for another instrument, say a guitar or ukulele? It looks like all I’d need to do is create another class derived from PluckTwo, but is there any info available on what the contents of the corresponding raw sound files should be?