Walkthrough: C++

This page provides a walkthrough of a simple machine learning scenario, and is intended to demonstrate a typical machine learning architecture.

Before you start

Go to the Download and Setup page and follow the instructions to get the RAPID-MIX API. The code discussed on this page is the example called HelloRapidMix, in the /examples folder.

What we’re doing

It is possible to think of a machine learning algorithm like any other algorithm or transfer function — the user provides an input, the algorithm is applied, and a specific output is returned. For this example, we are going to try to make a machine learning algorithm, or model, that takes MIDI note numbers as input, and outputs the frequency in Hertz that corresponds with that note. That formula can be expressed mathematically like this:

f = 2^((d − 69) / 12) × 440

Here d is the MIDI note number and f is the output frequency in Hertz (440 Hz is A4, MIDI note 69). We don’t really need machine learning to do this conversion. Rather, we’re looking at a familiar case and seeing if and how we can reproduce it in a new way.

Choose your machine learning

The RAPID-MIX API gives users access to many machine learning algorithms that fit different use cases. For this case, we want a static algorithm. (Terms in bold are also defined on the General Concepts page.) We’re choosing static because we are just looking at individual MIDI note numbers; the concept of time is not involved. If we were looking at a melody with rhythm, we would prefer a temporal algorithm.

Classification algorithms identify data by discrete label — this is a circle, that is a square, etc. We might use one to identify pitch class. But in this case, we want a continuous range of frequencies, including values between or outside of examples we might provide. For that, we want regression.

The simplest static regression object can be created like this:

C++

#include "rapidmix.h"

rapidmix::staticRegression mtofRegression;

You need to include the rapidmix header and create a staticRegression object in the rapidmix namespace.

Training Data

The other object that this application requires is a place to store examples, or training data, that the machine learning algorithm can use to train. There is one training data class that works with any algorithm. It can be instantiated like this:
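The declaration below mirrors the one used in the Part 2 listing later on this page:

```cpp
#include "rapidmix.h"

// One trainingData class works with every algorithm in the API
rapidmix::trainingData myData;
```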

This new trainingData object has some complexity because it can handle both temporal and static data, but we can ignore the temporal features for now and use it simply.

Adding elements

An element, or example, is a data point that associates a specific output with a specific input. When we train, we are telling the algorithm, “When I send in this input, I want to get back this output.” Both inputs and outputs can be single values or multiple values. In practice, we can send an algorithm input values that are not in the training data and expect (or hope) to get a reasonable output.
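A sketch of the calls that add those examples, assuming a simple two-argument addElement(input, output) overload (check the HelloRapidMix example for the exact signature in your version of the API):

```cpp
#include "rapidmix.h"
#include <vector>

rapidmix::trainingData myData;

// Assumed addElement(input, output) form: inputs are MIDI note numbers,
// outputs are the corresponding frequencies in Hz.
void addTrainingExamples() {
    myData.addElement({ 54.0 }, { 185.00 });  // F#3
    myData.addElement({ 60.0 }, { 261.63 });  // C4 (middle C)
    myData.addElement({ 66.0 }, { 369.99 });  // F#4
    myData.addElement({ 72.0 }, { 523.25 });  // C5
}
```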

We’ve added frequency values for MIDI notes 54, 60, 66, and 72, giving us evenly spaced data points across a span of one and a half octaves.

Train

Training is the process of adjusting the internal parameters of a given model to the specific training data provided. Ideally, the trained model will give a reasonable approximation of the desired output, given a specific input. The actual computation that does the training can be very different for different algorithms. Neural Networks can take a relatively large amount of computation to train, but then they run quite quickly. Other algorithms offer different compromises between training and running efficiencies.

For any algorithm, the train() method, with a trainingData object as its argument, will train using the default settings. For example:

C++

// Train the machine learning model with the data
mtofRegression.train(myData);

Run

Once an algorithm is trained, you can give it input and get output back. That method is also fairly simple:

C++

// Run the trained model on the user input
std::vector<double> inputVec = { double(newNote) };
double freqHz = mtofRegression.run(inputVec)[0];

Like training data, both the input and the output are vectors of doubles. In this case we’re only concerned with single values, but we still need to deal with vectors.

Finished code

On the next page, all of the machine learning code is wrapped in a simple console app that takes in input and returns a frequency.
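As a preview, such a wrapper might look roughly like this. This is a hedged sketch, not the exact finished listing, and it assumes a two-argument addElement(input, output) overload:

```cpp
#include "rapidmix.h"
#include <iostream>
#include <vector>

// Rough sketch of a console app: train on four note/frequency pairs,
// then read MIDI note numbers and print the model's predicted frequency.
int main() {
    rapidmix::staticRegression mtofRegression;
    rapidmix::trainingData myData;

    myData.addElement({ 54.0 }, { 185.00 });
    myData.addElement({ 60.0 }, { 261.63 });
    myData.addElement({ 66.0 }, { 369.99 });
    myData.addElement({ 72.0 }, { 523.25 });

    mtofRegression.train(myData);

    int newNote = 0;
    while (std::cin >> newNote) {
        std::vector<double> inputVec = { double(newNote) };
        double freqHz = mtofRegression.run(inputVec)[0];
        std::cout << "Note " << newNote << " -> " << freqHz << " Hz\n";
    }
    return 0;
}
```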

You might notice that the frequency values are not very similar to what we would hope for, given the known formula for calculating frequency from MIDI values. It turns out that the default staticRegression algorithm (a Neural Network with one hidden layer, each layer having as many nodes as there are inputs) is not very good at representing this type of function. For better performance, we might try a different algorithm.

Part 2: xmmStaticRegression

This is how we do it with a different model:

C++

rapidmix::xmmConfig xcfg;
xcfg.relativeRegularization = 0.1;
rapidmix::xmmStaticRegression myGmr(xcfg);

rapidmix::trainingData myData;

// Record one-element phrases
std::vector<double> input = { 48 };
std::vector<double> output = { 130.81 };
myData.recordSingleElement("lab1", input, output);

input = { 54 };
output = { 185.00 };
myData.addElement("lab2", input, output);

myData.stopRecording();

// Train
myGmr.train(myData);

// Run the trained model on the user input
std::vector<double> inputVec = { double(newNote) };
double freqHz = myGmr.run(inputVec)[0];

There are a few significant differences in this example. It has been adapted to use a different regression algorithm: xmmStaticRegression. This algorithm takes a little preliminary configuration. A configuration object is created with rapidmix::xmmConfig and configured by setting its relativeRegularization member; that object is then passed into the constructor of xmmStaticRegression.