RANDOM

This chapter is in three parts. Part I provides a general introduction to the concepts behind random numbers and how to work with them in Csound. Part II focusses on a more mathematical approach. Part III introduces a number of opcodes for generating random numbers, functions and distributions and demonstrates their use in musical examples.

I. GENERAL INTRODUCTION

Random is Different

The term random derives from the idea of a horse that is running so fast it becomes 'out of control' or 'beyond predictability'.1 Yet there are different ways in which to run fast and to be out of control; therefore there are different types of randomness.

We can divide types of randomness into two classes. The first contains random events that are independent of previous events. The most common example of this is throwing a die: even if you have just thrown three '1's in a row, the next throw has the same probability of yielding a '1' as before (and as any other number). The second class involves random events which depend in some way upon previous numbers or states. Examples here are Markov chains and random walks.

The use of randomness in electronic music is widespread. In this chapter, we shall try to explain how the different random horses are moving, and how you can create and modify them on your own. Moreover, there are many pre-built random opcodes in Csound which can be used out of the box (see the overview in the Csound Manual). The final section of this chapter introduces some musically interesting applications of them.

Random Without History

A computer is typically only capable of computation. Computations are deterministic processes: one input will always generate the same output, but a random event is not predictable. To generate something which looks like a random event, the computer uses a pseudo-random generator.

The pseudo-random generator takes one number as input and generates another number as output, which then becomes the input for the next step. Over a long run these numbers look as if they were randomly distributed, although everything depends on the first input: the seed. For a given seed, every subsequent value can be predicted.
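This mechanism can be sketched in Python with a minimal linear congruential generator. The constants a, c and m here are illustrative assumptions and are not those used by Csound's own generator:

```python
# A minimal linear congruential generator (LCG): each output is
# computed deterministically from the previous one, so the whole
# sequence is fixed by the seed. Constants are illustrative only.
def lcg_sequence(seed, n, a=75, c=74, m=2**16 + 1):
    values = []
    x = seed
    for _ in range(n):
        x = (a * x + c) % m       # the deterministic "random" step
        values.append(x / m)      # scale into the range 0..1
    return values

# the same seed always produces the same chain of numbers
print(lcg_sequence(1, 4))
```

Calling `lcg_sequence` twice with the same seed yields exactly the same series, which is why seeding from the system clock is needed to imitate unpredictability.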

Uniform Distribution

The output of a classical pseudo-random generator is uniformly distributed: each value in a given range has the same likelihood of occurrence. The first example shows the influence of a fixed seed (using the same chain of numbers and beginning from the same location in the chain each time) in contrast to a seed being taken from the system clock (the usual way of imitating unpredictability). The first three groups of four notes will always be the same because of the use of the same seed, whereas the last three groups should always have different pitches.

Note that a pseudo-random generator will repeat its series of numbers after as many steps as are given by the size of the generator. If a 16-bit number is generated, the series will be repeated after 65536 steps. If you listen carefully to the following example, you will hear a repetition in the structure of the white noise (which is the result of uniformly distributed amplitudes) after about 1.5 seconds in the first note.2 In the second note, there is no perceivable repetition as the random generator now works with a 31-bit number.

The way to set the seed differs from opcode to opcode. There are several opcodes, such as rand featured above, which offer the choice of setting a seed as an input parameter. For others, such as the frequently used random family, the seed can only be set globally via the seed statement. This is usually done in the header, so a typical statement would be seed 0, which seeds the generator from the system clock; a positive integer (e.g. seed 1) sets a fixed seed instead.

Random number generation in Csound can be done at any rate. The type of the output variable tells you whether you are generating random values at i-, k- or a-rate. Many random opcodes can work at all these rates, for instance random:

ires random imin, imax
kres random kmin, kmax
ares random kmin, kmax

In the first case, a random value is generated only once, when the instrument is called, at initialisation. The generated value is then stored in the variable ires. In the second case, a random value is generated at each k-cycle and stored in kres. In the third case, in each k-cycle as many random values are generated as the audio vector has samples, and stored in the variable ares. Have a look at example 03A12_Random_at_ika.csd to see this at work. Chapter 03A explains the background of the different rates in depth, and how to work with them.

Other Distributions

The uniform distribution is the one that every computer can output via its pseudo-random generator. But there are many situations in which you will not want a uniformly distributed random value but some other shape. Some of these shapes are quite common, but you can actually build your own shapes quite easily in Csound. The next examples demonstrate how to do this. They are based on the chapter in Dodge/Jerse3 which also served as a model for many random number generator opcodes in Csound.4

Linear

A linear distribution means that either lower or higher values in a given range are more likely:

To get this behaviour, two uniform random numbers are generated, and the lower is taken for the first shape. If the second shape, with precedence of higher values, is needed, the higher of the two generated numbers is taken. The next example implements these random generators as User Defined Opcodes. First we hear a uniform distribution, then a linear distribution with precedence of lower pitches (but longer durations), and finally a linear distribution with precedence of higher pitches (but shorter durations).
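Outside Csound, the same idea can be sketched in Python (the function names here are my own, not Csound opcodes):

```python
import random

# Linear distributions as described above: keep the smaller of two
# uniform random numbers for the falling shape (lower values more
# likely), or the larger of the two for the rising shape.
def linrnd_low(lo, hi):
    return min(random.uniform(lo, hi), random.uniform(lo, hi))

def linrnd_high(lo, hi):
    return max(random.uniform(lo, hi), random.uniform(lo, hi))

random.seed(1)
n = 10000
mean_low = sum(linrnd_low(0, 1) for _ in range(n)) / n    # tends to 1/3
mean_high = sum(linrnd_high(0, 1) for _ in range(n)) / n  # tends to 2/3
print(mean_low, mean_high)
```

The means of the two shapes drift towards 1/3 and 2/3 of the range, confirming the bias towards lower and higher values respectively.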

Triangular

In a triangular distribution the values in the middle of the given range are more likely than those at the borders. The probability transition between the middle and the extrema is linear:

The algorithm for getting this distribution is very simple as well. Generate two uniform random numbers and take the mean of them. The next example shows the difference between uniform and triangular distribution in the same environment as the previous example.
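As a Python sketch of this algorithm (the function name is an assumption, not the Csound opcode trirand):

```python
import random

# Triangular distribution as described: the mean of two uniform
# random numbers. Values near the middle of the range are most likely.
def trirnd(lo, hi):
    return (random.uniform(lo, hi) + random.uniform(lo, hi)) / 2
```

Compared with a uniform distribution over the same range, the mean stays at the centre but the spread shrinks, as values cluster around the middle.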

More Linear and Triangular

Having written this with some very simple UDOs, it is easy to emphasise the probability peaks of the distributions by generating more than two random numbers. If you generate three numbers and choose the smallest of them, you will get many more numbers near the minimum in total for the linear distribution. If you generate three random numbers and take the mean of them, you will end up with more numbers near the middle in total for the triangular distribution.

If we want to write UDOs with a flexible number of sub-generated numbers, we have to write the code in a slightly different way. Instead of having one line of code for each random generator, we will use a loop, which calls the generator as many times as we wish to have units. A variable will store the results of the accumulation. Re-writing the above code for the UDO trirnd would lead to this formulation:
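The Csound UDO itself is not reproduced here; as a sketch of the same loop-and-accumulator idea in Python (the names are mine, mirroring the iMaxCount argument described above):

```python
import random

# Flexible versions: call the uniform generator max_count times inside
# a loop. Taking the mean of the accumulated values gives the
# triangular-like shape; keeping the smallest value gives the
# linear-like shape favouring low values.
def trirnd(lo, hi, max_count):
    accum = 0.0
    for _ in range(max_count):
        accum += random.uniform(lo, hi)
    return accum / max_count

def linrnd_low(lo, hi, max_count):
    smallest = hi
    for _ in range(max_count):
        smallest = min(smallest, random.uniform(lo, hi))
    return smallest
```

The more units are used, the more sharply the results cluster: the mean of six uniforms peaks strongly around the middle, and the minimum of three uniforms piles up near the lower border.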

To make this completely flexible, you only have to provide iMaxCount as an input argument. The code for the linear distribution UDOs is quite similar. -- The next example shows these steps:

1. Uniform distribution.

2. Linear distribution with the precedence of lower pitches and longer durations, generated with two units.

3. The same but with four units.

4. Linear distribution with the precedence of higher pitches and shorter durations, generated with two units.

5. The same but with four units.

6. Triangular distribution with the precedence of both medium pitches and durations, generated with two units.

7. The same but with six units.

Rather than using different instruments for the different distributions, the next example combines all possibilities in one single instrument. Inside the loop which generates as many notes as desired by the iHowMany argument, an if-branch calculates the pitch and duration of one note depending on the distribution type and the number of sub-units used. The whole sequence (which type first, which next, etc) is stored in the global array giSequence. Each instance of instrument "notes" increases the pointer giSeqIndx, so that for the next run the next element in the array is being read. If the pointer has reached the end of the array, the instrument which exits Csound is called instead of a new instance of "notes".

With this method we can build probability distributions which are very similar to exponential or gaussian distributions.5 Their shape can easily be formed by the number of sub-units used.

Scalings

Randomness is a complex and sensitive matter. There are so many ways to let the horse go, run, or dance -- the conditions you set for this 'way of moving' are much more important than the fact that one single move is not predictable. What are the conditions of this randomness?

Which Way. This is what has already been described: random with or without history, which probability distribution, etc.

Which Range. This is a decision which comes from the composer/programmer. In the example above I have chosen pitches from MIDI note 36 to 84 (C2 to C6), and durations between 0.1 and 2 seconds. Imagine how it would have sounded with pitches from 60 to 67, and durations from 0.9 to 1.1 seconds, or from 0.1 to 0.2 seconds. There is no range which is 'correct'; everything depends on the musical idea.

Which Development. Usually the boundaries will change in the run of a piece. The pitch range may move from low to high, or from narrow to wide; the durations may become shorter, etc.

Which Scalings. Let us think about this more in detail.

In the example above we used two implicit scalings. The pitches have been scaled to the keys of a piano or keyboard. Why? We are obviously not playing the piano here ... -- What other possibilities might there have been? One would be: no scaling at all. This is the easiest way to go -- whether it is really the best, or simple laziness, can only be decided by the composer or the listener.

Instead of using the equal-tempered chromatic scale, or no scale at all, you can use any other way of selecting or quantising pitches. It could be any scale which has been, or is still, used in any part of the world, or your own invention, derived from whatever fantasy or system you like.

As regards the durations, the example above has shown no scaling at all. This was definitely laziness...

The next example is essentially the same as the previous one, but it uses a pitch scale which represents the overtone scale, starting at the second partial extending upwards to the 32nd partial. This scale is written into an array by a statement in instrument 0. The durations have fixed possible values which are written into an array (from the longest to the shortest) by hand. The values in both arrays are then called according to their position in the array.

Random With History

There are many ways in which the current value in a random number progression can influence the next. Two of them are used frequently. A Markov chain is based on a number of possible states, and defines, for each state, the probability of moving to each state next. A random walk looks at the last state as a position in a range or field, and allows only certain deviations from this position.

Markov Chains

A typical case for a Markov chain in music is a sequence of certain pitches or notes. For each note, the probability of the following note is written in a table like this:

This means: the probability that element a is repeated is 0.2; the probability that b follows a is 0.5; the probability that c follows a is 0.3. The sum of all probabilities must, by convention, add up to 1. The following example shows the basic algorithm which evaluates the first line of the Markov table above, in the case that the previous element was 'a'.

The probabilities are 0.2, 0.5 and 0.3. First a uniformly distributed random number between 0 and 1 is generated. An accumulator is set to the first element of the line (here 0.2) and interrogated as to whether it is larger than the random number. If so, the index is returned; if not, the second element is added (0.2+0.5=0.7), and the process is repeated until the accumulator is greater than or equal to the random value. The output of the example should show something like this:
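The accumulator algorithm just described can be sketched in Python (the function name is an assumption, not the UDO in the example):

```python
import random

def markov_next(prob_line):
    # prob_line is one line of the Markov table, e.g. [0.2, 0.5, 0.3]
    rnd = random.random()            # uniform number between 0 and 1
    accum = 0.0
    for index, prob in enumerate(prob_line):
        accum += prob                # 0.2, then 0.7, then 1.0
        if accum >= rnd:             # stop once the accumulator reaches rnd
            return index
    return len(prob_line) - 1        # guard against rounding errors

random.seed(4)
counts = [0, 0, 0]
for _ in range(10000):
    counts[markov_next([0.2, 0.5, 0.3])] += 1
print(counts)   # roughly proportional to 0.2 / 0.5 / 0.3
```

Over many calls, the relative frequencies of the returned indices approach the probabilities in the table line.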

The next example puts this algorithm into a User Defined Opcode. Its input is a Markov table as a two-dimensional array, and the previous line as an index (starting with 0). Its output is the next element, also as an index. -- There are two Markov chains in this example: seven pitches, and three durations. Both are defined in two-dimensional arrays: giProbNotes and giProbDurs. Both Markov chains run independently of each other.

Random Walk

In the context of movement between random values, 'walk' can be thought of as the opposite of 'jump'. If you jump within the boundaries A and B, you can end up anywhere between these boundaries, but if you walk between A and B you will be limited by the extent of your step - each step applies a deviation to the previous one. If the deviation range is slightly more positive (say from -0.1 to +0.2), the general trajectory of your walk will be in the positive direction (but individual steps will not necessarily be in the positive direction). If the deviation range is weighted negative (say from -0.2 to 0.1), then the walk will express a generally negative trajectory.

One way of implementing a random walk is to take the current state, derive a random deviation, and derive the next state by adding this deviation to the current state. The next example shows two ways of doing this.
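As a Python sketch of this scheme (the function name and the asymmetric deviation range are illustrative, following the -0.1..+0.2 example above):

```python
import random

# A random walk: each step adds a random deviation to the current
# position. With a deviation range of -0.1..+0.2 the walk drifts
# upwards on average, though single steps may still go down.
def random_walk(start, dev_min, dev_max, steps):
    pos = start
    path = [pos]
    for _ in range(steps):
        pos += random.uniform(dev_min, dev_max)
        path.append(pos)
    return path

random.seed(5)
path = random_walk(0.0, -0.1, 0.2, 300)
print(path[-1])   # very likely well above the starting point
```

Swapping the range to -0.2..+0.1 would give the generally negative trajectory described above.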

The pitch random walk starts at pitch 8 in octave notation. The general pitch deviation gkPitchDev is set to 0.2, so that the next pitch could be between 7.8 and 8.2. But there is also a pitch direction gkPitchDir which is set to 0.1 as initial value. This means that the upper limit of the next random pitch is 8.3 instead of 8.2, so that the pitch will move upwards in a greater number of steps. When the upper limit giHighestPitch has been crossed, the gkPitchDir variable changes from +0.1 to -0.1, so after a number of steps, the pitch will have become lower. Whenever such a direction change happens, the console reports this with a message printed to the terminal.

The density of the notes is defined as notes per second, and is applied as frequency to the metro opcode in instrument 'walk'. The lowest possible density giLowestDens is set to 1, the highest to 8 notes per second, and the first density giStartDens is set to 3. The possible random deviation for the next density is defined in a range from zero to one: zero means no deviation at all; one means that the next density can alter the current density in a range from half the current value to twice the current value. For instance, if the current density is 4, for gkDensDev=1 you would get a density between 2 and 8. The direction of the densities gkDensDir in this random walk follows the same range 0..1. Assuming you have no deviation of densities at all (gkDensDev=0), gkDensDir=0 will produce ticks at a constant speed, whilst gkDensDir=1 will produce a very rapid increase in speed. Similar to the pitch walk, the direction parameter changes from plus to minus once the upper border has been crossed, and vice versa.

II. SOME MATHS PERSPECTIVES ON RANDOM

Random Processes

The relative frequency of occurrence of a random variable can be described by a probability function (for discrete random variables) or by density functions (for continuous random variables).

When two dice are thrown simultaneously, the sum x of their numbers can be 2, 3, ..., 12. The following figure shows the probability function p(x) of these possible outcomes. p(x) is always less than or equal to 1. The sum of the probabilities of all possible outcomes is 1.

For continuous random variables the probability of getting a specific value x is 0. But the probability of getting a value within a certain interval can be indicated by an area that corresponds to this probability. The function f(x) over these areas is called the density function. With the following density the chance of getting a number smaller than 0 is 0, to get a number between 0 and 0.5 is 0.5, to get a number between 0.5 and 1 is 0.5 etc. Density functions f(x) can reach values greater than 1 but the area under the function is 1.

Generating Random Numbers With a Given Probability or Density

Csound provides opcodes for some specific densities but no means to produce random numbers with user-defined probability or density functions. The opcodes rand_density and rand_probability (see below) generate random numbers with probabilities or densities given by tables. They are realised using the so-called rejection sampling method.

Rejection Sampling:

The principle of rejection sampling is to first generate uniformly distributed random numbers in the range required and then to accept these values corresponding to a given density function (or otherwise reject them). Let us demonstrate this method using the density function shown in the next figure. (Since the rejection sampling method uses only the "shape" of the function, the area under the function need not be 1.) We first generate uniformly distributed random numbers rnd1 over the interval [0, 1]. Of these we accept a proportion corresponding to f(rnd1). For example, the value 0.32 will only be accepted with the probability f(0.32) = 0.82. We do this by generating a new random number rnd2 between 0 and 1 and accepting rnd1 only if rnd2 < f(rnd1); otherwise we reject it. (See Signals, Systems and Sound Synthesis, chapter 10.1.4.4.)
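The method can be sketched in Python; the function name and the rising test density f(x) = x are illustrative assumptions, not the opcodes described above:

```python
import random

# Rejection sampling: draw rnd1 uniformly from the range, then accept
# it only if a second uniform number rnd2 falls below f(rnd1);
# otherwise draw again. f only needs the right "shape" -- its area
# under the curve need not be 1.
def rejection_sample(f, lo=0.0, hi=1.0, f_max=1.0):
    while True:
        rnd1 = random.uniform(lo, hi)
        rnd2 = random.uniform(0.0, f_max)
        if rnd2 < f(rnd1):
            return rnd1

random.seed(6)
# with the rising density f(x) = x, higher values are accepted more often
samples = [rejection_sample(lambda x: x) for _ in range(5000)]
print(sum(samples) / len(samples))   # tends towards 2/3
```

With this rising density the sample mean approaches 2/3, exactly the bias towards higher values that the shape of f prescribes.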

Random Walk

In a series of random numbers the single numbers are independent of each other. Parameters (left figure) or paths in space (the two-dimensional trajectory in the right figure) created by such numbers jump around wildly.

Example 1

Table[RandomReal[{-1, 1}], {100}];

We get a smoother path, a so-called random walk, by adding a random number r to the current position x at every time step (x += r).

Example 2

x = 0; walk = Table[x += RandomReal[{-.2, .2}], {300}];

The path becomes even smoother by adding a random number r to the current velocity v instead.

v += r

x += v

The path can be bounded to an area (figure to the right) by inverting the velocity if the path exceeds the limits (min, max):

if (x < min || x > max) v *= -1

The movement can be damped by decreasing the velocity at every time step by a small factor d:

v *= (1-d)

Example 3

x = 0; v = 0; walk = Table[x += v += RandomReal[{-.01, .01}], {300}];
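The bounded, damped velocity walk described above can be sketched in Python (the function name and the parameter defaults are illustrative assumptions):

```python
import random

# Smooth random walk: a random number changes the velocity, and the
# velocity changes the position. The velocity is inverted when the
# path crosses the borders, and damped by a small factor each step.
def velocity_walk(steps, lo=-1.0, hi=1.0, damp=0.01):
    x, v = 0.0, 0.0
    path = []
    for _ in range(steps):
        v += random.uniform(-0.01, 0.01)   # r changes the velocity
        if x < lo or x > hi:               # bounce off the borders
            v *= -1
        v *= 1 - damp                      # damping
        x += v                             # v changes the position
        path.append(x)
    return path
```

Because each random number only nudges the velocity, successive positions change gradually, giving the smoother trajectory shown in the figures.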

The path becomes smoother still by adding a random number r to the current acceleration a, to the change of the acceleration, and so on.

III. MISCELLANEOUS EXAMPLES

Csound has a range of opcodes and GEN routines for the creation of various random functions and distributions. Perhaps the simplest of these is random, which simply generates a random value within user-defined minimum and maximum limits at i-time, k-rate or a-rate, according to the variable type of its output:

ires random imin, imax
kres random kmin, kmax
ares random kmin, kmax

Values are generated according to a uniform random distribution, meaning that any value within the limits has an equal chance of occurrence. Non-uniform distributions, in which certain values have a greater chance of occurrence than others, are often more useful and musical. For these purposes, Csound includes the betarand, bexprnd, cauchy, exprand, gauss, linrand, pcauchy, poisson, trirand, unirand and weibull random number generator opcodes. The distributions generated by several of these opcodes are illustrated below.

In addition to these so-called 'x-class noise generators', Csound provides random function generators, which produce values that change over time in various ways.

randomh generates new random numbers at a user defined rate. The previous value is held until a new value is generated, and then the output immediately assumes that value.

The instruction:

kmin = -1
kmax = 1
kfreq = 2
kout randomh kmin,kmax,kfreq

will produce an output something like this:

randomi is an interpolating version of randomh. Rather than jump to new values when they are generated, randomi interpolates linearly to the new value, reaching it just as a new random value is generated. Replacing randomh with randomi in the above code snippet would result in the following output:
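The sample-and-hold behaviour of randomh can be sketched in Python (the function name and arguments are my own; this is not Csound's implementation):

```python
import random

# Sample-and-hold random values in the manner of randomh: a new
# random value is generated freq times per second and held until
# the next one is due. An interpolating variant (cf. randomi) would
# glide linearly between values instead of jumping.
def random_hold(kmin, kmax, freq, sr, dur):
    out = []
    hold = random.uniform(kmin, kmax)
    step = int(sr / freq)              # samples between new values
    for i in range(int(sr * dur)):
        if i % step == 0:
            hold = random.uniform(kmin, kmax)
        out.append(hold)
    return out

random.seed(8)
sig = random_hold(-1, 1, 2, 100, 1.0)  # 2 new values per second
```

At a rate of 100 values per second and a frequency of 2, each random value is held for 50 samples, producing the stepped shape shown in the figure.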

In practice, randomi's angular changes in direction as new random values are generated might be audible, depending on how it is used. rspline allows us to specify not just a single frequency but a minimum and a maximum frequency, and the resulting function is a smooth spline between the minimum and maximum values and these minimum and maximum frequencies. The following input:

We need to be careful with what we do with rspline's output as it can exceed the limits set by kmin and kmax. Minimum and maximum values can be set conservatively or the limit opcode could be used to prevent out of range values that could cause problems.

The following example uses rspline to 'humanise' a simple synthesiser. A short melody is played, first without any humanising and then with humanising. rspline random variation is added to the amplitude and pitch of each note in addition to an i-time random offset.

Most of them were written by Paris Smaragdis in 1995: betarand, bexprnd, cauchy, exprand, gauss, linrand, pcauchy, poisson, trirand, unirand and weibull.^

According to Dodge/Jerse, the usual algorithms for exponential and gaussian distributions are: Exponential: generate a uniformly distributed number between 0 and 1 and take its natural logarithm. Gauss: take the mean of several uniformly distributed numbers and scale them by the standard deviation. ^
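These two algorithms can be sketched in Python (function names and defaults are my own, not the Csound opcodes):

```python
import math
import random

# Exponential: negative natural logarithm of a uniform random number,
# so the result is positive; 1 - random.random() avoids log(0).
def exprnd(mean=1.0):
    return -math.log(1.0 - random.random()) * mean

# Gauss-like: mean of several uniform numbers, scaled (central limit
# theorem). The sum of 12 uniforms has mean 6 and standard deviation 1.
def gaussrnd(mu=0.0, sigma=1.0, units=12):
    total = sum(random.random() for _ in range(units))
    return mu + sigma * (total - units / 2)
```

With these defaults, exprnd produces values with mean 1, and gaussrnd approximates a normal distribution with mean mu and standard deviation sigma.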
