Stochastic, from the Greek ‘stochos’ or ‘aim, guess’, means of, relating to, or characterized by conjecture and randomness. A stochastic process is one whose behavior is ‘non-deterministic’ in that a state does not fully determine its next state. – Wikipedia (01)

In probability theory, a stochastic process is often defined as a collection of random quantities. Roughly speaking, it is the counterpart of a ‘deterministic process’; instead of dealing with only one possible reality of how the process might evolve over time, in a random process there is some ‘indeterminacy’ in its future evolution, described by probability distributions. This means that even if the initial condition is known, there are several possible states the process might move to. (02)

Fig. 01: ‘Example of eight random walks in one dimension starting at 0. The plot shows the time steps (horizontal axis) versus the current position on the line (vertical axis).’

2. Stochastic processes as Optimizers

Given the difficulties in many real-world problems and the inherent uncertainty in information that may be available for carrying out the task, stochastic optimization has been playing a rapidly growing role. (03)

Stochastic optimization algorithms are optimization algorithms which satisfy one or both of the following properties:
- There is random noise in the measurements of the criterion to be optimized and/or related information…
- There is a random choice made in the search direction as the algorithm iterates toward a solution. – Wikipedia (04)

Algorithms satisfying the second property usually rely on computer-based pseudorandom number generators. The most common example is the family of ‘genetic algorithms’. (05)
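As a minimal illustration of the second property, here is a random-search sketch (all names, the step size, and the objective are my own, not taken from any particular library): each iteration perturbs the current best guess in a pseudorandomly chosen direction and keeps the move only when it improves the criterion.

```python
import random

def random_search(f, x0, step=0.1, iters=1000):
    """Minimize f by trying a random perturbation each iteration
    and keeping it only when it improves the objective."""
    best_x, best_f = x0, f(x0)
    for _ in range(iters):
        candidate = best_x + random.uniform(-step, step)  # random search direction
        fc = f(candidate)
        if fc < best_f:
            best_x, best_f = candidate, fc
    return best_x

random.seed(1)
# Minimize (x - 3)^2; accepted candidates drift toward x = 3.
x = random_search(lambda v: (v - 3) ** 2, x0=0.0)
```

Because rejected moves are simply discarded, the sequence of accepted points can only improve; the randomness is in *which* improvements are found, not in whether the objective decreases.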

No matter how you use a computer, or whichever computer you use, to create an art work is not easy. Nevertheless, I believe artists can find a new horizon in their creative activities by having the experience of using geometric objects and/or stochastic processes. – Yoshiyuki Abe (06)

Although it is controversial whether randomness adds creativity or mere unpredictability (which is believed to be something completely different from creativity), there is no doubt that using stochastic processes can lead to results that might not be attainable in any other way. Roughly speaking, I think this can be called an increase in creativity. At the very least, the fact that you can get a variety of outputs from the same inputs (instead of the single output of conventional methods) gives one the possibility of choice (with any kind of ‘fitness’ criterion).

Fig. 04: Fire of Faces - Karl Sims

4. Randoms as insurance in parametric programs

A very helpful way for programmers to test the reliability of their programs is to give their parameters a level of randomness. A problem that often occurs, for example when you are scripting a parametric algorithm, is that you fix an initial setup and base your entire process on it; then, when you change that initial setup or the parameters after finishing the script, unpredicted errors appear that sometimes force you to completely change your algorithm.

What I myself always do to avoid this is, while developing the algorithm, to let my initial setup or parameters be chosen randomly within a specific range. This way, you can trust your script to keep responding correctly as you later change its changeable parameters.
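This habit can be sketched as follows; `build_grid` is a hypothetical stand-in for a parametric script, and the ranges are illustrative. Instead of one fixed setup, the parameters are drawn at random on every development run, so any violation of the script's invariants surfaces early.

```python
import random

def build_grid(rows, cols, spacing):
    """A stand-in for a parametric script: returns grid point coordinates."""
    return [(c * spacing, r * spacing) for r in range(rows) for c in range(cols)]

# While developing, draw the parameters from a plausible range instead of
# fixing them, so the script is exercised against many different setups.
for trial in range(20):
    rows = random.randint(1, 50)
    cols = random.randint(1, 50)
    spacing = random.uniform(0.1, 10.0)
    grid = build_grid(rows, cols, spacing)
    assert len(grid) == rows * cols  # invariant that must hold for any setup
```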

5. Generating Stochastic Processes

While a computer cannot generate a truly stochastic process (like a genuine Random Walk), the most common way of producing processes with complex, differentiated results is to combine ‘deterministic algorithms’ with random inputs, parameters, or choices at different stages.
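This combination can be sketched as a one-dimensional random walk: the update rule itself is fully deterministic, and all of the ‘indeterminacy’ enters through the pseudorandom choice of step (a minimal sketch, not tied to any particular toolkit).

```python
import random

def random_walk(steps, seed=None):
    """Deterministic update rule (position += step) driven by random inputs."""
    rng = random.Random(seed)
    position, path = 0, [0]
    for _ in range(steps):
        position += rng.choice([-1, 1])  # the only source of randomness
        path.append(position)
    return path

walk = random_walk(100, seed=42)
```

Note that fixing the seed makes the whole ‘random’ walk reproducible, which is exactly the pseudorandom behavior discussed in section 6 below.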

A simple example of this is using random numbers as inputs for Cellular Automata, or a computer-graphics program written by Karl Sims. This program uses genetic algorithms to generate new images, or patterns, from pre-existing images. In a typical run, the first image is generated at random. Then the program makes nineteen independent changes to the initial image using ‘genetic algorithms’. At this point, the artist chooses either one image to be mutated or two images to be ‘mated’ (this step can be automated by defining a fitness function, or even be randomized again). The result is another screenful of twenty images, all but one (or two) of which are newly generated by random mutations or crossovers. The process is then repeated for as many generations as the artist wants. (07)
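The workflow described above can be sketched roughly as follows. The ‘genotype’ here is just a vector of numbers standing in for an image, the fitness function replacing the artist's choice is purely hypothetical, and ‘mating’ (crossover) is omitted for brevity — this is an illustration of the loop, not Sims's actual program.

```python
import random

POP_SIZE = 20  # one screenful of candidate images, as in Sims's setup

def random_genotype(rng, length=8):
    """A stand-in 'image': just a vector of parameters."""
    return [rng.uniform(-1, 1) for _ in range(length)]

def mutate(genotype, rng, rate=0.3, scale=0.2):
    """Randomly perturb some genes (a random mutation)."""
    return [g + rng.gauss(0, scale) if rng.random() < rate else g
            for g in genotype]

def evolve(generations, fitness, seed=None):
    rng = random.Random(seed)
    parent = random_genotype(rng)  # the first image is generated at random
    for _ in range(generations):
        # the parent plus nineteen mutated variants fill the 'screen'
        population = [parent] + [mutate(parent, rng) for _ in range(POP_SIZE - 1)]
        # the artist's selection, automated here by a fitness function
        parent = max(population, key=fitness)
    return parent

# hypothetical fitness: prefer parameter vectors close to zero
best = evolve(30, fitness=lambda g: -sum(x * x for x in g), seed=7)
```

Because the current parent stays in the population, the selected fitness can never get worse from one generation to the next, while the mutations supply the stochastic variation to select from.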

Fig. 05: Phenotype selection, genotype reproduction - Karl Sims

6. Two facts about randoms

A: Are randoms really random?

A pseudorandom process is a process that appears random but is not. Pseudorandom sequences typically exhibit statistical randomness while being generated by an entirely deterministic causal process. – Wikipedia (08)

‘True randomness’ is only the kind that comes from the physical environment (like Brownian motion); even phenomena described by ‘chaos theory’ are ‘deterministic’, which means that the result is fully defined by the initial conditions. (09) The ‘cosmological hypothesis of determinism’ goes even further and claims that ‘there is no randomness in the universe, only unpredictability.’ (10)

To clarify more, I will describe an experiment I made in MEL (Maya Embedded Language):

The second result was different from the first one, but the interesting point was that whenever I repeated the same command, I got the same result. This means that for each particular Seed (the value that initializes the pseudorandom number generator) there is a predefined sequence of ‘random’ numbers. The computer already knows which numbers it is going to produce! To us they look random only because we cannot predict them.
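The same behavior is easy to reproduce in Python (shown here instead of MEL, with arbitrary seed and range values): re-seeding the generator with the same value replays exactly the same ‘random’ sequence.

```python
import random

rng = random.Random()
rng.seed(12345)                  # initialize the pseudorandom generator
first_run = [rng.randint(0, 100) for _ in range(5)]

rng.seed(12345)                  # same seed -> the same 'random' sequence
second_run = [rng.randint(0, 100) for _ in range(5)]

print(first_run == second_run)   # True: the sequence was predetermined
```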

B: Randoms can be precise!

The Central Limit Theorem (CLT) states that a sum of many independent, identically distributed random variables is approximately normally distributed… For instance, if dice are rolled repeatedly, the frequency distribution of the sums will resemble the Normal Distribution more and more. (11)

This is another interesting fact about randomness. To clarify it, I give an example using Microsoft Excel. I opened an Excel file, generated 10 random numbers between 0 and 100, and asked Excel to calculate their average. The average was 39. I repeated the operation with 100 numbers; the average was 53.69. With 1,000 numbers, the average was 51.829; with 10,000 numbers it was 50.235; and with 100,000 numbers it was 50.014.

Although the numbers and the average are different each time you repeat the operation, one fact is steady: the average gets closer to 50 (the midpoint of 0 and 100) as the quantity of random numbers increases. (Strictly speaking, this convergence of the average is the Law of Large Numbers, a close companion of the CLT.)
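The Excel experiment can be reproduced with a short simulation (the range and counts match the text; the exact averages will of course differ from run to run unless the seed is fixed):

```python
import random

random.seed(0)
for n in (10, 100, 1000, 10000, 100000):
    avg = sum(random.uniform(0, 100) for _ in range(n)) / n
    print(n, round(avg, 3))  # the averages drift toward 50 as n grows
```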

Fig. 06: Applying random processes on an existing image by using a base image.