Things with code, creativity and computation.

A simple goal: directly connect Emacs to running hardware and software synthesisers with realtime control. Emacs as a musical instrument.

I already use code to control synths (as Repl-Electric) but there is a level of indirection between the code and its effect on the music. You push keys on your computer keyboard and nothing happens; only when you run the code does the music change. I wanted to add realtime control to my performances while still remaining in code and Emacs, bringing the performance closer to musical instruments, where instant feedback is a core part of the experience.

Playing the Emacs

Sculpting sound live with Emacs.

Scratching Samples with Emacs

Doing crazy things with Emacs starts to open more doors in musical expression. Since we can control music hardware with Emacs, we can control the playhead position within a sample, much like the needle of a record player.

Say we map the position of the cursor in an Emacs buffer to the position of the playback head. By moving around the buffer you can scratch the sample.

Emacs Communicating with Midi

A lot of musical hardware and software only supports midi. We are sending OSC 😕. Hence we need a quick way of converting our OSC messages to midi messages. Why not send midi from Emacs directly? It’s not a new thought (https://www.emacswiki.org/emacs/EmacsMidi), but I’ve seen no examples of getting it working. My answer here is the path of least resistance, and I don’t feel like implementing the midi standard in Elisp.

For MIDI_HOST I’m using the Inter-application communication (IAC) driver on Mac. This registers IAC Bus 1 as a midi device. I’m also sending a control change message, shaping the parameters rather than triggering notes. This could be any of the supported midi messages (note_on, note_off, pitch_bend, poly_pressure, etc).
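To make the routing concrete, here is a minimal sketch of such a bridge in Clojure, using osc-clj and javax.sound.midi. This is not my exact setup: the OSC port, the address and the IAC device matching are assumptions.

(require '[overtone.osc :as osc])
(import '[javax.sound.midi MidiSystem ShortMessage])

;; Find the IAC bus registered by the Mac IAC driver and open a receiver.
(def midi-out
  (let [info   (->> (MidiSystem/getMidiDeviceInfo)
                    (filter #(re-find #"IAC" (.getName %)))
                    first)
        device (MidiSystem/getMidiDevice info)]
    (.open device)
    (.getReceiver device)))

;; Listen for OSC and forward each message as a midi control change.
(def server (osc/osc-server 4440)) ;; port is an assumption

(osc/osc-handle server "/control-change"
  (fn [{[controller value] :args}]
    ;; channel 0, controller number, value 0-127
    (.send midi-out
           (ShortMessage. ShortMessage/CONTROL_CHANGE 0 (int controller) (int value))
           -1)))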

(require 'thingatpt)
(require 'subr-x) ;; for `string-trim'
(require 'cl)     ;; for `first'

;; The patterns for matching the beginning and
;; end of something that looks like a float
(put 'float 'end-op
     (lambda () (re-search-forward "[0-9-]*\\.[0-9-]*" nil t)))
(put 'float 'beginning-op
     (lambda ()
       (if (re-search-backward "[^0-9-.]" nil t)
           (forward-char))))

(defun change-number-at-point (fn-float-op)
  "Check if the thing under the cursor looks like a float
and if so change it with `fn-float-op'.
`fn-float-op' is passed the name of the synth, parameter and float value."
  (let* ((bounds (bounds-of-thing-at-point 'float))
         (float-val (buffer-substring (car bounds) (cdr bounds)))
         (cursor-point (point)))
    (goto-char (car bounds))
    (re-search-backward "^\\([^\n]+\\): " (line-beginning-position) t)
    (let ((synth-str (match-string 1 nil)))
      (goto-char (car bounds))
      (delete-char (length float-val))
      (let* ((parts (split-string (string-trim synth-str) " "))
             ;; Build "synth/param" from the matched line
             (synth-and-param (if synth-str
                                  (concat (replace-regexp-in-string "^#" "" (first parts))
                                          "/"
                                          (first (reverse parts)))
                                nil)))
        (insert (format "%.2f" (funcall fn-float-op
                                        (string-to-number float-val)
                                        synth-and-param)))))
    (goto-char cursor-point)))

(defun inc-float-at-point ()
  "Increase a float value and send OSC message."
  (interactive)
  (change-number-at-point
   (lambda (float-val synth-and-param)
     (let ((new-float (min 1.00 (+ float-val 0.01))))
       (route-osc-message synth-and-param new-float)
       new-float))))

(defun dec-float-at-point ()
  "Decrease a float value and send OSC message."
  (interactive)
  (change-number-at-point
   (lambda (float-val synth-and-param)
     (let ((new-float (max 0.00 (- float-val 0.01))))
       (route-osc-message synth-and-param new-float)
       new-float))))

Encoder ASCII Art

We are almost done. For fun I’ve added a visual aid showing the position of the float encoder. Midi messages are generally limited to 0–127 values, so if we map that onto 0–100% we can create a visual representation of the current setting.
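As a sketch of that mapping (in Clojure here, though the real implementation lives in my Elisp config):

;; Map a 0-127 midi value onto a percentage and render a simple ASCII bar.
(defn midi->percent [v]
  (Math/round (* 100.0 (/ v 127.0))))

(defn encoder-bar [v]
  (let [pct    (midi->percent v)
        filled (quot pct 10)]
    (format "[%s%s] %d%%"
            (apply str (repeat filled "|"))
            (apply str (repeat (- 10 filled) " "))
            pct)))

(encoder-bar 64) ;;=> "[|||||     ] 50%"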

Exploring patterns as a means of documenting Clojure functions to aid recall and understanding.

What’s the difference in Clojure between:
partition and partition-all?
interpose and interleave?
cons and conj?

Documenting Functions

All non-side-effecting functions create or alter a pattern. To explain a function’s pattern we use a number of descriptions.

A function signature:

(nthrest coll n)

A textual description:


Returns a lazy seq of the elements of coll separated by sep.
Returns a stateful transducer when no collection is provided.

Examples showing application of the function:


(interpose 1.0 [0.3 0.4]) ;;=> (0.3 1.0 0.4)

Exploration vs Recall

As someone with a brain far more orientated to visuals than text I struggled to remember and understand many Clojure functions: nthrest, conj, cons, etc.
Clojure’s documentation of these patterns is all text. Even with the documentation brought into the editor I struggle. An example from CIDER & Emacs:

Clojure has a strong focus on REPL-driven development. If you don’t understand a function, use an interactive REPL to explore examples.

Critically, this favours discovery over recall. I can never remember the difference between conj and cons, but I can find out through the REPL.
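For example, a quick REPL session settles it:

(conj [1 2] 3)  ;;=> [1 2 3]   conj adds wherever is natural for the collection
(conj '(1 2) 3) ;;=> (3 1 2)
(cons 3 [1 2])  ;;=> (3 1 2)   cons always prepends, returning a seq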

To help aid memory and understanding I’ve turned the examples of the collection-orientated functions in Clojure into visual patterns. I won’t try to make any general case on visuals vs text (it’s a fuzzy research area: http://studiokayama.com/text-vs-visuals/). In the cheatsheet below each function is shown with its signature, its docstring and an example call; in the original post the example arguments and results are drawn as coloured block patterns.

(butlast coll)

(concat x y)

Returns a lazy seq representing the concatenation of the elements
in the supplied colls.

(concat )

;;=>

(conj coll x) (conj coll x & xs)

conj[oin]. Returns a new collection with the xs
'added'. (conj nil item) returns (item). The 'addition' may
happen at different 'places' depending on the concrete type.

conj vector

(conj )

;;=>

conj list

(conj )

;;=>

(cons x seq)

Returns a new seq where x is the first element and seq is
the rest.

(cons )

;;=>

(dedupe coll)

Returns a lazy sequence removing consecutive duplicates in coll.
Returns a transducer when no collection is provided.

(dedupe )

;;=>

(distinct coll)

Returns a lazy sequence of the elements of coll with duplicates removed.
Returns a stateful transducer when no collection is provided.

(distinct )

;;=>

(drop-last n coll)

Return a lazy sequence of all but the last n (default 1) items in coll

(drop-last 2 )

;;=>

(flatten coll)

Takes any nested combination of sequential things (lists, vectors,
etc.) and returns their contents as a single, flat foldable
collection.

(flatten )

;;=>

(interpose sep coll)

Returns a lazy seq of the elements of coll separated by sep.
Returns a stateful transducer when no collection is provided.

(interpose )

;;=>

(interleave coll coll)

Returns a lazy seq of the first item in each coll, then the second etc.

(interleave )

;;=>

(nthnext coll n)

Returns the nth next of coll, (seq coll) when n is 0.

(nthnext 2)

;;=>

(nthrest coll n)

Returns the nth rest of coll, coll when n is 0.

(nthrest 2 )

;;=>

(partition n coll)

Returns a lazy sequence of lists of n items each, at offsets step apart.
If step is not supplied, defaults to n, i.e. the partitions do not overlap.
If a pad collection is supplied, use its elements as necessary to complete
last partition upto n items. In case there are not enough padding elements,
return a partition with less than n items.

(partition 3 )

;;=>

(partition-all n coll)

Returns a lazy sequence of lists like partition, but may include
partitions with fewer than n items at the end. Returns a stateful
transducer when no collection is provided.

(partition-all 3 )

;;=>

(replace smap coll)

Given a map of replacement pairs and a vector/collection, returns a vector/seq
with any elements = a key in smap replaced with the corresponding val in smap.
Returns a transducer when no collection is provided.

(replace [0 3 4] )

;;=>

(rest coll)

Returns a possibly empty seq of the items after the first. Calls seq on its
argument.

(rest )

;;=>

(reverse coll)

Returns a seq of the items in coll in reverse order. Not lazy.

(reverse )

;;=>

(shuffle coll)

Return a random permutation of coll.

(shuffle )

;;=>

(sort coll)

Returns a sorted sequence of the items in coll. If no comparator is
supplied, uses compare. comparator must implement
java.util.Comparator. Guaranteed to be stable: equal elements will
not be reordered. If coll is a Java array, it will be modified. To
avoid this, sort a copy of the array.

(sort )

;;=>

(take-nth n coll)

Returns a lazy seq of every nth item in coll.

(take-nth 3 )

;;=>

(split-at n coll)

Returns a vector of [(take n coll) (drop n coll)]

(split-at 2 )

;;=>

Conclusions

As someone who performs live coding to an audience, I perhaps place a different value on recall vs exploration. Hundreds of eyes staring at you tends to have that effect.
While some examples are stronger through patterns than others, at least for myself the use of a visual aid as part of development and documentation is beneficial. It’s the only way I can remember the oddity of conj.

Within my REPL interaction I use the functions-as-patterns toolkit, providing a higher-level representation of the patterns and data. I can understand a drum pattern faster through colour than through a data structure of 1s and 0s.

In creating the cheatsheet, the value of comparing functions through patterns also became clear. I discovered almost identical functions, such as nthnext and nthrest, which differ only in a special case (with an empty sequence).
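For example, at the REPL the two differ only when nothing is left:

(nthrest [1 2 3] 3) ;;=> ()
(nthnext [1 2 3] 3) ;;=> nil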

Problems of turning data into colour

While this visual cheatsheet is useful there are caveats:

Semantics of arguments

It’s not always clear whether an argument is an index or a value. If we look at the replace function example:

(replace (take 5 (hues-of-color)) [0 3 4])

0, 3 & 4 are references to indices within the first argument. Ideally it would be nice to replace those with the relevant colour.
However, the functions-as-patterns library cannot tell these are not values; it assumes everything is a value. Hence you end up with [0 3 4] drawn in shades of black:

Pattern of emptiness

I’ve not tried to visually represent empty or nil. Some functions are defined by the difference in handling the empty case. The patterns might mislead you into thinking nthnext and nthrest are identical when they are not.

We use these properties to guide us in creating new music for machines, exploring the smudged edges around machine listening and highlighting how differently humans and machines identify music. And for fun.

To try and match our generative audio to songs we use a number of music services and some open-source audio fingerprinting tools. Most commercial audio fingerprinting algorithms are secret and patented up to the eyeballs, so there is a lot of trial and error. Some services used to evaluate a song match:

Soundcloud (copyright detection)

Youtube (copyright detection)

Shazam (audio lookup)

Chromaprint (open-source audio fingerprinter)

Music for Machines

All generated tracks have comments marking exactly when a song was detected.

Warning: the audio clips are designed for machines, not your weak human ears. Keep your volume low before listening.

Generation #2.0 – 1470683054969

Generation #2.0 – 1470700413305

Artist/Songs identified:

T-Pain Vs Chuckie Feat. Pitbull – Its Not You (Its Me)

Pink Floyd – Cymbaline

Machines listening with ambient background noise

While I experimented with lots of different services, the above examples were most successful when using Shazam for identification. Shazam focuses on operating in noisy environments and on identifying a sound as quickly as possible from only partial song information. This tolerance makes it easy to get Shazam to mis-match audio to songs.

The other services also had a nasty habit of banning new accounts uploading what appeared to be copyright-infringing content (who would have thought!), which makes mass procedural generation somewhat challenging.

Shazam has a desktop app which will run detection on audio for 8 hours continuously. So over that time we generate a large set of audio and pick the winners from each generation.

Overtone synths & score generation

Dynamically generating Overtone synths using the generative testing framework test.check.
Using QuickCheck-style generators is a cheap way of exploring a permutation space given grammar rules, like those of a synth definition in Overtone (a sketch follows the lists below). This supports selection of:

Audio wave (potentially many combined)

Envelope type

Effects (reverb/echo/pitchshift/delays)

Filters (low pass/high pass)
The various properties of the audio units are selected randomly.

Dynamically generating a score varying:

Clock tempo

Note lengths

Root note / scale

Octaves

Dynamically generating synth control parameters:

Distortion

Changing audio wave (sin/saw/triangle/square/etc)

Running for 3 minutes with a random chance of mutation to the score and parameters.
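As a minimal sketch of the generator approach (the parameter names and ranges here are illustrative, not my actual grammar):

(require '[clojure.test.check.generators :as gen])

;; Each run of the generator picks one point in the synth permutation space.
(def synth-gen
  (gen/hash-map
   :wave     (gen/elements [:sin :saw :triangle :square])
   :envelope (gen/elements [:perc :adsr :asr])
   :cutoff   (gen/choose 200 8000)
   :effects  (gen/vector (gen/elements [:reverb :echo :pitch-shift :delay]) 0 3)))

(gen/sample synth-gen 3)
;;=> e.g. ({:wave :saw, :envelope :perc, :cutoff 4123, :effects [:echo]} ...)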

Conclusion

There is a clear difference in accuracy between the services when fingerprinting audio for copyright infringement. It’s noticeable that Soundcloud and YouTube match while processing the entire track (even though they check for partial matches), while Shazam focuses on as small a segment as possible, live. Open-source alternatives (like Chromaprint), while useful, provided little help in tricking the commercial services.

Coming back to Shazam, what actually made the tracks match remains somewhat of a mystery. If we look at one example, “Michael Jackson - You Are Not Alone”, our generative score was not even in the same scale or tempo! We can identify things that made it hard to match; for example, adding drum patterns killed off all matches. More layers of audio, more permutations to explore.

One thing is clear: the way these machines learn, and their specialisation on a single application, rules out a whole subset of sound that is unlikely to enter the realm of music. Hence for the creators of the algorithms, a mismatch of this type is of little relevance.

The Bridge between Clojure and Shaders

A vital feature of Shadertone is the mapping between Clojure atoms and shader uniforms. What is a shader uniform? Think of it as a read-only global variable in your shader. A Clojure watcher ensures any update to your Clojure atom persists into the uniform. A little clunky, but by convention all these uniforms start with the letter i.

The shader:


uniform float iExample;

And in Clojure


(def example-weight (atom 0.5))

(shadertone/start-fullscreen "resources/shaders/example.glsl"
                             :user-data {"iExample" example-weight})

;; iExample Uniform will also be updated.
(reset! example-weight 0.2)

Live editing shaders

When a shader file is edited, Shadertone (which watches the file using watchtower) will reload and recompile it. This results in a slight freeze as the new code is run (this might be down to my graphics card).
Hence most of the time I prefer alternatives to live editing the shader, to create smoother transitions.

Injecting movement

To make static images move we need a continuously changing value.

Shadertone gives us iGlobalTime using the number of seconds since the shader was started:


uniform float iGlobalTime;

void main(void) {
  // Use the continuously changing time signal as the value for a color.
  gl_FragColor = vec4(sin(iGlobalTime) * 0.5 + 0.5);
}

Putting a continuously changing value through a function like sin/cos is the
bread and butter of creating animations with shaders.

Randomness

We often need a cheap and fast way to generate random floats in shaders. Without persistent state and preservation of a seed, this can be difficult. One solution is to use a noise image, treating the current pixel coordinates as an index into the image for a float value.

// Turning the current pixel coordinates (uv) into a random float.
vec2 uv = gl_FragCoord.xy / iResolution.xy;
return texture2D(iChannel0, vec2(uv) / 256.0, 0.0).x;

Composing visual effects

I attach a weight to each function or visual phase of the shader. Through this we can select which visual effect is visible, or combine multiple effects. It’s a bit messy, since I have all my functions in a single shader file; I’ve not explored including external files in shaders.
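A sketch of the weighting idea from the Clojure side (the uniform names are illustrative, and shadertone is aliased as t as elsewhere in these posts):

(require '[shadertone.core :as t])

;; One weight atom per visual phase of the shader.
(defonce star-weight   (atom 1.0))
(defonce tunnel-weight (atom 0.0))

(t/start-fullscreen "resources/shaders/example.glsl"
                    :user-data {"iStarWeight"   star-weight
                                "iTunnelWeight" tunnel-weight})

;; In the shader each effect is multiplied by its weight, e.g.
;;   gl_FragColor = iStarWeight * stars(uv) + iTunnelWeight * tunnel(uv);
;; so cross-fading between effects is just moving two atoms:
(reset! star-weight   0.2)
(reset! tunnel-weight 0.8)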

Synchronisation

Shadertone uses the seconds since start (iGlobalTime) while Overtone, via SuperCollider, uses the soundcard’s clock. Hence there is no guarantee these two sources will be in sync.

Replacing iGlobalTime is the only option. We create a special synth called data-probes whose sole function is to transfer data from the SuperCollider world to the Clojure world. Overtone provides a SuperCollider-to-Clojure binding called a tap. We add a tap into our Overtone synth, polling our global timing signal (this powers all synths and is how we coordinate everything).

My solution is to move to Clojure where mutable state using atoms is simple.

Our timing is guided by SuperCollider and a global clock. The value of our kick buffer at any one time is only known inside the synth, and hence inside SuperCollider. But if we want mutable state we need access to this value in Clojure. So we create a custom synth that taps the value of the kick buffer based on the global clock signal.


(defsynth drum-data-probe [kick-drum-buffer 0 timing-signal-bus 0]
  (let [beat-count (in:kr timing-signal-bus)
        drum-beat  (buf-rd:kr 1 kick-drum-buffer beat-count)
        _          (tap "drum-beat" 60 (a2k drum-beat))])
  (out 0 0))

(defonce kick-drum-buffer (buffer 256))

;; Create the synth with the "drum-beat" tap
(def drum-data-probe (drum-data-probe kick-drum-buffer (:count time/beat-1th)))

;; Bind the running synth and the tap
(def kick-atom (atom {:synth drum-data-probe :tap "drum-beat"}))

;; Extract the tap atom
(def kick-tap (get-in (:synth @kick-atom) [:taps (:tap @kick-atom)]))

Now in the Clojure world it’s simple to watch our tap atom and get alerted when it changes value. Overtone deals with the magic of updating the atom under the covers; the watcher is a nice implementation-independent way of hooking into this. We now know the value of our kick buffer in Clojure. If we use another atom as our accumulator we can update it when the tap atom changes, finally pushing this new accumulator to the shader.
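A sketch of that wiring, reusing kick-tap from above (the accumulator and uniform names are illustrative):

;; Accumulate kick hits and expose the total to the shader as a uniform atom.
(defonce kick-accum (atom 0.0))

(add-watch kick-tap :kick->shader
           (fn [_key _ref _old new-val]
             (when (pos? new-val)
               (swap! kick-accum + new-val))))

;; kick-accum can now be bound to a uniform via :user-data,
;; e.g. {"iKick" kick-accum}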

That’s a lot of work, but I’m very happy with the results in my (end-of-buffer) performance.

Buffer events

Writing to a buffer is a common way of live coding in Overtone. It’s very useful to attach some visual effect based on the contents of a buffer.


;; Setting notes to a buffer
(def notes-buf (buffer 256))
(pattern! notes-buf (degrees-seq [:f3 1 3 1 4]))

We could put a tap into the synth, grab the current note and pass it into the shader. But as I’ve mentioned, taps are expensive, and they are always on while we may not always be using them.
This also gets more complicated when, say, we have 3 instances of the same synth playing simultaneously to form a chord.

An alternative is to introduce an atom which is used as a signal on every buffer write.


;; Could also do this with OSC messages...
(do
  (defonce buffer-change-event-notes-buf (atom 0.0))
  (pattern! notes-buf (degrees-seq [:f3 1 3 1 4]))
  (swap! buffer-change-event-notes-buf + 1))

I use this trick in (end-of-buffer), where the bass note controls the level of distortion of the visuals (source). It’s wonderful to focus on the notes and feel the visuals following you automatically.

Midi notes to visuals

I often want to map a midi note to a visual effect. All my notes are mapped to buffers. Much like we did with the drums, I can use a tap to get access to the current note being played in a buffer. Skipping over the details, when we have a midi note we send it as a float (to support crazy 42.333-like notes) to the shader via an atom.

Text

In end-of-buffer I spell the words “Repl Electric” out of floating lights. We are bound to only a few data structures with fragment shaders, so I used a simple 3x3 matrix mapping each part of a character, then used this to decide the positions of the lights.


const mat3 LETTER_R = mat3(1, 1, 1,
                           1, 1, 0,
                           1, 0, 1);
const mat3 LETTER_E = mat3(1, 1, 1,
                           1, 1, 0,
                           1, 1, 1);

vec4 letter(mat3 letter, vec2 offset, vec2 uv) {
  vec2 point = vec2(0, 0);
  vec4 helloPoint = vec4(0, 0, 0, 0);
  vec3 xPos = vec3(0.01, 0.03, 0.05);
  vec3 yPos = vec3(0.05, 0.03, 0.01);
  for (int y = 0; y < 3; y++) {
    for (int x = 0; x < 3; x++) {
      if (letter[y][x] == 1.0) {
        // Show this part of the letter
        point = vec2(xPos[x] + offset.x, offset.y + yPos[y]);
        helloPoint += buildCell(uv, point, STATIC_LETTERS);
      }
    }
  }
  return helloPoint;
}

letter(LETTER_R, vec2(0.2, 0.2), uv);

And the visual.

Visuals affected by frequencies

Shadertone provides a 2x512 array with the frequency spectrum (FFT) and audio waveform data. It does this by loading the data into a 2D Texture. The audio data is taken from tapping the main Overtone audio bus.

Here is an example where I use the audio waveform to distort the scale & jaggedness of a series of circle shapes.


const float tau = 6.28318530717958647692;
vec3 wave = vec3(0.0);
float width = 4.0 / 500.0;
for (int i = 0; i < 60; i++) {
  float sound = texture2D(iChannel0, vec2(uv.x, .75)).x;
  float a = 0.1 * float(i) * tau / float(n);
  vec3 phase = smoothstep(-1.0, .5, vec3(cos(a), cos(a - tau / 3.0), cos(a - tau * 2.0 / 3.0)));
  wave += phase * smoothstep(width, 0.0, abs(uv.y - ((sound * 0.9) + 0.2)));
  // This shift of uv.x means our index into the sound data also
  // moves along, examining a different part of the audio wave.
  uv.x += 0.4 / float(n);
  uv.y -= 0.05;
}
wave *= 10.0 / float(n);
return vec4(wave, 1);

And the resulting visual:

Final thoughts on Live coding visuals

Through Clojure’s binding of atoms to fragment shaders we have the power to live code visuals and music. Though it comes at a cost of complexity: lots of functions have to be wrapped in order to have powerfully connected visuals. Fragment shaders are extremely terse and can be pushed to replicate many advanced effects, but they are also performance-intense, and often taking a non-shader route will be much more performant.

Stability

My knowledge of LWJGL is small, but crashes in the fragment shaders do occur, leaving the JVM wedged. This has happened to me quite a lot while practicing, but never in a performance. It’s worth reflecting that something like this, even if fixable, leaves a risk of a freeze during a performance.

Combining State & Shaders

I’ve started to explore what a shader application might look like if it were a server providing a state machine, so the live coding language doesn’t have to carry this complexity, in turn producing a freer and more spontaneous interaction. This project is Shaderview. It steals all the good ideas of Shadertone while adding some new features, like vertex shader art. I’ll be writing more about Shaderview soon.

Emacs is designed for fast, highly customisable manipulation of text.
ASCII animation requires manipulating text at a sufficient speed that it appears animated. Emacs is also used by a number of performers to live code musical and visual performances (and many other things), where the audience can see the code in Emacs and hear it.

In my live coding performances as Repl Electric I’ve used Emacs animations to augment Emacs with more feedback for the performer, and as a chance to destroy the order and structure the programmer has spent the entire performance building. Reminding us that we are looking at thoughts expressed through code, which seem magical but are ultimately nothing more than text.

Maybe something akin to the creation and destruction of Sand Mandalas.

Framework for Emacs Animation

Zone Mode is an Emacs plugin which provides a framework for screensaver-like animations.

Importantly, it allows us to turn on an animation using our current code buffer as input, and to terminate the animation and return to the original code on a key press. So we can safely mangle the text knowing we can also return to safety. Well, so far I’ve always found it to be safe, but there is a small risk, as mentioned in the zoning warning:


(message"...here's hoping we didn't hose your buffer!")

A nice property of taking our buffer as input is that we are never quite sure what text will be there, and hence what the properties of the animation will be.

Example: Uppercase all letters

A simple function that finds non-whitespace in the buffer and tries to uppercase the char. It knows nothing about the zoning framework; it’s just a plain old function that operates on the active buffer.


(defun zone-upper-case-text ()
  (zone-fill-out-screen (window-width) (window-height))
  (random t)
  (goto-char (point-min))
  (while (not (input-pending-p))
    (let ((wbeg (window-start))
          (wend (window-end)))
      ;; Keep moving the char cursor until its not whitespace
      (while (looking-at "[ \n\f]")
        (goto-char (+ wbeg (random (- wend wbeg))))))
    ;; If we are at the end of the buffer go back to the first char
    (when (eobp)
      (goto-char (point-min)))
    ;; Read the char at the cursor
    (let ((c (char-after (point))))
      (delete-char 1)            ;; Remove the char
      (insert-char (upcase c)))  ;; Reinsert with caps
    ;; Sleep
    (zone-park/sit-for (point-min) 0.1)))

The animation in all its glory:

Zoning Setup

We can override all the built-in zoning programs and just specify our own function, e.g. (setq zone-programs [zone-upper-case-text]). When we activate zoning, our animation will be run.

Open Sound Control Protocol Based animation

OSC is a handy protocol for sending data between networked devices using URL-like endpoints.
Emacs has a plugin to run an OSC server (http://delysid.org/emacs/osc.html).
Hence if we have some kind of beat signal we can send a message to Emacs, and in turn it can render changes based on our music’s timing.

With my Overtone setup for Repl-Electric I have the following flow of OSC messages:


[SuperCollider] -> OSC -> [Clojure] -> OSC -> [Emacs]

Within Emacs we set up an OSC server and define two callbacks which change the colour of the bracket and window-number faces.
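On the Clojure side, forwarding a beat event to Emacs is only a few lines with osc-clj (the port and address here are assumptions; they just need to match the Emacs server):

(require '[overtone.osc :as osc])

(def emacs-client (osc/osc-client "localhost" 4558))

;; Send the current beat to the Emacs OSC server.
(defn signal-beat-to-emacs [beat]
  (osc/osc-send emacs-client "/beat" (float beat)))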

Here’s a little demo with the brackets and window number changing colour based on the Overtone beat.

Synchronisation

Given some small local lag, we now have a timing signal which is threaded through all our tools: SuperCollider, Overtone and Emacs.

Which means our Emacs animations can start to change to the beat of the music…

Sound in ASCII

Now that we have ways to animate and to connect audio data with Emacs, we can go a little further (way too far) and start to visualise data about our sound in ASCII.

From Overtone or SuperCollider we can create a synth which tracks the peak and power of an audio signal. It sends us messages back with the data which we then forward on as OSC messages to Emacs.


// Triggers a Sin Wave Oscillator and sends signals about power/peak
SynthDef(\pulse, {
  var sig, chain, onsets;
  sig = SinOsc.ar(Rand(220.0, 440.0)) * EnvGen.ar(Env.perc(releaseTime: 0.5), Dust.ar(0.5)) * 0.7;
  Out.ar(0, sig!2);

  chain = FFT({ LocalBuf(512, 1) }, sig);
  onsets = Onsets.kr(chain, 0.1, \power);
  SendTrig.kr(onsets);
  SendPeakRMS.kr(sig, 20, 3, "/replyAddress");
}).add;

// Run the crazy synth above
Synth(\pulse)

// Forward the data on as an OSC message to emacs
~host = NetAddr("localhost", 4859);
p = OSCFunc({ |msg|
  ~host.sendMsg("/peakpower", msg[3], msg[4]);
  "peak: %, rms: %".format(msg[3], msg[4]).postln
}, '/replyAddress');

End of Buffer

In this animation the text gets slowly broken up with whitespace, and then, like the wind, the characters blow away. Sometimes words are ripped apart as they blow in the wind (if we get lucky).

Two main phases:

Injection of spaces.
This starts to distort the text while keeping it readable. It provides a way to increase the effect of expanding whitespace in the next stage.

Transforming whitespace into lots of whitespace.
A regex matches whitespace and replaces it with a randomly increasing amount of whitespace, which leads to the effect of the characters of the code blowing away. I spent a while trying to improve the speed of this phase, and regexes proved to be the fastest way.

If we move the text fast enough, soft word wrapping means the text appears to re-enter from the left side of the screen and eventually disappear out of the buffer. Without soft wrapping we get a horrible jitter as Emacs moves back and forth between the left and right sides of the buffer.

A couple of other tricks/tactics used:

A continually incrementing integer. Useful for injecting movement, or for feeding a sin/cos fn with a continuous value.

Preserving the syntax highlighting of the original code, in an attempt to maintain some of the meaning of the code.

Waves

This animation attempts to simulate the effect of waves, using line wrapping and mixing deletions with insertions of different sizes to create lines that seem to move at different speeds.

Breaking Tools

While it may seem silly to bend Emacs to do things it was never intended to do, it’s an important part of discovering for yourself how you want your tools to work. Not just doing what is expected, but breaking your tools apart and discovering for yourself how you want to use them.

I’ve been working over the last year in the data team at SoundCloud, building a realtime data pipeline using Clojure and Amazon’s Kinesis. Kinesis is Amazon’s equivalent to Kafka: “real-time data processing on the cloud”. This is a summary of what was built, some lessons learnt, and all the details in between.

Fig 1: Overall system flow

Tapping Real traffic

The first step was to tee the traffic from a live system to a test system without compromising its function.
The main function of the live system is logging JSON events to file (which eventually ends up somewhere like HDFS).
Tailing the logs of the live system gives us access to the raw data we want to forward on to our test system.
A little Go program watches the logs, parses out the data and forwards it in batches to test instances that push to Kinesis. Hence we had live data flowing through the system, and after launch a test setup to experiment with. Sean Braithwaite was the mastermind behind this little bit of magic.

Tapping Traffic

Sending to Kinesis

All Kinesis sending happens in an application called the EventGateway (also written in Clojure). This endpoint is one of the most heavily loaded services in SoundCloud (at points it has more traffic than the rest of SoundCloud combined). The EventGateway does a couple of things, but at its core it validates and broadcasts JSON messages. Hence this is where our Kinesis client slots in.

Squeezing Clojure Reflection

It’s worth mentioning that in order for the EventGateway service to be performant we had to remove all reflection in tight loops through type hints. It simply could not keep up without this. Turning reflection warnings on became a common pattern while working in Clojure.

Project.clj


:profiles {:dev {:global-vars {*warn-on-reflection* true
                               *assert* false}}}

Kinesis

The EventGateway posts to Kinesis in batches, using a ConcurrentLinkedQueue with separate producers and consumers: messages are pushed onto the ConcurrentLinkedQueue and drained in batches. We rolled our own Clojure Kinesis client using Amazon’s Java library, rather than using Amazonica.

Note this is also where we decide the partition key. In our case it’s important for the same user to be located on the same partition; when consuming from Kinesis a worker is allocated a partition to work from, and would miss events if they were spread across multiple partitions.

The queue also acts as our backpressure signal; if it fills up, at worst we need to log to disk for replay later.
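A minimal sketch of this producer/consumer shape, using the AWS v1 Java SDK directly (names are illustrative, not our production code):

(import '[java.util.concurrent ConcurrentLinkedQueue]
        '[java.nio ByteBuffer]
        '[com.amazonaws.services.kinesis AmazonKinesisClient]
        '[com.amazonaws.services.kinesis.model PutRecordsRequest PutRecordsRequestEntry])

(defonce event-queue (ConcurrentLinkedQueue.))

;; Producer side: non-blocking append of an event map.
(defn enqueue-event! [event]
  (.offer event-queue event))

;; Consumer side: pull up to n events off the queue.
(defn drain-batch [n]
  (->> (repeatedly n #(.poll event-queue))
       (take-while some?)
       (into [])))

;; Partitioning by user id means the same user always lands on the same shard.
(defn put-batch! [^AmazonKinesisClient client stream-name events]
  (let [entries (into []
                      (for [{:keys [user-id payload]} events]
                        (doto (PutRecordsRequestEntry.)
                          (.setPartitionKey (str user-id))
                          (.setData (ByteBuffer/wrap (.getBytes ^String payload "UTF-8"))))))]
    (.putRecords client (doto (PutRecordsRequest.)
                          (.setStreamName stream-name)
                          (.setRecords entries)))))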

Consuming Messages from Kinesis

With the consumption of events we have a different application stream for every worker. All workers have their own streams and their own checkpoints, so they operate independently of each other. Some examples of the workers we have running:

Logging events to S3

Calculating listening time

Forwarding certain messages on to various other systems (like RabbitMQ).

Launching a worker is pretty simple with the Amazon Java Kinesis library.


(:import [com.amazonaws.services.kinesis.clientlibrary.lib.worker Worker])

(defn -main [& args]
  (let [worker-fn (fn [events] (print events))
        config    (KinesisClientLibConfiguration. worker-fn)
        ;; I'm airbrushing over the Java classes
        processor (reify IRecordProcessorFactory worker-fn)
        ;; Ultimately this is a lot of config wrapped in Java fun
        [^Worker worker uuid] (Worker. processor config)]
    (future (.run worker))))

One of the hardest parts of setting up a worker is getting the configuration right to ensure that the consumers are getting through the events fast enough. Events are held in Amazon for 24 hours after entry, and hence there is a minimum consumption rate.

Counting events in and events out with Prometheus made it easier to get the correct consumption rates.

Via the Amazon console you also get access to various graphs around read/write rates and limits:

Finally, you can also look at Amazon’s DynamoDB instance for the Kinesis stream, which provides insight into metrics around leases: how many were revoked, stolen, never finished, etc.

Here is an example of one of our Kinesis worker configurations, covered in scribblings of me trying to work out the right settings.


{;; default 1 sec, cannot be lower than 200ms.
 ;; If we are not reading fast enough this is a good value to tweak
 :idle-time-between-reads-in-millis 500

 ;; Clean up leases for shards that we've finished processing (don't wait
 ;; until they expire)
 :cleanup-leases-upon-shard-completion true

 ;; If the heartbeat count does not increase within the configurable timeout period,
 ;; other workers take over processing of that shard.
 ;; *IMPORTANT* If this time is shorter than the time a worker takes to checkpoint,
 ;; all nodes will keep stealing each other's leases, producing a lot of contention.
 :failover-time-millis ...

 ;; Max records returned in a single `GetRecords`. Cannot exceed 10,000
 :max-records 4500

 ;; Process records even if GetRecords returned an empty record list.
 :call-process-records-even-for-empty-record-list false

 ;; Sleep for this duration if the parent shards have not completed processing,
 ;; or we encounter an exception.
 :parent-shard-poll-interval-millis 10000

 ;; By default the KCL begins with the most recently added record;
 ;; instead, always read data from the beginning of the stream.
 :initial-position-in-stream :TRIM_HORIZON}

Monitoring

Prometheus (http://prometheus.io/), a monitoring tool built at SoundCloud, was core to developing, scaling and monitoring all of this pipeline. Amazon does provide some useful graphs within the AWS console, but more detailed feedback was very helpful, even if it was removed later.

Exception Logging pattern

All exceptions are counted and sent to the log. This was a very useful pattern for driving out errors and spotting leaks in the interactions with Kinesis and in consumption:
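A minimal sketch of the pattern (the metric call is a stand-in for whichever Prometheus client increments a counter; it is not our actual code):

(require '[clojure.tools.logging :as log])

;; Stand-in for the Prometheus client call that bumps a counter.
(defn inc-error-counter! [worker-name]
  ;; e.g. something like (prometheus/counter-inc! registry :worker/exceptions)
  nil)

(defmacro with-counted-exceptions
  "Count and log every exception instead of letting it kill the consumer loop."
  [worker-name & body]
  `(try
     ~@body
     (catch Exception e#
       (inc-error-counter! ~worker-name)
       (log/error e# "exception in" ~worker-name))))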

A Cloud Pipeline in Pictures

In my previous post, Building Clojure services at scale, I converted the system metrics to sound. With so many machines processing so many events it’s easy to lose track of the amount of work being done in the cloud.
To make this feel more real I captured metrics across all the machines involved and created 3D renderings of the system’s function using OpenFrameworks and meshes:

Thanks

This work constitutes a team effort by the Data team at SoundCloud. A lot of advice, collaboration and hard work. Kudos to everyone.

Building the world

We need to install Spigot, an optimized version of the CraftBukkit Java Minecraft server, and install the clj-minecraft project as a plugin. Things are complicated by Bukkit no longer being registered in Maven.

Speaking to the Minecraft REPL

clj-minecraft opens a REPL on localhost port 4005. Using Emacs and CIDER, connect to this REPL instance.

Boot and connect Overtone:


(use 'overtone.core)
(connect-external-server)
;;=> :happy-hacking

Interaction

Using MUD we have some useful wrappers around Overtone for scheduling functions on beats.

To coordinate graphics and sound we schedule both within a single function.


(require '[mud.core :as mud])

(defonce highhat-sample (freesound 53532))

(def ride-trigger
  (mud/on-beat-trigger
   8                         ;; Every 8th beat
   (fn []
     (highhat-sample)        ;; Play sample
     (block 2 10 2 :grass)))) ;; Place a block into the Minecraft world

Credits

Built on the back of lots of great open source projects.
Thanks to the CraftBukkit/Spigot contributors, @CmdrDats for clj-minecraft, and @samaaron for Overtone and for inspiring this crazy journey with the musical Sonic Pi (which supports combining music and Minecraft on the Raspberry Pi).

Live coding is the act of turning a programming session into a performance. This can involve improvisation, music, visuals, poetry, hardware, robots, dance, textiles and people. Pretty much anything with an input and output can be controlled live by programming.

This is not just a performance by programmers for programmers. While that is often where a live coder starts, the type of audience and the accessibility of the performance lie in the performer’s imagination. Abstraction can get us pretty much anywhere.


(def the-stars (dark-matter))

Repl Electric

Repl Electric is a project I started in order to discover more about music composition and artificial-intelligence-based aids to creativity, which in turn, through the inspiration of people like Meta-ex, led me to live programming music.

Here is a performance live coding music and graphics, inspired by a performance in London:

Tools

Clojure focuses on interactive REPL (Read, Evaluate, Print, Loop) driven development, which makes it a good choice for interactively coding music. It also turns out functional programming is a good fit for operating over music as data.

Emacs Live is an Emacs release with packages and defaults that are live-coding centric. Something I use both for my work and for my live coding.

To execute our code, we launch a REPL instance in our project (never launch it inside Emacs: if Emacs crashes, the REPL and the music die with it) and connect to it from Emacs using CIDER (https://github.com/clojure-emacs/cider).

A simple trick to combine Emacs code and visualizations is to launch an OpenGL window in full screen (see Shadertone) and then put a full-screen transparent terminal window running Emacs over it.

Timing

Timing is a complicated issue, but so important it’s worth touching on. With Overtone you have a choice of using Java or SuperCollider for timing. I use SuperCollider, since I have found it to be much more reliable.
Everything you need is here (copy and paste), thanks to the hard work of Sam Aaron.

The key concept to take away is that there are two types of timing signal: a beat counter which is forever incrementing, and a beat trigger which flips back and forth between 1 and 0.
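In the spirit of Sam Aaron’s timing code, a minimal sketch of the two signals:

;; A trigger bus flipping 1/0 at the beat rate, and a counter bus that
;; increments forever by counting those pulses.
(defonce beat-trigger-bus (control-bus))
(defonce beat-count-bus   (control-bus))

(defsynth root-trigger [rate 8]
  (out:kr beat-trigger-bus (impulse:kr rate)))

(defsynth root-counter []
  (out:kr beat-count-bus (pulse-count:kr (in:kr beat-trigger-bus))))

(defonce trigger (root-trigger))
(defonce counter (root-counter))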

Buffers

Most of my live coding performance was writing to buffers which are hooked into synths. Buffers are just fixed-size arrays, but they are stored in SuperCollider rather than in Clojure. In The Stars, for example, midi notes are read from a buffer at a rate based on my beat timing signal (a 16th of the main beat).
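The original source isn’t reproduced here, but a sketch of the shape of that code (the buffer contents and synth are illustrative, not the performance source):

;; Write midi notes into a buffer, then let the synth read them back,
;; indexed from the beat counter (here one new note every 16 counts).
(defonce note-buf (buffer 256))
(buffer-write! note-buf [60 62 67 69])

(defsynth stars-sketch [note-buf 0 beat-count-bus 0]
  (let [cnt  (in:kr beat-count-bus)
        note (buf-rd:kr 1 note-buf (/ cnt 16))
        snd  (saw (midicps note))]
    (out 0 (* 0.3 snd))))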

Shaders generate imagery directly on your Graphics Processing Unit rather than going through your CPU. Through a language called GLSL (which is C-like) we can express very simple functions which get called on every single pixel, generating complex visuals. Here is a simple extract from The Stars that generates all the small background dots:


void main(void) {
  vec2 current_pixel_position = mod(gl_FragCoord.xy, vec2(5.0)) - vec2(0.0);
  float distance_squared = dot(current_pixel_position, current_pixel_position);

  vec4 black = vec4(.0, .0, .0, 0.0);
  vec4 white = vec4(1.0, 1.0, 1.0, 1.0);

  // Test the current pixel position and if it should be a circle shade it.
  vec4 circles = (distance_squared < 0.6) ? white : black;

  gl_FragColor = circles;
}

To synchronize the graphics with the music I created a special Overtone synth which does not generate any sound; instead it feeds information in realtime to my shader.


(use 'overtone.live)
(require '[shadertone.core :as t])

;; A synth that exposes through taps all the lovely timing information.
(defsynth buffer->tap [beat-buf 0 beat-bus 0 beat-size 16 measure 6]
  (let [cnt  (in:kr beat-bus)
        beat (buf-rd:kr 1 beat-buf cnt)
        _    (tap "beat" 60 (a2k beat))
        _    (tap "beat-count" 60 (a2k (mod cnt beat-size)))
        _    (tap "measure-count" 60 (a2k (/ (mod cnt (* measure beat-size)) measure)))])
  (out 0 0))

;; Used to store our drum beat, 1 for a hit and 0 for a miss
(defonce drum-sequence-buffer (buffer 256))

(def beats (buffer->tap drum-sequence-buffer (:count timing/main-beat)))

;; Open an OpenGL window running our shader
(t/start-fullscreen "resources/shaders/electric.glsl"
                    :user-data {"iBeat"         (atom {:synth beats :tap "beat"})
                                "iBeatCount"    (atom {:synth beats :tap "beat-count"})
                                "iMeasureCount" (atom {:synth beats :tap "measure-count"})})

When Alan Turing asked whether a machine can be intelligent, one aspect of the question focused on “could machines be creative?”

Ada Lovelace seemed convinced that originality was not a feat a computer was capable of:

it can do whatever we know how to order it to perform,
it has no pretensions whatever to originate anything

Before we outright dismiss the idea of creative machines, do we even understand what creativity is?

Join me on a journey examining these questions while also meeting a new generation of artists born through code: looking into their hearts and brains, and examining different algorithms/techniques and their effectiveness at exhibiting creativity.

Step 2: EEG machine

I am using an EEG machine bought from NeuroSky which is rated as research grade (whatever that means).
It measures voltage fluctuations resulting from ionic current flows within the neurons of the brain. While EEG machines are not the most accurate, they are now reasonably cheap.

Step 3: EEG –> Overtone

In order to generate music I want to import the EEG brainwave data into Overtone.

We interact with the EEG machine over a serial port. The most mature library for this interface is in Python, so there is a little jiggery-pokery to get the data into Overtone.

import re
import time
import json
import unicodedata
import gevent
from gevent import monkey
from pymindwave import headset
from pymindwave.pyeeg import bin_power

monkey.patch_all()

# connect to the headset
hs = None
hs = headset.Headset('/dev/tty.MindWave')
hs.disconnect()
time.sleep(1)

print 'connecting to headset...'
hs.connect()
time.sleep(1)

while hs.get('state') != 'connected':
    print hs.get('state')
    time.sleep(0.5)
    if hs.get('state') == 'standby':
        hs.connect()
        print 'retrying connecting to headset'

def raw_to_spectrum(rawdata):
    flen = 50
    spectrum, relative_spectrum = bin_power(rawdata, range(flen), 512)
    return spectrum

while True:
    t = time.time()
    waves_vector = hs.get('waves_vector')
    meditation = hs.get('meditation')
    attention = hs.get('attention')
    spectrum = raw_to_spectrum(hs.get('rawdata')).tolist()

    with open("/tmp/brain-data", "w") as fp:
        s = {'timestamp': t,
             'meditation': meditation,
             'attention': attention,
             'raw_spectrum': spectrum,
             'delta_waves': waves_vector[0],
             'theta_waves': waves_vector[1],
             'alpha_waves': (waves_vector[2] + waves_vector[3]) / 2,
             'low_alpha_waves': waves_vector[2],
             'high_alpha_waves': waves_vector[3],
             'beta_waves': (waves_vector[4] + waves_vector[5]) / 2,
             'low_beta_waves': waves_vector[4],
             'high_beta_waves': waves_vector[5],
             'gamma_waves': (waves_vector[6] + waves_vector[7]) / 2,
             'low_gamma_waves': waves_vector[6],
             'mid_gamma_waves': waves_vector[7]}
        s = json.dumps(s)
        fp.write(s)
    gevent.sleep(0.4)
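On the Clojure side, a minimal sketch of picking the data up (only the /tmp/brain-data path comes from the script above; the polling approach and JSON library are assumptions):

(require '[clojure.data.json :as json])

(defonce brain-data (atom {}))

;; Poll the file the Python script writes and keep the latest reading in
;; an atom, ready to be mapped onto synth parameters.
(defonce brain-poller
  (future
    (while true
      (try
        (reset! brain-data (json/read-str (slurp "/tmp/brain-data") :key-fn keyword))
        (catch Exception _)) ;; the file may be mid-write
      (Thread/sleep 400))))

(:beta_waves @brain-data)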

Would you like to hear my brain?

The results, please listen to my brain.

Not really music, is it? With beta waves we get a series of high-to-low transitions. While we can control the pitch at which the transitions occur by performing activities that shape our brain waves, the transitions don’t provide the order or structure we need to recognize this as music.

Brain controlled Dubstep

The only logical path left is to try and control dubstep with our brain. Rather than generating music, we can use our brain waves to control the tempo and volume of existing synthesized music.

We again have to linearise the beta-wave signal, onto the volume range 0.0–1.1 and onto the bpm range 0–400.
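A minimal sketch of that linear mapping (the input range of the raw beta-wave signal is an assumption):

;; Map x from [in-min in-max] onto [out-min out-max].
(defn scale-linear [x in-min in-max out-min out-max]
  (+ out-min (* (- out-max out-min)
                (/ (- x in-min) (- in-max in-min)))))

(scale-linear 250000 0 1000000 0.0 1.1) ;;=> 0.275 (volume)
(scale-linear 250000 0 1000000 0 400)   ;;=> 100 (bpm)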

Now all that’s left to do is connect it to our brain.

Here’s what brain-controlled dubstep sounds like:

And for comparison, here is what playing Go does to your brain activity (I turned the dubstep down while playing; concentrating with that noise is hard):

Discovery through sound

Mapping brain waves into live music is a challenging task, and while we can control music through an EEG machine, that control is hard since we are using the brain for many other things at the same time. What is interesting about this experiment is not in fact the music generated, but the use of sound as a way to hear differences in datasets.

Hearing the difference between playing Go and sleeping, between young people and old people.

Sound as a means of discovering patterns is a largely untapped source.