
Back in 2013 I worked on a project called Artism, which mixed dance with technology. At some point during the live show, the audience would see a dancer moving on the stage and, behind him, a projection of a 3D character performing the exact same moves. The character was morphing between a man and an ape.

At the time I was working on visuals for another part of the show, but I had access to the motion capture file and kept a copy to do something with it one day.

Concept
The theme was ‘the monolith’ and the first thing that came to my mind was this scene from 2001: A Space Odyssey with all the apes jumping around a monolith.

Monolith, ape, man, dance. There was something there.

Visuals
The original mocap was an FBX file and contained only data about the bones; there was no skin, so it couldn’t be used directly. It took me a while to figure out the best way to work with the data. A few months earlier I had been looking into ways of optimising Three.js’ own JSON format and had even published a small package on npm about it, but I ended up not using it. I used glTF instead.

When I started I didn’t know how the model was going to be rendered. This is what I like about experiments, they don’t follow a set path, they just flow in the direction that suits them best as they go. In this case the coloured lines and ribbons in the final result appeared after several iterations and a lot of other unsuccessful ideas.

Tech
On the technical side, I think there are two interesting things to mention: one is how to find the positions of vertices influenced by bones, and the other is how to sort the vertices so that the line segments look pretty.

Transformed skin vertices
The vertices of a skinned mesh are transformed in real time, either by morph targets or by bones. In Three.js the positions are updated in the vertex shader, but in order to make other elements (e.g. the ribbons) follow given vertices, I needed to know their positions in JavaScript. I learned how to do that for morph targets in my Billie Deer project. I also figured out how to find the positions of the bones themselves by looking into the code of SkeletonHelper.
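For reference, the underlying maths is just the skinning equation evaluated on the CPU. Here is a minimal sketch, assuming a SkinnedMesh with a BufferGeometry; the function name and structure are mine, not necessarily what the experiment uses:

```js
// Sketch: compute the current (skinned) position of one vertex in JavaScript.
// Assumes `mesh` is a THREE.SkinnedMesh whose geometry has position,
// skinIndex and skinWeight attributes (4 bone influences per vertex).
const _base = new THREE.Vector3();
const _skinned = new THREE.Vector3();
const _temp = new THREE.Vector3();
const _boneMatrix = new THREE.Matrix4();
const _skinIndex = new THREE.Vector4();
const _skinWeight = new THREE.Vector4();

function getSkinnedVertex(mesh, index, target) {
  const geometry = mesh.geometry;
  const skeleton = mesh.skeleton;

  _base.fromBufferAttribute(geometry.attributes.position, index);
  _base.applyMatrix4(mesh.bindMatrix); // into bind space

  _skinIndex.fromBufferAttribute(geometry.attributes.skinIndex, index);
  _skinWeight.fromBufferAttribute(geometry.attributes.skinWeight, index);

  _skinned.set(0, 0, 0);
  for (let i = 0; i < 4; i++) {
    const weight = _skinWeight.getComponent(i);
    if (weight === 0) continue;
    const boneIndex = _skinIndex.getComponent(i);
    // bone matrix = current bone transform * inverse bind matrix
    _boneMatrix.multiplyMatrices(skeleton.bones[boneIndex].matrixWorld, skeleton.boneInverses[boneIndex]);
    _skinned.add(_temp.copy(_base).applyMatrix4(_boneMatrix).multiplyScalar(weight));
  }
  return target.copy(_skinned).applyMatrix4(mesh.bindMatrixInverse); // back to mesh space
}
```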

Sorted body parts
Once a mesh is defined it is easy to change from drawing triangles to drawing lines or line segments. The challenge is to make the lines look good. Segments are drawn for pairs of vertices, so if in our model vertex 0 belongs to the right foot and vertex 1 belongs to the head, a straight line would be drawn across the model. If all the vertices are connected as such, we end up with a convex shape saturated with lines and the body becomes indistinguishable.

One way to improve that is to sort the vertices by their distance from each other in the first frame of the animation. It helps, but it is not enough. The best approach is to create a correspondence between a vertex and the body part it belongs to. Luckily, we can read body parts from the skeleton and we can check which vertex is influenced by which bone using skinWeight and skinIndex. For a given vertex on a skinned mesh, get the index associated with the strongest weight, then get the bone name for that index, and the result is something like: vertex 2714 is part of the pelvis.
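As a rough sketch, using only the standard skinIndex/skinWeight attributes and the skeleton’s bone names (the helper itself is illustrative):

```js
// Sketch: map each vertex of a skinned mesh to the bone that influences it the most.
// Assumes a BufferGeometry with skinIndex/skinWeight attributes (4 influences per vertex).
function groupVerticesByBone(mesh) {
  const { skinIndex, skinWeight, position } = mesh.geometry.attributes;
  const groups = {}; // bone name -> array of vertex indices

  for (let v = 0; v < position.count; v++) {
    const weights = [skinWeight.getX(v), skinWeight.getY(v), skinWeight.getZ(v), skinWeight.getW(v)];
    const indices = [skinIndex.getX(v), skinIndex.getY(v), skinIndex.getZ(v), skinIndex.getW(v)];

    let best = 0;
    for (let i = 1; i < 4; i++) if (weights[i] > weights[best]) best = i;

    const name = mesh.skeleton.bones[indices[best]].name; // e.g. 'pelvis'
    (groups[name] = groups[name] || []).push(v);
  }
  return groups;
}
```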

Now vertices can be grouped by body parts and lines can be drawn inside those groups. If the technique were applied on its own, this is how it would look:

Unlike my previous submissions to the Christmas Experiments, this time I’m not using a hybrid man-deer. I wanted to come up with new ideas, but the first ones were rubbish like ‘I’ll do a low poly santa walking down the beach…’ It wasn’t until I started looking at gifs for inspiration that the idea really kicked in.

I started to work on an algorithm to distribute items on a triangular grid – which was not as trivial as I thought. Then I dusted off my 6th grade geometry formulas and started rotating tetrahedrons – with big help from this page.
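I won’t claim this is how the experiment does it, but as a hypothetical sketch, the centroids of a triangular tiling can be generated like this:

```js
// Hypothetical sketch: centroids of a triangular tiling with side `s`.
// Triangles in a row alternate between pointing up and down.
function triangularGrid(cols, rows, s) {
  const h = s * Math.sqrt(3) / 2;                    // row height
  const cells = [];
  for (let row = 0; row < rows; row++) {
    for (let col = 0; col < cols; col++) {
      const up = (col + row) % 2 === 0;              // orientation alternates
      const x = col * s / 2 + s / 2;                 // neighbours overlap by half a side
      const y = row * h + (up ? h / 3 : 2 * h / 3);  // centroid height depends on orientation
      cells.push({ x, y, up });
    }
  }
  return cells;
}
```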

Once positions and rotations were sorted I moved on to post processing and was lucky to find this library by vanruesc. I ended up creating my own film shader by mixing up good bits from different shaders – special thanks to mattdesl for filmic-gl.

And for sound I finally got the chance to work with Tone.js. It is such a nice library. It was a joy to work with.

My music skills are quite basic, but enough to figure out the notes of three famous Christmas songs. The mechanic is simple: mouse over, play next note. The tempo is up to the user. It is a little surprise. Maybe users don’t realise that there is a song there and just play a bunch of notes. Maybe there is that ‘ah!’ moment when they recognise the song.
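A minimal sketch of that mechanic with Tone.js (the notes and the DOM element are placeholders, not necessarily what is in the experiment):

```js
// Sketch: step through a melody one note per mouse-over. Assumes Tone.js is loaded.
const synth = new Tone.Synth().toDestination(); // .toMaster() in older Tone.js versions
const melody = ['E4', 'E4', 'E4', 'E4', 'E4', 'E4', 'E4', 'G4', 'C4', 'D4', 'E4']; // placeholder tune
let step = 0;

document.querySelector('#scene').addEventListener('mouseover', () => {
  synth.triggerAttackRelease(melody[step % melody.length], '8n');
  step++; // the tempo is whatever the user makes it
});
```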

The same is true for switching scenes. There are three scenes and three songs in the experiment. I’m not sure people will figure that out. Maybe some users will check the first one and think that that’s it. I could add an info box somewhere and write ‘click and hold’, but that would ruin the surprise. Edit: actually I did just that and added an info box with instructions =)

Spoiler Alert

I’ve added the Konami code again.
I know it is a bit old school, but like I wrote above, it is nice to find little surprises, isn’t it?
Go ahead and try it. Open the experiment and press: up up down down left right left right b a
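For the curious, a detector for it can be as small as this (the handler name is a placeholder):

```js
// Sketch: listen for the Konami code and trigger the easter egg.
const konami = ['ArrowUp', 'ArrowUp', 'ArrowDown', 'ArrowDown',
                'ArrowLeft', 'ArrowRight', 'ArrowLeft', 'ArrowRight', 'b', 'a'];
let progress = 0;

window.addEventListener('keydown', (event) => {
  progress = (event.key === konami[progress]) ? progress + 1 : 0;
  if (progress === konami.length) {
    progress = 0;
    activateEasterEgg(); // placeholder for whatever the surprise does
  }
});
```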

Background
A few weeks ago I woke up to some very sad news: a bunch of messages from friends on my phone telling me that Chris Cornell had passed away. Like many other fans I was shocked and confused. No one saw it coming. He was young and fit and active. He had been up on stage that same night. I wasn’t there, I wasn’t even on the same continent, but the news hit me as if I had been there, as if I had just seen him. Chris and his music have been very present in my life lately, because I have been working on an experiment called Mailman.

In case you are not familiar with Chris Cornell, he was best known as the lead vocalist of Soundgarden. He was involved in many other projects, but Soundgarden, at least in my opinion, is where he was at his best. Especially during the Superunknown phase, their fourth album released in 1994. If you only know one or two songs from Soundgarden, chances are that they are from this album.

Superunknown (1994)

I always go through phases of discovering and rediscovering bands. About three years ago I was discovering Soundgarden again. This was around the same time I was finishing my Teen Spirit experiment and I knew I wanted to do more audio visualisations. I decided that my next one was going to be with a track from Superunknown. Not one of the most famous, but one of my personal favourites. Track #4 – Mailman. Powerful riff. Badass lyrics. Still my favourite to this day.

That was three years ago. The project sat parked for long stretches during that period; I picked it up and dropped it again dozens of times. A few months ago I found it again and decided to finish it.

It was never supposed to be a posthumous homage. It was about using code and real time graphics to visualise a badass rock song.

Disclaimer:
I haven’t tried to contact the band or the label, I am using the song without permission.
This is just fan art.

Lyrics
It started with the lyrics. To me they are about a guy who has had enough of being trampled on and decides to strike back. The song starts like this:

Hello don’t you know me
I’m the dirt beneath your feet
The most important fool you forgot to see

I wanted to show the lyrics in the experiment. Not like karaoke, but as a visual element like Robert Hodgin did on his Solar, with lyrics.

Visuals
The visuals were inspired by a bunch of different sources. I can mention the amazing work of Ryoji Ikeda and his bold black & white lines, the parallel hatching style of a poster by Kii Arens that I have on my wall, and the ribbons on Yi-Wen Lin’s codevember /04.

Experiment
It took quite a lot of iterations to arrive at the current look & feel, but the idea is quite simple. The video is used as a texture, and the value of each pixel determines the thickness of the ribbon just above it. The audio frequency is used to modify the depth (or z position) of each ribbon.
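A rough sketch of that per-frame sampling, assuming a video element, a Web Audio AnalyserNode and some ribbon objects (all names here are illustrative, the real code is certainly different):

```js
// Sketch: sample the video into a small canvas and drive each ribbon from it.
// `video`, `analyser` and `ribbons` are assumed to exist elsewhere.
const sampler = document.createElement('canvas');
sampler.width = ribbons.length;   // one column per ribbon
sampler.height = 64;
const ctx = sampler.getContext('2d');
const freq = new Uint8Array(analyser.frequencyBinCount);

function update() {
  ctx.drawImage(video, 0, 0, sampler.width, sampler.height);
  const pixels = ctx.getImageData(0, 0, sampler.width, sampler.height).data;
  analyser.getByteFrequencyData(freq);

  for (let i = 0; i < ribbons.length; i++) {
    for (let j = 0; j < sampler.height; j++) {
      const p = (j * sampler.width + i) * 4;
      const brightness = (pixels[p] + pixels[p + 1] + pixels[p + 2]) / (3 * 255);
      ribbons[i].setThickness(j, brightness);            // placeholder method
    }
    ribbons[i].position.z = freq[i % freq.length] / 255;  // audio drives depth
  }
  requestAnimationFrame(update);
}
```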

The lyrics were mapped manually using Adobe Audition and exported as a .csv file. There are also other files describing the song structure (e.g. pre-chorus starts at 0:55.714) and some key video cuts. The data is used to apply different settings to different parts of the song.

Queens of the Stone Age is one of my favourite bands. I go watch them live whenever I get the chance. Last time I saw them, they had a big screen on the stage with some cool visuals for each song. I recorded this video with my phone during ‘Go With The Flow’:

A bunch of bidents travelling in space, flocking, going around bends and coming towards the camera.

Wikipedia: A bident is a two-pronged implement resembling a pitchfork.

A couple of weeks ago I was going through my files, watched this video again and wondered if I could replicate it in WebGL.

Path
To me it seemed like the bidents were following a path in the video, so I started by revisiting an old experiment with steering behaviors and adapted it to 3D. The path itself was generated using a formula extracted from TorusKnotGeometry. In this case a simple curve using p = 2 and q = 4.
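The parametric formula inside TorusKnotGeometry looks roughly like this (adapted from memory of the Three.js source, so treat it as a sketch); sampling it with p = 2 and q = 4 gives a curve like the one used here:

```js
// Point on a (p, q) torus knot for a parameter u in [0, 2π·p].
function pointOnTorusKnot(u, p, q, radius, target) {
  const cu = Math.cos(u);
  const su = Math.sin(u);
  const quOverP = (q / p) * u;
  const cs = Math.cos(quOverP);

  target.set(
    radius * (2 + cs) * 0.5 * cu,
    radius * (2 + cs) * 0.5 * su,
    radius * Math.sin(quOverP) * 0.5
  );
  return target;
}

// usage: sample the whole curve into a list of THREE.Vector3
const path = [];
for (let i = 0; i <= 200; i++) {
  path.push(pointOnTorusKnot((i / 200) * 2 * Math.PI * 2, 2, 4, 10, new THREE.Vector3()));
}
```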

Model
At first I created the model in code. One cylinder for the stick, then two curves, two cylinders and two cones for the top. It looked ok, but the performance was terrible on mobile. I moved to Blender and recreated it there – thanks to all the amazing people that post tutorials and videos online.

Then I adapted the code to use InstancedBufferGeometry. To my surprise, the performance on mobile was even worse. I found out that vertices are duplicated when using the .fromGeometry() method. I joined the discussion on this GitHub issue and proposed a solution; I don’t think it works for all cases, but it worked for mine. Simple modifications to include indices in the generated geometry made it much faster on mobile, and I started getting 60 fps on my 5-year-old phone. So in case you found this post while looking for issues with .fromGeometry(), have a look at these modifications here, or check this hack that copies indices from each face of the original geometry.
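In case it helps, the gist of keeping the indices, sketched from memory rather than copied from the actual fix (attribute names are illustrative):

```js
// Sketch: build the instanced geometry from an already-indexed BufferGeometry
// (e.g. the Blender export) so shared vertices are not duplicated.
const instanced = new THREE.InstancedBufferGeometry();
instanced.setIndex(baseGeometry.index);                        // keep the index
instanced.attributes.position = baseGeometry.attributes.position;
instanced.attributes.normal = baseGeometry.attributes.normal;
instanced.attributes.uv = baseGeometry.attributes.uv;

// one offset per instance (per bident); the attribute name is just illustrative
const offsets = new Float32Array(instanceCount * 3);
instanced.setAttribute('instanceOffset', new THREE.InstancedBufferAttribute(offsets, 3));
// (older Three.js releases use addAttribute instead of setAttribute)
```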

Camera
I’ve been wanting to play with spite’s Storyline.js since I bumped into it online, and this was the perfect opportunity. It is simple and works very well. I chose a few camera positions around the path and linked them in the storyline using the t of the curve. With that I could be sure the bidents were always passing by wherever the camera was.

Background
In the original video I think the background was just a pure red color. It looks great, but I thought it was looking too flat in the browser. The inspiration then came from the music video for ‘Go With The Flow’ – IMHO one of the best music videos ever made, hats off to Shynola.

The result is a mishmash of noise shaders found online, especially Procedural SkyBox by Passion.

Three years ago my friend David invited me to participate in his Christmas Experiments project – an advent calendar with one code experiment a day. He gave me about two months’ notice so I had plenty of time, but I spent most of it just trying to have an idea. There were a bunch of false starts until one happy day when I googled ‘christmas gifs’ and found this:

Half reindeer, half Michael Jackson. Can’t go wrong with that. My idea was to trace the silhouette of the character and then use that as a base to create visualizations in HTML Canvas. And that’s what I did. And people liked it. It was fun!

This year David invited me to join the Christmas Experiments again, but this time I had only 3 weeks. I knew I had to start straight away. The idea had to come fast. And it did. How about a tribute to my old experiment, but this time in 3D? Wow, such brilliant, very technology, much moves.

Next, a quick feasibility check. I downloaded Blender, watched a few tutorials on modeling and rigging, found a couple of generic male models, a couple of reindeer heads and, most importantly, I found this mocap:

OK so all I had to do was to throw all those ingredients in a pan and start cooking. I thought I would have a dancing model in a Three.js scene in a couple of days and then I could go crazy on the shaders to make some cool visualizations.

I was wrong.

It started well. I learned the basics of modelling in Blender and was able to chop this guy’s head off and replace it with this deer head. He was looking cool. I called him mandeer.

I learned how to rig (following mainly these videos) and started testing some free mocap using the Makewalk plugin for Blender.

Around this time I showed the prototype to Damien at work and he got interested. We discussed a few ideas for the sound and he was keen to work on it. From that point onward we were a duo. We wrote to David and told him it was going to be a collaboration.

That’s when the problems started. No, not with Damien, he was great. With the mocap. I purchased the file from TurboSquid and tried to convert it from .BIP to .BVH so I could use it in Blender. It didn’t work. It really didn’t work.

The flow was to load the .BIP onto a biped in 3Ds Max, then export the animation as .FBX, open it in Motion Builder, clean the object tree, export it as .BVH, open the rigged model in Blender and load .BVH onto it. But somewhere in this broken telephone the information was not translated properly and all I could get on the other side was a cubist deformed pile of bones. I tried everything. I tried random combinations of export settings, I tried BVHacker, I googled every term imaginable, I read forums with desperate lonely comments posted in 2008, I waited for the planets to align, I called my mom…

At the same time Damien and I were clocking insane hours at the office – funny enough, on another advent calendar project – and there was very little time for anything else. The deadline for our experiment was approaching and we were not ready. We tried to give it a last push on the last day (December 3rd), but it didn’t happen. We missed the deadline. David was sad. We were sad. We ended up going live with a bloody ‘coming soon’ placeholder.

I never really managed to solve the .BIP to .BVH problem. In the end what worked for me was to create a pose for the biped in 3Ds Max, then export it as .DAE and import it directly in Blender (skipping Motion Builder), then rig the character again based on the new pose and adjust the twisted bones one by one, frame by frame. It was laborious, but at least it was getting somewhere.

We ended up going live 16 days later, on the 19th of December. We were still crazy busy at work, but trying to progress with the experiment in every spare hour. No more time or energy to go crazy with shaders, unfortunately. I wanted to recreate some visualisations from my 2013 version, like the popping circles and the disco lines – I think they would look good in 3D – but I’ll have to leave them for next time. What I ended up using was a combination of point lights with Lambert shading and a directional light with a hatching shader.

Thanks:
To Damien for the partnership, to David and William for being patient with us, to Michael Jackson for the beat, to Elliot Dear for the gif, to Mr.doob and all the amazing people making Three.js, to Ben Houston for tidying up the animation classes, to the ones behind the Blender exporter, to the people that take the time to upload tutorial videos to YouTube, to the guy from TurboSquid that took 72 hours to reply saying that their conversion support doesn’t cover animations, to the people that created a GUI to edit mocap just for Second Life (BVHacker), to the lonely guy that posted a question in 2008 and is still waiting for an answer and to Konami for the code.

I was prototyping something and I needed to draw a curve with some thickness. It wasn’t just a case of increasing the thickness of the stroke: I wanted to find the contour of a curve, to draw two new curves around one in the center. After some research, I learnt that the correct term for that is parallel curve or offset curve.

The task turned out to be not as simple as I thought. After some failed attempts I found the solution in a paper by Gabriel Suchowolski entitled ‘Quadratic bezier offsetting with selective subdivision’. The recipe is there, but I was missing an open source implementation, so I decided to write one.
In this post I present a step-by-step process and, at the end, an interactive version written in JavaScript.

How to draw an offset curve:

Start with 3 points.

Draw a quadratic curve using p1 and p2 as anchors and c as the control point.

Get the vectors between these points:
v1 = c - p1
v2 = p2 - c
Find the vector perpendicular to v1 and scale it to the width (or thickness) of the new curve.
Add the new temporary vector to p1 to find p1a, then subtract it from p1 to find p1b.
Do the same with c to find c1a and c1b.

Repeat the same process with v2 to find the points on the other side.

Find vectors between the new points. These are parallel to v1 and v2 and offset by the given thickness.

The intersection points of these vectors are the new control points ca and cb.

Draw a curve from p1a to p2a with control point at ca.
Draw another curve from p1b to p2b with control point at cb.
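Putting the wide-angle case together in JavaScript might look like this (points are plain {x, y} objects; the helper names are mine):

```js
// Sketch of the simple (wide-angle) case: offset the quadratic curve p1, c, p2 by `width`.
function offsetQuadratic(p1, c, p2, width) {
  const sub = (a, b) => ({ x: a.x - b.x, y: a.y - b.y });
  const add = (a, b) => ({ x: a.x + b.x, y: a.y + b.y });
  const perp = (v, len) => {                       // perpendicular to v, scaled to len
    const d = Math.hypot(v.x, v.y);
    return { x: -v.y / d * len, y: v.x / d * len };
  };
  // intersection of the lines a + t·av and b + s·bv
  const intersect = (a, av, b, bv) => {
    const t = ((b.x - a.x) * bv.y - (b.y - a.y) * bv.x) / (av.x * bv.y - av.y * bv.x);
    return { x: a.x + av.x * t, y: a.y + av.y * t };
  };

  const v1 = sub(c, p1), v2 = sub(p2, c);
  const n1 = perp(v1, width), n2 = perp(v2, width);

  const p1a = add(p1, n1), p1b = sub(p1, n1);      // offset anchors on each side
  const p2a = add(p2, n2), p2b = sub(p2, n2);
  const ca = intersect(p1a, v1, p2a, v2);          // new control points where the
  const cb = intersect(p1b, v1, p2b, v2);          // offset vectors intersect

  return { a: [p1a, ca, p2a], b: [p1b, cb, p2b] }; // two quadratic curves
}
```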

This method only works when the angle between v1 and v2 is wide (bigger than 90 degrees); it doesn’t work for sharp angles.

For angles smaller than 90 degrees it is necessary to split the curve. In fact the curve could be split several times: the more splits, the better the precision of the offset curve. Splitting only at 90 degrees is fast and the result is not too bad.

The curve needs to be split at t, which is the point on the curve closest to c. The technique to find t is described in the paper I mentioned before. It requires solving a third degree polynomial of the form ax³ + bx² + cx + d = 0.

The equation returns a number between 0 and 1 that can be plugged into the curve to find t.

Find the tangent at t and the points t1 and t2 where it intersects v1 and v2.
Create a new vector perpendicular to the tangent at t, scale it to the given thickness and find qa and qb. This vector splits the original curve at t.

Add the tangent at t to qa and qb and find the points where it intersects the offset vectors.

These are all the points needed to draw an offset curve. All the others that were created in the process can be removed for clarity.
Draw a curve with anchors at p1a and qa and the control point at q1a.

Repeat the process for all the new points to get the offset curve.
—

Here is an interactive version. Drag the gray dots to change the curve.

It was a sunny day in London. I was sitting under a tree and had this idea: what if I could scan an image and draw the bright areas with circular lines? Bullshit. I stole it. This guy did it first.

I absolutely loved the visuals and had to do it myself. He tagged it as #processing but I didn’t find a sketch or a video so I could only guess how it worked. Once I got the basics working I had a few ideas for variations and threw some sliders in. It’s not completely new, but I like to think my small additions are valid.
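My guess at the basic mechanism, as a hypothetical sketch: sample the source image along concentric circles and let brightness push the line outwards (every name and parameter here is made up for illustration):

```js
// Sketch: draw concentric circles whose radius wobbles with image brightness.
// `ctx` is a 2D canvas context; `pixels`, `w`, `h` come from getImageData on the source image.
function drawRings(ctx, pixels, w, h, rings, amplitude) {
  const cx = w / 2, cy = h / 2;
  for (let r = 1; r <= rings; r++) {
    const radius = (r / rings) * Math.min(cx, cy);
    ctx.beginPath();
    for (let a = 0; a <= Math.PI * 2; a += 0.01) {
      const px = Math.min(w - 1, Math.max(0, Math.round(cx + Math.cos(a) * radius)));
      const py = Math.min(h - 1, Math.max(0, Math.round(cy + Math.sin(a) * radius)));
      const i = (py * w + px) * 4;
      const brightness = (pixels[i] + pixels[i + 1] + pixels[i + 2]) / (3 * 255);
      const rr = radius + brightness * amplitude;   // bright pixels push the line outwards
      const x = cx + Math.cos(a) * rr;
      const y = cy + Math.sin(a) * rr;
      a === 0 ? ctx.moveTo(x, y) : ctx.lineTo(x, y);
    }
    ctx.stroke();
  }
}
```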

There is more to explore. Maybe make the circles pulse with sound. Maybe add 3D. Maybe write a shader and run it over a video. These could all be really cool. If someone wants to try please go ahead and let me know what you did. I might try them too. But for now I just want to release this as is. A quick experiment.

I imagine this could be a nice artwork for an album cover. If you agree and happen to know just the band/artist, get in touch and I’ll be happy to work on a good print-quality version.

Even before I finished my previous experiment I already knew the next one was going to be about sound. I wanted to do music visualization in the browser.

And one more time the result of the experiment is quite different from the initial idea. But I quite like that; it is one of the nice things about experimentation, the direction can change any time something interesting appears in the process.

I started by playing around with Web Audio API and looking for references. I found this cool project called Decorated Playlists, a website ‘dedicated to the close relationship between music & design.’ One of the playlists – Run For Cover – had some nice visuals with bold lines coming towards the viewer with a strong perspective. I imagined how those lines could react to music and replicated them in code. On these initial tests I was using a song from that playlist called Espiritu Adolescente, by Mandrágora Tango Orchestra – which is a cool tango version of Nirvana’s Smells Like Teen Spirit. It was looking good, but nothing special. I tried a few variations here and there and eventually dropped the idea.

Time for a new experiment. And a new song. I am a big fan of rock, so I started looking for the next tune in my own music library. I chose God Hates A Coward, by Tomahawk. I love this song. There is an awesome live version on YouTube where we can see Mike Patton barking the lyrics behind a mask.

That mask could be interesting to use in a visualization, so I started googling images of masks. This one grabbed my attention. It seems to be a drawing based on this photo, but instead of the text on the cylinder, there are just lines. Once again I imagined how those lines could look when reacting to music.

So I went to code to try to replicate that cylinder. I tried a few geometries in three.js, but I realized I needed more control over the vertices. It was one of those moments when my brain just wouldn’t shut down; I remember figuring out how to do it on the street, walking back from lunch. The solution was to divide the bars into segments and then stretch each vertex only up to the limit of its segment. For example, if a bar has 10 segments and the value it needs to represent is 0.96, the entire bar is not scaled down to 0.96; instead the first 9 segments take their full value of 0.1 each and only the last segment is scaled down to 0.06. Then those segments can be distributed around a circle and the shape is preserved for any value.
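The per-segment clamp can be sketched in a few lines (illustrative only, not the actual source):

```js
// Sketch: split a bar into `segments` pieces and fill them one by one.
function segmentHeights(value, segments) {
  const full = 1 / segments;
  const heights = [];
  for (let i = 0; i < segments; i++) {
    const remaining = value - i * full;
    heights.push(Math.max(0, Math.min(full, remaining))); // clamp each segment
  }
  return heights;
}

segmentHeights(0.96, 10); // roughly [0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.06]
```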

The more I saw those bars reacting to music in 3D, the further away I got from the idea of the mask. The circular bars had something of their own and I couldn’t stop playing with them. Eventually I dropped the idea of the mask, and also dropped the song I was using. In my tests I found that the bars were reacting much better to the Nirvana tango I had been using earlier. At this point the two experiments merged.

I feel stupid
And contagious
Here we are now
Entertain us

I didn’t know exactly what to do with all those shapes. All I knew was that some of them were looking pretty cool at certain camera angles and light positions, so I started to create some scenes with my favourite settings. I have to say it was a constant battle in my head between using pre-defined scenes or making everything dynamic. Some people might just start clicking and close the experiment because it doesn’t react. But I wanted to make something tailored for that song, something like Robert Hodgin’s Solar. Everything is generated by code and runs in real time in the browser, but it could also be a video.

The creation of these scenes is what took most of the time. There was a lot of experimentation and a lot of stuff didn’t make it into the final version. Together with the sound reactive bars, I can say there were two other major accomplishments: one was to finally get my head around quaternions to be able to tween the camera smoothly – I should write another post about that (in the end I wrote about it on Stack Overflow instead) – and the other was to add the words ‘hello’ and ‘how low’ in a way that would fit well with the visuals.

I plan to explore these two topics a bit more in the future. And I definitely want to do more music visualization. Hopefully next time with some rock n’ roll!

After I finished my previous experiment with the Web Audio API I was looking for something else to do with sound. I had this idea of using London Underground’s data and playing a note every time a train left a station. I could assign a different note for each line and the live feed would create random music all day long. So I started checking TfL Developers’ Area and it didn’t take long to realize that my idea wouldn’t be possible. The data does show predicted arrival times for each station, but these are rounded to 30 seconds. If the experiment were to use the data literally, it would stay silent for 30s, then play a bunch of notes at the same time, then go back to silence for another 30s. A friend even suggested randomizing some values in between those 30s, but that wouldn’t be any different from just listening to some random notes chosen by the computer, without any connection to the tube.

OK, that idea was gone, but the data was quite interesting. With the rounded times I could tween the position of the trains between stations. It would be cool to see the trains moving in ‘almost’ real time on the screen, wouldn’t it? Oh wait, someone did it already: Live map of London Underground, by Matthew Somerville. And it is nice, but not really what I had in mind. I wanted more of a cool visualization based on the tube data, rather than an informative/useful map. How could I do something new with this data? Add a third dimension maybe? Three.js was on my list of things to experiment with for a long time and this seemed like the right opportunity. Oh wait, has someone done it already? The only thing I could find was this and it is definitely not what I had in mind. So yeah, green light!

I had everything I needed: train times, latitude, longitude and depth of the stations. Those were coming from many different files, so I stretched my regex skills and created a simple tool with Adobe AIR to parse everything and output a consolidated .json for me. With that I could finally plot some points in space using the Mercator projection. The next step was to create tubes connecting these points and again I was really lucky to find exactly what I needed online. Three.js is an amazing library not only because of what it does, but also because of how it is made. Together with the classes I needed (TubeGeometry and SplineCurve3), I also found the conversation between the authors while they were developing these classes.
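A hedged sketch of that pipeline in Three.js terms (the projection scale, the data shape and the colour are assumptions; SplineCurve3 has since been replaced by CatmullRomCurve3):

```js
// Sketch: place a station in 3D from latitude, longitude and depth (Mercator projection),
// then run a tube along one line's stations. `line` and the scale factors are illustrative.
function stationToVector(lat, lon, depth, scale = 100) {
  const x = (lon * Math.PI) / 180;
  const y = Math.log(Math.tan(Math.PI / 4 + (lat * Math.PI) / 360)); // Mercator y
  return new THREE.Vector3(x * scale, -depth, y * scale);
}

const points = line.stations.map(s => stationToVector(s.lat, s.lon, s.depth));
const curve = new THREE.SplineCurve3(points); // CatmullRomCurve3 in current Three.js
const geometry = new THREE.TubeGeometry(curve, 200, 0.5, 8, false);
const mesh = new THREE.Mesh(geometry, new THREE.MeshBasicMaterial({ color: 0xdc241f }));
```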

One of the biggest challenges was to make the trains follow the spline and sometimes change splines depending on the train’s destination. I feel that my algorithm could be more solid here, but it is working well. The last touches were to add the labels for each station and some ambient sound recorded on the tube.

In my previous post I talked about my attempts at Processing + Typography, but I didn’t post any interactive example. Not because I didn’t want to, but because my sketch uses the OPENGL renderer and it is tricky to publish applets with it. Last night I received a notification about a reply to a post on the Processing forum with some instructions to do just that. Now the applet is published.

Works fine for me. I asked a few friends to test it and it didn’t work for everyone. If it will work for you or not depends on platform and JRE version – and probably the lunar phase and many other things. Please give it try. Source code is also available.