Blog

I gave a talk at WebGL Camp in Switzerland; it was really nice to meet other people doing WebGL work. I saw amazing projects there, like Quake 4 in the browser and the Nokia demo — really interesting stuff. I talked about how shaders are generated on sketchfab.com, and you can find my slides here.

Nouvelle Vague is a project I worked on for ultranoir. It offers a poetic and interactive real-time 3D experience based on Twitter. In a minimalist and surrealist world, tweets are carried by different flying objects from the edges of the scene to the center, where the ultranoir black statue stands (tweets are retrieved from your selected hashtag).

The flying objects are hot-air balloons, biplanes, UFOs, zeppelins and balloons. Each has its own speed and specific path. The user can select any of these vehicles to enjoy the pilot's view and explore the scene. In this post I will explain how we made it.

Scene

We wanted vehicles to come from the mountains and sky to the statue and leave a tweet. We did not want to manage vehicle collisions, so after a while we decided to organize the scene and the vehicle animations like the picture below.

The idea is to prevent vehicles from penetrating each other. For this we constrained each vehicle to a 'row' in which its animation plays (of course the animation must be set up to fit inside that virtual row). Doing this minimized collisions between vehicles, but we knew it would not be 100% perfect and sometimes you could see artifacts near the statue.

Vehicle animations were played in loop mode. To add a bit of randomness, I added a random delay at the beginning of each new loop so the vehicles would not stay synchronized (eyes are very good at detecting such patterns).
Animations were made in Blender, and to make them work with osgjs I had to update it to support keyframe containers from osg; it also means that the osgjs plugin for osg can now export osgAnimation data. Why use this workflow? In all my projects I use OpenSceneGraph as a Swiss Army knife, then I export the data from osg to osgjs.
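The loop-with-random-delay idea can be sketched like this. This is a minimal illustration, not the actual osgjs animation API; the `Vehicle` fields and `startAnimation` callback are assumptions.

```javascript
// Hedged sketch: restart a vehicle's looping animation with a random extra
// delay so vehicles drift out of phase instead of looping in lockstep.
// 'vehicle' is a plain object here, not an osgjs node.
function updateVehicle(vehicle, now) {
  if (now >= vehicle.nextStart) {
    vehicle.startAnimation(now);
    // schedule the next loop: full loop duration plus a random pause
    vehicle.nextStart = now + vehicle.loopDuration + Math.random() * vehicle.maxDelay;
  }
}
```

Called once per frame with the current time, each vehicle replays its loop at a slightly different moment.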

Drawing Texts

The text and logo on the ground were displayed using a distance map. When you have a closed vector shape like text or a logo, it's more efficient to use a distance map than classical texture mapping: you can use a smaller texture and still get a better result than with a plain bitmap. To do this you convert your original texture into a new one (the distance map), then use a 'special' shader to display it in real time. Below you can see pictures from the Valve paper; both images are at the same resolution.

(Pictures from valve paper)

The only problem I had was with the big text/logo in the center of the scene: even with a distance map I had aliasing, because of the static 'edge size' in the shader.

To fix this, I adapted the 'edgeSize' depending on the camera position; it's more a hack to hide the aliasing than a real fix.
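One way to picture the hack: compute the smoothing band width on the CPU from the camera distance and feed it to the shader as a uniform. The mapping and constants below are illustrative guesses, not the values used in the project.

```javascript
// Hedged sketch of the 'edgeSize' hack: widen the smoothstep band of the
// distance-field shader when the camera is close (more smoothing hides
// aliasing), and narrow it when the camera is far.
function edgeSizeForDistance(cameraDistance, minEdge, maxEdge, nearDist, farDist) {
  // normalize the camera distance into [0, 1] over [nearDist, farDist]
  var t = (cameraDistance - nearDist) / (farDist - nearDist);
  t = Math.min(1, Math.max(0, t));
  // close camera -> maxEdge, far camera -> minEdge
  return minEdge + (1 - t) * (maxEdge - minEdge);
}
```

The result would then be uploaded each frame, e.g. as a `uniform float edgeSize` consumed by the distance-map shader's smoothstep.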

Camera

We implemented two cameras: one at the center of the scene to watch the statue and tweets, and one in each vehicle. The vehicle camera was trickier because vehicles come in and go back with the tweets. In the beginning I had a simple lookat camera located in the vehicle but aimed at the statue. It worked but was not really interesting; we wanted to be in the plane and see the looping. So I changed the camera to an FPS one. To tune the cameras and let the artist configure them, I added offsets connected to HTML sliders.
Good, but vehicles came in and went back after delivering their tweets, so we were seeing an empty screen (the mountain) when a vehicle returned to its starting position. We resolved this by switching from the 'in vehicle' camera to the 'look at the statue from the vehicle' camera.
Finally, for automatic mode we improved the camera to select the best one available. That meant checking each vehicle's animation time and selecting a vehicle whose time was in a 'good' range. Of course we had to tune the range for each vehicle's animation. You can see below the different events in the timeline of a vehicle.

Delay Random: random time before the vehicle animation starts.

Leave tweet: time when the tweet box leaves the vehicle and the transition animation plays.

Camera cut: when the vehicle starts to go back, we cut to the camera watching the statue and tweets.

Camera invalid: the vehicle can't be selected when the camera switches to a new one.
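The automatic-mode pick described above could be sketched as follows. The field names (`animationTime`, `goodRange`, `cameraInvalidTime`) are assumptions made up for this illustration.

```javascript
// Sketch of the automatic-mode camera pick: choose a vehicle whose
// animation time falls inside its hand-tuned 'good' range and that has
// not yet reached its 'camera invalid' window.
function pickBestVehicle(vehicles) {
  for (var i = 0; i < vehicles.length; i++) {
    var v = vehicles[i];
    var t = v.animationTime;
    if (t >= v.goodRange[0] && t <= v.goodRange[1] && t < v.cameraInvalidTime) {
      return v;
    }
  }
  return null; // no candidate: fall back to the statue camera
}
```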

Shadow

The ground was a plane, so I took advantage of this by using a flat textured quad that followed the vehicle's position but stayed at 0 in z. The shadow textures were generated by the artist, converted to distance maps, and finally used on the quad. We used distance maps for those textures because they gave more control, for example over the blur of the edge. It worked for most vehicles, except the plane, because of its animations (the shadow would not follow the plane's rotation). To fix this I used a matrix that projects the shape of the plane onto the ground. This method meant no soft edges for the shadow, plus some artifacts due to blending. The deadline made us postpone a proper fix.
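With a purely vertical light, a 'project onto the ground' matrix of the kind mentioned above reduces to zeroing the z component. This is a minimal sketch of that idea, not the actual matrix used in the project (a tilted light direction would need a full planar-projection matrix).

```javascript
// Column-major 4x4 (the layout WebGL expects) that flattens geometry
// onto the ground plane z = 0, assuming a vertical light.
var flattenToGround = [
  1, 0, 0, 0,
  0, 1, 0, 0,
  0, 0, 0, 0,  // z column zeroed: every vertex lands on the ground
  0, 0, 0, 1
];

function transformPoint(m, p) {
  // column-major mat4 * vec3 with w = 1
  return [
    m[0] * p[0] + m[4] * p[1] + m[8]  * p[2] + m[12],
    m[1] * p[0] + m[5] * p[1] + m[9]  * p[2] + m[13],
    m[2] * p[0] + m[6] * p[1] + m[10] * p[2] + m[14]
  ];
}
```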

Transitions

To move the tweets from the vehicles to the statue, we needed a transition. I wanted to try dissolving the tweet box into a lot of smaller cubes and then moving them as if carried by the wind to the statue.
I first set up the effect on this page and then improved it with a fake wind like in demojs-fff. To finish, I added a simple fade out when the cubes are near the statue, and voilà.
The effect was not optimized: I used one 3D model per cube. It would be better to use pseudo-instanced cubes, or to pack all the cubes into one model and pass the transforms to the shader with attributes or uniforms. Again, time…

Clouds

I wanted to try volumetric clouds on this project, so I tried different methods:

Mega particles (youtube video). Because of the shower-door effect I dropped this method after a few tries.

3D volume textures. I started on this but needed more time to implement it. The idea was to generate a 3D texture based on a noise function, then draw slices in real time to represent the volume. I will try to release an example later.

Particle based. As with particles, you draw different textured sprites with transparency. In this case you have to sort the sprites by distance from the camera and render with blending enabled. I used this method for lack of time, and it worked well enough. In the screenshots below you can see some tests tuning the cloud parameters.
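The sorting step is the classic back-to-front ordering needed for alpha blending. A minimal sketch (the `sprite.position` field is an assumption):

```javascript
// Sort cloud sprites back to front relative to the camera: with blending
// enabled, the farthest sprites must be drawn first so nearer ones
// composite correctly over them.
function sortBackToFront(sprites, cameraPos) {
  function dist2(p) {
    var dx = p[0] - cameraPos[0];
    var dy = p[1] - cameraPos[1];
    var dz = p[2] - cameraPos[2];
    return dx * dx + dy * dy + dz * dz; // squared distance is enough for ordering
  }
  return sprites.slice().sort(function (a, b) {
    return dist2(b.position) - dist2(a.position); // farthest first
  });
}
```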

Tools

One of the most important aspects of this project was how we set up and tuned effects. I wrote scripts to export 3D models and generate distance maps as usual, but the new tool I wrote for this project generated sliders automatically from shader parameters. To do this, I wrote functions able to inspect the variables and types in shaders and, from that information, create HTML slider elements that communicated directly with the shaders. To let artists focus on the desired effect without worrying about losing their work, I saved the values with localStorage. When the artists were happy with a result, they mailed me the values and I added them as defaults. This process could be improved in the future with undo and saved parameter sets, but even without that it was really convenient to let the artists work this way. You can see in the screenshot below the sliders used to fine tune the rendering effects.
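The uniform-to-slider step might look like the sketch below. The regex, default range, and storage shape are all illustrative assumptions; the real tool presumably handled more types than `float`.

```javascript
// Hypothetical sketch of the slider generator: scan a shader source for
// float uniforms and build one slider descriptor per uniform, restoring
// any previously saved value from a localStorage-like key/value store.
function uniformsToSliders(shaderSource, storage) {
  var re = /uniform\s+float\s+(\w+)\s*;/g;
  var sliders = [];
  var m;
  while ((m = re.exec(shaderSource)) !== null) {
    var name = m[1];
    var saved = storage[name]; // in the browser: localStorage.getItem(name)
    sliders.push({
      name: name,
      value: saved !== undefined ? parseFloat(saved) : 0.0,
      min: 0.0, // illustrative default range
      max: 1.0
    });
  }
  return sliders;
}
```

Each descriptor would then be turned into an `<input type="range">` whose oninput handler writes both the uniform and localStorage.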

We had a meeting a few months before the demojs event in Paris to organize it. I worked on the intro announcing the event, starting 10 days before the deadline. Four of us made this intro: Guillaume Lecollinet, who helped with design and CSS; Ulrick for the music; and Mestaty for the 3D models (both are from the FRequency demo group); I worked on the code. If you are interested in particles you really need to read this blog. This guy does awesome things.

Particles again

At the beginning I did not really know what I wanted to create. I wanted to work on particles, but with more complexity than my previous toy. In the end I made an intro with only particles; as a consequence, the entire intro uses the same shader. I will describe the following techniques I used in the intro:

Verlet physic integration

Spawning particles

Distance map

Velocity field

Morphing of 3d models

Verlet Integration

Verlet integration, in a nutshell, is a numerical method used to integrate Newton's equations of motion. There is a good blog with examples of how to use it. In WebGL we can't render to a floating-point texture. Actually there is an extension for it, but I wanted the intro to work in most browsers with WebGL, so I did not use it. The consequence is that particle coordinates have to be encoded in a specific format in RGBA pixels.

In my previous particle toy I used 16-bit fixed point to encode coordinates, but this time I wanted to improve it and try 24 bits for more precision. I also encoded more information in the pixels, like signed distance, particle life, and material id (picture above left). In WebGL there are no multiple render targets, so I had to draw the scene three times to compute the particle positions, once each for x, y and z, selecting the dimension with a uniform.
Finally, computing a 'next' frame (3 textures) required the 'current' frame (3 textures) and the 'previous' frame (3 textures), so in the end I needed 9 textures just to have the Verlet physics running, without even controlling the motion. For that I used other textures I will describe below. Texture size: to not hurt my GPU too much, I fixed the texture size to 512×512, meaning 262144 particles.
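The 24-bit encoding can be sketched as packing one coordinate across the r, g, b bytes of a pixel. This assumes coordinates normalized to [0, 1); the exact layout used in the intro (and where life/material id went) may differ.

```javascript
// Pack one normalized coordinate into 24 bits spread over three bytes,
// as a fragment shader would write them into an RGBA8 texture.
function encode24(x) {
  var v = Math.floor(x * 0xFFFFFF); // 24-bit fixed point
  return [(v >> 16) & 0xFF, (v >> 8) & 0xFF, v & 0xFF];
}

// Reassemble the three bytes back into a normalized coordinate,
// as the next simulation pass would when reading the texture.
function decode24(rgb) {
  var v = (rgb[0] << 16) | (rgb[1] << 8) | rgb[2];
  return v / 0xFFFFFF;
}
```

The round-trip error is bounded by 1/2^24, versus 1/2^16 for the earlier 16-bit scheme.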

Spawning particles

To determine the lifespan and position of new particles, I used the uv range of the particles to distribute them in space. It's not really elegant or practical for bigger projects/shapes. For example, the equalizer scene was done by allocating particles on the plane where the equalizers were. Basically there is a 0.25 range in 'u' per equalizer bar, and I limited v to 0.5. So with 0.25 in u for each of the 4 bars and v limited to 0.5, that makes 131072 particles allocated for the equalizers, and the other 131072 are used for the 3D models. Next time I would like to try a 'mesh emitter' or something more useful than doing it by hand.
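Spelled out, the allocation arithmetic on the 512×512 particle texture is:

```javascript
// The equalizer-scene budget from the post: 4 bars, each 0.25 wide in u,
// all limited to v in [0, 0.5], carved out of a 512x512 particle texture.
var texWidth = 512, texHeight = 512;
var totalParticles = texWidth * texHeight;                   // 262144
var equalizerParticles = totalParticles * (0.25 * 4) * 0.5;  // half the texture
var modelParticles = totalParticles - equalizerParticles;    // the rest, for the 3D models
```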

Distance Map

What is a distance map? You can read this paper from Valve, which explains how it works.
A distance map is a really useful tool to control particles. In the intro I used a texture that encodes the distance map and its gradient (the vector that tells you which direction to take to reach the nearest point on the shape). For this I created a tool (DistanceMapGenerator), then computed the gradient from the distance map. Finally I built a texture that contains both pieces of information. During the position computation I use the signed distance at the particle's position to make it fit the shape I want, e.g.:
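Computing the gradient from the distance map can be done with central differences, as in this sketch (a CPU-side illustration of what the DistanceMapGenerator step would do; the real tool's details are not reproduced here):

```javascript
// Central-difference gradient of a flat distance field 'dist' of size w x h.
// The gradient points towards increasing distance, so its negation points
// towards the nearest point on the shape.
function gradientAt(dist, w, h, x, y) {
  function d(i, j) {
    // clamp to the border so edges still get a finite difference
    i = Math.min(w - 1, Math.max(0, i));
    j = Math.min(h - 1, Math.max(0, j));
    return dist[j * w + i];
  }
  return [
    (d(x + 1, y) - d(x - 1, y)) * 0.5,
    (d(x, y + 1) - d(x, y - 1)) * 0.5
  ];
}
```

Distance and gradient would then be packed together into one texture, e.g. distance in one channel and the two gradient components in two others.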

This technique was used for most of the motions/shapes I wanted the particles to fit. I tried to manipulate particles manually, but it was too complex and I was not able to do what I wanted. Distance maps are much easier.

Velocity field

To add some perturbation, like a 'procedural wind', I used the MathGL/UDAV tool. The idea was to find a nice formula, usable in the shader, that produces nice motion. I used UDAV to display the vector field of each candidate formula. Once I was happy with the vector field, I added some variation in real time depending on the time. The tool was not really convenient, and maybe next time I will write something to help me with this. Once the formula was selected, I used a lookup to get the vector at the particle's position. It looks like this:
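As an illustration of the kind of formula prototyped this way, here is a toy time-varying vector field and the lookup step that advects a particle with it. The formula itself is made up for this sketch; the intro's actual wind formula is not reproduced here.

```javascript
// Illustrative 'fake wind': a swirl-like vector field whose phase drifts
// with time. In the intro this would live in the position shader.
function wind(x, y, t) {
  return [
    Math.sin(y * 3.0 + t),
   -Math.cos(x * 3.0 + t)
  ];
}

// One advection step: look up the field at the particle's position and
// push the particle along it.
function advect(p, t, dt) {
  var v = wind(p[0], p[1], t);
  return [p[0] + v[0] * dt, p[1] + v[1] * dt];
}
```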

3D models

At the end of the intro I used morphing between different 3D models (the Firefox logo, and the abstract model made of cubes). To use those models with particles I first had to convert them into a format suitable for the particle system, i.e. textures that encode the model's vertex positions as RGB pixels. The particle system used 262k particles, but the models used at most 131k vertices (remember, 131k particles were allocated for the equalizers). So we have 131k particles to display, morph and animate our 3D models. The morphing between the different shapes is a lerp between positions (finalVertex = model0*t + model1*(1.0-t)). To add some perturbation to the motion we still apply the 'fake wind' during the animation. If you want to check out the tool that builds the vertex-to-texture format used by the particle system, look here. It's a plugin for OpenSceneGraph.

Music

The music was composed by Ulrick from FRequency, who used their own tool to export pattern events as a C++ header. I made a little script to convert the result to JSON, then injected the event data into timeline.js. Timeline.js was great, but I needed to patch it to support callbacks and an external clock — the one coming from the music.

Improvements

There is a lot of stuff I would have liked to do better, but 10 days was too short. So I dropped lighting on particles, shadows, a mesh spawn emitter, post-process effects, and smoke simulation with SPH. Maybe next time I play with particles I will be able to add some of those elements.

Over the last few months, I have been working on a WebGL demo for Firefox. The objective was to create a demo that showcases WebGL technology. I am currently working on a 3D framework called osgjs, so the application uses this JavaScript library. osgjs is a JavaScript implementation of OpenSceneGraph and helps manage 3D scenes and WebGL state. You can get more information on the website.

We did different experiments before ending up with globetweeter; I kept some of them for history :)

Jurassic Park

I started off by creating a file system browser similar to the 3D file system seen in Jurassic Park. I had to figure out the type of camera best suited to the system.

The idea is simple. Let's say a user selects item B. When selected, the camera moves from its current viewpoint to the chosen item. The position of the camera (C) is in orbit relative to the selected item. Basically I used a lookat matrix from the camera position (C) to the target item. To create the camera motion when moving from one item to another, I interpolated the target position (from A to B) and generated a rotation around this interpolated point (X) during the animation. I added some constraints, like the distance from the target point and some limits on the rotation, to keep the camera position in range (as if we were seeing the item in third person). Check out this experiment (use the 'del' key to go back to the previous level).

Twitter

This idea ended up being too geeky, so we tried something more popular with more hype around it. And thus was born the idea of displaying tweets in 3D.
A first try was to iterate on something like TweetDeck, but we wanted something more responsive and with eye-candy features… The first ugly experiment was to render tweets in a canvas and use them as textures in 3D. We ended up dropping this idea and instead decided to show tweets geolocalized on the Earth.

The last idea we had is the current incarnation of the globe tweeter. To make the globe I used 3 data files from Natural Earth Data.

As you can see, those data are flat and need to be projected onto a sphere. Before projecting them, however, I had to tessellate the triangles in order to have enough vertices to project a clean shape onto the sphere. For this I created a tool called 'grid'. It tessellates the input shape with a grid; it's a kind of boolean union operation.

Above on the left, you can see the white model, which is the original '110m admin 0 countries.shp'. The black model is the same model but tessellated a bit more to fit more closely on the sphere. On the right is the model (grid) used to tessellate the original '110m admin 0 countries.shp'. The idea is to add subdivisions in the height section of the model.

Once the data was subdivided enough, I created a tool to project each vertex onto a sphere using the standard WGS84 projection. You can see a WebGL version of the projected model by clicking on the picture.
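The core of such a projection, for the simple spherical case, is the usual latitude/longitude to Cartesian conversion sketched below. The real WGS84 ellipsoid additionally flattens the polar axis slightly; that refinement is omitted here.

```javascript
// Project a lat/lon vertex (degrees) onto a sphere of the given radius.
// z is 'up' through the poles, matching the globe orientation.
function latLonToXYZ(latDeg, lonDeg, radius) {
  var lat = latDeg * Math.PI / 180;
  var lon = lonDeg * Math.PI / 180;
  return [
    radius * Math.cos(lat) * Math.cos(lon),
    radius * Math.cos(lat) * Math.sin(lon),
    radius * Math.sin(lat)
  ];
}
```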

Once the data below was ready, we selected a nice color for each model. In the demo I drew the globe in two passes: the first pass drew the back faces of '110m admin 0 countries' with a 'back color', and the second drew the front faces with a 'front color'. This was necessary to get the transparency of the globe right with the 'One Minus Src Alpha' blending mode.

The final result looks like this:

SceneGraph representation

Wave

To show some detail about Twitter activity, I set up a simple wave physics simulation that produces ripples where tweets appear. The algorithm used to produce the waves is explained here. To accomplish this I used two small hidden canvases with a size of 128×64; I used small canvases because the computation is done in JavaScript and can be expensive. The waves were updated every 1/30 of a second. The update function performed the following operations:

Convert tweet locations into wave sources in the canvas.

Do the physics computation and store the result into the current canvas.

Upload the current canvas as a texture to use in the vertex shader.

The vertex shader uses this texture as a heightmap. To better understand how the heightmap works, you can see here the original model without waves.
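The physics step is the classic two-buffer wave propagation scheme; a minimal sketch of one update, under the assumption that the linked algorithm is the usual "neighbor average times two minus previous, damped" formulation:

```javascript
// One step of the 2D wave simulation behind the tweet ripples.
// 'current' and 'previous' are flat height buffers of size w x h
// (e.g. 128 x 64, the hidden canvas size); borders are left at zero.
function waveStep(current, previous, w, h, damping) {
  var next = new Float32Array(w * h);
  for (var y = 1; y < h - 1; y++) {
    for (var x = 1; x < w - 1; x++) {
      var i = y * w + x;
      var neighbors = current[i - 1] + current[i + 1] + current[i - w] + current[i + w];
      // average of neighbors * 2 minus the previous height, then damp
      next[i] = (neighbors / 2 - previous[i]) * damping;
    }
  }
  return next; // becomes 'current' on the following step
}
```

Per frame, the result would be written back into the hidden canvas and uploaded as the heightmap texture.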

This shader is applied on a regular grid model projected onto the sphere, as explained before, but this time the grid has a higher resolution. Some of you will not see the relief of the waves because some WebGL implementations do not expose texture units in the vertex shader. As a workaround, I made another shader that does not move the vertices; instead it only changes their color. You can read more about this issue on the ANGLE project.

Yes, it's a bit sad; I only came across this issue recently… :( In conclusion, this effect works well but takes too much CPU in JavaScript/canvas; I should have tried a different, less CPU-intensive effect. Have a look at this video if you can't see the waves' relief.

Tweets

Tweets are displayed as the avatar image on a simple quad, oriented and positioned on the sphere from latitude/longitude. To add a nice border around the image I used a canvas blending operation with the following image.

Finally, to get a nice animation when a tweet appears and disappears, I used an EaseInQuad function for the color and an EaseOutElastic for the scale.

Zooming towards the Earth made tweets really huge relative to the screen. To prevent this, I introduced a scale factor that depends on the camera altitude. The full code to update a tweet looks like this:
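The pieces of that update can be sketched as below. The easing formulas are the standard ones; the altitude mapping and its constants are illustrative assumptions, not the demo's actual code.

```javascript
// Easing used for the tweet pop animation: quadratic ease-in for color,
// elastic ease-out for scale. t is normalized animation time in [0, 1].
function easeInQuad(t) { return t * t; }

function easeOutElastic(t) {
  if (t === 0 || t === 1) return t;
  var p = 0.3; // period of the elastic oscillation
  return Math.pow(2, -10 * t) * Math.sin((t - p / 4) * (2 * Math.PI) / p) + 1;
}

// Shrink tweets as the camera descends below a reference altitude so they
// never fill the screen. The linear mapping is an illustrative guess.
function tweetScale(baseScale, cameraAltitude, referenceAltitude) {
  return baseScale * Math.min(1, cameraAltitude / referenceAltitude);
}
```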

NodeJS

The server responsible for sending tweets to the clients is written with nodejs. I used the twitter-node, socket.io, and express modules to build it. The code is really short, so you can have a look at the server directly. You can get the server code here and improve it :) A big huggy to proppy, who bootstrapped the nodejs server \o/

Stats

The first graph shows the number of connections per day; there is a big spike when the news broke. The second graph shows the same data at a smaller scale, and the last graph shows the cumulative number of connections.