In my last blog I wrote about an anaglyph demo I created for my FSOSS presentation in October. It was part of a series of delayed blogs which I only recently had time to write up. So, in this blog I’ll be proceeding with my next fun experiment: Shadows in WebGL.

Shadows are useful since they not only add realism, but can also provide additional visual cues in a scene. Having never implemented any type of shadows, I started by performing some preliminary research and found that there are numerous methods to achieve this effect. Some of the more common techniques include:

vertex projection

projected planar shadows

shadow mapping

shadow volumes

I chose vertex projection since it seemed very straightforward. After a few sketches, I got a fairly good grasp of the idea. Given the positions of a light and a vertex, the shadow cast (for that vertex) will appear where the line passing through those two points intersects the x-axis. If we had the following values:

Light = [4, 4]

Vertex = [1, 2]

Our shadow would be drawn at [-2, 0]. Note that the y component is zero and would be equal to zero for all other vertices since we’re concentrating on planar shadows.

At this point, I understood the problem well; I just needed a simple formula to get this result. If you run a search for “vertex projection” and “shadows” you’ll find a snippet of code on GameDev.net which provides the formula for calculating the x and z components of the shadow. But if you actually try it for the x component:

It doesn’t work.

When I ran into this, I had to take a step back to think about the problem and review my graphs. I was convinced that I could contrive a working formula that would be just as simple as the one above. So I conducted additional research until I eventually found the point-slope equation of a line.

Point-Slope Equation

The point-slope equation of a line is useful for determining a point on a line given the slope and another point on the line. This is exactly the scenario we have!

y - y1 = m(x - x1)

Where:

m – The slope. This is known since we have two given points on the line: the vertex and the light.

[x1, y1] – A known point on the line. In this case: the light.

[x, y] – Another point on the line which we’re trying to figure out: the shadow.

Since the final 3D shadow will lie on the xz-plane, the y components will always be zero. We can therefore substitute zero for that variable, which gives us:

0 - y1 = m(x - x1)

Now that the only unknown is x, we can start isolating it by dividing both sides by the slope:

(0 - y1) / m = x - x1

Which gives us:

-y1 / m = x - x1

And after rearranging we get our new formula:

x = x1 - y1 / m

But is it sound?

If we use the same values as above as a test:

m = (4 - 2) / (4 - 1) = 2/3

x = 4 - 4 / (2/3) = 4 - 6 = -2

It works!

I now had a way to get the x component for the shadow, but what about the z component? What I had done so far was create a solution for shadows in 2 dimensions. But if you think about it, the 3D problem breaks down into two 2D problems: we just need to use the z components of the light and vertex to get the z component of the shadow.
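Putting the two 2D problems together, the projection can be sketched in a few lines of JavaScript (the function and parameter names are my own):

```javascript
// Project a vertex onto the ground plane (y = 0), treating the x and z
// components as two separate 2D point-slope problems.
function projectToGround(light, vertex) {
  // Slopes of the light-to-vertex line in the xy and zy planes.
  const mx = (light[1] - vertex[1]) / (light[0] - vertex[0]);
  const mz = (light[1] - vertex[1]) / (light[2] - vertex[2]);
  // x = x1 - y1/m, applied per axis; y is always 0 for a planar shadow.
  return [light[0] - light[1] / mx, 0, light[2] - light[1] / mz];
}

const shadow = projectToGround([4, 4, 4], [1, 2, 3]);
```

With the light at [4, 4, 4] and a vertex at [1, 2, 3], this lands the shadow at [-2, 0, 2], matching the 2D example above for the x component.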

Shader Shadows

The shader code is a bit verbose, but at the same time, very easy to understand:
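The original listing isn't reproduced here, but a hypothetical sketch of such a vertex shader (embedded as a GLSL source string; all names are my own, not XB PointStream's) might look like this:

```javascript
// Hypothetical reconstruction of a vertex-projection shadow shader.
// It applies x = x1 - y1/m twice: once in the xy plane, once in the zy plane.
const shadowVertexShader = `
  attribute vec3 aVertex;
  uniform vec3 uLight;
  uniform mat4 uProjection;
  uniform mat4 uModelView;

  void main() {
    // Slopes of the light-to-vertex line in the xy and zy planes.
    float mx = (uLight.y - aVertex.y) / (uLight.x - aVertex.x);
    float mz = (uLight.y - aVertex.y) / (uLight.z - aVertex.z);
    // The projected position sits on the ground plane, so y is 0.
    vec3 shadow = vec3(uLight.x - uLight.y / mx, 0.0, uLight.z - uLight.y / mz);
    gl_Position = uProjection * uModelView * vec4(shadow, 1.0);
  }
`;
```

The object would then be drawn a second time with this shader (in a dark, flat colour) to produce the shadow pass.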

Double Trouble

The technique works, but its major issue is that objects need to be drawn twice. Since I’m using this technique for dense point clouds, it significantly affects performance. The graph below shows the crippling effect of rendering the shadow of a cloud consisting of 1.5 million points: performance is cut in half.

Fortunately, this problem isn’t difficult to address. Since detail is not an important property for shadows, we can simply render the object with a lower level of detail. I had already written a level-of-detail Python script which evenly distributes a cloud across multiple files. This script was used to produce a sparse cloud of about 10% of the original.

Matrix Trick

It turns out that planar shadows can be alternatively rendered using a simple matrix.

This method doesn’t offer any performance increase over vertex projection, but the code is quite terse. More importantly, using a matrix opens up the potential for drawing shadows on arbitrary planes, which is done by generalizing the elements of the matrix.
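As a sketch of the idea, here is the standard planar shadow matrix for the ground plane y = 0 and a point light, written out in JavaScript (row-major, column-vector convention; the helper names are my own). Dividing the transformed point by w lands it on the plane:

```javascript
// Planar shadow matrix for the plane y = 0 and point light L.
function makeShadowMatrix(L) {
  return [
    [L[1], -L[0], 0,    0],
    [0,     0,    0,    0],
    [0,    -L[2], L[1], 0],
    [0,    -1,    0,    L[1]]
  ];
}

// Multiply a 3D point (as a homogeneous column vector) by the matrix,
// then perform the perspective divide by w.
function projectVertex(M, p) {
  const v = [p[0], p[1], p[2], 1];
  const r = M.map(row => row[0] * v[0] + row[1] * v[1] + row[2] * v[2] + row[3] * v[3]);
  return [r[0] / r[3], r[1] / r[3], r[2] / r[3]];
}

const shadow = projectVertex(makeShadowMatrix([4, 4, 4]), [1, 2, 3]);
```

For the light at [4, 4, 4] and vertex [1, 2, 3], this yields the same [-2, 0, 2] as the vertex-projection formula, which is a nice sanity check that the two approaches agree.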

Future Work

Sometime in the future I’d like to experiment with implementing shadows for arbitrary planes. After that I can begin investigating other techniques such as shadow mapping and shadow volumes. Exciting! (:

A couple of weeks ago I gave a talk at FSOSS on XB PointStream. For my presentation I wanted to experiment and see what interesting demos I could put together using point clouds. I managed to get a few decent demos complete, but I didn’t have a chance to blog about them at the time. So I’ll be blogging about them piecemeal for the rest of the month.

The first demo I have is an anaglyph rendering. Anaglyphs are one way to give 2D images a depth component. The same object is rendered at two slightly different perspectives using two different colors. Typically red and cyan (blue+green) are used.

The user wears anaglyph glasses, which have filters for both colours. A common standard is to use a red filter for the left eye and a cyan filter for the right eye. These filters ensure each eye only sees one of the superimposed perspectives. The mind then merges these two images into a single 3D object.

Method

There are many ways to achieve this effect. One method which involves creating two asymmetric frustums can be found here. However, you can also create the effect by simply rotating or translating the object. It isn’t as accurate, but it’s very easy to implement:

// ps is the instance of XB PointStream
// ctx is the WebGL context
ps.pushMatrix();
// Yaw camera slightly for a different perspective
cam.yaw(0.005);
// Create a lookAt matrix. Apply it to our model view matrix.
ps.multMatrix(M4x4.makeLookAt(cam.pos, V3.add(cam.pos, cam.dir), cam.up));
// Render the object as cyan by using a colour mask.
ctx.colorMask(0,1,1,1);
ps.render(pointCloud);
ps.popMatrix();
// Preserve the colour buffer but clear the depth buffer
// so subsequent points are drawn over the previous points.
ctx.clear(ctx.DEPTH_BUFFER_BIT);
ps.pushMatrix();
// Restore the camera's position for the other perspective.
cam.yaw(-0.005);
ps.multMatrix(M4x4.makeLookAt(cam.pos, V3.add(cam.pos, cam.dir), cam.up));
// Render the object as red by using a colour mask.
ctx.colorMask(1,0,0,1);
ps.render(pointCloud);
ps.popMatrix();

Future Work

I hacked together the demo just in time for my talk at FSOSS, but I was left wondering how much better the effect would look if I had created two separate frustums instead. For this I would need to expose a frustum() method for the library. I can’t see a reason not to add it considering this is a perfect use case, so I filed a ticket!

A few days ago I noticed the turbulent point cloud demo for ro.me was no longer working in Firefox. Firefox now complains that the array being declared is too large. If you look at the source, you’ll see the entire point cloud is being stuffed into an array, all 6 megabytes of it. Since it no longer works in Firefox, I thought it would be neat to port the demo to XB PointStream to get it working again.

Stealing Some Data…

I looked at the source code and copied the array declaration into an empty text file.

var array = [1217,-218,40,1218,-218,37,....];

So I had the data, which was great, but I needed it to be in a format XB PointStream could read. I had to format the vertices to look something like this:

1217 -218 40
1218 -218 37
...

Conversions

Using JavaScript to do the conversion made the most sense, but I first had to split up the file containing my array declaration so Firefox could actually load it. After some manual work, I had 6 files, each with its own smaller array literal.

I then wrote a JavaScript script which loaded each array and dumped the formatted text into a web page. I ran my script and copied the output several times until I had the entire file reconstructed as an .ASC file.
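The conversion step itself is simple; a minimal sketch (the function name is my own) that walks a flat vertex array and emits one ASC-style line per point:

```javascript
// Convert a flat [x, y, z, x, y, z, ...] array into ASC-style text,
// one whitespace-separated point per line.
function toASC(array) {
  const lines = [];
  for (let i = 0; i < array.length; i += 3) {
    lines.push(array.slice(i, i + 3).join(" "));
  }
  return lines.join("\n");
}

const asc = toASC([1217, -218, 40, 1218, -218, 37]);
```

Feeding in the first six values from the demo's array produces exactly the two sample lines shown above.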

Adding Turbulence

Once I had the point cloud loaded in XB PointStream, I needed to add some turbulence. I could have used the shader which the original demo used, but I found a demo by Paul Lewis which I liked a bit better. The demo isn’t quite as nice as the original, but given more time I could incorporate the original shader as well to make it just as good.

On Wednesday I posted a video on YouTube of Firefox rendering Radiohead’s “House of Cards” point cloud data in WebGL. I’m now releasing the code for anyone to play with RIGHT HERE. If you download it, make sure to read the README file!

I tested the demo on Chromium and found that it didn’t work, so I’ll be debugging that over the weekend. If you find any other issues with the code or instructions or if you make a neat visualization, let me know!

CSV Parser

First I downloaded the House of Cards data and saw it was in CSV format. XB PointStream already has the architecture set up for user-defined parsers, so I was able to write one without changing the library itself.
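The core of such a parser is just turning comma-separated lines into a typed array of coordinates. A minimal sketch of that step (the actual XB PointStream parser interface involves more than this, so treat the name and shape here as illustrative only):

```javascript
// Parse "x,y,z" CSV lines into a Float32Array of coordinates,
// the kind of flat buffer WebGL vertex attributes expect.
function parseCSVPoints(text) {
  const coords = [];
  text.trim().split("\n").forEach(function (line) {
    line.split(",").forEach(function (token) {
      coords.push(parseFloat(token));
    });
  });
  return new Float32Array(coords);
}

const verts = parseCSVPoints("1,2,3\n4,5,6");
```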

User-defined Shader

To make things interesting I wrote a simple shader which changes the positions of the points and colors while the video plays. Again, I didn’t need to change the library since user-defined shaders are supported as well.

Performance Issues

When I first began rendering the video, I was using a MacBook Pro 3,1 (2GHz, 2GB RAM, GeForce 8600M GT 128MB), but Firefox began chugging after about 400 frames. Luckily my supervisor (Cathy Leung) saved me by giving me a new MBP 8,2 (2GHz, 8GB RAM, AMD Radeon HD 6490M 256MB). With this new system I was able to render it in real-time without any major issues.

There are 2100 frames of Thom Yorke singing which totals 880MB, so you can’t stream it online :c However, I’ll place all my work on Github if you’d like to tinker with it. Keep an eye on my blog when I make it available.

A simple way to increase performance when rendering point clouds is using levels of detail (LOD). If the camera is far from an object, a lower detailed version of that object can be rendered without much loss of visual detail. As the camera moves closer, higher fidelity versions can be drawn.

A while ago I thought about adding this functionality to XB PointStream and soon realized that the library already supports it! The library can load different point clouds in the same canvas, which allows users to split a cloud into a series of files and conditionally render them. When I had this idea I was too busy with other work, so I had to put it off.

Yesterday I finally sat down and began working out the details to get a demo up and running. I needed two things. First, I needed to evenly distribute the points in a cloud. All of the clouds in my repository have been scanned linearly or in blocks, which doesn’t lend itself well for LOD purposes. For LOD, each cloud needs to represent a coarse level version of the entire object. Second, I needed to split up the cloud into several files.

I decided to start with a simple ASCII point cloud format, ASC. The file is organized something like this:

1.13 6.86 7.81 0 128 255
7.27 9.59 7.29 0 128 255
...
...

Using Some Python

I don’t know Python, but I knew it would be a good choice for this task. My plan was to load the input file into an array, randomly select indices from the array and write them out to the output file.

Soon after I got to work on writing my script, I saw that Python’s random module has a shuffle() function for lists. This saved me quite a bit of work, so I was happy. I then hacked together the rest of the script. If you’re a Python developer, let me know if there are ways I can fix up the code.
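The shuffle-and-split idea itself is tiny. Since the original script isn't shown here, this is a sketch of the same approach in JavaScript (the actual script is Python, and these names are my own): shuffle the lines once, then deal them round-robin into N chunks so each chunk is an even sample of the whole object.

```javascript
// Shuffle the cloud's lines (Fisher-Yates), then deal them into numFiles
// chunks; each chunk then covers the whole object at a coarser density.
function splitCloud(lines, numFiles) {
  const shuffled = lines.slice();
  for (let i = shuffled.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    const tmp = shuffled[i];
    shuffled[i] = shuffled[j];
    shuffled[j] = tmp;
  }
  const chunks = [];
  for (let f = 0; f < numFiles; f++) chunks.push([]);
  shuffled.forEach(function (line, i) {
    chunks[i % numFiles].push(line);
  });
  return chunks;
}

const chunks = splitCloud(["p1", "p2", "p3", "p4", "p5"], 2);
```

Each chunk would then be written out as its own ASC file.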

I tested this first with a million points and didn’t see much performance gain. This was a bit disappointing and suggests that there are other bottlenecks in the library. I decided instead to try the largest file I have, the visible human, which is about 3.5 million points. I fed the cloud into my script and split it up into 10 files. When testing this I got a reasonable FPS gain. When rendering all the clouds I get ~20FPS and when I zoom out and render only 1 cloud, I get ~60FPS.

In Conclusion

If you’re going to render a large data set with XB PointStream, consider using my script to split it up into many files to increase rendering performance.

Since I’ve already written a few blogs about WebGL’s readPixels and because developers seem to find my page mostly by this keyword, I decided to help clarify a recent issue I found.

In some of my WebGL scripts I have a feature which allows users to convert 3D images to 2D (see here). The script does this simply by making a call to readPixels.

This used to work until browsers (namely WebKit and Chrome) began implementing the preserveDrawingBuffer option. This is an option set when the WebGL context is acquired and as its name suggests it preserves drawing buffers between frames.

What this means is that if preserveDrawingBuffer is false/off (which it is by default), the browser is free to discard the depth and colour buffers once a frame has been composited. Trying to call readPixels in this state will result in an array of zeroed-out data.

If you’re planning on calling readPixels, you’ll need to turn on this option when you get your WebGL context.
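A minimal sketch of what that looks like (browser-only code, so it's wrapped in a function here; the function name is my own):

```javascript
// Request the WebGL context with preserveDrawingBuffer enabled so the
// colour buffer is still intact when readPixels is called.
function readCanvasPixels(canvas) {
  const gl = canvas.getContext("experimental-webgl", { preserveDrawingBuffer: true });
  // ... render the scene here ...
  const pixels = new Uint8Array(canvas.width * canvas.height * 4);
  gl.readPixels(0, 0, canvas.width, canvas.height, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
  return pixels;
}
```

Note that the attributes only take effect on the first getContext call for a given canvas; asking for them later has no effect.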

A couple of weeks ago it was my turn to demonstrate to the other researchers at the Seneca CDOT lab what I had been working on. I gave them a quick presentation of XB PointStream, a WebGL library for rendering point clouds in web pages.

After watching my presentation, Mike Hoye expressed interest in extracting data from the visible human project video and feeding it into the library—creating an interactive version of the video in 3D. The idea seemed fascinating and I anticipated seeing the results. It wasn’t long before Mike casually mentioned he finished everything. He showed me his demo and I was extremely impressed.

Slider Slicer

After playing around with the demo for a bit I decided it needed at least one change: a slider to slice the subject just like the video. I added a jQuery UI slider which now allows users to create cross sections of the point cloud. I then made some changes to the camera so it always focuses on the section which has been sliced ‘out’.

Something Completely Different

This demo is interesting because it’s different in two ways. First, the point cloud has contents. All my other clouds are actually hollow, while this one has a bunch of ‘meat’ within it.

Second, the file size. Mike mentioned the data set had been scaled down from several gigabytes. This sparked my interest since none of my current point clouds surpass 50MB. If I manage to solve the problem of dynamically loading sections of point cloud files, I could start experimenting with loading the entire 10GB cloud.

Making it Faster

The demo is sluggish right now since it stupidly renders 3.5 million points/frame. However, this can be fixed. Because the user clipping planes work on the Y-axis and because the cloud loads along the Y-axis, it would be possible to do coarse-level culling on sections of the cloud if it was pre-cut along this axis. For example, if I had 5 or so cross section ‘chunks’ of the cloud and one of the clipping planes passed the bounds of a cloud, that section could be culled from rendering entirely. When I have time (ha,ha) I’ll get around to doing that.
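The culling test itself would be cheap. As a rough sketch (all names are my own, assuming each pre-cut chunk stores its Y bounds): keep a chunk only if its bounds overlap the range left between the clipping planes.

```javascript
// Coarse-level culling: skip any chunk whose Y range lies entirely
// outside the slab between the two clipping planes.
function visibleChunks(chunks, clipMinY, clipMaxY) {
  return chunks.filter(function (c) {
    return c.maxY >= clipMinY && c.minY <= clipMaxY;
  });
}

const allChunks = [
  { minY: 0,  maxY: 10 },
  { minY: 10, maxY: 20 },
  { minY: 20, maxY: 30 }
];
const toRender = visibleChunks(allChunks, 12, 18);
```

Here only the middle chunk survives, so two thirds of the points would never reach the renderer.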

Special Thanks

A huge thanks to Mike Hoye who on his own time performed some magic and got the data out of the video. This demo would not have been possible without him. I’m looking forward to some higher fidelity point clouds in the future!