Andor Salga » webgl
A journey into recursive dreams
https://asalga.wordpress.com
Engage3D Hackathon Coming Soon!
https://asalga.wordpress.com/2012/12/08/engage3d-hackathon-coming-soon/
Sat, 08 Dec 2012

A month ago, over Google chat, Bill Brock and I pitched our idea to develop an open source, 3D, web-based videoconferencing system for the Mozilla Ignite Challenge. Will Barkis from Mozilla recorded and moderated the conversation, then sent it off to a panel of judges. The pitch was for a slice of the $85,000 being doled out to the winners of the Challenge.

After some anticipation, we got word that we were among the winners. We would receive $10,000 in funding to support the development of our prototype. Our funding will cover travel expenses, accommodations, the purchasing of additional hardware and the development of the application itself.

We will also take on two more developers and have a hackathon closer to the end of the month. Over the span of four days we will iterate on our original code and release something more substantial. The Company Lab in Chattanooga has agreed to provide us with a venue to hack and a place to plug into the network. Both Bill and I are extremely excited to get back to hacking on Engage3D and to get back to playing with the gig network.

I am working with Bill Brock (a PhD student from Tennessee) to develop an open source 3D video conferencing system that we are calling engage3D. We are developing this application as part of Mozilla’s Ignite Challenge.

During the past few days, Bill and I made some major breakthroughs in terms of functionality. Bill sent me Kinect depth and color data via a server he wrote. We then managed to render that data in the browser (on my side) using WebGL. We are pretty excited about this since we have been hacking away for quite some time!

There have been significant drawbacks to developing this application over commodity internet: I managed to download over 8 GB of data in a single day while experimenting with the code. Hopefully the code can soon be ported to the GENI resources in Chattanooga, TN for further prototyping and testing.

Even though we are still limited to a conventional internet connection, we want to do some research into data compression. We have also been struggling with calibrating the Kinect, which we hope to resolve soon.

This blog post is the continuation of a series of blogs [1, 2, 3] related to adding .obj file support to Processing.js. The code I’m working on is important: it will allow developers to easily load 3D models from files, and it will improve the performance of rendering 3D objects in Processing.js.

Since my last blog, I have added some small but critical changes to the code, some of which I outline here.

Interface Change

I contacted one of the developers of Processing, Andrés Colubri, who is reworking most of the OpenGL code. Some of his rework includes making Saito’s .obj loader native in Processing. This is great for Processing, but it means that all the time I spent making the Processing.js .obj loader work like Saito’s was wasted ): On the other hand, it means that pushing this code into the next release of Processing.js might actually happen! (:

The sketch below is a simple example of using Saito’s .obj extension, which my code expected.

The problem was that I had no idea what loading 3D models was supposed to look like natively. So I asked Andrés for a simple sketch that worked in Processing and that I could emulate in Processing.js.

After receiving the sketch, I was glad to see it wasn’t much different.

I was able to quickly add a few hacks to make Processing.js work with the new interface. I didn’t want to rewrite my entire parser just yet since all my tests rely on the old method. I also don’t want to rewrite my code a third time (:

Triangulation

I found that many 3D authoring tools export .obj models with faces stored as triangle fans. In my last blog about .obj importing I wrote about the lack of support in my code for this scenario, but I recently wrote a patch that fixes the issue. The patch was not difficult to write, and because of it, many more models can now be properly parsed, including the 3D model at the top of this post.
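The fix boils down to fanning a polygonal face out from its first vertex: a face with n vertices becomes n − 2 triangles that all share vertex 0. A minimal sketch of the idea (the function name is mine, not the actual patch):

```javascript
// Triangulate an OBJ face given as a list of vertex indices,
// treating it as a triangle fan anchored at the first vertex.
// A face [0, 1, 2, 3] becomes the triangles [0,1,2] and [0,2,3].
function triangulateFan(faceIndices) {
  var tris = [];
  for (var i = 1; i < faceIndices.length - 1; i++) {
    tris.push([faceIndices[0], faceIndices[i], faceIndices[i + 1]]);
  }
  return tris;
}
```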

Testing, testing, …

I found a few more issues with the parser, so I fixed them and added reference tests. I’m finding these tests invaluable since I’m often tweaking the parser as I go. I have just over 30 tests right now, but I hope to have many more since I expect the code will go through many more transformations.

Feedback

If you are using my ‘extension’ and you find a file that isn’t being properly loaded, please send me your file so that I can fix it and add a test.

In my last blog I wrote about an anaglyph demo I created for my FSOSS presentation in October. It was part of a series of delayed blogs which I only recently had time to write up. So, in this blog I’ll be proceeding with my next fun experiment: Shadows in WebGL.

Shadows are useful since they not only add realism, but can also provide additional visual cues in a scene. Having never implemented any type of shadows, I started by performing some preliminary research and found that there are numerous methods to achieve this effect. Some of the more common techniques include:

vertex projection

projected planar shadows

shadow mapping

shadow volumes

I chose vertex projection since it seemed very straightforward. After a few sketches, I got a fairly good grasp of the idea: given the position of a light and a vertex, the shadow cast for that vertex will appear where the line through those two points intersects the x-axis. If we had the following values:

Light = [4, 4]

Vertex = [1, 2]

Our shadow would be drawn at [-2, 0]. Note that the y component is zero and would be equal to zero for all other vertices since we’re concentrating on planar shadows.

At this point, I understood the problem well; I just needed a simple formula to get this result. If you run a search for “vertex projection” and “shadows” you’ll find a snippet of code on GameDev.net which provides a formula for calculating the x and z components of the shadow. But if you actually try it for the x component, it doesn’t work.

When I ran into this, I had to take a step back to think about the problem and review my graphs. I was convinced that I could derive a working formula that would be just as simple as the one above. So I kept researching until I eventually found the point-slope equation of a line.

Point-Slope Equation

The point-slope equation of a line is useful for determining a point on a line given the slope and another point on the line. This is exactly the scenario we have!

y – y1 = m(x – x1)

Where:

m – The slope. This is known since we have two given points on the line: the vertex and the light.

[x1, y1] – A known point on the line. In this case: the light.

[x, y] – Another point on the line which we’re trying to figure out: the shadow.

Since the final 3D shadow will lie on the xz-plane, the y component of the shadow will always be zero. Substituting y = 0 gives us:

–y1 = m(x – x1)

Now that the only unknown is x, we can start isolating it by dividing both sides by the slope:

–y1/m = x – x1

And after rearranging we get our new formula:

x = x1 – y1/m

But is it sound? If we use the same values as above as a test:

m = (4 – 2) / (4 – 1) = 2/3
x = 4 – 4 / (2/3) = 4 – 6 = –2

It works!

I now had a way to get the x component of the shadow, but what about the z component? What I had done so far was solve the shadow problem in two dimensions. But the 3D problem breaks down into two of these 2D problems: reusing the same formula with the z components of the light and vertex gives the z component of the shadow.
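Putting the two axes together, the whole projection can be sketched in a few lines of JavaScript. This is a minimal illustration of the formula above, not the actual demo code; the function name and array layout are mine:

```javascript
// Project a vertex onto the y = 0 plane away from a point light,
// using x = x1 - y1/m from above, applied once in the xy-plane
// and once in the zy-plane. light and vertex are [x, y, z].
// (Assumes the light is not directly above the vertex on either
// axis, which would make a slope denominator zero.)
function projectToGround(light, vertex) {
  var mx = (light[1] - vertex[1]) / (light[0] - vertex[0]);
  var mz = (light[1] - vertex[1]) / (light[2] - vertex[2]);
  return [
    light[0] - light[1] / mx, // x component of the shadow
    0,                        // shadow lies on the xz-plane
    light[2] - light[1] / mz  // z component of the shadow
  ];
}
```

With the light at [4, 4, 4] and the vertex at [1, 2, 1], this lands the shadow at [-2, 0, -2], matching the worked example.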

Shader Shadows

The shader code is a bit verbose, but at the same time, very easy to understand.
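The original shader listing isn’t reproduced here, but a vertex shader applying the formula might look something like the sketch below, written as a JavaScript source string the way WebGL apps typically embed GLSL. The uniform and attribute names are my assumptions, not the demo’s:

```javascript
// Hypothetical vertex shader for vertex-projection shadows.
// Each vertex is flattened onto the y = 0 plane using the
// same x = x1 - y1/m formula, per axis, before projection.
var shadowVertexShader = [
  "uniform mat4 uModelView;",
  "uniform mat4 uProjection;",
  "uniform vec3 uLightPos;",
  "attribute vec3 aVertex;",
  "void main() {",
  "  // slope between the light and the vertex, per axis",
  "  float mx = (uLightPos.y - aVertex.y) / (uLightPos.x - aVertex.x);",
  "  float mz = (uLightPos.y - aVertex.y) / (uLightPos.z - aVertex.z);",
  "  vec3 shadow = vec3(uLightPos.x - uLightPos.y / mx,",
  "                     0.0,",
  "                     uLightPos.z - uLightPos.y / mz);",
  "  gl_Position = uProjection * uModelView * vec4(shadow, 1.0);",
  "}"
].join("\n");
```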

Double Trouble

The technique works, but its major issue is that objects need to be drawn twice. Since I’m using this technique for dense point clouds, it significantly affects performance. The graph below shows the crippling effect of rendering the shadow of a cloud consisting of 1.5 million points: performance is cut in half.

Fortunately, this problem isn’t difficult to address. Since detail is not an important property of shadows, we can simply render the shadow pass at a lower level of detail. I had already written a level-of-detail Python script which evenly distributes a cloud between multiple files. This script was used to produce a sparse cloud of about 10% of the original.
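The even-distribution idea behind that script can be sketched in a few lines. The original was Python; this JavaScript version is mine. Points are dealt round-robin into N buckets, so each bucket is an evenly spread 1/N sample of the cloud:

```javascript
// Deal points round-robin into numBuckets buckets. Each bucket
// ends up as an evenly distributed ~1/numBuckets sample of the
// original point cloud, suitable as a low-detail shadow pass.
function splitCloud(points, numBuckets) {
  var buckets = [];
  for (var b = 0; b < numBuckets; b++) {
    buckets.push([]);
  }
  for (var i = 0; i < points.length; i++) {
    buckets[i % numBuckets].push(points[i]);
  }
  return buckets;
}
```

Rendering the shadow from a single bucket of a ten-way split gives the ~10% sparse cloud described above.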

Matrix Trick

It turns out that planar shadows can be alternatively rendered using a simple matrix.

This method doesn’t offer any performance increase over vertex projection, but the code is quite terse. More importantly, using a matrix opens up the potential for drawing shadows on arbitrary planes, which is done by modifying the elements of the shadow matrix.
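The post’s own matrix isn’t shown here, but the standard planar-shadow matrix for the y = 0 plane and a point light can be built from the light position alone. A sketch, stored column-major as WebGL expects (the function name is mine):

```javascript
// Build a 4x4 matrix that flattens geometry onto the y = 0
// plane as seen from a point light at [lx, ly, lz]. After the
// homogeneous divide, a vertex [x, y, z] lands at
// ((ly*x - lx*y)/(ly - y), 0, (ly*z - lz*y)/(ly - y)),
// which agrees with the x = x1 - y1/m projection above.
function makePlanarShadowMatrix(light) {
  var lx = light[0], ly = light[1], lz = light[2];
  return new Float32Array([
    ly,  0,   0,   0,   // column 0
    -lx, 0,   -lz, -1,  // column 1
    0,   0,   ly,  0,   // column 2
    0,   0,   0,   ly   // column 3
  ]);
}
```

Multiplying this onto the model-view matrix before rendering draws the flattened “shadow” copy of the object in a single extra draw call.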

Future Work

Sometime in the future I’d like to experiment with implementing shadows for arbitrary planes. After that I can begin investigating other techniques such as shadow mapping and shadow volumes. Exciting! (:

A couple of weeks ago I gave a talk at FSOSS on XB PointStream. For my presentation I wanted to experiment and see what interesting demos I could put together using point clouds. I managed to get a few decent demos complete, but I didn’t have a chance to blog about them at the time. So I’ll be blogging about them piecemeal for the rest of the month.

The first demo I have is an anaglyph rendering. Anaglyphs are one way to give 2D images a depth component. The same object is rendered at two slightly different perspectives using two different colors. Typically red and cyan (blue+green) are used.

The user wears anaglyph glasses, which have filters for both colours. A common standard is to use a red filter for the left eye and a cyan filter for the right eye. These filters ensure each eye only sees one of the superimposed perspectives. The mind then merges these two images into a single 3D object.

Method

There are many ways to achieve this effect. One method which involves creating two asymmetric frustums can be found here. However, you can also create the effect by simply rotating or translating the object. It isn’t as accurate, but it’s very easy to implement:

// ps is the instance of XB PointStream
// ctx is the WebGL context
ps.pushMatrix();
// Yaw camera slightly for a different perspective
cam.yaw(0.005);
// Create a lookAt matrix. Apply it to our model view matrix.
ps.multMatrix(M4x4.makeLookAt(cam.pos, V3.add(cam.pos, cam.dir), cam.up));
// Render the object as cyan by using a colour mask.
ctx.colorMask(0,1,1,1);
ps.render(pointCloud);
ps.popMatrix();
// Preserve the colour buffer but clear the depth buffer
// so subsequent points are drawn over the previous points.
ctx.clear(ctx.DEPTH_BUFFER_BIT);
ps.pushMatrix();
// Restore the camera's position for the other perspective.
cam.yaw(-0.005);
ps.multMatrix(M4x4.makeLookAt(cam.pos, V3.add(cam.pos, cam.dir), cam.up));
// Render the object as red by using a colour mask.
ctx.colorMask(1,0,0,1);
ps.render(pointCloud);
ps.popMatrix();

Future Work

I hacked together the demo just in time for my talk at FSOSS, but I was left wondering how much better the effect would look if I had created two separate frustums instead. For this I would need to expose a frustum() method in the library. I can’t see a reason not to add it considering this is a perfect use case, so I filed a ticket!

A few days ago I noticed the turbulent point cloud demo for ro.me was no longer working in Firefox. Firefox now complains that the array being declared is too large. If you look at the source, you’ll see the entire point cloud is being stuffed into an array, all 6 megabytes of it. Since it no longer works in Firefox, I thought it would be neat to port the demo to XB PointStream to get it working again.

Stealing Some Data…

I looked at the source code and copied the array declaration into an empty text file.

var array = [1217,-218,40,1218,-218,37,....];

So I had the data, which was great, but I needed it to be in a format XB PointStream could read. I had to format the vertices to look something like this:

1217 -218 40
1218 -218 37
...

Conversions

Using JavaScript to do the conversion made the most sense, but I first had to split up the file containing the array declaration so Firefox could actually load it. After some manual work, I had six files, each with its own smaller array literal.

I then wrote a JavaScript script which loaded each array and dumped the formatted text into a web page. I ran my script and copied the output several times until I had the entire file reconstructed as an .ASC file.
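The core of that conversion step can be sketched as follows (the function name is mine; the real script also loaded the six array files and wrote the output into the page):

```javascript
// Convert a flat [x, y, z, x, y, z, ...] array into ASC-style
// text: one "x y z" vertex per line, as XB PointStream expects.
function arrayToASC(array) {
  var lines = [];
  for (var i = 0; i < array.length; i += 3) {
    lines.push(array[i] + " " + array[i + 1] + " " + array[i + 2]);
  }
  return lines.join("\n");
}
```

Feeding it the first two vertices from the ro.me array yields exactly the two lines shown above.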

Adding Turbulence

Once I had the point cloud loaded in XB PointStream, I needed to add some turbulence. I could have used the shader which the original demo used, but I found a demo by Paul Lewis which I liked a bit better. The demo isn’t quite as nice as the original, but given more time I could incorporate the original shader as well to make it just as good.

For some time now I’ve been working on an OBJ loader for Processing.js. This loader reads and parses OBJ files which contain 3D models as a set of vertices, normals and texture coordinates. The loader then creates the necessary WebGL buffers used to render those models in Processing.js.

A project for Processing already exists: Saitoobjloader. I’m simply trying to make a JavaScript equivalent.
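The heart of any OBJ loader is a line-by-line pass over the file. A minimal sketch of the vertex-collection part (my own illustration, not the loader’s actual code; real OBJ files also carry vn, vt and f records, which the loader handles as well):

```javascript
// Walk OBJ text line by line and collect "v x y z" records
// into a flat array suitable for filling a WebGL buffer.
function parseObjVertices(objText) {
  var verts = [];
  var lines = objText.split("\n");
  for (var i = 0; i < lines.length; i++) {
    var parts = lines[i].trim().split(/\s+/);
    if (parts[0] === "v") {
      verts.push(parseFloat(parts[1]),
                 parseFloat(parts[2]),
                 parseFloat(parts[3]));
    }
  }
  return verts;
}
```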

0.1

The project has lagged significantly. Not because I was working on XB PointStream, but because calling something 0.1 can be very difficult. As I kept adding support for more features, my standards for the loader kept increasing, pushing the date further and further back.

Then about a week ago, I decided it was too late and it needed to ship now. In a fury I wrote a series of reference tests and decided that would mark 0.1. When I finished the tests I ‘released’ the code. That is, I put the associated ticket up for review.

Looking back on the experience I learned a couple valuable things.

Start with tests

If I had created 10 tests and said to myself “Once it passes these, that’s 0.1” I could have had it out the door a long time ago. At the time I didn’t realize what I had done, but in retrospect it makes sense: I had set a hard standard for the project and a completion criterion.

Set Limits

Just before releasing I was hesitant, as I wanted to add support for quads. I already had the logic from a previous project and knew it was simple to implement. But I knew that as soon as I added support for it, the project would slip once again. Instead of writing the code, I added known-failure ref tests and moved quads from 0.1 to 0.2. In a strange way I’m more proud of my decision not to support quads than of the entire project itself.

Moving Forward

Now I didn’t get the actual code staged into Processing.js. But the result of placing the ticket up for review spun off an important meeting with some of the lead developers of Processing.js: Dave Humphrey, Jon Buckley and Mike Kamermans. We talked about the necessary changes Pjs would need to support libraries and we set some milestones.

I’m looking forward to continuing my work on the loader. Once we have library support in Pjs, we’ll also have a decent OBJ loader to go with it (:

Filed under: Open Source, Processing.js, webgl

House of Cards WebGL Demo Source
https://asalga.wordpress.com/2011/09/02/house-of-cards-webgl-demo-source/
Fri, 02 Sep 2011

On Wednesday I posted a video on YouTube of Firefox rendering Radiohead’s “House of Cards” point cloud data in WebGL. I’m now releasing the code for anyone to play with RIGHT HERE. If you download it, make sure to read the README file!

I tested the demo on Chromium and found that it didn’t work, so I’ll be debugging that over the weekend. If you find any other issues with the code or instructions or if you make a neat visualization, let me know!