3D Render Live with Kinect and Bubble Boy

[Mike Newell] dropped us a line about his latest project, Bubble Boy, which uses the Kinect's point-cloud functionality to render polygonal meshes in real time. In the video, [Mike] walks through the entire process, from installing the libraries to grabbing the code off his site. Currently the rendering looks like a clump of dough (nightmarishly clawing at us with its nubby arms).

[Mike] is looking for suggestions on more efficient mesh and point-cloud code, as he is unable to run at any higher resolution than what is shown in the video. You can hear his computer fan spool up after just a few moments of rendering! Anyone good with point clouds?

It’s used to render particle fluid simulations, but it can be applied to almost any point cloud. Since it runs entirely on the GPU it’s pretty scalable; I was able to render about 30K particles without any problems using this technique.
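One reason a GPU-resident approach scales is that the CPU side shrinks to packing all particle positions into a single buffer and uploading it in one call, rather than issuing a per-vertex call from Java for every point. A minimal sketch of that packing step (the class and method names here are ours, not from the commenter's code):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

public class ParticleBuffer {
    // Pack particle positions (x, y, z per point) into one direct
    // FloatBuffer, so the whole cloud can be handed to the GPU in a
    // single buffer upload (e.g. glBufferData) each frame.
    static FloatBuffer pack(float[][] particles) {
        FloatBuffer buf = ByteBuffer
                .allocateDirect(particles.length * 3 * Float.BYTES)
                .order(ByteOrder.nativeOrder()) // GL expects native byte order
                .asFloatBuffer();
        for (float[] p : particles) {
            buf.put(p[0]).put(p[1]).put(p[2]);
        }
        buf.flip(); // rewind so the upload reads from the start
        return buf;
    }

    public static void main(String[] args) {
        float[][] cloud = { {0f, 1f, 2f}, {3f, 4f, 5f} };
        FloatBuffer buf = ParticleBuffer.pack(cloud);
        System.out.println(buf.limit()); // 6 floats for 2 points
        System.out.println(buf.get(4));  // 4.0
    }
}
```

At 30K particles this is a single 360 KB upload per frame, which is cheap; the per-point work then happens in the vertex shader.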

Your problem is Processing. Java is wicked slow; you should be using C, C++, or (preferably) Haskell, which compiles to native code. Anything that is interpreted, runs in a virtual machine, or takes any execution path other than compilation to machine code will be slow.

I think a really simple way to do it would be to generate the mesh once and then deform it, instead of continuously generating new meshes. If you really do need to regenerate meshes (to respond to changes such as people walking in and out of frame), you can spread the regeneration over a few frames and re-sync the model and skeleton whenever a new mesh is ready, just to correct accumulated errors. By dividing the work across updates and deforming existing meshes instead of regenerating them, the frame rate should go up considerably.

Not exactly, I was thinking more like generating one point cloud and then using a skeletal structure (which you can track) to move the points around instead. But it sounds like what you want is more of a real-time 3D scanner.
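The deform-instead-of-regenerate idea can be sketched very simply: capture the cloud once as a rest pose, bind each point to a tracked bone, and each frame apply the bone's motion to its points. This is our own minimal illustration (translation-only bones, one bone per point), not code from either commenter:

```java
public class SkeletalDeform {
    // Move each rest-pose point by the offset of the bone it is bound to.
    // rest:       captured-once point cloud, one {x, y, z} per point
    // boneOf:     index of the bone each point follows
    // boneOffset: per-frame tracked translation of each bone
    static float[][] deform(float[][] rest, int[] boneOf, float[][] boneOffset) {
        float[][] out = new float[rest.length][3];
        for (int i = 0; i < rest.length; i++) {
            float[] off = boneOffset[boneOf[i]];
            out[i][0] = rest[i][0] + off[0];
            out[i][1] = rest[i][1] + off[1];
            out[i][2] = rest[i][2] + off[2];
        }
        return out;
    }

    public static void main(String[] args) {
        float[][] rest    = { {0f,0f,0f}, {1f,0f,0f} }; // captured once
        int[]     boneOf  = { 0, 1 };                   // skeleton binding
        float[][] offsets = { {0f,0f,0f}, {0f,2f,0f} }; // tracked this frame
        float[][] posed = SkeletalDeform.deform(rest, boneOf, offsets);
        System.out.println(posed[1][1]); // second point moved up by 2
    }
}
```

Per frame this is O(points) additions instead of a full mesh rebuild, which is why spreading regeneration over several frames and deforming in between should raise the frame rate considerably. A real skinned version would blend full bone transforms with per-point weights, but the cost structure is the same.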