@Sven: heh, I've seen that chair in so many places. When I first got my Roomba it got stuck on the chair's legs, so I figured that, given how many of those chairs I've seen around, there must be someone else out there with a Roomba/Poäng combination. Sure enough, as soon as I had typed "Roomba poa" into Google, a whole bunch of threads about the problem rolled out. I love IKEA.

@Maddus: that's so uncanny! When I saw it I thought "hey, I don't remember this picture at OH MY GOD!"

@W3bbo: I'm using the official Kinect SDK. The RGB video is exposed as a series of PlanarImage objects, one per frame, which you can poll for or receive through an event as soon as each frame is ready. The PlanarImage isn't specific to any framework, but you can get at the bits and other data easily and convert it to whatever image format you need. It's not entirely convenient, but an extension method is easily written, and it stays pretty versatile this way.
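
For example, converting a frame to a WPF BitmapSource could look roughly like this. This is a sketch against the beta SDK, so treat the `Microsoft.Research.Kinect.Nui` namespace and the `Bits`/`Width`/`Height`/`BytesPerPixel` members as my recollection of the API rather than gospel:

```csharp
using System.Windows.Media;
using System.Windows.Media.Imaging;
using Microsoft.Research.Kinect.Nui;

public static class PlanarImageExtensions
{
    // Wraps the raw BGR32 bytes of a PlanarImage in a WPF BitmapSource.
    public static BitmapSource ToBitmapSource(this PlanarImage image)
    {
        return BitmapSource.Create(
            image.Width, image.Height,
            96, 96,                                 // assumed screen DPI
            PixelFormats.Bgr32,
            null,                                   // Bgr32 needs no palette
            image.Bits,                             // the raw pixel buffer
            image.Width * image.BytesPerPixel);     // stride per scanline
    }
}
```

The video-frame-ready handler can then just assign `e.ImageFrame.Image.ToBitmapSource()` to an Image control's Source (handler and property names from memory, so approximate).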

@dentaku: I still want to clean up the code a bit, and I'm too busy this weekend, but with a bit of luck I can post something later tonight. Otherwise probably Monday.

@MasterPie: I'll post some info on how it works when I post the code, but in short: it's a bunch of pixel shaders (that I grabbed from the awesome Shazzam) applied to the RGB image from the Kinect.

Did you use the Coding4Fun libraries? I think I remember seeing some extension methods to simplify some of that stuff being demonstrated in Dan Fernandez's videos.

The application displays whatever the RGB camera sees, tracks the skeletons of up to two people in the frame, and checks which of their joints are closer than a certain distance to the Kinect sensor. Those joints should cause a ripple in the video image, so each such joint's 3D coordinates are converted to the 2D pixel coordinates of the joint's position in the video image.
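
The SDK has its own coordinate-mapping helpers, but the projection itself is simple enough to sketch with a pinhole-camera model. The field-of-view constants below are the commonly quoted nominal Kinect values, not something from my code:

```csharp
using System;

public static class JointProjection
{
    // Nominal Kinect field of view (approximate values, for illustration).
    const double HorizontalFovDeg = 57.0;
    const double VerticalFovDeg = 43.0;

    // Projects a camera-space joint position (meters, Z pointing away from
    // the sensor) onto a width x height video frame via a pinhole model.
    public static (int X, int Y) ToPixel(double x, double y, double z,
                                         int width, int height)
    {
        // Focal lengths in pixels, derived from the half-angle of the FOV.
        double fx = (width / 2.0) / Math.Tan(HorizontalFovDeg * Math.PI / 360.0);
        double fy = (height / 2.0) / Math.Tan(VerticalFovDeg * Math.PI / 360.0);
        int px = (int)(width / 2.0 + x / z * fx);
        int py = (int)(height / 2.0 - y / z * fy);  // image Y grows downward
        return (px, py);
    }
}
```

A joint dead ahead of the sensor lands in the middle of the frame, and the division by `z` makes nearby joints move faster across the image, which is what you'd expect.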

At these 2D coordinates, a ripple pixel shader effect is displayed. Pixel shaders are small C-like programs that run on the GPU, and I have no idea how they work exactly, so I downloaded Shazzam and let it generate a nice C# wrapper class for me. Shazzam also lets you test shaders, so I played around with the provided Ripple pixel shader's properties. I decided that each ripple effect would start with a certain frequency (the number of ripples in the effect) that would decrease over time to zero, meaning no ripples. This gives the impression of the ripples calming down again.
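
The calming-down part is plain WPF animation: animate the generated effect's frequency dependency property down to zero. A generic sketch of the idea, with the duration as an assumed tuning value:

```csharp
using System;
using System.Windows;
using System.Windows.Media.Animation;
using System.Windows.Media.Effects;

public static class RippleAnimations
{
    // Animates a shader effect's dependency property from 'from' down to
    // zero, then holds it there (i.e. a flat, ripple-free image).
    public static void AnimateToZero(ShaderEffect effect,
                                     DependencyProperty property,
                                     double from)
    {
        var calmDown = new DoubleAnimation
        {
            From = from,
            To = 0,
            Duration = TimeSpan.FromSeconds(2),   // assumed duration
            FillBehavior = FillBehavior.HoldEnd
        };
        effect.BeginAnimation(property, calmDown);
    }
}
```

With the Shazzam-generated wrapper that becomes something like `RippleAnimations.AnimateToZero(ripple, RippleEffect.FrequencyProperty, 40)`, where `RippleEffect` and `FrequencyProperty` are whatever names Shazzam generated for you.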

Each ripple effect also has an amplitude, which determines how 'deep' the ripples are. A light touch should cause a light ripple and a deep punch a wild, deep one, so the closer the joint is to the sensor, the higher the ripple's amplitude.
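
The distance-to-amplitude mapping can be as simple as a clamped linear ramp. All the constants here are hypothetical tuning values, not the ones from my code:

```csharp
using System;

public static class RippleMath
{
    // Hypothetical tuning constants: joints nearer than NearZ get the full
    // amplitude, joints at TriggerZ (the activation distance) barely ripple.
    const double NearZ = 1.0;       // meters
    const double TriggerZ = 1.5;    // meters
    const double MaxAmplitude = 0.5;

    // The closer the joint, the deeper the ripple: linearly map the joint's
    // distance to an amplitude and clamp to [0, MaxAmplitude].
    public static double AmplitudeFor(double jointZ)
    {
        double t = (TriggerZ - jointZ) / (TriggerZ - NearZ);
        return Math.Max(0, Math.Min(1, t)) * MaxAmplitude;
    }
}
```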

The biggest problem was applying multiple pixel shader effects to the same image. The WPF Image control, like many others, has an Effect property, but it only takes a single pixel shader effect. The only way I could find to apply two or more pixel shaders to the image was to wrap the Image in a container (I chose a Border, but I guess it could be a Canvas or whatever) and set the Effect property of that container. Then, when another ripple needs to be added, we create another Border to wrap the first Border (which wraps the Image), set the new Border's Effect, and so on. This results in a deeply nested tree of Borders, the very deepest of which contains the Image control. As soon as a ripple animation has finished, I remove the Border it was applied to. The application stays remarkably performant, despite the fact that, if you go crazy waving your arms and legs, it can contain up to 50 nested Borders, each with an animating pixel shader, wrapping the single image of you in front of the Kinect.
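
The wrapping trick boils down to two little helpers. This is a rough sketch of the idea, not the actual code, and the names are made up:

```csharp
using System.Windows;
using System.Windows.Controls;
using System.Windows.Media.Effects;

public static class RippleStacking
{
    // A WPF element's Effect property holds only one effect, so each new
    // ripple wraps the current outermost Border in a fresh one.
    public static Border AddRippleLayer(Border outermost, Effect ripple, Panel host)
    {
        host.Children.Remove(outermost);
        var wrapper = new Border { Child = outermost, Effect = ripple };
        host.Children.Add(wrapper);
        return wrapper;                    // the new outermost layer
    }

    // When a ripple's animation finishes, splice its Border out of the
    // stack while keeping everything it wraps.
    public static void RemoveRippleLayer(Border layer, Panel host)
    {
        var inner = layer.Child;
        layer.Child = null;
        if (layer.Parent is Border outer)
        {
            outer.Child = inner;           // it was somewhere in the middle
        }
        else
        {
            host.Children.Remove(layer);   // it was the outermost layer
            host.Children.Add(inner);
        }
    }
}
```

Removing a finished layer from the middle of the stack just means handing its child to the Border above it, so the rest of the tree is untouched.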

I haven't had time to clean up the code yet; I basically wrote it from scratch, non-stop, in one evening, so some bits are kind of dodgy, but it still works. If anyone has any further questions, let me know.