Tutorial: XNA 4.0 + Kinect for Windows SDK

We don't have many XNA 4 Kinect examples or tutorials in the Gallery, so as soon as I came across this one I posted it. I just wish I'd seen it sooner (it was written in June 2011). Better late than never, I guess.

Tutorial: XNA 4.0 + Kinect for Windows SDK

...a new tutorial series that will cover the basics of the new Kinect for Windows SDK using XNA 4.0.

Kinect Fundamentals #1: Installation & Setup

Connecting the Kinect

Step 0: If you don’t have a Kinect sensor, you can (probably) find one at your nearest electronics dealer.

Step 1: Download and install the Kinect SDK:

...

Kinect Fundamentals #2: Basic programming

Our first Kinect program will use XNA 4.0 to create a texture that is updated with a new frame from the Kinect sensor each time one arrives, thus displaying live video.

Setting up the project

Create a new XNA Windows game and give it a name. To use the Kinect SDK, you will need to add a reference to it. This can be done by right-clicking the References folder, clicking Add Reference..., and finding Microsoft.Research.Kinect on the .NET tab.

...

Creating the NUI object

Now we are ready to create the object that will “hold” the Kinect sensor. The Kinect SDK has a class named Runtime that contains the NUI library. To get what you need out of the Kinect sensor, instantiate an object of this class:

Runtime kinectSensor;

We also create a Texture2D object that will contain our images:
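The declarations above can be sketched together with the setup the tutorial series goes on to describe. This is a hedged sketch against the beta Microsoft.Research.Kinect.Nui API; the class name KinectGame, the field kinectRGBVideo, and the handler name are illustrative, not taken from the tutorial.

```csharp
using Microsoft.Research.Kinect.Nui;
using Microsoft.Xna.Framework.Graphics;

public class KinectGame : Microsoft.Xna.Framework.Game
{
    Runtime kinectSensor;        // "holds" the Kinect sensor
    Texture2D kinectRGBVideo;    // texture updated with each camera frame

    protected override void Initialize()
    {
        kinectSensor = new Runtime();
        kinectSensor.Initialize(RuntimeOptions.UseColor);

        // Fire an event whenever a new RGB frame is ready.
        kinectSensor.VideoFrameReady += kinectSensor_VideoFrameReady;
        kinectSensor.VideoStream.Open(ImageStreamType.Video, 2,
            ImageResolution.Resolution640x480, ImageType.Color);

        base.Initialize();
    }

    void kinectSensor_VideoFrameReady(object sender, ImageFrameReadyEventArgs e)
    {
        // e.ImageFrame.Image.Bits holds the raw BGR32 pixel bytes;
        // converting them and uploading via kinectRGBVideo.SetData(...)
        // is what turns each frame into the displayed video.
        PlanarImage image = e.ImageFrame.Image;
        kinectRGBVideo = new Texture2D(GraphicsDevice, image.Width, image.Height);
    }
}
```

The event-driven pattern means the game loop never blocks waiting on the camera; the texture simply reflects the most recent frame.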

...

Kinect Fundamentals #3: Getting distance-data from the Depth Sensor

Now that you know how to use the RGB Camera data, it’s time to take a look at how you can use the depth data from the Kinect sensor.

It’s quite similar to getting data from the RGB image, but instead of RGB values you have distance data. We will convert the distances into a black-and-white image representing the depth map. Remember that each pixel’s distance spans two bytes, so this needs to be handled.
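Requesting the depth stream mirrors the color setup from the previous part. A minimal sketch with the beta SDK, assuming a 320x240 depth resolution and a handler name of my own choosing:

```csharp
// Initialize the runtime for depth data instead of (or in addition to) color.
kinectSensor = new Runtime();
kinectSensor.Initialize(RuntimeOptions.UseDepth);

// DepthFrameReady fires once per new depth frame, just like VideoFrameReady.
kinectSensor.DepthFrameReady += kinectSensor_DepthFrameReady;
kinectSensor.DepthStream.Open(ImageStreamType.Depth, 2,
    ImageResolution.Resolution320x240, ImageType.Depth);
```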

...

Converting the depth data

Now it is time for the meat of this tutorial. Here we get the depth data from the device in millimeters and convert it into a value we can use for displaying a black-and-white map of the depth. The Kinect device has a range from 0.85 m to 4 m. We can use this knowledge to create a black-and-white image where each pixel’s brightness represents its distance from the camera. White pixels are close, while black pixels are far. We might also get some unknown-depth pixels where the rays hit a window, shadow, mirror and so on (these will have a distance of 0).

Because we are using a Depth Image, there are two bytes per pixel that together represent the distance: the low and high bytes of a 16-bit value. To get the distance at a given pixel, bitshift the second byte left by 8 and combine it with the first.
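The byte-pair conversion and the grayscale mapping described above can be sketched as follows. The 850–4000 mm range, the white-is-near convention, and the 0-means-unknown rule come from the tutorial; the variable names and the linear mapping are my own assumptions.

```csharp
// depthData would come from the DepthFrameReady event, e.g.
// byte[] depthData = e.ImageFrame.Image.Bits;
Color[] pixels = new Color[depthData.Length / 2];

for (int i = 0; i < pixels.Length; i++)
{
    // Two bytes per pixel: low byte plus high byte shifted left by 8.
    int distance = depthData[2 * i] | (depthData[2 * i + 1] << 8);

    byte intensity;
    if (distance == 0)
    {
        // Unknown depth (windows, shadows, mirrors) renders as black.
        intensity = 0;
    }
    else
    {
        // Map 850 mm (near, white) .. 4000 mm (far, black) onto 255..0.
        int clamped = Math.Min(4000, Math.Max(850, distance));
        intensity = (byte)(255 - (clamped - 850) * 255 / (4000 - 850));
    }
    pixels[i] = new Color(intensity, intensity, intensity);
}
// Upload to a Texture2D for display: kinectDepthVideo.SetData(pixels);
```

Note that this simple shift applies to the plain Depth image type; formats that pack a player index into the low bits need a different shift.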

Can you please create a new blog called Kinecting4Fun, or perhaps take a week off from blogging about the next Kinect project? Of the last fourteen articles, eight are about Kinect. I suspect the pattern holds for the past few months; I'd count but I have run out of fingers and toes (damn you, high school shop class!).

