Programming with the Kinect for Windows Software Development Kit: Displaying Kinect Data

In this chapter from Programming with the Kinect for Windows Software Development Kit you will learn how to display the different Kinect streams. You will also write a tool to display skeletons and to locate audio sources.

Because there is no physical interaction between the user and the Kinect sensor, you must be sure that the sensor is set up correctly. The most efficient way to accomplish this is to provide visual feedback of what the sensor receives. Do not forget to add an option in your applications that lets users see this feedback, because many will not yet be familiar with the Kinect interface. To let users monitor the audio as well, you must provide a visual representation of the audio source and the audio level.


All the code you produce will target Windows Presentation Foundation (WPF) 4.0 as the development environment. The tools will rely on the drawing features of the framework so that the code can concentrate on Kinect-related functionality.

The color display manager

As you saw in Chapter 2, the Kinect sensor is able to produce a 32-bit RGB color stream. You will now develop a small class (ColorStreamManager) that is in charge of returning a WriteableBitmap filled with each frame's data.

This WriteableBitmap will be displayed by a standard WPF image control called kinectDisplay:

<Image x:Name="kinectDisplay" Source="{Binding Bitmap}"></Image>

This control is bound to a property called Bitmap that will be exposed by your class.

NOTE

Before you begin to add code, you must start the Kinect sensor. The rest of the code in this book assumes that you have initialized the sensor as explained in Chapter 1.

Before writing this class, you must introduce the Notifier class that helps handle the INotifyPropertyChanged interface (used to signal updates to the user interface [UI]):
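The Notifier listing itself is not reproduced in this excerpt. The following is a minimal sketch of what such a class can look like, based on the description that follows (the exact member names in the book's implementation may differ):

```csharp
using System;
using System.ComponentModel;
using System.Linq.Expressions;

public abstract class Notifier : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    // The property is passed as an expression (e.g. () => Bitmap) instead of
    // a string, so renaming the property cannot silently break the binding.
    protected void RaisePropertyChanged<T>(Expression<Func<T>> selectorExpression)
    {
        if (selectorExpression == null)
            throw new ArgumentNullException("selectorExpression");

        var body = selectorExpression.Body as MemberExpression;
        if (body == null)
            throw new ArgumentException("The body must be a member expression");

        if (PropertyChanged != null)
            PropertyChanged(this, new PropertyChangedEventArgs(body.Member.Name));
    }
}
```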

As you can see, this class uses an expression to detect the name of the property to signal. This is quite useful, because with this technique you don’t have to pass a string (which is hard to keep in sync with your code when, for example, you rename your properties) to define your property.

Using the frame object, you can get the size of the frame data in bytes with PixelDataLength and use it to create a byte array to receive the content of the frame. You can then copy the frame's content into that buffer using CopyPixelDataTo.

The class creates a WriteableBitmap on the first call to Update. This bitmap is returned by the Bitmap property (used as the binding source for the image control). Notice that the bitmap must use the Bgr32 pixel format (Windows works with a blue/green/red byte order) with 96 dots per inch (DPI) on the x and y axes.

The Update method simply copies the buffer to the WriteableBitmap on each frame using the WritePixels method of WriteableBitmap.

Finally, Update calls RaisePropertyChanged (from the Notifier class) on the Bitmap property to signal that the bitmap has been updated.
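Putting these steps together, a minimal sketch of the ColorStreamManager class could look like the following. It targets the Kinect for Windows SDK 1.x ColorImageFrame type; the buffer field name (colorPixels) is an assumption for illustration:

```csharp
using System.Windows;
using System.Windows.Media;
using System.Windows.Media.Imaging;
using Microsoft.Kinect;

public class ColorStreamManager : Notifier
{
    private byte[] colorPixels; // hypothetical name for the frame buffer

    public WriteableBitmap Bitmap { get; private set; }

    public void Update(ColorImageFrame frame)
    {
        // Allocate the buffer from the frame's byte length on first use
        if (colorPixels == null)
            colorPixels = new byte[frame.PixelDataLength];

        frame.CopyPixelDataTo(colorPixels);

        // Create the bitmap on the first call: Bgr32, 96 DPI on both axes
        if (Bitmap == null)
        {
            Bitmap = new WriteableBitmap(frame.Width, frame.Height,
                96, 96, PixelFormats.Bgr32, null);
        }

        int stride = frame.Width * frame.BytesPerPixel;
        Bitmap.WritePixels(new Int32Rect(0, 0, frame.Width, frame.Height),
            colorPixels, stride, 0);

        // Signal the UI that the bound property has changed
        RaisePropertyChanged(() => Bitmap);
    }
}
```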

So after initializing the sensor, you can add this code in your application to use the ColorStreamManager class:
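That wiring is not shown in this excerpt; a sketch of it, assuming kinectSensor is the sensor initialized in Chapter 1 with its color stream enabled, could look like this:

```csharp
// Hook the sensor's color stream to a ColorStreamManager instance.
// kinectSensor is assumed to be an initialized, started KinectSensor.
var colorManager = new ColorStreamManager();

kinectSensor.ColorFrameReady += (sender, e) =>
{
    using (ColorImageFrame frame = e.OpenColorImageFrame())
    {
        if (frame != null)      // the frame can be null if it arrived late
            colorManager.Update(frame);
    }
};
```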

The final step is to bind the DataContext of the picture to the colorManager object (for instance, inside the load event of your MainWindow page):

kinectDisplay.DataContext = colorManager;

Now every time a frame is available, the ColorStreamManager bound to the image will raise the PropertyChanged event for its Bitmap property, and in response the image will be updated, as shown in Figure 3-1.

If you are planning to use the YUV format, two options are available: you can use the ColorImageFormat.YuvResolution640x480Fps15 format, which is already converted to RGB32, or you can use the raw YUV format (ColorImageFormat.RawYuvResolution640x480Fps15), which is composed of 16 bits per pixel and is therefore more memory efficient.

The ConvertFromYUV method is used to convert a (y, u, v) vector to an RGB integer. Because this operation can produce out-of-bounds results, you must use the Clamp method to obtain correct values.
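The listing for these two methods is not included in this excerpt. A sketch of what they can look like follows; the conversion coefficients are the standard BT.601-style approximations and may differ slightly from the book's exact implementation:

```csharp
using System;

public static class YuvConverter
{
    // Keep a computed channel value inside the valid byte range [0, 255]
    private static byte Clamp(float value)
    {
        return (byte)Math.Max(0f, Math.Min(value, 255f));
    }

    // Convert one (y, u, v) triplet to a packed 32-bit BGR32/ARGB integer
    public static int ConvertFromYUV(byte y, byte u, byte v)
    {
        byte r = Clamp(y + 1.402f * (v - 128));
        byte g = Clamp(y - 0.344f * (u - 128) - 0.714f * (v - 128));
        byte b = Clamp(y + 1.772f * (u - 128));

        return (255 << 24) | (r << 16) | (g << 8) | b;
    }
}
```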

The important point to understand here is how YUV values are stored in the stream. A YUV stream packs two pixels into every 32 bits, using the following layout: 8 bits for Y1, 8 bits for U, 8 bits for Y2, and 8 bits for V. The first pixel is composed from Y1UV, and the second pixel is built from Y2UV.

Therefore, you need to run through all incoming YUV data to extract pixels:
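The extraction loop is not reproduced in this excerpt. A sketch of it, assuming yuvData is the byte array filled by CopyPixelDataTo and colorPixels is an int[] of frame.Width * frame.Height elements, could look like this:

```csharp
// Walk the raw buffer four bytes (one Y1-U-Y2-V macropixel) at a time,
// producing two 32-bit pixels per iteration.
int outputIndex = 0;
for (int index = 0; index < yuvData.Length; index += 4)
{
    byte y1 = yuvData[index];
    byte u  = yuvData[index + 1];
    byte y2 = yuvData[index + 2];
    byte v  = yuvData[index + 3];

    colorPixels[outputIndex++] = YuvConverter.ConvertFromYUV(y1, u, v);
    colorPixels[outputIndex++] = YuvConverter.ConvertFromYUV(y2, u, v);
}
```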