
The Kinect gspca subdriver supports only the video stream for now, exposing the output from the RGB sensor or the unprocessed output from the monochrome sensor; it does not deal with the processed depth stream yet, but it allows using the sensor as a webcam or as an IR camera (an external source of IR light might be needed for this use).

What does that mean? I don't get a 3D image, just the raw IR camera output? Is the structured light not turned on? So basically it gives a picture a bit like the one here?
I wonder whether one can actually control the structured light source, not only switching it on and off but also changing how the light is structured.
Does anyone have any in-depth information on how this thing works? (I know how the Kinect works, but not how it works from a driver perspective, i.e. what is done on the PC/Xbox and what in the device itself.)