
I'm actually working on a wrapper for OpenNI under Windows; I haven't tried Mac OS X. A friend of mine said it works (the latest unstable version has Mac OS X support, as well as multiple devices), but he had a hard time getting it going.

It makes sense to me that we would transition over to OpenNI from the libfreenect stuff. I have not looked at OpenNI other than the basics, but I would be excited to replace our current stuff with something better. It's not in my personal queue or Hai's, but we would be excited to receive a git pull request along these lines :).

Actually, I've been trying to wrap my head around GitHub and put a first version of the OpenNI wrapper online. Connecting this to Cinder is just a matter of copying the bitmap data into a Cinder surface/texture, and that's it. I just copied the code from Cinder-Kinect into my test app.

How to get this running?

To initialize a device, create one from the device manager and pass either an XML file or flags that define which generators to run.

This runs in setup() and should give you a running device. (I haven't tested multiple devices, so I don't know how that works yet.)

_manager = V::OpenNIDeviceManager::InstancePtr();
_device = _manager->createDevice( "data/configIR.xml", true );
_device->setPrimaryBuffer( V::NODE_TYPE_DEPTH );
_manager->start();

Then, for each frame, you can access the usual stuff: image, IR, and depth data. The manager also deals with the user generator and handles a list of users (skeleton tracking) itself. So far I have only tested it with myself, one user, but it should handle multiple.

_manager->update();

It lacks documentation, so if you want to know more, just browse the code.

Yes, I wasn't sure which function would print to the console on Mac OS X and Linux. I'll just add that to the wrapper so it makes things easier for everyone.

As for accessing the data: this wrapper returns bitmaps that represent the data from the image/infrared/depth generators. To visualize them in Cinder you would need to create a Surface and pass that to a GL texture, if I'm not mistaken.

What I did was easy and practical: I took the image code from the Cinder-Kinect block and just used that in my test application. Below is that code:

class ImageSourceKinectColor : public ImageSource
{
  public:
    ImageSourceKinectColor( uint8_t *buffer, int width, int height )
        : ImageSource(), mData( buffer ), _width( width ), _height( height )
    {
        setSize( _width, _height );
        setColorModel( ImageIo::CM_RGB );
        setChannelOrder( ImageIo::RGB );
        setDataType( ImageIo::UINT8 );
    }

    ~ImageSourceKinectColor()
    {
        // mData is actually a ref. It's released from the device.
        /*if( mData ) {
            delete[] mData;
            mData = NULL;
        }*/
    }

    virtual void load( ImageTargetRef target )
    {
        ImageSource::RowFunc func = setupRowFunc( target );
        for( uint32_t row = 0; row < _height; ++row )
            ((*this).*func)( target, row, mData + row * _width * 3 );
    }

  protected:
    uint32_t _width, _height;
    uint8_t *mData;
};

class ImageSourceKinectDepth : public ImageSource
{
  public:
    ImageSourceKinectDepth( uint16_t *buffer, int width, int height )
        : ImageSource(), mData( buffer ), _width( width ), _height( height )
    {
        setSize( _width, _height );
        setColorModel( ImageIo::CM_GRAY );
        setChannelOrder( ImageIo::Y );
        setDataType( ImageIo::UINT16 );
    }

    ~ImageSourceKinectDepth()
    {
        // mData is actually a ref. It's released from the device.
        /*if( mData ) {
            delete[] mData;
            mData = NULL;
        }*/
    }

    virtual void load( ImageTargetRef target )
    {
        ImageSource::RowFunc func = setupRowFunc( target );
        for( uint32_t row = 0; row < _height; ++row )
            ((*this).*func)( target, row, mData + row * _width );
    }

  protected:
    uint32_t _width, _height;
    uint16_t *mData;
};

And in the main app class: these methods will return an ImageSource from the data given by OpenNI. Several options are available.

If you're getting "expected a primary expression before..." errors, you're probably missing a semicolon somewhere. Or you've done something like declare the type of a thing without a name or the name of a thing without a type.

I didn't want to create another post for this, but as this is the most active thread about OpenNI, here it is:

I started a little Cinder block a few weeks ago when OpenNI released their drivers for the Kinect. I haven't had time to do much more, so I'm posting it here. Hopefully, with all the code you already have, this can help build a more complete block...

Actually, it was my first Cinder project, and I took all my "design inspiration" from the existing Kinect block and the OpenNI documentation (mostly copy & paste of the samples, actually)...

It currently supports gesture recognition and hand tracking. It is based on the callback system (which is really great) and doesn't need an XML config file...

7. Compiling should work now, but the program will still fail ("Couldn't create device from xml").

8. You need to copy BlockOpenNI/samples/BlockOpenNISkeleton/data to BlockOpenNI/samples/BlockOpenNISkeleton/xcode/build/Debug/

9. If you have OpenNI, KinectSensor and NITE from http://openni.org, it should work now, as in the screenshot above.

And here is another OS X trick. It probably won't work with the openni.org drivers (it didn't in my case). In that case, download SensorKinect from https://github.com/avin2/SensorKinect and follow the README instructions. Works like a charm.

I think I found the problem with the crash; it's a synchronization issue. In your update you have a mutex, but you actually never use it. The problem occurs if you check for an existing user (which returns true), but the user is then lost, and when you query the user image it crashes.

This is a quick fix: make the mutex public and, when querying for the user image, lock that scope:

m_OpenNIMgr->_mutex.lock();
if( m_OpenNIMgr->hasUsers() && m_OpenNIMgr->hasUser(1) )
{
    m_UserSurface = getUserColorImage(1);
}
m_OpenNIMgr->_mutex.unlock();

I've tested it for some time and it seems to have done the trick, but I'll keep you posted.

There is still a problem with the coloring of users (using the newest version from your GitHub): very seldom, the system gets confused and green and yellow appear. With your manager it should be single-user only, so just green. This situation, where it identifies two users and shows them, can lead to a total freeze of the user color image and, accordingly, of skeleton tracking. It could be a synchronization issue as well, but I think it's something else; I'll have a look.

Hi cadet, thanks for the bug hunting. I was able to reproduce the error when losing a user quite easily, I must say. Not sure why I haven't seen that before. Definitely a sync error. I'll see if I can fix this in the next few days. Keep up the good work.

@marcin Can't remember if I did it before, but thank you for the OS X experiments and the step-by-step tutorial for setting it up. I couldn't have done that myself.

Does anybody else have the issue that, after running Cinder/OpenNI/Kinect on OS X 10.6 for a while, the keyboard, trackpad, etc. get disabled until the computer is restarted? An external mouse works; everything else drops dead.

It happened to me with the OpenNI CinderBlock, and also with the cobbled-together version I had made myself before I saw this thread, but it hasn't happened so far with the NITE sample executables.

First of all, thanks to everybody for your hard work! I'm trying to run the BlockOpenNISkeleton sample on Win7. The code now compiles and all the dependencies look to be working, but when I run the app it shows me these errors:

(Device) Error! XN_STATUS_OS_FILE_NOT_FOUND

[OpenNIDeviceManager] Couldn't create device from xml

(App) Couldn't init device0

I'm able to run openFrameworks examples using OpenNI, so I assume the drivers are properly installed; I think I'm making some lame mistake related to DLL paths or something like that.

That started happening to me about an hour ago, once I changed the window dimensions from 640x480; when reading the data you can see the output becomes corrupted.

When you call, ConvertRealWorldToProjective from the DepthMapGenerator, is your z value NaN?

I haven't found a proper solution, but if you reset the OpenGL matrices to 640x480 before retrieving the data, then set them back, it seems to stay OK. It's not the best solution, of course, but it beats rebooting for now.

Thanks for the hint. I tried doing gl::setMatricesWindow( 640, 480 ); before reading the data in the update loop, and before I actually draw my objects (which I have updated with the new data) I do this:

Has anyone tried compiling any of the other samples? Specifically, I'd like to get the PointViewer sample going so I can make a modified single-hand tracking application (TUIO output -> system TUIO mouse driver, for general use of the Kinect for mouse movement).

I have used both, and my experience is that the one provided by Section9 does less, which makes it easier to follow along with the example code provided by OpenNI/NITE (which is used in both of these blocks).

However, I found myself working around the pixelnerve version more: it is more abstracted yet unfinished, so I could not get it to do what I wanted, but the OpenNI/NITE examples did not work out of the box either.

-

Performance was the same on both, because they are both just wrappers for <XnCppWrapper.h>, provided by OpenNI.

They both suffer from the same bug, where they can make your built-in keyboard and trackpad stop working, for the same reason as above.

In any case, they both provide a lot of stuff that I don't think I would have been able to get off the ground without, so both are good projects.

@pixelnerve Are you still working on your wrapper? It still has the user problem and can crash...

Another question: how do I feed the raw depth into a texture so I can use it as a displacement map? In principle it works, but only at 8-bit resolution, so there are two layers of depth (in the middle, starting from black again) due to the overflow. Any ideas on that?

I'm still using pixelnerve's great block (with minor changes of my own).

I've been experiencing the problem that when a user leaves the scene, OpenNI will sometimes stop detecting new users, i.e. the NewUser callback is just never called. This occurs whether the existing user is still registered or not.

For example: I walk out of the scene, LostUser is called, my device has 0 users after that, but it doesn't detect me if I walk back in. If I wait for a couple of minutes, and try again, it starts working again (i.e. NewUser is called), but not always. Since I'm not aware of any state changes in the cinderblock, I suspect it's on the openNI side, which makes it really hard to debug. Has anybody had similar problems?

I'm using the PrimeSense dev kit with the OpenNI block; it works fine except for two odd issues.

First, I have a black vertical strip in my image; this happens with both getDepthImage() and getDepthImage24() (I also don't get the difference between the two methods, except for the brightness, which seems inverted).

The other issue is very odd: when I'm moving close to the camera (70-100 cm) it auto-adjusts the brightness of the depth image, almost like a common camera with auto-brightness or auto-exposure, so basically when I'm moving my arm the point cloud changes in depth. This might be due to the minimum distance, but I was wondering if anybody has had the same issue.

After fixing a few errors in the latest code ("std::" that should have been "boost::" and a missing "#ifdef WIN32"), the examples run and the Kinect turns on, but all I see is a black screen. I followed the instructions above (and on https://github.com/OpenNI/OpenNI/tree/unstable and the SensorKinect site).

Additionally, there was a variable USE_THREAD that needed to be true; otherwise it just didn't update! This has been fixed in the newest git, plus I added the compiled OS X dylibs for NITE, OpenNI, and SensorKinect. This may or may not have been a good idea; the next step is to write an Xcode script to package them all into the final app so it can be made distributable.

Now it works, in that it doesn't crash and the depth and IR images are shown, but for some reason pose estimation and skeleton tracking are not working on OS X.

Given the size of this thread, I can see there is a lot of trouble getting this code to work (especially on OS X). If there is a Mac developer out there willing to work with me to make things easier for everyone, drop me a message or an email.

2) libusb universal. It seems there are two ways to do it. At the end, you must check the arch of your lib with this command: lipo -info libusb/.libs/libusb-1.0.0.dylib. If you succeed, the arch must be i386 and x86_64. First go with the terminal inside your libusb folder, then do one of the following:

4) You can now compile. However, you will get an XML error. Navigate to BlockOpenNI/samples/BlockOpenNISkeleton/data/ and edit configIR.xml. You must insert this line: <License vendor="PrimeSense" key="0KOIk2JeIBYClPWVnMoRKn5cdY4="/>. Afterwards, you need to copy BlockOpenNI/samples/BlockOpenNISkeleton/data to BlockOpenNI/samples/BlockOpenNISkeleton/xcode/build/Debug/.

5) Run the project. Do the pose calibration. After a few seconds, you should see your skeleton.

Sadly, I just spent several hours trying to get OpenNI installed. Multiple failures.

According to my research, I need to install doxygen and graphviz (which are listed as optional installs in the OpenNI readme). Because I didn't, when I ran Platform/Linux-x86/CreateRedist it errored about doxygen and failed to actually create the Redist folder. So I tried to install doxygen using MacPorts ('sudo port install doxygen'), and it failed to install the Cairo dependency.

I googled my brains out and read a couple of different online tutorials. Nothing relevant came up.

I just installed it a few days ago. Dependencies include doxygen and graphviz from MacPorts, OpenNI and SensorKinect from git, and the NITE binary. Make sure to pull the unstable branch from git. What is the error you are getting when installing cairo? Try updating ports if you haven't done that ('sudo port -d selfupdate') and install cairo alone first ('sudo port install cairo').

No luck. Also tried 'port clean cairo' and 'port uninstall cairo'. Seems Cairo just doesn't want to play with me. Filed a ticket with MacPorts. Will let the forum know if I figure it out. Thanks for the input.

Finally got the project to build, but ran into the same problem reported by several people above. The dreaded

(Device) Error! XN_STATUS_OS_FILE_OPEN_FAILED

[OpenNIDeviceManager] Couldn't create device from xml

(App) Couldn't init device0

I followed a couple of different people's solutions above. I ended up just using the loadResource way to get to configIR.xml. I made sure to add the license info to the XML. I also tried hardcoding the path to the XML file. Suggestions?

I just use the createDevice( int flags ) method.
OR the flags you need using | and you should be fine.
On a side note: I swallowed my pride and bought a Mac. I use Lion and the code works just fine on it. Also tested on Snow Leopard. OpenNI is compiled against

In Cinder you will find a method called getResourcePath() inside the app class; it will point to the resources folder in the app package. Place the XML file inside your resources folder, as Cinder likes/recommends, then when loading the file do:

createDevice( getResourcePath()+"config.xml" );

That's it.

BTW, I've updated the GitHub repo with the latest code and running samples on both Windows and Mac (you'll probably need to change PATHs, but no need to make it too easy for you, right?).

I have OpenNI, NITE, and SensorKinect installed. All my Cinder and OpenNI headers are found. The Kinect and OpenCV blocks work great,

but I get a very strange linking error in Xcode with the OpenNIBlock projects:

ld: warning: directory '/SDKs/OpenNI/Lib /SDKs/cinder_master/blocks/OpenNI/samples/BlockOpenNISkeleton/xcode/../../../lib/macosx' following -L not found
ld: library not found for -lnimCodecs
collect2: ld returned 1 exit status
Command /Developer/usr/bin/g++-4.2 failed with exit code 1

I have tried to google -lnimCodecs but can't find anything useful.

Can anyone help me with this?

Maybe I should just install Windows via Boot Camp and try the stable version, but I am not that familiar with Visual Studio.

We are proud to share our 2RealKinectWrapper with the open-source community. It is a simple API which allows for multiple-Kinect usage, transparently, on both OpenNI and the Microsoft SDK; Panasonic's SDK is to come.

The SDK is multithreaded for performance. A long readme should explain everything...

It comes with a Cinder sample, so everyone here can quickly check it out:

I too was struggling with all the different implementations, instructions, and forks. I could not make any other Cinder example version detect a skeleton with no "special pose" (the typical "raise your hands" base pose). I even tried openFrameworks. In all cases I got the regular depth map working, and in oF I had pose-based skeleton tracking working. This was insufficient and almost led me to abandon all efforts.

I just managed to have the 2Real Cinder example working. I have Mac OS Lion with XCode 4 and Cinder 0.8.4.

I followed the basic OpenNI/NITE/Kinect instructions. All samples from that work fine.

There was one catch, though: the instructions in 2Real tell you to use your normal Cinder path, wherever you installed it, and that did not work for me. I noticed 2Real bundles Cinder in their codebase, so I decided to try using that path as the CINDER_PATH and voilà, the errors were different, mentioning `MultipleKinectApp::keyDown(cinder::app::KeyEvent) in MultipleKinectApp.o`. I went to the application source (their example .cpp) and commented out the two offending lines, and the compile went through fine! What I commented out seemed not to be critical. More detail on that change here: https://github.com/cadet/_2RealKinectWrapper/issues/1

I will post a screen grab later.

EDIT:

As promised, screen grab with two Kinects, same person in both but I have successfully tested with two people (need more friends):

I am trying to build the BlockOpenNI samples on Mac OS X 10.6 32-bit with the GitHub version of Cinder, and I am running into the "Symbols not found for architecture i386" problem mentioned before in this topic.

Anyone got a solution for that?

Thanks in advance!

Undefined symbols for architecture i386:
  "_xnGetRefContextFromNodeHandle", referenced from:
      xn::NodeWrapper::SetHandle(XnInternalNodeData*) in VOpenNIDevice.o
      xn::NodeWrapper::SetHandle(XnInternalNodeData*) in VOpenNIDeviceManager.o
      xn::NodeWrapper::SetHandle(XnInternalNodeData*) in VOpenNIUser.o
  "_xnContextUnregisterFromShutdown", referenced from:
      xn::NodeWrapper::SetHandle(XnInternalNodeData*) in VOpenNIDevice.o
      xn::NodeWrapper::SetHandle(XnInternalNodeData*) in VOpenNIDeviceManager.o
      xn::Context::SetHandle(XnContext*) in VOpenNIDeviceManager.o
      xn::NodeWrapper::SetHandle(XnInternalNodeData*) in VOpenNIUser.o
  "_xnContextRelease", referenced from:
      xn::NodeWrapper::SetHandle(XnInternalNodeData*) in VOpenNIDevice.o
      xn::NodeWrapper::SetHandle(XnInternalNodeData*) in VOpenNIDeviceManager.o
      xn::Context::SetHandle(XnContext*) in VOpenNIDeviceManager.o
      xn::Context::TakeOwnership(XnContext*) in VOpenNIDeviceManager.o
      xn::NodeWrapper::SetHandle(XnInternalNodeData*) in VOpenNIUser.o
  "_xnContextRegisterForShutdown", referenced from:
      xn::NodeWrapper::SetHandle(XnInternalNodeData*) in VOpenNIDevice.o
      xn::NodeWrapper::SetHandle(XnInternalNodeData*) in VOpenNIDeviceManager.o