Kinect to MAX/MSP for Windows

I just thought I'd post about a program I created for the start of my final year research project, as I imagine some people here may get some use out of it.

Using the official SDK for Windows, the program takes data from the Xbox Kinect and sends it to Max/MSP via OSC. Using the Ventuz OSC C# wrapper, it packs the X, Y, Z data for all the skeletal points into an OSC bundle and sends it to Max, where it is unpacked in the patch.

The data sent from the Kinect is the unchanged coordinate system: values in metres ranging from -1 to 1 on all three axes, for both players.

That being said, the data sent from the Kinect to Max can be used by any device capable of reading OSC packets. The format of the OSC message is:

/joint/skeleton_[1/2]/[joint] x, y, z

This simple implementation allows quick and easy access to the full power of the Kinect's skeletal tracking on Windows, over OSC, for users of the official SDK. I hope it can be of some use.
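For anyone writing their own receiver, here is a minimal sketch of routing these messages once an OSC library has decoded the packet. The address format is the one given above; the function name and joint names are illustrative, not part of the released code:

```python
# Hypothetical router for the OSC addresses described above:
#   /joint/skeleton_[1/2]/[joint] x, y, z
# Actual OSC packet decoding is left to whatever OSC library you use;
# this only splits the address string and unpacks the arguments.

def route_joint(address, args):
    """Return (skeleton_id, joint_name, (x, y, z)) for one message."""
    _, prefix, skeleton, joint = address.split("/")
    if prefix != "joint":
        raise ValueError("not a joint message: " + address)
    skeleton_id = int(skeleton.split("_")[1])  # 1 or 2
    x, y, z = args  # coordinates in metres, roughly -1 to 1
    return skeleton_id, joint, (x, y, z)
```

In Max the equivalent routing is what OSC-route does with the same address pattern.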

Wow
looks like a lot of time and effort went into this; I can imagine many applications, not just creative interaction but health services and industries, HCI etc. My default attitude to remote tracking (video/infrared/wiimote et al.) is usually "meh"… but this looks very intriguing. Good luck with the continued research.

This is awesome! I’m having trouble getting any OSC data to read in my patch, though. I’ve linked the OSC-Route external, opened DepthView.exe, opened Kinect for Max, and opened port 7710, but I’m not getting any OSC data in the skeleton patch.

I even turned my firewall off to see if that was the problem but no luck.

Hey Jordan,
I’ve been trying to run the patch but I can’t get any data running through it. I did the same as sirjimbob and got no result; I also tried with my firewall off, but still nothing. I am running Win 7 x64. One thing I noticed is that when I run your patch or "Kinect to Max.exe", the projector on the Kinect doesn’t light up, but when I run the "Sample Skeletal Viewer" from the official SDK it starts. Could you suggest what might be causing the problem?
Thanks

Hello,
I’ve been trying to open the Kinect to Max app, but it just stops and doesn’t work.
Do I need more software to run it, or are the SDK drivers enough?
Is it necessary to make changes to "KinectCOMLib.dll"?

Hey Jordan. First off, great work! This is a serious project and you’ve done a great job. A couple things though.

First off, I’m on a Windows 7 x64 machine with Max 5 & 6.

Everything seems to load and run OK, but for the life of me I can’t get the skeleton to lock. Any depth, any length, any position, no luck. Any ideas? The MS Sample Skeletal Viewer locks on with pretty much no problem.

Also, it’d be nice if the viewer had the option to show as a mirror image. It just makes working on screen a bit easier.

Jeff, I’ve been trying to, but unfortunately I can’t get the patch to work, and I can’t get the depth viewer (Kinect To Max.exe) to work either; it just doesn’t show anything. I contacted Jordan by email, and he was kind enough to compile a couple of builds against different versions of the SDK drivers, but again with no success. Since yesterday I have been trying with Processing, to then send the data to Max. I still haven’t got any result, but if I do I will post the code here. Otherwise the official SDK and its Simple Skeleton Viewer work fine…

Hey Jordan, I still haven’t. I left it aside for a while; I was working on a sequencer controller and just finished it a few days ago, and now I’m looking to get that Kinect working again so I can finish my project by the end of June. I have had to reinstall my computer several times since last month, so I will install the Kinect drivers right about now and see if anything has changed since last time.

As it stands, it’s compiled against the beta SDK v2, which still supports the Xbox 360 Kinect, as I don’t own a Kinect for Windows. If I can get my hands on one I’ll recompile it, but until then it won’t work with the latest SDK, sorry.

The final Kinect SDK works perfectly with my Xbox Kinect. Even the speech recognition works fine, and as far as I remember it didn’t with the beta SDK, so you might want to try the final SDK as well. I also found an external called Synapse http://synapsekinect.tumblr.com/post/6307752257/maxmsp-jitter I tried to test it, but because of all the additional stuff it requires I couldn’t get it to run; that’s most likely because I am doing something wrong with the additional stuff. I will give it another try tomorrow and will post if there is any result.

As you can see, there have been a fair number of updates. If you haven’t had the chance to get started on this yet, JordanRS, maybe we can collaborate: I’ll send you what I have and maybe you can help me figure out why I’m getting a black window frame instead of the camera data in the Beta version.

It seems to be a worthwhile project as the new SDK seems to be a lot more robust than the beta.

Meanwhile, you might want to check out the jit.openni external http://hidale.com/category/software/
After I tried everything I could find on the net, this was the only one that worked for me so far. I managed to get Synapse running as well, but then realized that the Jitter external for Synapse is Mac only…

Well, it turns out the 1.5 SDK just came out on Monday, so it would do well to compile against that one and forget about 1.0. I’ll have another look at the project.

I am likely going to also start working on a 64-bit Windows version of Synapse since it only supports 32-bit Windows at the moment (if anyone did get it to work let me know!), so if you have any requests for it let me know.

Ok, so I’ve mixed together a bit of code to get the functionality I was personally looking for (simple transmission of both hands’ XYZ coords), combining the OSC portion of JordanRS’s codebase with some code from the Beginning Kinect Programming book, Chapter 4.

The advantage here is that developments on this codebase can now take advantage of new features of the 1.0/1.5 SDKs. It also already includes some extra bits from the book’s code, including a method to detect which hand is closer to the camera on the depth plane (which would be great for air drumming with the hands).
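The closer-hand check is easy to reason about: in Kinect skeleton space the Z coordinate grows away from the camera, so comparing the two hands' Z values is enough. A small sketch under that assumption (not the book's actual code):

```python
def closer_hand(left_hand_z, right_hand_z):
    """Return which hand is nearer the camera on the depth plane.

    Assumes the Kinect skeleton-space convention: Z increases away
    from the camera, so the smaller Z value is the closer hand.
    """
    return "left" if left_hand_z < right_hand_z else "right"
```

For air drumming you would watch for the moment the closer hand changes, rather than the label itself.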

The GUI application simply draws the skeleton and scaled/truncated ints for the XYZ coords of the hands, and shows which hand is closer to the camera. There’s no colour camera layer, as for my purposes it just takes focus away from how well the skeleton itself is being mapped.

I’m working with the Kinect for Windows hardware now, but it’s been tested as working fine with the Xbox Kinect as well. I’m really interested in getting more into the seated mode and perhaps the face tracking as well (see my avatar!).

At this stage the codebase isn’t really good enough to share publicly here, but if anyone wants it I can send it to them privately.

I’ll also look into implementing any feature requests. I’m sending raw data to Max, as it’s better to scale the data only once within Max rather than scale it once on the SDK side and then again within Max, which would introduce some precision loss.
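The scale-once point can be pictured with a simple linear mapping, analogous to Max's [scale] object (the ranges below are illustrative, not anything the app prescribes):

```python
def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Linear range mapping, like Max's [scale] object."""
    return out_lo + (value - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

# e.g. mapping a raw hand X in [-1, 1] metres straight onto a
# 0-127 controller range in one step inside the patch:
#   scale(hand_x, -1.0, 1.0, 0.0, 127.0)
```

Doing this mapping once in the patch keeps full float precision up to the last step; scaling on the SDK side and then again in Max would round twice.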

This is really all a side project to add Windows support for a Max application I’m going to release soon for Kinect, 2 WiiMotes, OpenGL/Jitter, and surround sound. If anyone has interest in that main project specifically, I’m starting a thread to try to gauge interest, so look for it.

I’m on Max 6 // Win 7 64-bit SP1 with Kinect SDK 1.5 and an Xbox Kinect model camera. Drivers and camera are working fine with other apps.

I just downloaded your files and gave both .exe’s a try, and absolutely nothing is coming out or in…
Even the LEDs on the camera don’t change, and the 3D sensor doesn’t turn on… so it seems the software isn’t initializing the Kinect…

Is your piece of software only for the new Kinect for Windows camera, or should it also work with the Xbox model?

That code was designed for the beta 2 version of the SDK, as mentioned above. I’m working on a version for the 1.5 SDK that is now functional, but I’d like to add a little more before releasing it. If there’s any particular functionality you’re hoping for, now is a good time to mention it!

I should have it out by mid July.

Edit: Whoops, almost forgot. It turns out the Candescent NUI is in C# too, so I’m going to look at merging some of that codebase to get finger tracking information going into Max as well. With any luck it’ll all be working together within a few weeks, but this fork of the project will probably only support the Kinect for Windows with its Near Mode (see the Candescent FAQ).

Hey Kcoul,
I’m looking to integrate Kinect with my MaxMSP/Jitter project this year during my first Semester of my final year at University.
I’m thankful that you’ve been working hard on the project and I can’t wait to see how it looks.
The main thing I’m looking for is the ability to control various parameters with the sensor, e.g. pitch, volume, speed, etc.
An easy way of linking objects to the Kinect Sensor would also be a nice touch.

How does it handle multiple people? Say two people are on screen at once: is it possible to simultaneously control four or more objects in unison (depending, of course, on the people being in sync)?

I can’t wait to see the project and give it a spin for my presentation.
Feel free to message me directly if you want to discuss anything; it’d be good to talk.

Sorry for a late reply as I was quite busy over the summer. Are you starting the semester this week, or did you start already?

The code I am working on can easily be modified to handle two people, so I’ll take a look at doing that. I lost my code when an SSD I had everything on died, so I will get it up and running again.

The way it works is all the joint data is being piped into Max/MSP and you can do whatever you want with it. I’m looking to add a few additional messages, such as a boolean for which hand is out further.

I’m not sure how much work I want to do on the Max/MSP side, aside from perhaps one patch as an example: the idea is to abstract it as much as possible so I don’t give the impression that the patch is only useful for a specific range of things. Hopefully I’ll get my code back to where it was before the SSD died by the end of this week or next, then if I can clean it up a bit I’ll be happy to post a first version, so check back soon.

Sorry for the long wait, here is a fairly stable version that works with the latest 1.6 SDK. As you can see I am really only interested in upper torso data.

The wait is for a good reason though, I am now working on a centralized gestural dev environment called "GestureLab" that will map input from WiiMotes, Kinect, PS Move, and LeapMotion. It looks like the first version will probably be in C# for Windows. I’ll try and post an update, and check this thread to support this code if anyone has setup questions.