Hey everybody,
as the topic’s title suggests, I’m curious to learn how people are tracking their skeletons these days.

Let me go first: most of the time, when I need skeleton tracking, I’m still using the old NiTE library or the Kinect SDK. I imagine this is still very common practice.
Sometimes, when the scenario is easy or constrained enough, I also fall back on old-school, pre-ML skeletonization techniques.
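For anyone curious what I mean by the pre-ML route, here’s a minimal sketch of one classic technique: Zhang-Suen morphological thinning, which reduces a binary silhouette (e.g. from background subtraction or a depth threshold) to a 1-pixel-wide skeleton. This is a generic illustration, not code from any of the libraries mentioned; the input format (nested lists of 0/1) is just an assumption to keep it dependency-free.

```python
def zhang_suen_thin(img):
    """Thin a binary image (nested lists of 0/1, with a zero border)
    to a 1-pixel-wide skeleton via the Zhang-Suen algorithm."""
    h, w = len(img), len(img[0])
    img = [row[:] for row in img]  # work on a copy

    def neighbours(y, x):
        # p2..p9, clockwise starting from the pixel directly above
        return [img[y-1][x], img[y-1][x+1], img[y][x+1], img[y+1][x+1],
                img[y+1][x], img[y+1][x-1], img[y][x-1], img[y-1][x-1]]

    def transitions(n):
        # count 0 -> 1 transitions in the circular sequence p2..p9,p2
        return sum((a, b) == (0, 1) for a, b in zip(n, n[1:] + n[:1]))

    changed = True
    while changed:
        changed = False
        for step in (0, 1):  # the two alternating sub-iterations
            to_delete = []
            for y in range(1, h - 1):
                for x in range(1, w - 1):
                    if img[y][x] != 1:
                        continue
                    n = neighbours(y, x)
                    p2, _, p4, _, p6, _, p8, _ = n
                    # keep endpoints and interior pixels
                    if not (2 <= sum(n) <= 6 and transitions(n) == 1):
                        continue
                    if step == 0 and p2 * p4 * p6 == 0 and p4 * p6 * p8 == 0:
                        to_delete.append((y, x))
                    elif step == 1 and p2 * p4 * p8 == 0 and p2 * p6 * p8 == 0:
                        to_delete.append((y, x))
            for y, x in to_delete:
                img[y][x] = 0
                changed = True
    return img


# usage: a thick horizontal blob collapses to a thin medial line
blob = [[0] * 12] + [[0] + [1] * 10 + [0] for _ in range(5)] + [[0] * 12]
skeleton = zhang_suen_thin(blob)
```

Once you have the medial line, limbs and joints can be guessed from branch/end points of the skeleton graph; it’s fragile compared to the ML trackers, but it can be plenty for silhouette-driven installations.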

For most of the real-world situations I deal with, I feel OpenPose is still not the most practical solution, and again, I suspect a lot of people feel the same way.
I find it odd that, in the post-Kinect world, plenty of new depth cameras have hit the market that let us reuse most of our depth-sensing code without rewriting a single line, yet there’s a thick silence when it comes to skeleton tracking. There’s some talk about hand tracking, but almost nothing about robust full-figure tracking.

So here I am: anybody working on this? Any papers or libraries I should check out?
Any hackish NiTE workaround?

I use a Kinect v1 with the MS SDK. On Linux I had better results with OpenNI2 (via the ofxNI2 addon): it gives more robust results than OpenNI and is way easier to use.
It’s also possible to use OpenNI2 on Windows if you install the MS SDK (Kinect v1) driver. I haven’t tried that yet, since the MS SDK alone is enough for me.
I’ll try the new tools mentioned above.

I built this code in 2006, and I’m still using it in 2019.
It also has a very simple skeleton tracker that suits my needs just fine; I’ve used it to make projects such as 747.3 (2006), DRACO.WOLFANDDOTCOM.INFO (2015), etc.