Rolling Rotations for Recognizing Human Actions from 3D Skeletal Data

Abstract: Recently, skeleton-based human action recognition has been receiving significant attention from various research communities due to the availability of depth sensors and real-time depth-based 3D skeleton estimation algorithms. In this work, we use rolling maps for recognizing human actions from 3D skeletal data. The rolling map is a well-defined mathematical concept that has not been explored much by the vision community. First, we represent each skeleton using the relative 3D rotations between various body parts. Since 3D rotations are members of the special orthogonal group SO(3), our skeletal representation becomes a point in the Lie group SO(3) × . . . × SO(3), which is also a Riemannian manifold. Then, using this representation, we model human actions as curves in this Lie group. Since classification of curves in this non-Euclidean space is a difficult task, we unwrap the action curves onto the Lie algebra by combining the logarithm map with rolling maps, and perform classification in the Lie algebra. Experimental results on three action datasets show that the proposed approach performs as well as or better than state-of-the-art approaches.
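The logarithm map mentioned above takes a rotation matrix in SO(3) to an axis-angle vector in the Lie algebra so(3), identified with R^3. A minimal sketch of the standard closed-form expression (not taken from the paper, and assuming the rotation angle is strictly below pi):

```python
import numpy as np

def log_so3(R):
    """Logarithm map on SO(3): rotation matrix -> axis-angle vector in so(3) ~ R^3.

    Sketch of the standard formula; assumes R is a proper rotation whose
    rotation angle is strictly less than pi.
    """
    # Rotation angle recovered from the trace, clipped for numerical safety.
    cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    theta = np.arccos(cos_theta)
    if np.isclose(theta, 0.0):
        return np.zeros(3)  # log of the identity is the zero vector
    # Rotation axis extracted from the skew-symmetric part of R.
    return (theta / (2.0 * np.sin(theta))) * np.array(
        [R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]]
    )
```

Applied at a single point, this map flattens a neighbourhood of that point onto the tangent space; the paper's contribution is to combine it with rolling so that the whole action curve is unwrapped with less distortion.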

Contributions

We combine the logarithm and rolling maps to flatten the special orthogonal group SO(3) for recognizing human actions from 3D skeletal data. To the best of our knowledge, rolling maps have not previously been used in the context of human action recognition.

Most existing works on rolling maps use a geodesic curve as the rolling curve. In contrast, we propose to use mean action curves, which are non-geodesic, as rolling curves.

Existing literature does not provide closed-form expressions for the rolling map in the case of a non-geodesic rolling curve. In this work, we show how to compute a piecewise-smooth rolling map corresponding to a given (discrete) non-geodesic rolling curve in SO(3).
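To make the idea of unwrapping a discrete curve concrete, here is one simple piecewise discretisation of rolling without slipping: each step appends the log of the relative rotation between consecutive samples, so the length of every segment of the original curve is preserved in the flattened curve. This is an illustrative sketch, not the paper's exact construction:

```python
import numpy as np

def _log_so3(R):
    # Axis-angle vector of a rotation matrix (rotation angle assumed < pi).
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if np.isclose(theta, 0.0):
        return np.zeros(3)
    return (theta / (2.0 * np.sin(theta))) * np.array(
        [R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]]
    )

def unwrap_curve(Rs):
    """Unwrap a discrete curve R_0, ..., R_n in SO(3) onto R^3 ~ so(3).

    Piecewise sketch of rolling without slipping: each relative rotation
    between consecutive samples contributes one straight segment in the
    Lie algebra, so segment lengths are preserved.
    """
    v = np.zeros(3)
    flat = [v.copy()]
    for R_prev, R_next in zip(Rs, Rs[1:]):
        v = v + _log_so3(R_prev.T @ R_next)  # relative rotation, then log
        flat.append(v.copy())
    return np.array(flat)
```

For a curve of rotations about a fixed axis this reduces to accumulating the rotation angles, which matches the intuition of a sphere rolling along a straight line.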

We introduce a scale-invariant skeletal representation by using only 3D rotations (instead of full rigid body transformations) to describe the relative geometry between various body parts. Using only the rotations reduces the feature dimensionality by half compared to our earlier SE(3)-based representation.
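The scale invariance of a rotation-only representation can be illustrated with a small sketch: the relative rotation aligning one body-part direction with another depends only on the bone directions, so limb lengths (and hence subject scale) drop out. The function and bone names below are illustrative, not the paper's notation, and the formula assumes the two directions are not exactly opposite:

```python
import numpy as np

def relative_rotation(bone_a, bone_b):
    """Rotation in SO(3) taking the direction of bone_a to that of bone_b.

    Scale-invariant by construction: both bones are normalised, so only
    their directions matter. Assumes the directions are not antipodal.
    """
    a = bone_a / np.linalg.norm(bone_a)
    b = bone_b / np.linalg.norm(bone_b)
    v = np.cross(a, b)   # rotation axis (unnormalised)
    c = np.dot(a, b)     # cosine of the rotation angle
    K = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])
    # Rodrigues-style closed form for the rotation aligning a with b.
    return np.eye(3) + K + K @ K / (1.0 + c)
```

Because only one rotation (rather than a full rigid-body transformation in SE(3)) is stored per body-part pair, the feature dimensionality is halved, consistent with the contribution above.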

We show that the proposed scale-invariant rotation-based representation performs on par with our earlier full rigid-body-transformation-based representation by evaluating it on three action datasets: the Florence3D-Action dataset, the MSR-Action Pairs dataset, and the G3D-Gaming dataset.

Experimental results

Comparison of recognition accuracy between applying the logarithm map at a single point and unwrapping the action curves while rolling.