ManoMotion’s second-generation SDK offers more features and improved tracking than the original release. The first iteration of ManoMotion’s technology can interpret up to 2 million gestures, but it is limited to tracking a single hand at a time. The ManoMotion SDK 2.0 supports multi-hand tracking, and it also offers skeletal structure understanding and improved depth tracking.

The first version of ManoMotion’s hand tracking SDK could interpret a wide range of hand gestures, but it wasn’t well suited for interaction with virtual objects because it didn’t support layered objects in 3D space. ManoMotion’s new SDK understands where virtual objects are situated in a 3D environment, which enables occlusion of objects behind virtual or real-world features. It also helps improve virtual object manipulation and interaction. ManoMotion said that its technology supports a variety of gesture interactions, including swiping, clicking, tapping, and grabbing.

ManoMotion’s new SDK integrates with native iOS and Android applications. It also supports the ARKit and ARCore augmented reality APIs from Apple and Google, and includes a Unity plugin for both smartphone platforms.

ManoMotion offers its SDK under a freemium model, which allows anyone to use the SDK to develop interactions for their applications. The company doesn’t ask for licensing fees until a developer releases a commercial product.

ManoMotion’s SDK 2.0 isn’t yet generally available, but the company is accepting applications from developers who wish to gain priority access to ManoMotion’s new feature set.