Top: We create avatars reproducing the same soft tissue motions as those seen on real humans. We show how avatars generalize to external forces (applied with the red sphere) and how they deform if gravity is 7 times higher. Bottom: The red avatar performs a motion. A green, heavier avatar tries to reproduce the same motion, struggling to raise the leg and losing its balance afterwards.

We humans live in a real world governed by the laws of physics: we apply and experience forces, such as gravity, in our daily interactions with the world. In this project we allow virtual humans to interact with a virtual world subject to the laws of physics. How would one's body shape deform in a collision with an object? What would our walking pattern look like if we weighed a few kilos more?

The goal in [ ] was, starting from observations of the surface of a human performing highly dynamic motions, to create a virtual avatar with physical properties that not only reproduces the observed motions, as well as new unseen ones, with high visual fidelity, but also generalizes to external forces such as increased gravity or collisions with objects. The problem is hard: which metric best matches the simulations to the real data? How do we overcome the lack of analytical gradients in the optimization problem? Which assumptions reduce the complexity of the problem without compromising the quality of the results? With a new layered representation of the human body, hypotheses on the continuity of the physical parameters over the body, and a new optimization scheme, we solve this complex problem.
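To make the optimization setting concrete, here is a minimal, self-contained sketch of derivative-free parameter fitting. The `simulate` function is a toy damped oscillator standing in for the soft-tissue simulator, and the metric and optimizer (mean squared tracking error, Nelder-Mead) are illustrative assumptions, not the scheme used in the paper.

```python
# Toy sketch: fit physical parameters to observed motion with a
# derivative-free optimizer, since the simulator exposes no analytical
# gradients. The "simulator" here is a damped oscillator standing in
# for the soft-tissue layer.
import numpy as np
from scipy.optimize import minimize

def simulate(stiffness, damping, steps=200, dt=0.01):
    """Toy black-box simulator: one soft-tissue point as a damped spring."""
    x, v = 1.0, 0.0                       # initial displacement / velocity
    traj = np.empty(steps)
    for t in range(steps):
        a = -stiffness * x - damping * v  # spring + damping force (unit mass)
        v += a * dt
        x += v * dt
        traj[t] = x
    return traj

# Synthetic "4D observations" generated with ground-truth parameters.
observed = simulate(stiffness=40.0, damping=1.5)

def tracking_error(log_params):
    k, d = np.exp(log_params)             # log-space keeps parameters positive
    return np.mean((simulate(k, d) - observed) ** 2)

# Nelder-Mead needs no gradients, matching the black-box setting.
result = minimize(tracking_error, x0=np.log([10.0, 0.5]), method="Nelder-Mead")
print("recovered stiffness, damping:", np.exp(result.x))
```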

The research question in [ ] was how to retarget an input motion onto a new body with different physical properties, e.g. taller, heavier, or thinner, so that the new morphology is taken into account. To obtain visually plausible simulations, we propose a simplified representation of the human body and animate it using physically based retargeting, built on a novel spacetime optimization approach that learns and robustly adapts physical controllers to new bodies and constraints. The method automatically adapts the motion of a subject to the novel target body shape, respecting its physical properties and producing a different movement for each morphology. This makes it easy to create a varied set of motions from a single mocap sequence by simply varying the characters.
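As a rough illustration of the spacetime idea, one can optimize an entire force trajectory at once so that a simple body tracks a reference motion under its own dynamics; the point-mass model, mass value, and L-BFGS-B optimizer in the sketch below are toy assumptions, not the paper's controller parameterization.

```python
# Toy spacetime-style trajectory optimization: choose the whole force
# trajectory at once so a point mass tracks a reference motion.
import numpy as np
from scipy.optimize import minimize

dt, T = 0.05, 40
t = np.arange(T) * dt
reference = np.sin(2 * np.pi * t)        # motion captured from the source body
mass = 3.0                               # retarget to a heavier "body"
gravity = -9.81

def rollout(forces):
    x, v = 0.0, 0.0
    traj = np.empty(T)
    for k in range(T):
        a = forces[k] / mass + gravity   # heavier bodies need larger forces
        v += a * dt
        x += v * dt
        traj[k] = x
    return traj

def objective(forces):
    # Track the reference while penalizing effort: the optimum differs per
    # body mass, so each morphology yields a different, physical motion.
    return np.sum((rollout(forces) - reference) ** 2) + 1e-5 * np.sum(forces ** 2)

result = minimize(objective, np.zeros(T), method="L-BFGS-B")
print("tracking RMSE:", np.sqrt(np.mean((rollout(result.x) - reference) ** 2)))
```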

Motion capture is often retargeted to new, and sometimes drastically different, characters. When the characters take on realistic human shapes, however, we become more sensitive to the motion looking right. This means adapting it to be consistent with the physical constraints imposed by different body shapes. We show how to take realistic 3D human shapes, approximate them using a simplified representation, and animate them so that they move realistically using physically-based retargeting. We develop a novel spacetime optimization approach that learns and robustly adapts physical controllers to new bodies and constraints. The approach automatically adapts the motion of the mocap subject to the body shape of a target subject. This motion respects the physical properties of the new body and every body shape results in a different and appropriate movement. This makes it easy to create a varied set of motions from a single mocap sequence by simply varying the characters. In an interactive environment, successful retargeting requires adapting the motion to unexpected external forces. We achieve robustness to such forces using a novel LQR-tree formulation. We show that the simulated motions look appropriate to each character’s anatomy and their actions are robust to perturbations.
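To illustrate the control component, below is a minimal sketch of a single discrete-time LQR gain computed by backward Riccati recursion for a linearized balance model; an LQR-tree composes many such local controllers around a library of trajectories. The dynamics, cost weights, and push magnitude are hypothetical toy values.

```python
# Toy LQR sketch: one feedback gain for a linearized balance model,
# computed by backward Riccati recursion, then tested against a push.
import numpy as np

dt = 0.01
# Linearized inverted-pendulum-like dynamics: state = [angle, angular velocity]
A = np.array([[1.0, dt],
              [9.81 * dt, 1.0]])   # gravity destabilizes the upright pose
B = np.array([[0.0],
              [dt]])               # control = ankle torque (unit inertia)

Q = np.diag([10.0, 1.0])           # penalize deviation from upright
R = np.array([[0.1]])              # penalize control effort

# Backward Riccati recursion to a (near) steady-state gain.
P = Q.copy()
for _ in range(500):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)

# Closed-loop response to a perturbation (an external push).
x = np.array([0.2, 0.0])           # 0.2 rad tilt right after the push
for _ in range(300):
    u = -K @ x                     # LQR feedback rejects the perturbation
    x = A @ x + B @ u
print("state after recovery:", x)  # should be near [0, 0]
```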

Data-driven models of human poses and soft-tissue deformations can produce very realistic results, but they only model the visible surface of the human body and cannot create skin deformation due to interactions with the environment. Physical simulations can generalize to external forces, but their parameters are difficult to control. In this paper, we present a layered volumetric human body model learned from data. Our model is composed of a data-driven inner layer and a physics-based external layer. The inner layer is driven with a volumetric statistical body model (VSMPL). The soft tissue layer consists of a tetrahedral mesh that is driven using the finite element method (FEM). Model parameters, namely the segmentation of the body into layers and the soft tissue elasticity, are learned directly from 4D registrations of humans exhibiting soft tissue deformations. The learned two-layer model is a realistic full-body avatar that generalizes to novel motions and external forces. Experiments show that the resulting avatars produce realistic results on held-out sequences and react to external forces. Moreover, the model supports the retargeting of physical properties from one avatar to another when they share the same topology.
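As a concrete illustration of the FEM layer, here is a minimal sketch of the elastic forces for a single tetrahedral element, using the standard deformation-gradient formulation with a St. Venant-Kirchhoff material as a stand-in; the model's actual material law and learned per-region parameters may differ.

```python
# Toy sketch of the physics-based outer layer: elastic forces for one
# tetrahedral FEM element. A full avatar assembles thousands of such
# elements over the learned soft-tissue layer.
import numpy as np

def tet_forces(rest, deformed, mu=1e4, lam=1e4):
    """Per-vertex elastic forces for one tetrahedron.
    rest, deformed: (4, 3) vertex positions; mu, lam: Lame parameters
    (these would be the learned, per-region material values)."""
    Dm = (rest[1:] - rest[0]).T          # rest-shape matrix (3x3)
    Ds = (deformed[1:] - deformed[0]).T  # deformed-shape matrix (3x3)
    Bm = np.linalg.inv(Dm)
    W = abs(np.linalg.det(Dm)) / 6.0     # rest volume
    F = Ds @ Bm                          # deformation gradient
    E = 0.5 * (F.T @ F - np.eye(3))      # Green strain
    P = F @ (2 * mu * E + lam * np.trace(E) * np.eye(3))  # 1st Piola stress
    H = -W * P @ Bm.T                    # columns: forces on vertices 1..3
    f = np.zeros((4, 3))
    f[1:] = H.T
    f[0] = -H.sum(axis=1)                # momentum conservation
    return f

# A unit tet compressed along z: forces push the squashed vertex back out.
rest = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
deformed = rest.copy()
deformed[3, 2] = 0.8
print(tet_forces(rest, deformed))
```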

Our goal is to understand the principles of Perception, Action and Learning in autonomous systems that successfully interact with complex environments, and to use this understanding to design future systems.