We saw previously how to stop an object at the edge of a window during a touch manipulation. You likely also want the object to stop when it hits the edge of its container while moving as a result of inertia (i.e. after you've lifted your finger from the screen).

In the following code fragment, we've enabled positional inertia. We then check whether the object has moved past the container boundary. If it has, and it's moving as a result of inertia, we stop the inertial event by calling Complete and move the object back within the container.
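A sketch of such a handler is shown below. The element and container names (canvMain) are placeholders, and this assumes the element is positioned via Canvas coordinates; your layout may differ.

```csharp
// Hypothetical ManipulationDelta handler; assumes a Canvas named
// "canvMain" is the manipulation container.
private void Image_ManipulationDelta(object sender, ManipulationDeltaEventArgs e)
{
    FrameworkElement element = (FrameworkElement)e.Source;
    Rect containerBounds = new Rect(canvMain.RenderSize);

    // Move the element by the reported translation delta
    double newLeft = Canvas.GetLeft(element) + e.DeltaManipulation.Translation.X;
    double newTop = Canvas.GetTop(element) + e.DeltaManipulation.Translation.Y;
    Canvas.SetLeft(element, newLeft);
    Canvas.SetTop(element, newTop);

    // Where is the element now, relative to the container?
    Rect elementBounds = new Rect(newLeft, newTop, element.ActualWidth, element.ActualHeight);

    // If inertia carried the element past the container edge,
    // stop the inertial event and push the element back inside.
    if (e.IsInertial && !containerBounds.Contains(elementBounds))
    {
        e.Complete();   // stop the inertia

        Canvas.SetLeft(element, Math.Max(0.0, Math.Min(newLeft, containerBounds.Width - element.ActualWidth)));
        Canvas.SetTop(element, Math.Max(0.0, Math.Min(newTop, containerBounds.Height - element.ActualHeight)));
    }
}
```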

You can set the TranslationBehavior.DesiredDeceleration in the ManipulationInertiaStarting event to allow inertia when translating using touch manipulation. This allows an element to continue moving a little bit after you lift your finger off the element while doing translation manipulation.
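A minimal sketch of such a handler, assuming a deceleration value of 10 in/sec² (an illustrative choice, not a recommendation):

```csharp
// Fires once the user lifts their finger; specifying a deceleration
// here enables translation inertia.
private void Image_ManipulationInertiaStarting(object sender, ManipulationInertiaStartingEventArgs e)
{
    // DesiredDeceleration is expressed in device-independent units per ms^2.
    // 10 in/sec^2 = (10 * 96 DIPs) / (1000 ms)^2
    e.TranslationBehavior.DesiredDeceleration = 10.0 * 96.0 / (1000.0 * 1000.0);
}
```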

You can also enable inertia for expansion (i.e. scaling) during touch manipulation. An element will then continue expanding or contracting when you lift your fingers from the screen while doing expansion using touch. You do this by setting the ExpansionBehavior.DesiredDeceleration property.
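A sketch of enabling expansion inertia in the same event, with an illustrative deceleration value:

```csharp
private void Image_ManipulationInertiaStarting(object sender, ManipulationInertiaStartingEventArgs e)
{
    // Allow scaling (expansion) to continue briefly after fingers lift.
    // Units are DIPs per ms^2; 0.1 in/sec^2 is shown here as an example value.
    e.ExpansionBehavior.DesiredDeceleration = 0.1 * 96.0 / (1000.0 * 1000.0);
}
```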

When handling a ManipulationDelta event during touch manipulation, you often care about the ManipulationDelta.Scale property, which indicates the updated scale of an element, relative to its previous size (e.g. 0.5 = 1/2 size).

You can also access a ManipulationDelta.Expansion property, which tells you the amount, in device-independent units (1/96th of an inch each), by which the element's size is changing, relative to its previous size.

The example below dumps out both scale and expansion values as we scale with touch.
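A sketch of such a handler (the Trace output destination and element wiring are assumptions):

```csharp
private void Image_ManipulationDelta(object sender, ManipulationDeltaEventArgs e)
{
    Vector scale = e.DeltaManipulation.Scale;         // relative scale factor (e.g. 0.5 = half size)
    Vector expansion = e.DeltaManipulation.Expansion; // change in device-independent units

    Trace.WriteLine(string.Format("Scale: {0}, Expansion: {1}", scale, expansion));

    // ...apply the scale to the element's transform here...
}
```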

When handling the raw touch events, if you touch a user interface element and then lift your finger off the element, you’ll see the events:

TouchDown

TouchUp

These two events do not capture, however, cases when you slide your finger onto or off of the element. For this situation, you can handle the TouchEnter and TouchLeave events, which fire when a touch contact enters or leaves the boundaries of the element.
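One way to observe the ordering of these events is to wire up simple logging handlers. The element name (imgDog) is a placeholder:

```csharp
// Sketch: log the raw touch events to see their ordering.
public MainWindow()
{
    InitializeComponent();

    imgDog.TouchDown += (s, e) => Trace.WriteLine("TouchDown");
    imgDog.TouchUp += (s, e) => Trace.WriteLine("TouchUp");
    imgDog.TouchEnter += (s, e) => Trace.WriteLine("TouchEnter");
    imgDog.TouchLeave += (s, e) => Trace.WriteLine("TouchLeave");
}
```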

If you touch an element and then lift your finger off of it, you'll see:

TouchEnter

TouchDown

TouchUp

TouchLeave

If you slide your finger onto the element and then lift your finger off the element, you'll see:

TouchEnter

TouchUp

TouchLeave

In the handlers for the various raw touch events, you can get information about the size of the actual touch contact (the area where your finger is touching the screen). You get the size from the TouchPoint.Bounds property, which contains the touch position and its size.

Note that the touch contact will not have a non-zero size on every device. The size may be reported as zero if the device does not support reporting contact size.

Here’s an example of drawing an ellipse at a size based on the size of the touch contact.
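A sketch of the idea is below; the Canvas name is a placeholder, and the minimum-size fallback handles devices that report a zero-size contact:

```csharp
// On TouchDown, draw an ellipse sized to the touch contact area.
private void canvMain_TouchDown(object sender, TouchEventArgs e)
{
    TouchPoint tp = e.GetTouchPoint(canvMain);
    Rect bounds = tp.Bounds;   // position and size of the contact

    Ellipse el = new Ellipse
    {
        Width = Math.Max(bounds.Width, 5.0),   // fall back to a minimum size
        Height = Math.Max(bounds.Height, 5.0),
        Fill = Brushes.Blue
    };

    Canvas.SetLeft(el, bounds.X);
    Canvas.SetTop(el, bounds.Y);
    canvMain.Children.Add(el);
}
```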

We've talked about how to calculate a value for inertial deceleration. Once you know what deceleration value you want, you can implement inertia during touch manipulation by handling the ManipulationInertiaStarting event.

The ManipulationInertiaStarting event fires after the user lifts their finger off the screen. In the event handler, if you specify a deceleration value, the inertia will be modeled and the object will continue to move after the user lifts their finger.

The example below specifies a deceleration value of 40 in/sec^2. It also dumps out the initial velocity of the object being translated, for informational purposes.
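A sketch of the handler, converting 40 in/sec² into the units that DesiredDeceleration expects (DIPs per ms²):

```csharp
private void Image_ManipulationInertiaStarting(object sender, ManipulationInertiaStartingEventArgs e)
{
    // Initial velocity at the moment the finger lifts (DIPs/ms)
    Vector initialVelocity = e.InitialVelocities.LinearVelocity;
    Trace.WriteLine(string.Format("Initial velocity: {0}", initialVelocity));

    // 40 in/sec^2, converted: (40 * 96 DIPs) / (1000 ms)^2
    e.TranslationBehavior.DesiredDeceleration = 40.0 * 96.0 / (1000.0 * 1000.0);
}
```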

Note that we are setting up inertia for translation only. We could also specify different deceleration values for rotation and scaling to get touch-based inertia while rotating or scaling.

In addition to using the touch manipulation events to handle translation of an element, we can use the same mechanisms to allow a user to rotate an element using touch.

We can do both translation and rotation in the same event handler. The ManipulationDelta object gives us both a translation (vector) and a rotation (angle). Both are automatically incorporated into the ManipulationDelta object, based on how the user is touching the screen. The user can translate by sliding one finger around and can rotate by placing two fingers on the object and rotating it.

We transform the element by calling two different methods of the underlying Matrix, one for translation and one for rotation.
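A sketch of the combined handler, assuming the element's RenderTransform has been set to a MatrixTransform:

```csharp
private void Image_ManipulationDelta(object sender, ManipulationDeltaEventArgs e)
{
    UIElement element = (UIElement)e.Source;
    MatrixTransform xform = (MatrixTransform)element.RenderTransform;
    Matrix matrix = xform.Matrix;   // Matrix is a struct, so this is a copy

    // Rotate about the center of the manipulation
    matrix.RotateAt(e.DeltaManipulation.Rotation,
                    e.ManipulationOrigin.X, e.ManipulationOrigin.Y);

    // Translate by the reported delta vector
    matrix.Translate(e.DeltaManipulation.Translation.X,
                     e.DeltaManipulation.Translation.Y);

    xform.Matrix = matrix;          // write the modified copy back
}
```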

Here’s the XAML, containing a single Image control that we’ll interact with.
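A sketch of what that XAML might look like; the image name, source, and handler names are placeholders:

```xml
<Grid>
    <Image Name="imgDog" Source="Dog.jpg" Width="300"
           IsManipulationEnabled="True"
           ManipulationDelta="Image_ManipulationDelta"
           ManipulationInertiaStarting="Image_ManipulationInertiaStarting">
        <Image.RenderTransform>
            <MatrixTransform/>
        </Image.RenderTransform>
    </Image>
</Grid>
```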