"As the Linux desktop increases in popularity, the user interface experience has become increasingly important. For example, most laptops today have multitouch capabilities that have yet to be fully exposed and exploited in the free software ecosystem. Soon we will be carrying around multitouch tablets with a traditional Linux desktop or similar foundation. In order to provide a high-quality and rich experience we must fully exploit multitouch gestures. The uTouch stack developed by Canonical aims to provide a foundation for gestures on the Linux desktop."

Throwing a few stones doesn't suddenly make them great contributors. They need to do more, and work with the upstream communities too, to make sure that any good work done survives beyond Canonical/Ubuntu.

uTouch is a hack that isn't supported by the wider community (which has been working on XInput 2/2.1/2.2), whose major developers think that uTouch is doing things in the wrong part of the stack.

This will likely remain Ubuntu-only technology while everyone else in the FOSS community goes another way.

The big question here is where gestures are interpreted, and at what level. Should the X server interpret them? The toolkit? Or the application?

The people working on XInput believe that X is the wrong place, as it wouldn't know enough about the applications; instead, X should deliver raw input up the stack, which GTK3 or the application can then interpret or ignore as needed.
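To make the layering concrete, here is a toy sketch (plain Python, not the real XInput or GTK3 API; all names are hypothetical) of the model the XInput developers favor: the display server delivers raw touch events, a toolkit-level recognizer turns them into gestures, and the application is free to act on the result or ignore it.

```python
# Hypothetical illustration of "interpret gestures above X":
# the server hands up raw touch events; gesture recognition
# lives in the toolkit layer, not in the display server.
from dataclasses import dataclass
from math import hypot

@dataclass
class TouchEvent:          # what the display server would deliver, raw
    touch_id: int
    x: float
    y: float

class PinchRecognizer:     # toolkit layer: builds gestures from raw events
    def __init__(self):
        self.points = {}
        self.start_dist = None

    def feed(self, ev):
        """Consume one raw touch event; return a gesture dict or None."""
        self.points[ev.touch_id] = (ev.x, ev.y)
        if len(self.points) != 2:
            return None
        (x1, y1), (x2, y2) = self.points.values()
        dist = hypot(x2 - x1, y2 - y1)
        if self.start_dist is None:
            self.start_dist = dist     # first two-finger frame: baseline
            return None
        return {"gesture": "pinch", "scale": dist / self.start_dist}

# Application layer: may handle the gesture, or keep using raw events.
rec = PinchRecognizer()
rec.feed(TouchEvent(0, 100, 100))
rec.feed(TouchEvent(1, 200, 100))      # baseline finger distance: 100
g = rec.feed(TouchEvent(1, 300, 100))  # fingers spread apart to 200
print(g)                               # {'gesture': 'pinch', 'scale': 2.0}
```

The point of the sketch is the division of labour: nothing in the "server" layer knows what a pinch means; only the layer that knows the application's context decides whether the gesture matters.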

Canonical/Ubuntu feel that X is the right place: for the few gestures involved, they should be consistent across the desktop, and this approach delivers results quicker and with less work in the short term (though once a right way that works well exists, it will create more work).