
Canonical's X Gesture Extension Being Re-Evaluated

Phoronix: Canonical's X Gesture Extension Being Re-Evaluated

Earlier this month Canonical introduced UTouch, its own multi-touch framework for Ubuntu, set to premiere with Ubuntu 10.10 "Maverick Meerkat" and joined by Canonical's own gesture/touch language. The same day it announced UTouch, which will support devices like the Apple Magic TrackPad and Dell XT2, Canonical proposed the X.Org Gesture Extension to the X.Org development community. While it's good to see Canonical making more contributions to the upstream projects it depends upon for Ubuntu Linux, the X.Org Gesture Extension is already being re-evaluated and may in fact not be needed...

The argument against putting gesture recognition inside the server, based on the observation that it would mean adding a lot of code to the server, seems like something that could also have been said about KMS. The point, though, is that if that's where the code has to live for things to actually work, then that's where it should go!
That said, I didn't really follow enough of the technical discussion to say if that's the case.


When I read that, I was thinking it probably translated to something like: "You want to add all this code into the server, and then you're probably going to completely abandon it and force us to maintain it for you, aren't you?"

I can't help but think that if Ubuntu had a history of contributing to the server, this argument might not have come up.

I'm not sure. Developers tend to be a little hostile to Canonical and their ideas, in my opinion.

However, Carsten makes a really good point:

Frankly, the problem is that input from a mouse, a tablet, or multiple touch-points is ambiguous. You may be painting in GIMP, not trying to "swipe to scroll". I can go on with examples (dragging a slider inside a scrollable area as opposed to wanting to scroll with a drag). Only the client has enough detailed knowledge of the window contents, application mode, etc., to possibly make a reliable guess as to which one to do. It's X's job to provide as much device input to the client as it needs, in a sane, digestible way, to make such a decision, but... that's [in my honest opinion] where the server's job ends.
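Carsten's point can be sketched in a few lines. This is a toy model, not a real X11 or uTouch API; the class and event names here are made up purely to illustrate why the decision has to live in the client: only the client knows its own tool mode and touch count, so only it can tell whether a drag means "paint" or "scroll".

```python
# Hypothetical sketch of client-side disambiguation. The server delivers
# raw drag events untouched; the client, knowing its own mode, decides
# what the drag means. None of these names are a real toolkit API.

from dataclasses import dataclass

@dataclass
class DragEvent:
    dx: int
    dy: int
    touches: int  # number of simultaneous touch points


class CanvasWidget:
    """A GIMP-like canvas: the same drag may paint or scroll."""

    def __init__(self, tool="brush"):
        self.tool = tool
        self.strokes = []
        self.scroll_offset = [0, 0]

    def handle_drag(self, ev: DragEvent):
        # The server cannot make this call: a two-finger drag (or the
        # pan tool) scrolls, while a one-finger drag with the brush
        # tool paints a stroke.
        if ev.touches >= 2 or self.tool == "pan":
            self.scroll_offset[0] += ev.dx
            self.scroll_offset[1] += ev.dy
            return "scroll"
        self.strokes.append((ev.dx, ev.dy))
        return "paint"


canvas = CanvasWidget(tool="brush")
print(canvas.handle_drag(DragEvent(5, 0, touches=1)))    # paint
print(canvas.handle_drag(DragEvent(0, -10, touches=2)))  # scroll
```

The design choice at stake is exactly where `handle_drag`'s branch runs: bake the recognizer into the server and every application inherits one global guess; leave it in the client and each application resolves the ambiguity against its own state.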


And can you point at all the contributions Canonical has made up until now, and all the places where developers tend to be a little hostile? Most developers are hostile towards Canonical not doing *any* work upstream, good ideas or not.

In this case they built something internally in private, did a blog spam from spaceboy about how cool it was, but when it came to asking upstream they got told exactly what they would have been told if they'd asked before implementing it and presenting it as a feature you couldn't live without. It's not a good idea to do this. Experience counts.

I've been using touchscreens for nearly 30 years and I have some opinions about them. One of the firmer opinions I hold is that it is critical that a pixel know the difference between someone touching it and any kind of mouse input. A touch should NEVER be assumed to be equivalent to a mouseclick. The only time a mouseclick should be considered the virtual equivalent of a touch is if a user has no touchscreen and wants a mouseclick to have the effect of a touch.

I am used to touches on buttons being used to navigate around the various aspects of the user interface and mouseclicks being used to edit the button's appearance or its properties.

To click on a button to do such an edit and have the click interpreted as a touch, and turned into a navigation command, is wholly unacceptable, as is a touch to navigate being interpreted instead as a command to edit the button or its properties.
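The separation the poster describes can be sketched as a widget that treats touch and mouse input as distinct event classes. This is an illustrative model with made-up names, not a real toolkit API; the only opt-in mapping between the two is an explicit user preference, matching the "only if the user wants it" condition above.

```python
# Illustrative sketch (hypothetical API): touch and mouse events are
# dispatched separately, so a touch navigates while a mouseclick edits.
# A click only acts as a touch when the user explicitly asks for that.

class Button:
    def __init__(self, label):
        self.label = label

    def on_touch(self):
        # A touch navigates; it is never silently mapped to a click.
        return f"navigate:{self.label}"

    def on_mouse_click(self, emulate_touch=False):
        # Only an explicit user preference lets a click act as a touch.
        if emulate_touch:
            return self.on_touch()
        return f"edit:{self.label}"


b = Button("settings")
print(b.on_touch())                          # navigate:settings
print(b.on_mouse_click())                    # edit:settings
print(b.on_mouse_click(emulate_touch=True))  # navigate:settings
```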