First of all, I am interested in this project, as I've worked on something similar in the past!

In the video, it is shown how new edit parts can be created by means of sketching. Do you intend to support issuing requests on existing edit parts, too? This is more like gesturing, e.g. a zigzag or cross on an edit part (or rather its figure) to delete it, rather than sketching the shape of a symbol.

In my own work, gestures were matched against pre-defined polylines, and then translated into requests that were handled by the existing edit policy mechanism. The type and target of the request was determined by rules, e.g. a Z gesture inside an X edit part became a deletion request. The rules were encoded as XML in an extension.
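To make the idea concrete, here is a rough sketch of how one such rule could translate into GEF's existing request/command machinery. The Request, GroupRequest and RequestConstants types are the real GEF API; the dispatcher class and the rule lookup (which in my case came from the XML extension) are made-up names:

```java
import java.util.Collections;

import org.eclipse.gef.EditPart;
import org.eclipse.gef.RequestConstants;
import org.eclipse.gef.commands.Command;
import org.eclipse.gef.requests.GroupRequest;

// Hypothetical dispatcher: a "Z" gesture over an edit part becomes a
// delete request, handled by whatever edit policies are already installed.
public class GestureRuleDispatcher {

    /** Translates a recognized gesture into a Request and asks the target
     *  edit part for the corresponding Command. */
    public Command dispatch(String gestureName, EditPart target) {
        if ("Z".equals(gestureName)) {
            GroupRequest delete = new GroupRequest(RequestConstants.REQ_DELETE);
            delete.setEditParts(Collections.singletonList(target));
            return target.getCommand(delete);
        }
        return null; // no rule matched
    }
}
```

The nice part is that nothing downstream needs to know about gestures at all; the edit policies see an ordinary delete request.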

A nice trick was the behavior if no rules matched. First it located the edit parts underneath the start and end points and then tried to connect them using a specified (and existing) connection tool. If this failed, all the points that the gesture consisted of were used to drive the selection tool, so selecting, moving and resizing could be performed without leaving the gesture tool.
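In pseudo-Java the fallback looked roughly like this; findObjectAt is the real EditPartViewer API, while tryConnect and replayWithSelectionTool are placeholders for the tool-driving code:

```java
import org.eclipse.draw2d.geometry.Point;
import org.eclipse.gef.EditPart;
import org.eclipse.gef.EditPartViewer;

// Sketch of the behavior when no gesture rule matches.
public class GestureFallback {

    public void handleUnmatched(EditPartViewer viewer, Point start, Point end) {
        // 1. Locate the edit parts underneath the start and end points.
        EditPart source = viewer.findObjectAt(start);
        EditPart target = viewer.findObjectAt(end);

        // 2. Try to connect them with the configured connection tool.
        if (source != null && target != null && tryConnect(source, target)) {
            return;
        }

        // 3. Otherwise feed the gesture's points to the selection tool, so
        //    selecting, moving and resizing work without switching tools.
        replayWithSelectionTool(viewer, start, end);
    }

    private boolean tryConnect(EditPart source, EditPart target) {
        return false; // placeholder: would run the connection tool here
    }

    private void replayWithSelectionTool(EditPartViewer viewer,
                                         Point start, Point end) {
        // placeholder: would forward the recorded points to the SelectionTool
    }
}
```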

Yes, we're definitely planning to support gestures. Jens von Pilgrim and Kristian Duske
from GEF3D showed interest in that feature as well; I think it's essential to have it in the API.
The furthest I got so far was translating a touch, much like a click, into direct edit on an edit part.

It's interesting how you did it; I would like to see it. The current implementation matches
gestures against polylines translated into sequences of words (serialized with Properties),
so they can be compared using the Levenshtein algorithm.
The rest is pretty much the same, I think.
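Roughly, the matching idea looks like this (a simplified sketch, not the actual code; the four-direction quantization and the token names are mine):

```java
import java.util.ArrayList;
import java.util.List;

public class GestureMatcher {

    /** Quantizes each segment of a polyline (x0,y0,x1,y1,...) into one of
     *  four direction tokens: R, L, U, D (screen coordinates, y grows down). */
    public static List<String> toTokens(int[] pts) {
        List<String> tokens = new ArrayList<>();
        for (int i = 2; i < pts.length; i += 2) {
            int dx = pts[i] - pts[i - 2];
            int dy = pts[i + 1] - pts[i - 1];
            if (Math.abs(dx) >= Math.abs(dy)) {
                tokens.add(dx >= 0 ? "R" : "L");
            } else {
                tokens.add(dy >= 0 ? "D" : "U");
            }
        }
        return tokens;
    }

    /** Standard Levenshtein distance over two token sequences; the stored
     *  gesture with the smallest distance wins. */
    public static int levenshtein(List<String> a, List<String> b) {
        int[][] d = new int[a.size() + 1][b.size() + 1];
        for (int i = 0; i <= a.size(); i++) d[i][0] = i;
        for (int j = 0; j <= b.size(); j++) d[0][j] = j;
        for (int i = 1; i <= a.size(); i++) {
            for (int j = 1; j <= b.size(); j++) {
                int cost = a.get(i - 1).equals(b.get(j - 1)) ? 0 : 1;
                d[i][j] = Math.min(Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1),
                                   d[i - 1][j - 1] + cost);
            }
        }
        return d[a.size()][b.size()];
    }
}
```

Using edit distance rather than exact matching makes the recognition tolerant to noisy strokes.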

Anyway, I think it's crucial for this API to have a flexible mechanism, so that many different
algorithms can take part in the recognition, deciding what is going on (creation, gesture, connection, and so on).
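Something along these lines, perhaps; this is purely a hypothetical shape for such a plug-in point, every name here is made up:

```java
import java.awt.Point;
import java.util.List;

/** Hypothetical plug-in point: each recognizer looks at the raw input points
 *  and proposes an interpretation with a confidence score; the gesture tool
 *  then picks the highest-scoring one. */
public interface GestureRecognizer {

    /** @return an interpretation with a confidence in [0, 1],
     *  or null if this recognizer cannot make sense of the input. */
    Interpretation recognize(List<Point> points);

    /** What a recognizer thinks the user meant. */
    class Interpretation {
        public final String kind;       // e.g. "creation", "gesture", "connection"
        public final double confidence; // in [0, 1]

        public Interpretation(String kind, double confidence) {
            this.kind = kind;
            this.confidence = confidence;
        }
    }
}
```

That way Levenshtein matching, your rule-based approach, and shape recognizers could all be registered side by side.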
Would you like to contribute to the project once it's created?

Best,
Ugo

Hallvard Traetteberg wrote:
> First of all, I am interested in this project, as I've worked on
> something similar in the past!
>
> In the video, it is shown how new edit parts can be created by means of
> sketching. Do you intend to support issuing requests on existing edit
> parts, too? This is more like gesturing, e.g. a zigzag or cross on an
> edit part (or rather its figure) to delete it, rather than sketching the
> shape of a symbol.
> In my own work, gestures were matched against pre-defined polylines, and
> then translated into requests that were handled by the existing edit
> policy mechanism. The type and target of the request was determined by
> rules, e.g. a Z gesture inside an X edit part became a deletion request.
> The rules were encoded as XML in an extension.
>
> A nice trick was the behavior if no rules matched. First it located the
> edit parts underneath the start and end points and then tried to connect
> them using a specified (and existing) connection tool. If this failed,
> all the points that the gesture consisted of were used to drive the
> selection tool, so selecting, moving and resizing could be performed
> without leaving the gesture tool.