To scroll, just place two fingers on your trackpad instead of one. Both fingers need to be placed next to each other horizontally (not vertically; the trackpad cannot detect that). Some people get better results with their fingers spaced a little apart, while others prefer having the fingers right next to each other.

iScroll2 provides two scrolling modes: linear and circular scrolling.

For linear scrolling, move the two fingers up/down or left/right in a straight line to scroll in that direction.

Circular scrolling works in a way similar to the iPod's scroll wheel: move the two fingers in a circle to scroll up or down, depending on whether you move in a clockwise or counterclockwise direction.

Maybe we can port/adapt/get inspiration from this Macintosh driver.
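As a rough illustration, the two modes could be told apart by accumulating the turning angle of the drag path. This is a hypothetical sketch only (the point list, threshold and direction mapping are assumptions, not iScroll2's actual algorithm):

```python
import math

def classify_scroll(points):
    """Classify a two-finger drag as 'linear' or 'circular' scrolling.

    `points` is a list of (x, y) centroid positions sampled over time
    (hypothetical input; a real driver would read these from the
    trackpad/touchscreen event stream).
    """
    if len(points) < 3:
        return "linear"
    # Total turning angle along the path: near zero for a straight
    # drag, approaching 2*pi per revolution for a circle.
    total_turn = 0.0
    for (x0, y0), (x1, y1), (x2, y2) in zip(points, points[1:], points[2:]):
        a1 = math.atan2(y1 - y0, x1 - x0)
        a2 = math.atan2(y2 - y1, x2 - x1)
        # Wrap the heading change into (-pi, pi] before accumulating.
        turn = (a2 - a1 + math.pi) % (2 * math.pi) - math.pi
        total_turn += turn
    if abs(total_turn) > math.pi:  # turned more than half a circle
        # Which rotation direction maps to "up" is a UI choice.
        return "circular-up" if total_turn < 0 else "circular-down"
    return "linear"
```

The same accumulated-angle trick would also let the wheel keep scrolling across multiple revolutions, like the iPod's.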

===Improved virtual keyboard===

Hints:
* ZIP's Huffman compression applied to SMSs/mails for detecting the most used characters/words/sentences.
* HTML tag clouds, one-letter tag clouds; font size proportional to the probability of being used.

The most critical point is the initial disposition of the letters, before any letter is typed. We may also want to use a horizontal two-part keyboard (with the Neo held in both hands, like a PSP).

The [http://www.strout.net/info/ideas/hexinput.html hexinput] concept is interesting. What about hiding the less probable letters and enlarging the remaining ones during typing?
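As a sketch of the "font size proportional to probability" idea, key sizes could be derived from letter frequencies in previously typed text. Function name and pixel ranges here are made up for illustration:

```python
from collections import Counter

def key_font_sizes(corpus, min_px=10, max_px=40):
    """Scale each letter's key/font size by its frequency in `corpus`.

    A minimal sketch of the 'one-letter tag cloud' idea: sizes are
    linearly interpolated between min_px and max_px according to how
    often each letter appears in previously typed SMSs/mails.
    """
    letters = [c for c in corpus.lower() if c.isalpha()]
    counts = Counter(letters)
    if not counts:
        return {}
    lo, hi = min(counts.values()), max(counts.values())
    span = (hi - lo) or 1  # avoid division by zero for uniform corpora
    return {c: min_px + (n - lo) * (max_px - min_px) / span
            for c, n in counts.items()}
```

A real keyboard would want smoothing for rare letters so they never shrink to an untappable size.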

A plain old dialpad like any other phone would be a nice fallback.

===Towards OpenGL compositing===

There are [http://www.hbmobile.org/wiki/index.php?title=GUI_Frameworks lots of possible GUI frameworks] with various software architectures that could be used for Openmoko.

GTA01 hardware uses GTK+/matchbox without hardware acceleration, and it's not enough: this is the first time a mobile Linux device has had such a high-DPI display. OpenGL ES compositing seems to have a bright future on embedded devices, because compositing finally makes natural zooming interfaces a reality.

Considering recent changes in desktop applications, OpenGL has a definite future. For instance, Exposé (be it Apple's or Beryl's) is a very interesting and usable feature. Using compositing allows the physics metaphor: '''the human brain doesn't like "gaps"/jumps (for instance while scrolling text); it needs continuity''', which OpenGL can provide. When you look at Apple's iPhone prototype, it's not just eye candy; it's maybe the most natural/human way of navigating, because it's sufficiently realistic for the brain to forget the non-physical nature of what's inside.

So, OpenGL hardware will be needed in the more or less distant future for 100% fluid operation. Benchmarking will be needed to compare the different alternatives cited below.

How cool will solid-based (polygons, as seen in Beryl) interfaces be? :) Real ZUIs...

====The Enlightenment Foundation Libraries====

EFL's Evas is a powerful and power-sparing canvas drawing library. It can be OpenGL accelerated. Python/Ruby bindings are available in the "proto" e17 CVS folder.

''Moved [[E17|here]]''

====Clutter Toolkit====

Clutter, an [http://o-hand.com/ OpenedHand] project, is an open source software library for creating fast, visually rich graphical user interfaces. The most obvious example of potential usage is in media center type applications.

http://clutter-project.org/

Clutter uses OpenGL (and optionally '''OpenGL ES''') for rendering, but with an API which hides the underlying GL complexity from the developer. The Clutter API is intended to be easy to use, efficient and flexible.

From the [http://en.wikipedia.org/wiki/OpenGL_ES wikipedia article], OpenGL ES (OpenGL for Embedded Systems) is a subset of the OpenGL 3D graphics API designed for embedded devices such as mobile phones, PDAs, and video game consoles.

GTK off-screen rendering is supposed to be on its way; once it is here, it will be possible to use GTK apps directly within OpenGL apps as textures, which would lead to the possibility of creating a full OpenGL "application manager" (as well as a media consumption app) with ZUI features.

Be sure to check out this demo (a scrolling list with inertia scrolling):

http://files.mdk.am/demos/graff-demo-3.avi

Of course it will remind you of Apple iPhone's UI. But this one already runs in software mode on the Nokia N770 and N800. The most notable part of Graff seems to be the inertia and physics integration in general.
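The inertia effect itself is cheap to compute; a minimal sketch of iPhone/Graff-style kinetic scrolling after finger release (the constants are illustrative guesses, not Graff's actual values):

```python
def kinetic_scroll(position, velocity, friction=0.95, dt=1.0, min_speed=0.5):
    """Simulate inertia scrolling after the finger is released.

    Starting from the release `velocity` (pixels per frame), the list
    keeps moving and decelerates by a constant `friction` factor each
    frame until it is slow enough to stop.  Returns the positions over
    time, which a toolkit would render one per frame.
    """
    positions = [position]
    while abs(velocity) > min_speed:
        position += velocity * dt
        velocity *= friction  # exponential decay = smooth, non-linear stop
        positions.append(position)
    return positions
```

The non-linear (exponentially decaying) deceleration is exactly the "continuity" the brain expects from a physical wheel or list.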

====Pigment API====

Fluendo's (the GStreamer guys) ''[https://core.fluendo.com/pigment/trac Pigment] is a Python library designed to easily build user interfaces with embedded multimedia. Its design allows it to be used on several platforms, thanks to a plugin system for choosing the underlying graphical API. Pigment is the rendering engine of Elisa, the Fluendo Media Center project.''

Features:
* Core languages: C, OpenGL
* Bindings: Python
* Backends: DirectFB, OpenGL
* Media playback integration: using GStreamer

====Choosing====

Benchmarking will be needed. We therefore have to define a standard testing application that would allow us to compare the alternatives.

Enjoy the video demo; everything seems to be rendered fine in software. More details about the implementation to come.

====Description====

Take an item list (e.g. an address book), print it on a ribbon of paper, and glue it onto a wheel (on the tire). You're looking at the front of it, so when you want to go from A to Z, you touch the wheel and drag it up. When you let the wheel go, it keeps turning, carried by its inertia. Stop the wheel when you've reached your contact. Got the idea? That's why we may speak of an "infinite wheel", so that the surface is flat. Since in our case we always want to display square content, the [http://en.wikipedia.org/wiki/Uniform_prism n-sided uniform prism] analogy is mathematically more exact.

* it's "round"/cyclic, so you can '''browse the list in two directions'''
* we may want to add a "progression indicator", e.g. the current alphabetical letter, with a font size proportional to the number of entries under that letter, or a reduced map of the distribution of the first letters...

We can add "parallel wheels", symbolizing different sorting methods. Slide far to the left/right to look at a different wheel = a different organization of the items.
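The "infinite wheel" boils down to modular arithmetic over the item list; a minimal sketch (function names and the window size are illustrative):

```python
def wheel_index(offset, n):
    """Map a continuous scroll offset onto a cyclic list of n items.

    Scrolling past the last entry wraps around to the first, in either
    direction - which is what makes the wheel 'infinite'.
    """
    return int(offset) % n

def visible_items(items, offset, window=3):
    """Return the `window` entries currently facing the user, treating
    the list as glued around an n-sided prism."""
    n = len(items)
    start = wheel_index(offset, n)
    return [items[(start + i) % n] for i in range(window)]
```

Python's `%` already returns a non-negative result for negative offsets, so scrolling backwards past "A" lands on "Z" with no special casing.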

====Controls====

====Description====

A discussion on the community list identified a desire to have the ability to switch the Openmoko UI into "left-handed" mode.

The main problem is scrollbars: when they're on the right, dragging the scrollbar left-handed results in your hand covering the screen, so you can't see what you are doing. So having the option of scrollbars on the left would be useful.

I don't think the whole screen should be mirrored! There are some elements that should remain, like the main top bar with the status icons and such. Scrollbars are the main thing I can think of right now.

===3D Launcher===

Instead of a traditional file-manager-type launcher or a start menu, we could use 3D capabilities and have a cylindrical launcher. Slide your finger left and right to rotate, up and down to move up and down the cylinder. There are arrows at the top and bottom to indicate that you can move in that direction. When you enter subfolders, there will be a button at the top left/right allowing you to go back up a level.

Mock-up of the launcher; the final version would have proper icons and the text would bend around the cylinder.

[[Image:launcher.jpg]]
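A natural mapping for the drag-to-rotate interaction is arc length: dragging dx pixels rotates the cylinder by dx/radius radians, so the icon under the finger appears to follow it. A hypothetical sketch (the pixel radius and slot count are made-up values):

```python
import math

def drag_to_rotation(dx_pixels, cylinder_radius_px=120):
    """Convert a horizontal finger drag into a cylinder rotation angle
    (radians), using the arc-length mapping described above."""
    return dx_pixels / cylinder_radius_px

def slot_under_finger(angle_rad, n_slots):
    """Which of the n icon columns currently faces the user."""
    per_slot = 2 * math.pi / n_slots
    return round(angle_rad / per_slot) % n_slots
```

The vertical drag would use the same idea with a plain linear mapping, since the cylinder's axis is not curved.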

===Handgesture recognition proposals===

We need to emulate key presses. We need to work at a layer where we can get raw cursor coordinates. <---- X server layer?

There is a fake keyboard module (for dev purposes) in the main kernel tree, which could be used to simulate keyboard presses (hence keeping keyboard-enabled apps unmodified).

====Full multi-touch emulation====

Doable, but tricky...

==Preparing for multi-touch==

One day we might get multi-touch devices. Let's get ready.

===MPX===

The Multi-Pointer X Server is a modification of the X server to support multiple mice and keyboards in X. It provides users with one cursor per device and one keyboard focus per keyboard. Each cursor can operate independently. MPX is the first multi-cursor windowing system and allows two-handed interaction with legacy applications, but also the creation of innovative applications and user interfaces.

[http://wearables.unisa.edu.au/mpx/ The multipoint X server project]

===MacSlow's Lowfat getting multitouched===

http://dlai.jafu.dk/?p=1

If you want details, you can contact [[User:fursund|fursund]]

==Open questions==

* will the Neo/Openmoko graphics system be powerful enough for such uses? Apple uses OpenGL ES acceleration on its device (as well as on recent iPods), which is on the way with [[GTA02#.22Phase_2.22_.28GTA02.2C_.22Mass_Market.22.29|GTA02]].
* how does the touchscreen behave? We need a detailed touchscreen wiki information page, with visual traces. How hardware-specific is it?


If you hold down one finger and tap with the other one, the cursor pops over and back again. If you keep your second finger touching, the cursor follows it. When you release it, the cursor goes back to the first finger's position. This could be a way to set a bounding box or to turn on a mode, so that the second finger can do something like rotating around the first, or increasing or decreasing the distance to the first.

The so-called "first touch" can be done on the mokowheel zone itself: put your left thumb on the black area; if you then touch the screen with another finger, there is a warp; the warp is detectable and allows entering a "fake multi-touchscreen mode".

==Introduction==

Obviously the tools are in the wild to build interfaces that could rival
(or better IMO) anything Apple comes up with. We just need to organize
this stuff. This would need hardware that can support dynamic
interfaces. I can help here, too.
sean@openmoko.com

In fact, this place shall be dedicated to discussing human-machine interaction improvements.

Human-machine interaction can be separated into several aspects:

* the physical contact/input device: the mono-touch touchscreen
* the graphics: accelerated rendering can add more consistency to zooming user interfaces, which seem to be quite an interesting concept for embedded, screen-size-limited devices


===n-D navigation: the polyhedra inspiration===

When we want to navigate files, mp3s in an mp3 player, etc., every control that the application needs is a button. What about looking at polyhedra? We could find one for each usage, with as many surrounding subzones as we need for controls. E.g.: you need 5 buttons? Take a pentagon with 5 surrounding zones all around. That way, it's always optimized...

==Our weapons==

We can't improve the human-machine interface without knowing the strengths / weaknesses of our hardware; some of the weaknesses might turn out as exploitable features, some strengths as limiting constraints.

===The touchscreen===

Question:

What exactly does the touchscreen see when you touch the screen with 2 fingers at the same time, when you move them, and when you move only one of the 2? I'm also interested in knowing how precise the touchscreen is (e.g. refresh rate, possible pressure indication, ...).

Answer:

* The output is the center of the bounding box of the touched area.
* The touch point skips instantly on double touch.
* Pressure has almost no effect on a single touch, but not so on a double touch. The relative pressures will cause a significant skewing effect towards the harder touch. You can easily move the pointer along the line between your two fingers by changing the relative pressure.

Conclusions:

* we can detect double touch as jumps, and that's all
* no usable pressure information

This could be an interesting input method for games - e.g. holding the Neo in landscape view, letting each thumb rest on a specific input area; this probably needs to be checked for usability on a real device.
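Since a second touch shows up only as an instantaneous jump of the reported point, detection can be as simple as thresholding the distance between consecutive samples. A sketch (the threshold is a guess that would need tuning on real hardware):

```python
def detect_double_touch(positions, jump_threshold=50.0):
    """Flag the sample indices where a second finger likely touched down.

    The controller only reports the centre of the bounding box of the
    touched area, so a second touch appears as a sudden jump of that
    point.  `positions` is a list of (x, y) samples from the driver.
    """
    jumps = []
    for i, ((x0, y0), (x1, y1)) in enumerate(zip(positions, positions[1:])):
        if ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 > jump_threshold:
            jumps.append(i + 1)  # index of the sample after the jump
    return jumps
```

A fast single-finger flick can also produce large deltas, so a real detector would probably combine the distance threshold with the sample rate (a physical finger cannot teleport between two consecutive samples).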

Question:

What does one see when sliding two fingers in parallel, up(L,R) -> down(L,R)?

Answer:

In theory you see a slide along the center line between your two fingers. In practice, you can't keep the pressure equal, so you will see some kind of zig-zag line somewhere between the two pressure points, in the direction of your slide.

Question:

What does one see when narrowing two fingers in a slide (= the zoom effect on the iPhone)?

Answer:

In theory you see the pointer stay at the center of the zoom movement. In practice, you can't keep the pressure equal for both fingers, so the pointer will move towards one of the two pressure points.

===Graphics and computational capabilities===

It would be good to report what performance the current hardware allows:

* there has been no pure X11 benchmarking done (AFAIK): how many fps at full-VGA scrolling, e.g. scrolling a 1024×480 image?
* what about the LCD's reactivity? What if we don't see anything but blur while moving items fast?

===Physics-inspired animation a.k.a. "Digital Physics"===

If we want to add eye candy & usability to the UI (such as smooth, realistic list scrolling, as seen in Apple's iPhone demo on contact lists for instance), we'll need a physics engine, so that moves & animations aren't all linear.

The following article explains the "Digital Physics" term using the iPhone example.

The most used technique for calculating trajectories and systems of related geometrical objects seems to be Verlet integration; it is an alternative to Euler's integration method, using a fast approximation.

We may have no need for such a mathematical method at first, but perhaps there are other use cases. For instance, it may be useful for gesture recognition (I'm not aware whether existing gesture recognition engines measure speed, acceleration...).
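For reference, position Verlet is only a few lines: no explicit velocity is stored, which is part of why it's popular for cheap physics on small devices. A sketch under constant gravity (all values are illustrative):

```python
def verlet_step(x, x_prev, accel, dt):
    """One step of position Verlet: x_next = 2*x - x_prev + a*dt^2."""
    return 2 * x - x_prev + accel * dt * dt

def simulate_fall(x0=0.0, v0=0.0, g=-9.81, dt=0.01, steps=100):
    """Drop a point mass under gravity and return its trajectory."""
    x_prev = x0 - v0 * dt  # bootstrap the fictitious 'previous' position
    x, xs = x0, [x0]
    for _ in range(steps):
        x, x_prev = verlet_step(x, x_prev, g, dt), x
        xs.append(x)
    return xs
```

After 1 second of simulated time the result lands close to the analytic 0.5·g·t² = -4.905 m, despite the method never computing a velocity.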

====Open Dynamics Engine====

ODE is an open source, high performance library for simulating rigid body dynamics. It is fully featured, stable, mature and platform independent, with an easy to use C/C++ API. It has advanced joint types and integrated collision detection with friction. ODE is useful for simulating vehicles, objects in virtual reality environments and virtual creatures. It is currently used in many computer games, 3D authoring tools and simulation tools.

The only (AFAIK) application using this library is kiba-dock, a *fun* app launcher, but we may find another use for it in the future.

As suggested on the mailing list, it is mostly overkill for the uses we intend, but this library may already be optimized, and the API can spare us some time too. Furthermore, "qui peut le plus peut le moins" (he who can do more can do less).

====Verlet integration implementation in e17====

There's an ongoing Verlet integration implementation in the e17 project (by rephorm), see http://rephorm.com/news/tag/physics , so we may see some UI physics integration in e17 someday.

====Robert Penner's easing equations====

See the demo: it implements non-linear behaviour (in ActionScript), but may give inspiration.
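Penner's equations are tiny; for instance easeOutQuad (fast start, gentle stop), sketched here in Python rather than ActionScript:

```python
def ease_out_quad(t):
    """Penner-style easeOutQuad over normalised time t in [0, 1]."""
    return 1 - (1 - t) ** 2

def animate(start, end, frames):
    """Non-linear interpolation between two values, one value per frame."""
    return [start + (end - start) * ease_out_quad(i / (frames - 1))
            for i in range(frames)]
```

The per-frame steps shrink monotonically, which is precisely the "nothing jumps" continuity discussed above, at a fraction of the cost of a full physics engine.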

==Extending the touchscreen capabilities and input methods==

===Multitouchscreen emulation===

If we got it right, when touching the screen in a second place, the cursor oscillates between the two points depending on the relative pressure distribution. Using averaging algorithms, we may have the opportunity to detect peculiar behaviours.

We need raw data (x, y, t) from the real hardware for the following behaviours:

* slide two fingers in parallel - vertical up/down (scroll)
* turn the two fingers around each other (rotate)
* slide two fingers towards each other (zoom-)
* slide two fingers apart (zoom+)

When touching the screen with two fingers at the same time, we never see the two points directly, but we may be able to extrapolate the position of the second one. This solution can add features, but will probably be a little erratic...
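Assuming we can recover (or extrapolate) both touch points, the four behaviours above could be told apart by how the finger distance and midpoint move. A rough sketch with made-up thresholds:

```python
import math

def classify_two_finger(p1_start, p2_start, p1_end, p2_end):
    """Rough gesture classification from two-finger start/end positions.

    Distance between fingers shrinks/grows => zoom-/zoom+;
    midpoint moves while distance stays => scroll; otherwise
    (distance and midpoint stable, angle changed) => rotate.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    d0, d1 = dist(p1_start, p2_start), dist(p1_end, p2_end)
    mid0 = ((p1_start[0] + p2_start[0]) / 2, (p1_start[1] + p2_start[1]) / 2)
    mid1 = ((p1_end[0] + p2_end[0]) / 2, (p1_end[1] + p2_end[1]) / 2)

    if d1 < 0.7 * d0:
        return "zoom-"
    if d1 > 1.3 * d0:
        return "zoom+"
    if dist(mid0, mid1) > 0.3 * d0:
        return "scroll"
    return "rotate"
```

On the real hardware the inputs would be noisy extrapolations, so the thresholds would need tuning against the raw (x, y, t) traces requested above.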

===Touchscreen kernel module hacking===

We may correct the "half distance" phenomenon on double touch: if a double touch is detected, treat the cursor as being twice as far from the first touch. It would allow finer control, but higher instability.

The double touch detection may be implemented in the driver itself, as well as the stabilization.
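If the reported point is roughly the midpoint of the two touches, the correction is simple vector extrapolation; a sketch (the midpoint assumption would need verifying against the real controller):

```python
def extrapolate_second_touch(first_touch, reported):
    """Undo the 'half distance' effect of a double touch.

    The controller reports roughly the midpoint between the two
    fingers, so if the first touch position is known (captured just
    before the jump), the second finger sits about twice as far along
    the same line: p2 = 2*reported - p1.  A real driver would also
    need the stabilisation/averaging mentioned above.
    """
    return (2 * reported[0] - first_touch[0],
            2 * reported[1] - first_touch[1])
```

This doubling is also why the corrected position is twice as jittery as the raw one: any noise in the reported midpoint is amplified by the same factor of two.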

===Other detectable behaviours===

The warping can be used in the 4 diagonals, plus the up/down/right/left cross:

===Improved virtual keyboard===

Yet, optimization does not only apply to plain one-letter-at-a-time input. We need some sort of T9 (dictionary-based input help). When typing a word, the first letters determine the next possible ones. Therefore, we may let the less probable following letters disappear. E.g.: type an L; there's no way an X follows...
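A minimal sketch of this next-letter filtering, using a plain word list (a real implementation would use a trie and a proper dictionary):

```python
def possible_next_letters(prefix, dictionary):
    """Letters that can follow `prefix` in at least one dictionary word.

    Keys whose letter cannot continue any known word can then be
    hidden or shrunk on the virtual keyboard, T9-style.
    """
    prefix = prefix.lower()
    return {w[len(prefix)] for w in dictionary
            if w.lower().startswith(prefix) and len(w) > len(prefix)}
```

Combined with the tag-cloud sizing above, the surviving keys could also be scaled by how many words each continuation leads to.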

Hints:

* ZIP's Huffman compression applied to SMSs/mails for detecting the most used characters/words/sentences.
* html tag-clouds, one-letter tag clouds; font size proportional to the probability of being used.

The most critical point is the initial disposition of the letters, before any letter is typed. We may also want to use a horizontal two-part keyboard (with the Neo held in both hands like a PSP..)

The [http://www.strout.net/info/ideas/hexinput.html hexinput] concept is interesting. What about hiding the less probable letters and enlarging the remaining ones during typing?
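A toy sketch of the hide-improbable-letters idea, using bigram counts from a sample corpus. A real version would train on the user's SMSs/mails as hinted above, and the font size of each visible key could be made proportional to its count, tag-cloud style; all names here are made up:

```c
/* Bigram frequency table over lowercase ASCII letters. */
static int bigram[26][26];

/* Count which letters follow which in the training text;
 * anything that is not a-z acts as a word boundary. */
static void train(const char *corpus)
{
    int prev = -1;
    for (; *corpus; corpus++) {
        int c = *corpus;
        if (c >= 'a' && c <= 'z') {
            if (prev >= 0)
                bigram[prev][c - 'a']++;
            prev = c - 'a';
        } else {
            prev = -1;
        }
    }
}

/* A key stays on the keyboard only if 'next' was ever observed
 * after 'prev' in the corpus. */
static int key_visible(char prev, char next)
{
    return bigram[prev - 'a'][next - 'a'] > 0;
}
```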

==Switching to the Enlightenment Foundation Libraries==

===Distant future: OpenGL compositing===

Compositing seems to make zooming interfaces a reality (at last!).

Well, considering recent changes in desktop applications, OpenGL has a definite future. For instance, Exposé (be it Apple's or Beryl's) is a very interesting and usable feature. Compositing allows the physics metaphor: the human brain doesn't like gaps/jumps (for instance while scrolling text), it needs continuity, which OpenGL can provide. When you look at Apple's iPhone prototype, it's not just eye candy; it's maybe the most natural/human way of navigating, because it's sufficiently realistic for the brain to forget the non-physical nature of what's inside.

So, OpenGL hardware will be needed on more or less distant hardware for 100% fluid operation.

How cool will solid-based interfaces (polygons, as seen in Beryl) be? :) Real ZUIs...

===Clutter Toolkit===

Clutter, an OpenedHand project, is an open source software library for creating fast, visually rich graphical user interfaces. The most obvious example of potential usage is in media-center-type applications. We hope, however, that it can be used for a lot more.

Clutter uses OpenGL (and optionally OpenGL ES) for rendering but with an API which hides the underlying GL complexity from the developer. The Clutter API is intended to be easy to use, efficient and flexible.

From the Wikipedia article: OpenGL ES (OpenGL for Embedded Systems) is a subset of the OpenGL 3D graphics API designed for embedded devices such as mobile phones, PDAs, and video game consoles.

==Improvement ideas==

Please add here any idea that seems of relevance.

===1D list scrolling: looped physics-driven item list===

====Description====

Take an item list (e.g. an address book), print it on a ribbon of paper, and glue it onto a wheel (on the tire). You're looking at the front of it, so when you want to go from A to Z, you touch the wheel and drag it up. When you let the wheel go, it keeps going, carried by its inertia. Stop the wheel when you reach your contact. Got the idea? That's why we may speak of an "infinite wheel", so that the surface is flat. Since in our case we always want to display square content, the analogy of an n-sided uniform prism is mathematically more exact.

Important features:

* weight: the bigger the item list, the faster it scrolls; that way, you don't have to wait too long on big lists, and you don't miss your item on shorter lists
* friction: there is friction where the wheel is mounted, so that the wheel doesn't spin forever; more friction on short lists, less on big ones
* the initial speed and acceleration vector you give it determine its further rotation
* it's "round"/cyclic, so you can browse the list in both directions
* we may want to add a "progression indicator", e.g. the current alphabetical letter, with a font size proportional to the number of entries under that letter, or a reduced map of the distribution of the first letters...

We can add "parallel wheels", symbolizing different sorting methods. A long slide to the left/right switches to a different wheel, i.e. a different item organization.

The same, but for the wheel. It could be very quick to implement: the mapping is no longer 1:1 but, for example, a quarter wheel turn = 1 item. It's geared down, but keeps its inertia.
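The weight and friction features above can be sketched as a simple kinetic-wheel update; all constants here are illustrative, not tuned values:

```c
/* Kinetic wheel sketch: position advances by velocity, velocity
 * decays with friction.  Per the notes above, friction is higher
 * for short lists and lower for long ones, so big lists keep
 * spinning longer. */
typedef struct {
    double pos;      /* item index, wraps around (cyclic list) */
    double vel;      /* items per tick, set by the drag gesture */
    int    n_items;
} wheel;

static void wheel_tick(wheel *w)
{
    /* longer list => lower friction => longer coast */
    double friction = 0.2 / (1.0 + w->n_items / 100.0);

    w->pos += w->vel;
    /* the list is a loop, browsable in both directions */
    while (w->pos >= w->n_items) w->pos -= w->n_items;
    while (w->pos < 0.0)         w->pos += w->n_items;
    w->vel *= (1.0 - friction);
}
```

Calling `wheel_tick` once per frame after a flick gives the "let it go, it keeps going" behaviour described above.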

===Left-handed UI Support===

====Description====

A discussion on the community list identified a desire to have the ability to switch the OpenMoko UI into "left-handed" mode.

The main problem is scrollbars: when they're on the right, dragging the scrollbar left-handed results in your hand covering the screen, so you can't see what you are doing. So having the option of scrollbars on the left would be useful.

I don't think the whole screen should be mirrored! There are some elements that should remain, like the main top bar with the status icons and such. Scrollbars are the main thing I can think of right now.

===Hand-gesture recognition proposals===

====Using a simple, localized warp as a modifier key====

As discussed on community list:

If you hold down one finger and tap another, the cursor pops over and back again. If you keep your second finger touching, the cursor follows it; when you release it, the cursor goes back to the first finger's position. This could be a way to set a bounding box or turn on a mode, so the second finger can do something like rotating around the first, or increasing or decreasing the distance to it.

The so-called "first touch" can be done on the Mokowheel zone itself: put your left thumb on the black area; if you then touch the screen with another finger, there is a warp; this warp is detectable and allows entering a "fake multi-touchscreen mode".
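Detecting such a warp is cheap. A sketch, assuming that a jump larger than any plausible one-sample finger movement marks the second touch (the threshold is hypothetical and would need tuning per device):

```c
/* Hypothetical warp detector for "fake multi-touchscreen mode":
 * while one finger rests, a second tap makes the reported point
 * jump far from the resting position.  A jump larger than any
 * plausible finger movement within one sample period is treated
 * as a warp rather than a drag. */
static int is_warp(int x_prev, int y_prev, int x, int y,
                   int max_finger_step)
{
    int dx = x - x_prev, dy = y - y_prev;
    return dx * dx + dy * dy > max_finger_step * max_finger_step;
}
```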