Looks very interesting. When they started talking about linear organisation of windows, I thought they would be stacked something like this: http://smarthistory.org/

10/GUI

(which could fit in well with their ideas)

Did they talk about where the keyboard would go, though? If their 'touch interface' goes where the keyboard normally would, wouldn't it have to double as a keyboard too?

Another thing I would miss is the ability to have, say, two windows neatly filling the screen, one on each side - a la GridMove. I don't think this has really been considered.

Okay, at the very end of the video they showed a keyboard with their 'touch interface' in front of it, like a large touchpad - also not very user friendly. (I have to admit I stopped the video before it was quite finished - the music was doing my head in.)

[edit] Oops! Just saw this:

Did they talk about where the keyboard would go, though? If their 'touch interface' goes where the keyboard normally would, wouldn't it have to double as a keyboard too?

Really really interesting concept. I hope it makes it to the prototype stage. I'd love to give it a try.

Combine the core concepts with a smart, changeable graphic touchpad (OLED, once the prices come down) and I think it would be an absolute winner. I'd configure a sector of the panel to be a context-sensitive control surface based on which application was open, much the way high-end drawing tablets can have 'soft buttons' configured for specific apps.

Just think: no more arbitrary menus you always have to "reach" for and open. It would almost be like those nifty LCARS touchpads/keyboards you've seen on Star Trek: TNG.

I think most of the stuff in the 2nd half of the 10/GUI video is a step backward -- at least the software part.

I think 40hz's old Star Trek picture does point the way to the future -- customized input pads tailored to the application you are working on.

So many of these new futuristic desktop managers try to come up with ways one can live inside a desktop of multiple windows -- zoom around, pan and scroll, flip through, navigate in 3D, etc. But isn't the truth that this kind of navigation is completely counterproductive? As soon as you have more than a few windows open at a time -- windows that are designed to be run fullscreen -- you don't ever want to be "navigating" around them in some active way that requires animations and movement. At that point you've basically lost the game.

I think it would be more productive to have a kind of predictable input layout map like the one above: a map of common applications, with the ones *currently running* lit up like on a display panel, and a press to activate one full screen. Any time you have to do some animation (such as dragging and panning) just to see what is running, you've lost the game, in my opinion.
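To make the idea concrete, here is a minimal hypothetical sketch of that fixed layout map: every app owns a permanent cell on the panel, cells for running apps "light up", and pressing any cell activates (or launches) that app full screen, with no navigation or animation involved. All the names here (PadMap, the cell positions) are invented for illustration; nothing in the 10/GUI video specifies this.

```python
# Sketch of the fixed "input layout map": app positions never change, so
# muscle memory always works; the only state shown is which apps are running.

class PadMap:
    def __init__(self, layout):
        # layout: {(row, col): app_name} -- a permanent, predictable grid
        self.layout = layout
        self.running = set()

    def lit_cells(self):
        """Cells that should light up: the apps currently running."""
        return {pos for pos, app in self.layout.items() if app in self.running}

    def press(self, pos):
        """Pressing a cell activates that app full screen (launching if needed)."""
        app = self.layout.get(pos)
        if app is None:
            return None
        self.running.add(app)   # launch if not already running
        return app              # caller would bring this app full screen

layout = {(0, 0): "browser", (0, 1): "editor", (1, 0): "mail"}
pad = PadMap(layout)
pad.running = {"editor"}
print(pad.lit_cells())    # only the editor's cell is lit
print(pad.press((0, 0)))  # browser launches and takes the screen
```

The key property is that there is nothing to search or pan through: the mapping from finger position to application is static.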

Another thing to think about, at least from my experience: multiple monitors solve this issue of switching between running applications -- it's far better to have 4 monitors for 4 fullscreen applications. If you find yourself running more than 4 or 5 big fullscreen applications at a time, you're already in trouble.

In fact, a separate 7" touch-sensitive LCD, used as a sort of keyboard with a programmable layout, would be a very interesting addition to any existing setup, and probably quite cheap too. It would just be a matter of some nice software to manage the programming, theming, etc. Heck! It could even be a DonationCoder Do-It-Yourself project!

Sorry, I have to go against this idea. It sounds too much like chopsticks meets tablet PC meets keyboard, forcing users to become macro users and further confusing casual users, because now they not only have to remember keyboard shortcuts, they have to remember finger shortcuts too.

The way I see it, if you want to push for this concept, you'd be better off adding motion sensors to a monitor than making the keyboard a much bigger utility that is harder to fit on any low-on-cash person's desk.

Have it so you can optionally remove the covers on either side of a monitor and use one, two, or three fingers on its motion-detection system.

Less carpal tunnel risk too, because your hands aren't resting on a flat surface but hovering at the sides of the monitor, mimicking a finger point -- or, if the motion technology is too fragile, you could opt for side buttons. You could even make it flash colors to help one's memory by associating a color with a follow-up command.

The only good idea was that panel/browser hybrid. I remember suggesting something similar for Opera, although I no longer have a link.

It was basically an idea to replace the Firefox extension that mimics the PDF reader's hand icon, except for the browser -- the idea being a much easier scroll area for horizontal scrolling.

My preference for a mouse replacement is still biased towards a desktop equivalent of KeystrokeCE.

Mostly because I was shocked at how well I was able to cope with it on a Pocket PC, despite its complexity (in concept) compared to some of the other tools using hand gestures and typing suggestions. (I was basically desperate and tried different things, expecting this to be one of the interfaces I'd turn down, but instead this and the visual keyboard were the only two interfaces I learned.)

Basically: press a hotkey, open a default hotkey-combination menu (or a custom settings menu), have the preferred menu setting pop up, and use the numpad as the replacement for the mouse (especially for selecting specific files that aren't grouped side by side, or editing several words wedged between sentences).
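The numpad-as-mouse step could be sketched like this. Note this is purely my own illustration of the idea, not KeystrokeCE's actual behavior: the key-to-direction mapping and function names are assumptions.

```python
# Hypothetical numpad-as-pointer mode: after the hotkey opens the mode,
# numpad keys nudge a selection cursor in the corresponding direction
# (8/2/4/6 for the cardinal directions, 7/9/1/3 for the diagonals).

NUMPAD_DIRS = {
    "8": (0, -1), "2": (0, 1),    # up / down
    "4": (-1, 0), "6": (1, 0),    # left / right
    "7": (-1, -1), "9": (1, -1),  # upper diagonals
    "1": (-1, 1), "3": (1, 1),    # lower diagonals
}

def move_cursor(pos, keys):
    """Apply a sequence of numpad presses to an (x, y) cursor position."""
    x, y = pos
    for k in keys:
        dx, dy = NUMPAD_DIRS.get(k, (0, 0))  # unmapped keys (e.g. "5") do nothing here
        x, y = x + dx, y + dy
    return (x, y)

print(move_cursor((0, 0), "66"))  # two presses of 6 move two steps right
```

In a real tool, "5" would presumably act as the click/select action rather than a movement; here it is simply ignored.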

Edit:

I also forgot to point out the ramifications of a motion-detecting monitor for gaming.

Already there is software that tries to turn webcams into Wii-cams, but a motion-detecting side panel not only doesn't detract from a Wii-cam, it adds to it, by virtue of certain advantages that make it more suitable for the PC than for a console.

With this, you could call out plays in any PC sports game, create micro hand gestures in squad-based games, and even have commands that don't interfere with the keyboard, mouse, and webcam -- and it works because the monitor is naturally much closer when using a PC than a television is.

Actually, I was thinking more that the command surface would also dynamically change based on the app. When you're running something like Photoshop, the main part of the panel would display Photoshop controls. When you switched apps, it would switch to a 'control panel' for that app.

Standard items like file open/close/save/print/next/previous/etc. could be assigned permanent locations (e.g. an icon bank across the top of the command area) across all apps for consistency. (Although I was always an advocate of a triple-tap/click anywhere on an open document to bring up a [close | save w/changes | save as] selector.)

This control area would also be movable and 'pinnable' (or dockable, if you prefer) anywhere within the touch surface area. In a perfect world it would also be able to render a mirrored layout to accommodate the world's growing left-handed population. Support for Unicode should also be a given, to ensure global compatibility.
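The idea above -- a permanent icon bank that stays put across apps, an app-specific region that swaps on focus change, and an optional mirrored layout -- could be sketched roughly like this. All names (the bank contents, the per-app control lists) are invented for illustration.

```python
# Hypothetical per-app control surface: a fixed top row shared by all apps
# for consistency, plus a second row of controls registered per application.

STANDARD_BANK = ["open", "close", "save", "print", "prev", "next"]

app_controls = {
    "photoshop": ["brush", "eraser", "layers", "zoom"],
    "editor":    ["find", "replace", "goto"],
}

def render_surface(active_app, mirrored=False):
    """Return the rows of controls to draw on the touch panel."""
    rows = [list(STANDARD_BANK)]                        # fixed top row, same for every app
    rows.append(list(app_controls.get(active_app, [])))  # app-specific controls
    if mirrored:                                         # left-handed layout
        rows = [list(reversed(row)) for row in rows]
    return rows

print(render_surface("photoshop"))        # standard bank + Photoshop controls
print(render_surface("editor", True))     # same idea, mirrored for lefties
```

Because the standard bank occupies the same positions regardless of app, the "no more reaching for arbitrary menus" property holds: only the app-specific region changes on an app switch.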

In many respects (and much as it pains me to say it *choke*) Apple's iPhone incorporates a lot of this already. My GF just upgraded her AT&T cellular plan and got a 3G as part of the deal. Despite my general dislike of Apple for their proprietary closed platform and elitist mindset, even I have to grudgingly admit that the interface design is, for the most part, quite impressive.

But with the way most apps work these days, right now I think the alphanumeric keyboard might actually be in danger of being on the lagging edge of where interfaces are heading.

I know a lot of people (nothing like us to be sure, but good people just the same) who use their mouse for almost all their input. They rarely type in anything. And what little they do type is mostly short things like tweets, passwords, and search terms. For that limited amount of routine textual input, an onscreen keyboard would be more than sufficient. They can always keep a small wireless keyboard in a desk drawer (right next to that POS microphone nobody ever uses) for those rare occasions when they actually do need to type something of length.

Hmm...imagine...a world without QWERTY. How could we make that work if we had to?

Now there's something to think about!

------------

another thing to think about, at least from my experience, is that multiple monitors solves this issue with switching between running applications

Couldn't agree more. I've been running dual monitors for a few years now, and I can no longer remember how I lived without them. Sitting in front of a single screen - no matter how large - makes me feel like I'm trying to play a game of tennis while wearing an overcoat.

KeystrokeCE was pen gestures meets keyboard. That was pretty advanced, considering how little mouse gestures can do on a PC, and how effective it was compared to stuff like TenGo below.

It also did it in such a way that the alphanumeric symbols served as a typing aid rather than a necessity, unlike TenGo, which was still a visual keyboard. (The site seems to be down, though.)

Well, I think Mouser has already expressed some of my concerns. I like the idea of a better multi-touch interface close to hand, rather than the silly idea of trying to actually use your monitor which is A: too far away 90% of the time and B: you don't want fingerprints all over. However, I think all the potential of the touch interface is wasted in this concept, because it spends too much time and too many UI commands trying to improve on existing window management solutions, when I honestly don't find the existing solutions to be that big a problem. I routinely have 10 or more apps/windows open, and many of these have tabs of their own inside (PSPad text editor with tabs; Firefox, Chrome, and IE all with tabs; etc.). So for me this concept video is trying to solve a problem I don't have, with an intriguing interaction device that is ultimately wasted on misdirected UI changes.

UI aside: I love the input device. Currently I use a Contour RollerMouse, which has pretty much similar input dynamics. I love it, and even talked my employer into buying one for me at work. So I think the basic concept would work well.

What's also funny is that just the other day I was looking at LCD tablets, thinking that something like that might be my next major purchase; but what occurred to me (and what was demonstrated in the video) is that having the tablet and keyboard separate is a problem: either the tablet/monitor or the keyboard is too far away. Wouldn't it be better to have the keyboard as part of the display?

All through the video I was wondering how the typing was happening; not until the end did they show that the device sat just below the keyboard. But I was wondering about a screen keyboard and how that would work (I've never used an LCD tablet and have no idea what it would be like to type on one: unforgiving, I'd think -- no key movement)... but it did make me wonder about an integrated keyboard that was part of the windowing system. Sort of: select the window, click the button on the window, and a screen keyboard displays and sends keypresses to that window.
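That last idea -- a soft keyboard bound to a specific window rather than to global focus -- could be sketched as follows. This is a hypothetical illustration; the Window/ScreenKeyboard names and methods are invented, and a real windowing system would route actual input events instead of strings.

```python
# Hypothetical per-window on-screen keyboard: tapping the keyboard button on a
# window binds the soft keyboard to it; taps then go only to that window.

class Window:
    def __init__(self, title):
        self.title = title
        self.buffer = []          # text this window has received

    def receive_key(self, key):
        self.buffer.append(key)

class ScreenKeyboard:
    def __init__(self):
        self.target = None        # window currently bound to the keyboard

    def attach(self, window):
        """User tapped the keyboard button on this window."""
        self.target = window

    def tap(self, key):
        """A tapped soft key is delivered to the bound window only."""
        if self.target is not None:
            self.target.receive_key(key)

kb = ScreenKeyboard()
notes = Window("notes")
kb.attach(notes)
for ch in "hi":
    kb.tap(ch)
print("".join(notes.buffer))   # only the selected window got the keystrokes
```

The point of the binding step is that keypresses can't leak to whatever window happens to hold focus, which matches the "select the window, then type into it" flow described above.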

And I agree with Oshyan and others: I'm happy with multiple monitors and found it a little restrictive in the display.

I'm still waiting for them to get innovative with feedback in touch surfaces. There are cell phones that sort of buzz or vibrate when you click something, but that's really not what I mean -- something more like the sensation of physical resistance. Or hell, what about a flexible OLED over the top of a raisable clicky key set? The reason I'm talking about this is that I don't think I could ever grow to love any keyboard without decent tactile feedback.

But imagine if the keyboard keys were touch/pressure sensitive and could just recess themselves when not in use. You could use it like a regular touch keyboard for quick typing if you wanted, then fully raise it with normal key response for "real typing". This would of course be very complicated and expensive without some significant innovation in materials, maybe "memory materials" or something, but it's the best I can think of as an "ideal" concept. Everything else is a compromise of where to put this or that piece - do I have this nifty big touch surface right in front of me and have to reach for the keyboard when I need it, or do I have the keyboard close and use a smaller touch surface off the right thus negating left-handed multi-touch, etc.

Well, I think Mouser has already expressed some of my concerns. I like the idea of a better multi-touch interface close to hand, rather than the silly idea of trying to actually use your monitor which is A: too far away 90% of the time and B: you don't want fingerprints all over.

Well, it was just a suggestion, but I think you missed the part where I talked about covers for the sides, and both complaints are weird.

One: any multi-touch interface is always going to be farther away than your monitor, because you're adding devices and increasing the base size of your desk area.

Don't believe me? Remove your mouse now and press the buttons on any of your monitors. Did the space really increase without the mouse?

Which is why, as much as I don't mean to offend (although I was offended by your use of the adjective 'silly' here), the idea that you want no fingerprints on your monitor sounds absurd.

It's ridiculous! You're already smudging a monitor with your fingerprints when you carry it. You already smudge it when you press the power button. You already smudge it when you use the other buttons to change the brightness/contrast and position of the screen.

Unless you have a humongous monitor that is as far away as your TV (which I remind you is only available to a select few), your monitor is never so far that your arm cannot reach it.

Even in that situation, it is ridiculous to add the second bit about fingerprints, as if you were a thief in fear of leaving prints, or to use the 90% figure, as if a regular monitor were so far out of reach 90% of the time that the common monitor today required buying a remote just to press the power button.

Sort of select the window, click the button on the window and a screen keyboard displays and sends keypresses to that window.

-Perry Mowbray

Windows does have an on-screen keyboard as part of its accessibility tools.

I know it may seem like pointing out the obvious, but you make it sound like there's no built-in (well, as built-in as it gets) on-screen keyboard in the accessibility tools.

It's clunky, true, but a pop-up on-screen keyboard really solves very little. As you said, there's no key movement, and you must also follow the keyboard layout that is in front of you.

Everything else is a compromise of where to put this or that piece - do I have this nifty big touch surface right in front of me and have to reach for the keyboard when I need it, or do I have the keyboard close and use a smaller touch surface off the right thus negating left-handed multi-touch, etc.

That is no compromise. That is sacrifice. It would be no different than having a tablet PC and a mouse.

Even if they are both at arm's length, you have to factor in the additional USB port, the additional wires, and the additional room ergonomics, as you're now dealing not with a mouse but with a mouse/keyboard hybrid.

At most, it could be a luxury gadget for many, but it will not be a standard item, nor will it solve anything, because it will add new problems to the stuff it claims to solve. (Again, compare this to motion sensors on a monitor, where having but not utilizing the sensors adds very little additional space to the PC desk area.)

At best, it is a juiced-up tablet PC -- but who is to say the tablet PC industry is dead, and that it wouldn't eventually evolve in this direction anyway?

Yet no matter how much the tablet PC evolves, it will not change the mouse market, because the mouse is still cheaper, more standard in shape, easier to figure out, and all in all lighter -- and a fixture in many homes.

Hmm, I guess we'll just have to agree to disagree then. The idea of having to wave my hands around in the air to control anything precisely seems both inaccurate and tiring, and fingerprints are a real concern if you're actually touching the display surface (obviously not if you're talking about the edges, where the power button is or where you'd carry it -- I'm not objecting because I'm a clean freak). There is no precedent for such a UI being used for any precision purpose, or as the general interface for a normal computer system, whereas the device I suggest is merely a potentially novel combination of existing and proven technologies.