Posted
by
kdawson
on Sunday October 29, 2006 @11:08PM
from the audience-goes-wild dept.

Down8 writes, "Jeff Han, an NYU researcher, has recently shown off his 'interface free' touch screen technology at the TEDTalks in Monterey. Some sweet innovation that I hope makes it to the mainstream soon." The photo manipulation interface is reminiscent of "Minority Report."

This is an exciting setup...and I agree with his assertion that the OLPC (one laptop per child) is sort of like introducing millions of children to our inane weaknesses instead of our strengths. Really, I know that something like this wouldn't completely remove the need for a keyboard and such for many years, but it is a striking evolutionary step forward.

Just think how easy all those dramatic situations would have been in the 24th century if the Starship Enterprise had some of these!

Not only is the keyboard an issue; consider the rest of his body! He's hunched over the display, neck bent to view a screen that's two feet below eye level. Basic ergonomics advice says you should put the top edge of your display at eye level; anything lower than that and you'll experience neck and back pain. Keyboard-related RSI will go nicely with a stiff neck.

I swear, if this were from a business selling some new product, I'd say they were trying to boost sales. But he's a researcher. I guess they must be up for more funding or something...?

I recently attended a demo of a similar device at my company. The Pentagon has already purchased units, and the company is trying to branch out to private-sector applications. They were using it for collaboration with geographical software (GIS data).

Exactly: you can't have an interface-free interface; we are always interfacing with the world. Want some really mind-blowing interface design work? Check out Jef Raskin's The Humane Interface [amazon.com]. Go back to the fundamentals of how humans interact with the world, find where we retain the most information, where we are fastest to react, what gives us higher error rates, etc., and redesign computer interfaces from there. Imagine an OS without applications or files. That's what he outlines. This is just another input device.

Even if you are not designing an OS, any programmer, designer, or engineer (computer related or not), can gain a lot from this book.

I'm not sure about this. In the photo library application demo, he brought up a keyboard with his hands, typed out a label for a photo, and put it away, in fewer than 10 seconds.

Right, but again, this was a demo application that was designed to look neat and take advantage of the multitouch screen... not be useful. How much time a day do you spend rearranging your photos on a lightboard? While it looked cool, it didn't do much. You couldn't sort, there was no categorization, no album interface, no way to post them, no real photo manipulation, or basically anything that would be useful for anything beyond browsing pr0n. (Any bets on whether this was their very first app?)

It seems pretty widely adaptable and convenient, especially if we can make the transition from physical keyboard and mouse to a "virtual" keyboard and our hands, respectively.

What I'm saying is that while this is neat for quick applications that don't require much text, it would be painful for multi-hour coding or authoring. And this is what most people do. For this kind of use, there is one absolute requirement: you don't need to look at it. And if you're not looking at it, you don't need an LCD powering it. (Any sort of predictive or dynamic keyboard violates this rule and makes typing require too much thought.)

The mouse was supposed to be a way of extending our native manual precision and dexterity into our computer programs. Now that this screen is here, the mouse is pretty much obsolete, and we can bridge the hand-computer gap in a seemingly more natural, more direct way.

Yes, touchscreen interfaces are very neat. But they're not the answer to the world's problems. They won't magically enable you to produce art where the mouse or tablet or whatever was getting in the way before. They may streamline things a bit, but they don't remove the need for skill.

Not only that, but the virtual keyboard frees us from the physical constraints and space requirements imposed by having an actual physical keyboard.

Again, only in very limited situations. Plus, onscreen keyboard means losing screen space, which is arguably far more valuable than desk space.

No matter how interesting the clip, I'll NEVER sit through an ad before a short online video.

I think I can help [ie7.com] you [mozilla.org].
Well, in most situations, anyway... hehe.

On topic: I feel this technology really could grow... I would like to see it work more like the Nintendo DS, with dual screens: one being your main form of input, perhaps with an overlay application of a scalable keyboard similar to the one featured in the video, and the primary display used for, well, display. I dunno. It's late and I'm tired... if you understand what I mean, mod me up! ;-)

As he was manipulating the map application it really jumped out at me how cool it would be to run a Mandelbrot set app that way. It would have made a fun and awesome addition to the presentation. If I were working in his lab that would almost certainly be the first thing I would add to the system.
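That map-style zoom would carry over to a fractal viewer almost directly. Here's a minimal sketch of the idea (all names invented for illustration): the escape-time Mandelbrot computation, plus a render function where a pinch gesture would simply shrink the `scale` parameter around a new `center`, just as the demo recentred and rescaled its map.

```python
def mandelbrot_escape(re, im, max_iter=50):
    """Return the iteration at which z escapes |z| > 2, or max_iter if it never does."""
    z = complex(0, 0)
    c = complex(re, im)
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return n
    return max_iter

def render(center, scale, width=40, height=20, max_iter=50):
    """ASCII view of the set; a pinch-to-zoom gesture would shrink `scale`
    and move `center` toward the midpoint of the two touch points."""
    chars = " .:-=+*#%@"
    rows = []
    for y in range(height):
        row = []
        for x in range(width):
            re = center[0] + (x - width / 2) * scale
            im = center[1] + (y - height / 2) * scale
            n = mandelbrot_escape(re, im, max_iter)
            row.append(chars[min(n * len(chars) // max_iter, len(chars) - 1)])
        rows.append("".join(row))
    return "\n".join(rows)
```

Each two-finger gesture then just re-calls `render` with the updated center and scale; the math is cheap enough at this resolution to feel interactive.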

I realize that the point of this (TFA) is about trying to make things more intuitive and natural. But, as others have pointed out in other words, interfaces are a natural aspect of life.

I have an interface in front of me right now. I have a pen and paper; I've got a camera... if I want to record a visual of something, I have to pick up my camera. Never mind that the camera has one of these "non-intuitive interfaces" that we (rather, the article) are trying to remove; I still have to do something to get it done. Anything that I do interfaces with reality.

One of the goals of the iconic desktop originally was to duplicate the real desktop in some fashion to make things simpler for humans to interact with their work on a computer, so that there wouldn't be too much of a translation layer to build between real and virtual work. Similarly, some try to implement handwriting recognition to remove the interface of the keyboard from the writing process.... until they realize that geeks like us can't write for crap and can type ten times faster as well.

Regardless, of course, there's got to be some way to tell the computer that you actually want to resize the strange hand-like object on the screen (I think it was a hand; my sound was off and I lost interest rapidly) rather than add to the drawing. There's got to be some way to change modes, as he did between drawing the outline, getting it filled in, and then moving it around; that's all interface. Sure, it looked sweet that there wasn't any menu pull-down happening, no mouse, but really, you've got a pretty damn simple application that can be manipulated in this fashion.

Do anything complex, and you'll have to have a more complex interface suddenly.
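To make the mode point concrete, here's a tiny sketch (invented names, not anything from the demo) of why even a "no-menu" gesture UI still has an interface hiding in it: the same touch event has to be dispatched differently depending on an explicit current mode, exactly like his switch between drawing, filling, and moving.

```python
class GestureCanvas:
    """Minimal mode dispatcher: the per-mode meaning of a touch IS the interface."""

    MODES = ("draw", "fill", "move")

    def __init__(self):
        self.mode = "draw"
        self.log = []  # records what each touch actually did

    def set_mode(self, mode):
        if mode not in self.MODES:
            raise ValueError(f"unknown mode: {mode}")
        self.mode = mode

    def touch(self, x, y):
        # Identical physical gesture, three different meanings.
        if self.mode == "draw":
            self.log.append(("stroke", x, y))
        elif self.mode == "fill":
            self.log.append(("flood_fill", x, y))
        else:
            self.log.append(("translate", x, y))
```

The `set_mode` call has to come from *somewhere* — a button, a reserved gesture, a second hand — and whatever that somewhere is, it's a menu by another name.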

Even talking to a computer would be an interface..... a pretty complex one, though definitely one that could be considered intuitive, if you could use your chosen language for commanding it rather than some cryptic "ok, list the files, sort by date then name.... uh.... ok that one no that shit fucking computer where's my mouse"

His keyboard idea sounds pretty cool, though I would like to see more practical applications than what he showed. Games would be cool with this interface. I think the idea is great: moving objects on your screen as if they were actually on your desk. But gestures will still need to be learned.
Also, we would all get neck problems from staring down all the time at the screen rather than looking straight ahead.
All in all, this technology seems very interesting.