Posted
by
samzenpus
on Wednesday October 23, 2013 @07:07PM
from the goggles-they-do-something dept.

Lucas123 writes "A Taiwanese non-profit R&D organization is demonstrating a new heads-up type display that allows users to interact with the floating virtual screens using finger swipes. The new i-Air Touch technology from the Industrial Technology Research Institute is being developed for an array of devices, including PCs, wearable computers and mobile devices. The technology allows a user's hand to be free of any physical device such as a touchpad or keyboard for touch input. ITRI plans to license the patented technology to manufacturers. The company sees the technology being used in not only consumer arenas (video), but also for medical applications such as endoscopic surgery and any industrial applications that benefit from hands-free input."

Who exactly finds the "Minority Report" interface desirable? Do these people really look at their computers and think to themselves, "You know, waving my arms around like an idiot just to do basic work is a great idea!"?

I've been playing around with my Leap Motion sensor for a while now, which is about as close as affordable consumer tech currently gets, and it's a mixed bag.
On one hand, I get gorilla arm; my computer is also in a basement with poor lighting, which I think affects its tracking somewhat; and there's the very good point that there is no tactile feedback (though that could be rectified if such an interface required some kind of finger glove). But as regards gorilla arm, it makes more sense to me to stand and gesticulate.

That Tom Cruise has any success as an actor is one of the great mysteries of this era; he is about as expressive as a block of wood. I don't know if the "MR interface" is a good idea or not, but the association with him has permanently tainted an otherwise good story.

I can sort of imagine situations where an interaction style like this might be meaningful, but to my mind it is just science-fiction effects: the sort of thing you put into a story to make it impressive, but which doesn't actually have any advantage.

There's been a push for AR as the output device for expert systems, basically to walk a nonspecialist through a procedure in situations where it's not possible to send a specialist. The classic is instructing an astronaut on an unanticipated medical procedure or a soldier through a battlefield repair on some important apparatus. If you're going to use AR to guide the operator, it should probably be their UI too.

The consumer applications are a way to sell a few bad prototypes and keep the company going while

Apparently in the future everyone will have tremendous upper body strength. In all seriousness, though, consider the innumerable guys in my office building who insist on taking their business meetings all the way to the urinal via Bluetooth. I can just imagine the sensory overload being so overwhelming that they get mixed up, start waving their arms around, and pee all over themselves and anyone within range.

Apparently in the future everyone will have tremendous upper body strength.

Yup, it's the gorilla arm problem all over again: Try waving your hand in front of your face for about 10 minutes and see how you feel after a while.

An interesting experiment to demonstrate what an actual comfortable interface would look like: (1) Drop your arms to your sides. (2) Bend the arms at the elbow to bring them up to desk height, keeping everything else relaxed as possible. (3) Wiggle your fingers. That's where the thing you interact with should be: If you're like most people, your fingers will be

When doing that, you typically don't put your arms up continuously. For example, when I'm putting up sheet rock, sometimes I'm cutting things into shape, sometimes I'm moving a section over to the wall, sometimes I'm taking measurements, and sometimes I'm reaching up to put in screws. When I'm painting, sometimes I'm reaching up to roll paint on the surface, but sometimes I'm reaching down to put paint on the roller. When I've done building projects, a lot of the time was spent working at the much more conv

Oh god, another geek who still hasn't figured out how to use his arms properly. Gorilla arm has never been a problem for the hundreds of occupations that use their arms that way. It is only a problem for people who have so little experience using their arms that they assume it would be.

(also replying to belthize above) Earlier today I was toenailing rafters with a hammer, and yet right now I'm still glad I can rest my forearms on my desk and nudge the mouse around. (And no, it's not because I'm tired or out-of-shape!)

The difference with physical labour is that it's usually gross motor skills in short intervals. Also, you're usually benefiting from the interface being not only haptic but able to resist and support real forces, e.g.: to place a block, you only have to wrestle it in the p

Tai chi, yoga, and most martial arts all strive for economy of effort with maximum effect.

The key is to stop fighting yourself. Most people, especially geeks with very little physical experience, don't realize that they frequently have opposing muscle pairs in constant battle. One must learn to separate the force necessary for movement from the force necessary to counteract gravity; tai chi is excellent for this because the slow movement brings out the opposing forces.

It is only a problem for some. Mainly for those who do not know how to support their own weight, so they come up with myopic explanations, like the AC below, that it's all because you can't push (hard) against a touchscreen or the air. These people will never understand until they can support their own weight. Tensegrity.

There are two reasons that make gorilla arm an issue in particular for touch screen (or gesture based) interfaces:
- When using a touch screen or gesture interface continuously, you have very little (if any) opportunity to rest your arms a moment by leaning them against something. This puts a constant strain on your muscles that becomes problematic after a while
- The required precision of gestures puts an additional strain on your muscles

So you think that a touch/air interface requires you to always have your arms moving? Do you know how stupid that sounds? Do you constantly need your fingers typing or your hand on the mouse? Do you never have to think?

In how many occupations do you hold your hands up in front of your face for long periods of time? From art to shelf stacking, I can't say any of them involved persistently waving my forelimbs around in my eyeline.

Here's a test: for the next week, only use your handhelds with them held up in front of you, level with your head.

I've done three of those, and none of them involved me waving my arms in front of my face. Moving my arms up there periodically, and back down, yes. The point with the tablet is to emphasise that nobody spends more than a few minutes at a time with their arms up and it'd be excruciating to operate a computer where the tactile interface was only in your eyeline.

It's not so much Minority Report as it is them trying to find a viable interaction method for augmented reality. The AR version has some significant benefits over the Minority Report interface, in that it can theoretically overlay data on real-world objects and make things ranging from internal surgery to constructing aircraft a simpler undertaking by letting you see inside, or see where things should go. This method of interacting is severely limited (it's essentially a 1.5-inch-thick virtual touch screen).

So basically, the depth camera uses longitudinal chromatic aberration: the red, green, and blue channels are not in focus at the same distance. So it needs to have your finger well lit at all times... I see no problem with that at all...
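To make the idea above concrete, here's a rough sketch of depth-from-chromatic-aberration. This is not ITRI's actual algorithm; the focal planes, blur model, and scene are all invented for illustration. The only principle taken from the comment is that each color channel focuses at a different distance, so comparing per-channel sharpness gives a crude depth cue.

```python
# Sketch: with longitudinal chromatic aberration, R, G, and B focus at
# different distances, so the best-focused channel hints at object depth.
# All numbers here are made up for illustration.
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

rng = np.random.default_rng(0)
scene = rng.random((64, 64))  # stand-in for a textured object (e.g. a fingertip)

def channel_sharpness(img):
    """Variance of the Laplacian: higher means better focused."""
    return laplace(img).var()

def simulate_channels(distance, focus_r=0.30, focus_g=0.45, focus_b=0.60):
    """Blur each channel by how far `distance` is from that channel's focal plane."""
    blurs = [abs(distance - f) * 10 for f in (focus_r, focus_g, focus_b)]
    return [gaussian_filter(scene, sigma=max(s, 0.01)) for s in blurs]

def best_focused_channel(distance):
    """Index (0=R, 1=G, 2=B) of the sharpest channel for an object at `distance`."""
    channels = simulate_channels(distance)
    return int(np.argmax([channel_sharpness(c) for c in channels]))

# With the focal planes assumed above, a near object is sharpest in red
# and a far one in blue.
print(best_focused_channel(0.30))  # 0 (red)
print(best_focused_channel(0.60))  # 2 (blue)
```

It also makes the commenter's sarcasm concrete: the sharpness comparison only works when the finger has enough texture and light to measure focus at all.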

There's a lot of push towards I/O systems that are more convenient for the manufacturers, but less so for the end consumer.

The first keyboards were heavy and had tactile feedback. If you fumbled a key typing in your password, you knew whether it entered because you could feel the "click". Nowadays the keyboard is lighter than a paperback and there's no feedback - accidentally brush a key with your finger and you have to look (for non-password text entry) or start over.

Twist knobs are highly intuitive, especially when coupled with feedback. Twist a knob and see the hands of the clock move, or see the numbers change. You control the speed naturally, and if you go too far it's obvious how to back up. Nowadays we have buttons to tap, incrementing the count by 1 each time. Tap 50 times to set the minute display, and if you go too far you have to go all the way around again. This was done largely because buttons are easy to fabricate (using PCB contacts), not because they are inherently better.
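The interaction cost described above can be sketched as a toy model: a single "+1" button that wraps at 60, versus a knob or keypad that sets the value directly. The function and numbers are invented for illustration, not taken from any real appliance.

```python
# Toy model of the "+1 button" interface: presses needed to reach a target
# value when the only control is increment-and-wrap.
def taps_needed(current, target, modulus=60):
    """Presses of a +1 button to get from `current` to `target`, wrapping at `modulus`."""
    return (target - current) % modulus

# Setting the minutes from 00 to 50 takes 50 taps...
print(taps_needed(0, 50))   # 50
# ...and overshooting by one means going almost all the way around again.
print(taps_needed(51, 50))  # 59
```

A knob or direct numeric entry makes the cost of an overshoot one small correction instead of a full lap, which is the commenter's point.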

Modern typing is done on the display (phone, Surface), so not only do you lack tactile feedback, you can't feel the boundaries of the keys, and your fingers mask the key display. And it's really tiny: to access all the keys you have to press extra keys that switch between keyboards (upper/lower/symbol). Again, it was done for ease of manufacturing, not ease of use.

Is the ribbon any easier than, for example, cascading menus? The problem with Windows' original menu system was that every application put its commands in the top-level Start->Programs folder, leading to Start menus containing hundreds of links. (I take the time to move Start Menu links into subfolders by type, which makes it much easier, but on my dad's computer it's impossible to find anything.)

Ever since Minority Report, people have been touting the wonders of air-gesture input as the next "big thing", but is it better? (Actually, I remember it from Johnny Mnemonic, seven years earlier.) Seems like this is just something that's easier to manufacture, not easier to use. Sure, the customer will be able to do everything they could do with a mouse and keyboard, but more slowly, less conveniently, and with lots of frustration. That's an externality to the manufacturers; it's better for them because they don't have to build in a touch interface. Probably [electrically] more reliable, too.

Is this really progress? I wonder what Edward Tufte would say about modern interfaces.

The first keyboards were heavy and had tactile feedback. If you fumbled a key typing in your password, you knew whether it entered because you could feel the "click".

If you pay attention, that's not really how you type. You learn where keys are by muscle memory, not because they have a shape. When I'm typing, if I'm doing it right, I'm not feeling the edges of keys, just keys depressing under my fingers. And I notice mistakes on the screen, not from where my fingers hit.

Nowadays we have buttons to tap, incrementing the count by 1 each time. Tap 50 times to set the minute display, and if you go too far you have to go all the way around again.

We do? That sounds horrible. Just about anything I've ever used that has you set large sets of numbers just lets you type them directly.

Modern typing is done on the display (phone, Surface), so not only do you lack tactile feedback, you can't feel the boundaries of the keys, and your fingers mask the key display. And it's really tiny: to access all the keys you have to press extra keys that switch between keyboards (upper/lower/symbol). Again, it was done for ease of manufacturing, not ease of use.

Sorry, but I consider that wrong in lots of ways.

You have tactile feedback in that you can feel where you are in relation to the edge of the device. And while your finger obscures the key, pressing it displays what the key is, so you can see exactly what was hit. Touch typing is no harder than on a computer, because over time you learn where to press, and because predictive mechanisms correct most mistakes.

Also, the problem is inherently one of size. Touch screens are not "more convenient for the manufacturer". You have no idea how much software and hardware is involved in getting a touch-screen keyboard working well; a physical keyboard is just buttons. The reason touch-screen keyboards are winning out in small form factors is that they are more convenient for the USER. As good as the BlackBerry keyboard was, I hated using those tiny keys, and the virtual keyboard on a touchscreen has larger keys. It can also tailor the keyboard to a task, like entering numbers, rather than making you deal with a row of tiny numbers on a tiny keyboard.
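A minimal sketch of the "predictive mechanisms" mentioned above: snap a sloppy touchscreen entry to the nearest word in a vocabulary by string similarity. The vocabulary and inputs are invented for illustration; real soft keyboards also weight candidates by touch position and word frequency.

```python
# Sketch of touchscreen autocorrect: pick the closest vocabulary word to
# whatever was actually tapped out, using difflib's similarity ratio.
from difflib import get_close_matches

VOCAB = ["keyboard", "keypad", "touchscreen", "interface", "gesture"]

def autocorrect(typed, vocab=VOCAB):
    """Return the closest vocabulary word, or the input unchanged if nothing is close."""
    matches = get_close_matches(typed, vocab, n=1, cutoff=0.6)
    return matches[0] if matches else typed

print(autocorrect("keybaord"))    # "keyboard"
print(autocorrect("tuchscreen"))  # "touchscreen"
```

Even this crude version absorbs the transposition and dropped-letter errors that fat-fingering a virtual keyboard produces, which is why the lack of key edges matters less in practice than it sounds.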

Is the ribbon any easier than, for example, cascading menus?

I don't like the ribbon much but yes, direct access is in fact easier than deeply nested menus.

Ever since Minority Report, people have been touting the wonders of air-gesture input as the next "big thing", but is it better?

Even though I disagree with your core argument, I do agree with your conclusion. I'm not sure air-gesture input is better. But I think like all things, over time it will be folded into the mix as just another possible way to tell a computer what to do. I rank it slightly ahead of talking as a socially acceptable interface, although you probably look a bit crazier with the motion.

Nowadays we have buttons to tap, incrementing the count by 1 each time. Tap 50 times to set the minute display, and if you go too far you have to go all the way around again.

We do? That sounds horrible. Just about anything I've ever used that has you set large sets of numbers just lets you type them directly.

I think the GP isn't talking about smartphones here, but rather about stupid shit like microwaves, alarm clocks, and induction cooktops with touch controls. I am now in the process of buying a new cooktop for the place we just moved into, not because the old one is broken, but because the user interface is mind-numbingly stupid. It looks near-impossible to find an induction cooktop with good old-fashioned physical switches. Grrr.

The fact that you can't feel the advantages that a tactile keyboard offers over a non-tactile, or even 'non-existent', one says it all…

On a desktop I see a plain advantage. But on a smaller device the advantage vanishes, especially when portability is desired… until you get down to phone size, and then physical keys are a detriment, as I said.

I'm also not saying that physical keyboards are not good. Just that virtual keyboards are not bad, and can offer some real advantages.

In the past, it was said that people who behaved like this had Tourette's (very real, debilitating, and somewhat treatable... not making fun of them). Now it's going to be the new normal.

Imagine a bunch of Congress critters wearing these things... it will make C-SPAN funnier than the Comedy Channel. Especially when they start playing Doom or Halo during a crucial vote... yelling "Die, F-er!!!"

We're just getting closer to William Gibson's concept of cyberspace.
Add some "trodes" for your fingers, slot an OS into the jack behind your ear, and you'll be sipping virtual coffee with Count Zero in the Sprawl.