
Google has been granted a patent for a headset-mounted projection system that displays buttons and controls, according to a filing granted Thursday (uncovered by Engadget). In the patent art, the projector is mounted inside a set of glasses, not entirely unlike the Google Glass units set to reach more developer hands this year.

The art shows a person wearing the glasses with two control schemes projected on their person: in one diagram, a number pad on their palm; in another, a set of four buttons on the inside of their wrist. A third diagram shows the number pad mirrored inside the glasses' display, suggesting the HUD could show which buttons were being pressed so the wearer would not have to look directly at the number pad projected on their arm.

Inside the text of the patent, Google suggests further uses for the projector: “the processor may detect when the display hand is moving and interpret display hand movements as inputs to the virtual device.” In other words, the system would not only project interactive control schemes but would also accept gesture inputs to control what’s happening inside the glasses' display.

The patent also covers finer points of the design, such as how the glasses would determine the appropriate time to display controls (a gyroscope would detect when the head is moving) and when to stop projecting them (a camera could see, for instance, when the palm bearing the number pad falls to the person's side). We've been wondering how Google might handle control schemes more subtly than requiring users to speak commands or reach for a smartphone, as competing smart glasses do. This projector could do the trick.
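One plausible reading of that trigger logic, sketched in Python with hypothetical names and units (nothing here comes from the patent text itself), is a simple gate on the gyroscope and camera signals:

```python
# Hypothetical sketch of the patent's display-trigger logic: keep
# projecting the controls only while the head is reasonably steady
# and the target surface (e.g. an upturned palm) is still in view.

def should_project(head_angular_speed, palm_visible, motion_threshold=0.5):
    """Return True when it makes sense to keep projecting controls.

    head_angular_speed -- gyroscope reading, assumed rad/s
    palm_visible       -- whether the camera currently detects the palm
    motion_threshold   -- above this, the head counts as 'moving'
    """
    head_steady = head_angular_speed < motion_threshold
    return head_steady and palm_visible

# Head nearly still, palm in view: keep projecting.
assert should_project(0.1, True) is True
# Palm dropped to the wearer's side: stop.
assert should_project(0.1, False) is False
```

The real system would presumably smooth these signals over time rather than gate on instantaneous readings, but the shape of the decision is the same.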


Casey Johnston
Casey Johnston is the former Culture Editor at Ars Technica and now writes the occasional freelance story. She graduated from Columbia University with a degree in Applied Physics. Twitter: @caseyjohnston

If the camera has to identify and track your limbs so that the projector knows where to project the UI, why not just paint the UI over the limb in the HUD instead? Omit the projector entirely.

This is actually what I'm assuming the end result would be. That way you reduce hardware complexity and power consumption a TON, and make it usable in more scenarios (high noon in the desert? Good luck making a 'mobile' projector bright enough to be visible).

If the camera has to identify and track your limbs so that the projector knows where to project the UI, why not just paint the UI over the limb in the HUD instead? Omit the projector entirely.

You beat me to it! It doesn't make sense to expend the energy to physically display something, as well as compete with ambient light, when they can just perform some augmented reality business instead.

The glasses will only work with virtual interface solutions, and be truly compelling, when the display overlays the entire field of vision. That would permit real vision enhancement (information, images of objects behind other objects, thermal data, and so on) rather than an isolated readout floating off to the upper right.

Of course, that overlay would need to adjust, or permit adjustment, so that it enhances rather than interferes with regular vision when it matters, e.g., while driving.

The described interface sounds and looks exactly like Pranav Mistry's SixthSense from half a decade ago. Good on Google for taking an obviously good interface idea and running with it, but there is prior art here. (I'm ignoring the argument that software patents are innately invalid because software is non-statutory subject matter, since the USPTO and the courts ignore it too.)

Can't wait to see these things in action, since I've been complaining for years that no one has put Pranav Mistry's FOSS software to use in a commercial product yet (and that was one of his purposes in making the software FOSS: the hardware was cheap and off-the-shelf; all that was needed was a company to put it into production as a self-contained product).

If the camera has to identify and track your limbs so that the projector knows where to project the UI, why not just paint the UI over the limb in the HUD instead? Omit the projector entirely.

+1, though it might be hard to convey the spatial relationship of where the buttons are supposed to be without projection.

Sort of like when you reach out to touch an object in a 3d movie but it's clear that even though the ball is RIGHT THERE your hand is never going to actually touch it.

This was my thought too. The distance part is easy: to project at the proper size, the system already has to know the distance to the viewing plane. Keeping everything in the HUD would be cleaner and easier. Personally, I'd think the hardest part would be visually recognizing things like key presses, because the camera faces the same direction as the projector. That makes it hard to tell the difference between a finger moving to press a key and the hand just jittering slightly, since there is little information available about the depth of the action.

Also, personally I'd use the top of the forearm as the display plane rather than turning my arm palm up, though that would likely just translate into a landscape/portrait distinction: top of forearm for landscape, palm up for portrait. It was the first thing I noticed when I mimicked the picture; holding my arm like that was slightly awkward. The forearm should make a great projection plane, since it's visually easy to determine a line parallel to the viewing plane, which is necessary to determine the angle between the glasses and the plane so the display and text can be adjusted accordingly.

Also, with a HUD overlay versus projection, you can expand the viewing field to whatever the resolution of the glasses and the cameras allow, instead of limiting it to the display surface on the arm. You could then show, for example, an 8.5x11 standard sheet centered over your arm, whereas a projection could only display a sheet about 3" tall (the width of my forearm), and only if the viewing plane were perpendicular to the projector. Any other angle would skew the projection and lose vertical space.

However, this could involve more computational power, though at first glance it needs no more information or computation than the projector would need anyway: distance to the plane, its angle, and its orientation are all required to scale a surface projection properly. The only difference is that the physical projector isn't needed, or at least not as strong a one, since you still have to project onto the glasses themselves. Then the key question is what resolution that projector can offer.
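The scaling this commenter describes can be sketched with back-of-the-envelope geometry. The model below (a simple pinhole projector; all names and numbers are illustrative, not from the patent) shows why distance and tilt are both needed: a tilted surface catches only cos(theta) of the projected extent, so the throw angle must widen to compensate.

```python
import math

def required_half_angle(target_size_m, distance_m, tilt_deg):
    """Half-angle (degrees) the projector must cover so the image
    spans target_size_m on a surface tilted tilt_deg away from
    perpendicular to the projection axis.

    Foreshortening: the tilted surface only 'catches' cos(tilt) of
    the projected extent, so the effective size to throw grows by
    1/cos(tilt).
    """
    effective = target_size_m / math.cos(math.radians(tilt_deg))
    return math.degrees(math.atan2(effective / 2, distance_m))

# A 6 cm control pad on a forearm 40 cm away, square to the projector:
a0 = required_half_angle(0.06, 0.40, 0)    # ~4.3 degrees
# Tilt the forearm 60 degrees and the needed throw roughly doubles:
a60 = required_half_angle(0.06, 0.40, 60)  # ~8.5 degrees
```

The same three quantities (distance, angle, orientation) would drive a HUD overlay, which is the commenter's point: the math is shared, only the physical projector drops out.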

Or we could all be reading it wrong, and it just means projecting onto the HUD, not the actual arm, while the eyes do the work of making it look like it's actually on the arm.

Also, personally I'd use the top of the forearm as the display plane rather than turning my arm palm up.

I kind of wonder if this is somehow intentional. Thanks to bluetooth, we now have people wandering around like they are talking to themselves. The next logical leap in technology is to have people wandering around looking like junkies trying to find a vein.

Like the old joke goes: There are only two industries that call their customers "users".

If the camera has to identify and track your limbs so that the projector knows where to project the UI, why not just paint the UI over the limb in the HUD instead? Omit the projector entirely.

+1, though it might be hard to convey the spatial relationship of where the buttons are supposed to be without projection.

This is why I simply cannot understand why Glass is monocular: the possibilities for projecting elements into 3D space (maps, anyone?) make going straight to a stereo HUD seem like an absolute no-brainer.

Dammit, knew I should have patented that shit myself. Had pretty much exactly this idea ages ago, only I was more concerned about where the images should be projected. I wanted to make sure that looking around wouldn't be obscured, you would just be able to nominate a surface as your screen, and another one as the input.

This meant that if you looked away from the "screen," the projector stopped so you could see what was going on. For example, if you were on a plane using the seatback in front of you as your "screen," looking up at the stewardess as she asked about your meal wouldn't mean her face was obscured by the movie you were watching.

Google needs to watch more Dragon Ball Z. The Scouters in that series are basically what they're trying to achieve here anyway, and this projector thingy looks way more unwieldy than the touchpad/trackball the DBZ Scouters used...

No one seems to realize that we will all look like crazy people in 10 years: gesturing at things that don't exist, talking to people who aren't there. Who needs the future when crazy people are here now!

Why don't they make glasses with screens in both lenses, which could (by choice) give a stereoscopic effect? The user could then "press" a virtual UI that looks as if it's floating in front of them but is only projected on their glasses.

Dammit, knew I should have patented that shit myself. Had pretty much exactly this idea ages ago, only I was more concerned about where the images should be projected. I wanted to make sure that looking around wouldn't be obscured, you would just be able to nominate a surface as your screen, and another one as the input.

This meant that if you looked away from the "screen," the projector stopped so you could see what was going on. For example, if you were on a plane using the seatback in front of you as your "screen," looking up at the stewardess as she asked about your meal wouldn't mean her face was obscured by the movie you were watching.

Ideas like this have actually been explored for ages; the hard part is realizing them.

So how about, instead of a projector/camera/etc., pairing the HUD glasses with something like a Leap? Everything in the glasses could be displayed in however many virtual layers you want in the HUD. It would take quite a bit of tuning to get things in the HUD to line up properly with the motion-control field, but once that was figured out, everyone could calibrate it to their own eyes and finally have 3D objects that respond to being grabbed and pulled around...