DIY “Project Glass” clone looks almost too good to be true

By now we’re assuming you are all familiar with Google’s “Project Glass”, an ambitious augmented reality project for which they revealed a promotional video last week. [Will Powell] saw the promo vid and was so inspired that he attempted to rig up a demo of Project Glass for himself at home.

While it might seem like a daunting project to take on, [Will] does a lot of work with Kinect-based augmented reality, so his Vuzix/HD webcam/Dragon Naturally Speaking mashup wasn’t a huge step beyond what he does at work. As you can see in the video below, the interface he implemented looks very much like the one Google showed off in their demo, responding to his voice commands in a similar fashion.

He says that the video was recorded in “real time”, though there are plenty of people who debate that claim. We’re guessing that he recorded the video stream fed into the Vuzix glasses rather than recording what was being shown in the glasses, which would make the most sense.

We’d hate to think that the video was faked, mostly because we would love to see Google encounter some healthy competition, but you can decide for yourself.

Project Glass is often referred to as augmented reality, but I wonder whether that description is really accurate. I would have thought that for something to be augmented reality, the display would have to interact with your view of reality. As it is, Project Glass simply overlays information on your view, but its position and the information it displays don’t seem to relate to what you are actually looking at.

I suspect a better description than augmented reality would be a wearable HUD (head-up display).

Google doesn’t call it AR themselves.
Technically the name could be used, as it is an augmented view of reality.
However, the term has come to be a bit more specific, requiring some sort of real-world alignment.
That said, in many ways that’s more a software challenge than a hardware one.

I think the take-away message in this whole thread is that we need better image recognition technology. We need cameras and software to function more like the human eye and human brain. Imagine if the glasses in this video recognized all the text on the pages, as well as the sources of all the advertisements on the page. You could see who is trying to sell you something, as well as why they are trying to sell it to you. Informed consumers could see why certain companies choose certain tactics.

I like this idea a lot. Going further, we start having it evaluate everything you are exposed to audio-visually for semantic content: you’re watching a commercial, and the display reminds you that the sponsor hasn’t actually made any substantive claims about the product at all. Or it suddenly alerts you to a friend’s or relative’s face in a shot of the crowd at a concert or sporting event. A sort of second, buddy-like awareness.

Computer vision technologies as they exist are pretty sophisticated. The bottleneck is processing power. Do a Google Play search for “OpenCV” and play with some of the apps available. Compare the frame rate on an LG Optimus V versus an HTC Evo 3D. The frame rate of something as basic as a Haar-cascade face detection app will tell you a lot about the horsepower requirements.

Moore’s law and the singularity still aren’t to the point where you can put a cerebral cortex in a pair of sunglasses and expect performance equivalent to the human brain.

IMO we won’t see “human brain performance” in our lifetime. Also, I think you’re using Moore’s law incorrectly here. It does not describe processing power; it just describes the shrinking of transistors over time, which is finite and will start slowing soon.

Might be. But if his rig has a camera, it could just be recording that stream and overlaying the menu on top of it to give us an idea of what it looks like. The menu probably doesn’t look that good to him, as the Vuzix glasses don’t have very good resolution, if I recall correctly.

This isn’t trying to be a see-through display like Project Glass. He is overlaying the HUD on top of a webcam video and playing that back on the glasses’ display. We are seeing a recording of the feed (i.e., he didn’t even need the glasses).
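That kind of pipeline is straightforward to sketch. Here is a minimal NumPy-only illustration of compositing HUD graphics onto a camera frame before it goes to the display; the function name and the per-pixel alpha layer are my own invention, not anything from the video:

```python
# Alpha-blend a HUD layer over a camera frame, the way a video
# see-through rig composites its overlay before display.
import numpy as np

def composite_hud(frame, hud_rgb, hud_alpha):
    """Blend HUD graphics over a camera frame.

    frame     -- HxWx3 uint8 camera image
    hud_rgb   -- HxWx3 uint8 overlay graphics
    hud_alpha -- HxW float in [0, 1]; 0 = fully transparent
    """
    a = hud_alpha[..., None]  # broadcast alpha over the colour channels
    out = frame.astype(np.float32) * (1 - a) + hud_rgb.astype(np.float32) * a
    return out.astype(np.uint8)

# Tiny demo: a black frame with one fully opaque white "icon" pixel.
frame = np.zeros((4, 4, 3), dtype=np.uint8)
hud = np.full((4, 4, 3), 255, dtype=np.uint8)
alpha = np.zeros((4, 4), dtype=np.float32)
alpha[0, 0] = 1.0  # only this pixel of the HUD is visible

out = composite_hud(frame, hud, alpha)
print(out[0, 0], out[1, 1])  # prints [255 255 255] [0 0 0]
```

A real rig would run this per frame at camera rate and then record `out`, which is exactly why the recording looks identical to what the wearer sees.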

*IF* this is real, it was recorded by recording the displayed video. I believe these glasses are not see-through displays, but rather displays in front of your eyes. It is just showing video from a camera, overlaid with the information. Then the computer just records the displayed video.

Bio, you are incorrect. This is a recording of the stream, so the icons are getting injected into the video feed before displaying to the end user. There would be no movement of the icons as it’s displaying in exactly the same space. Same with the contrast. The fact that he doesn’t explain it probably means it’s fake, but not the other reasons you suggested.

I call fake. The video might be from the glasses, but the overlay is clearly just edited in. But imho google did the same thing and neither of the videos proves that there is anything like the google glasses that is actually working as well as google wants us to believe.

I see no reason why this should be faked – it isn’t groundbreaking. I don’t think those Vuzix glasses are proper look-through HUD technology, are they? Aren’t they just high-end VR glasses? I.e., a small display and optics to give a virtual screen, onto which he’s projecting a composite of the video feed from the cameras and some Adobe graphics controlled via Dragon.

I suspect he’s using the PC to do the work too, so not portable.

I’ve looked into this for my own (failed!) project – the optics are the truly challenging part. Getting a data feed wirelessly from a smartphone should be easy enough.

Brother has cracked the optics side with their retinal projection system, which I suspect is the route Google will take.

The tech is possible, but the video is a fake. A Bluetooth smartphone and a pair of Bluetooth glasses with a camera, mic, earbuds, and an OLED screen are all the basic needs to make this, and all of it is available today.

What the guy had on in the video is a pair of HUD video glasses that cover the eyes, with no seeing past them. Also, on the whole magazine thing: a live camera should also bring up links to web pages on the topic, and another option could be to have the magazine read aloud via text-to-speech.

The smartphones out today have more than enough power to handle the computing needs.

One day they will be all you need: cellphone, computer, TV, MP3 player, and camera could all be replaced with a pair of glasses.

What we don’t see is how good the image inside the glasses is. The main problem with augmented reality is the user experience. Previous incarnations were fuzzy and not completely aligned with the vision of the user.

Reading a newspaper while wearing a video see-through HMD is like reading screenshots taken with a camera. Nevertheless, such a device should be possible to make in a day or two. There may be problems with low-light situations, latency, and limited field of view.

There have been dozens, if not hundreds, of researchers and people prototyping in this field for the past 50 years or more.
Lots of people have contributed to this field; there’s no one person being “copied”.

I’m glad to see others referencing the work of Steve Mann, who really is a seminal researcher in this area of computer-mediated vision. Google’s project is nothing new, and there are already people out there with years of experience using similar devices in their everyday lives.

The interesting question is what happens when, after years of depending on this technology, it is removed…

Frankly, with off-the-shelf hardware, what this guy’s done isn’t difficult. He seems to already have these Vuzix glasses (which from a cursory glance seem like pretty uninteresting LCD display goggles with cameras mounted in front), and the rest uses Adobe AIR, which I’m pretty darn sure is a more or less full-horsepower platform that runs on a PC.

What I’m saying is that if you had these glasses lying around, had some experience with a language that could interface with them easily, and had some third-party voice recognition software, it’s probably a day or two’s worth of hacking to make software like what he showed work. Google Glass is more of a cool idea – honestly, there isn’t any single part of Google Glass that is really innovative on its own. HUDs like Glass have been used by the military since forever, and are already in a number of consumer products (high-end cars). It’s a matter of putting that together with /good/ speech recognition, a /good/ UI, and some smart way to retrieve information (’course Google already knows how to do that), plus well-designed hardware (for Google, this means contracting Samsung or someone who’s done plenty of this sort of thing before) for a well-polished product you would want to use.

Of course, Google has yet to demonstrate working hardware, so I’m actually skeptical about how well their product will work in practice until I see it for myself.

Back to this thing: I think it’s more or less a given that it’s only a demo, and it hardly has all the features Google Glass is supposed to have (most of the functionality, software-wise, is probably faked or limited to exactly what he showed).

Nice, but this video is not real time; it’s fake. Why? You see that he puts the glasses on, but where is the camera that films what he sees through the glasses? We are asked to believe we are seeing, in real time, what he sees, but that can’t be, because there is no camera at his eyes. I think the Google-style menu was added in with editing.
Remember the “3D without glasses” trick, where you put two things on your head and could supposedly see 3D? That turned out to be fake too.

My big problem with this, is I didn’t think they made wearable monitors with enough resolution to read a magazine. Obviously in the video itself you won’t see that, since the video’s own resolution is limited. But this is why a transparent screen is a good idea. Nobody could be expected to function with their vision coming purely through a monitor.

I’ve had an idea: use a monochrome LCD to switch areas of the display between transparent and black. On top of that, put an OLED or a side-lit LCD or something else that gives off light. That way, computer graphics can go on the blacked-out bits, while normal vision still comes through the bits left transparent.