Google's Project Glass team has released the Linux-specific source code that ships with the Android-based augmented reality device. The source is available now at code.google.com, and Google said "it should be pushed into git next to all other Android kernel source releases relatively soon."

This doesn't mean all of the source code related to Google Glass has been released. Android code is released in two parts—changes to the Linux kernel are released under the GPLv2 free software license, just as Linux itself is, while most of the code that makes Android recognizable to users is released under the Apache License. With Glass, just the Linux code has been released.

While the public nature of Android source code has allowed forks such as Amazon's Kindle Fire platform, a brand-new glasses platform based on Glass code wouldn't be possible just from the kernel source release. For example, any modifications made to the Android user interface to optimize it for Google Glass wouldn't be part of the kernel code.

It seems likely that Google will also release the rest of the Glass-specific code eventually, particularly since Glass ships with a version of Android. We asked Google today if it plans to release all the Glass code and when, but we haven't heard back yet.

Google's reaction to developers tinkering with Glass should be encouraging to those hoping Glass will be as open as the rest of Android. Jay Freeman, who developed the Cydia app store for jailbroken iOS devices, reported Friday that he had jailbroken and modified the software on a developer version of Google Glass.

Other developers reported that rooting Google Glass is pretty easy and that Glass ships with the year-old Android 4.0.4, according to a 9to5Google story.

There's been some debate over whether developers actually gained root access to the devices or simply took advantage of a "fastboot OEM unlock" that Google itself provided. "Not to bring anybody down... but seriously... we intentionally left the device unlocked so you guys could hack it and do crazy fun shit with it," Google engineer Stephen Lau wrote. "I mean, FFS, you paid $1500 for it... go to town on it. Show me something cool."

So, does this mean someone will be able to easily write an app that disables the camera light, and lets them say "Ok, Glass, search for kittens" which is a codeword for "take a creep shot of this girl in front of me"?

Quote:

There's been some debate over whether developers actually gained root access to the devices or simply took advantage of a "fastboot OEM unlock" that Google itself provided.

Technically, that command just unlocks the ROM. Google is giving you complete Read/Write access to whatever memory on the device you want. You can then change your account privileges to whatever you want.

You can call this "rooting" if you want, but it's a little silly since the device is made to be used like this.
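For readers curious what that flow actually looks like, here is a rough sketch of the unlock sequence being described, using the stock Android SDK platform tools. The modified boot image name is a hypothetical placeholder, not a real Glass artifact, and the exact partition layout on Glass may differ:

```shell
# Sketch of the "fastboot oem unlock" flow discussed above. Assumes the
# Android SDK platform tools (adb/fastboot) and a connected device;
# boot-rooted.img is a hypothetical modified boot image with su included.
adb reboot bootloader                 # drop the device into its bootloader
fastboot oem unlock                   # the unlock Google intentionally left enabled
fastboot flash boot boot-rooted.img   # write the modified boot image
fastboot reboot
adb wait-for-device
adb shell su -c id                    # confirm commands now run as uid 0 (root)
```

This is exactly why the "is it really rooting?" debate exists: nothing here exploits a vulnerability; every step uses functionality the vendor left open on purpose.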

Quote:

So, does this mean someone will be able to easily write an app that disables the camera light, and lets them say "Ok, Glass, search for kittens" which is a codeword for "take a creep shot of this girl in front of me"?

Pull out HTC phone. Turn off sound. Go to Camera app. Turn off flash. Hold up phone as if you're scrolling through something. Take picture.

We may already live in that world, but you still have to get your phone out and point it towards someone, and people can still screen watch which may temper willingness. But when you know you're the only one to see what you are snapping? Just sayin', people keep defending the idea of being secretly recorded with the light and voice commands, but that doesn't look to be true for very long.

I've actually done recon using a phone set to record and casually walking beside someone while pretending to hold a conversation. No one noticed. This is not the end of the world as we know it.

Quote:

We may already live in that world, but you still have to get your phone out and point it towards someone, and people can still screen watch which may temper willingness. But when you know you're the only one to see what you are snapping? Just sayin', people keep defending the idea of being secretly recorded with the light and voice commands, but that doesn't look to be true for very long.

Yes, and all you have to do is buy a Glass, root it, alter some setting or the camera firmware, and you have a camera that can secretly take pictures from the position of your face.

Or you could just buy the tons of cameras that are available in small form factors and can go in places and at angles that are going to be more than a little noticeable when you stick your face in there, including the phones almost everyone carries now, yes.

Quote:

We may already live in that world, but you still have to get your phone out and point it towards someone, and people can still screen watch which may temper willingness. But when you know you're the only one to see what you are snapping? Just sayin', people keep defending the idea of being secretly recorded with the light and voice commands, but that doesn't look to be true for very long.

Well, on the other hand, a phone you can point anywhere; you don't have to face the same way. Glasses look straight ahead, so creep shots/covert recording are pretty limited by the fact that you'd have to stare at whatever you're recording.

Staring down someone's blouse gets a slap whether you wear Glass or not, but you could probably get that shot with a phone while looking like you're checking mail.

It's all just a question of ethics. Those who wouldn't do that with a phone, won't suddenly get mad with power after getting glasses. Those who would do that can do that (and probably are doing that) already with phones. I don't see Glasses as some revolutionary Creep Enabler like that.

Quote:

We may already live in that world, but you still have to get your phone out and point it towards someone, and people can still screen watch which may temper willingness. But when you know you're the only one to see what you are snapping? Just sayin', people keep defending the idea of being secretly recorded with the light and voice commands, but that doesn't look to be true for very long.

Or I could just put on a cap, put a pen in my pocket, adjust my tie clip... There are lots of hidden cameras available at affordable prices. You have no clue who is recording you with what article of clothing or accessory, and who is not.

Are Google doing this because they have no idea how to position the product? I get the faint feeling they are scratching their heads over what to do with Glass. Most companies come out with a list of things their product can do to showcase its potential, and then the consumer shows them what else it can do...

It's a bit like that, to be sure.

People have loved the *idea* of augmented reality/heads-up-display and always-on-camera technology since... well, for a long, long time. Google knows that; the people at Google have been reading Sci-Fi just as long as the rest of us.

Smartphones have taken us closer to the sci-fi world in the sense that they do things we saw in Star Trek: wireless two-way voice communication over long distances. Really, though, the devices we saw on Star Trek were satellite phones; smartphones on GSM/CDMA networks allow for significantly different sets of features, especially now that internet backhaul is pervasive and accessible over that same network. Star Trek never imagined cameraphones, because the use-case for the communicator was military/scientific. Integrating the tricorder and the communicator was a no-brainer, but... not many people actually have any use for a tricorder. So nobody cares that Android phones from a few years ago could run some significant parts of the theoretical tricorder functionality in a free app.

So, AR headsets are now equally *possible*, but the sci-fi use-cases for a heads-up display are about as vague as the tricorder's were. Remember that we've had consumer-grade "helmet cam" hardware for a long time; Google's hardware isn't even that good in comparison. We've also had head-mounted, voice-activated communication through Bluetooth headsets for a while. What Google can do is, as with Android, take the existing hardware designs and add a layer of interconnection with networked services. Navigation, directory services, and personal data accessibility are all things Google excels at providing. Video chat, telephony (including telephone network routing), voice-recognition command systems, and social networking (including photo sharing) are all things Google is getting into in a big way. Does Glass bring anything substantively *new* to the table? Absolutely not. Neither did the iPhone. Neither did the laptop computer.

Does Google know how to sell this thing? Of course they do. They're selling it to people who have been asking for it. There are plenty of those people, and they're happy that Google is the one selling it. It might not be a breakthrough billions-of-units-sold device, but it doesn't have to be. It's not even a flagship product... it's a labour of love created by a company full of nerds who want to help other nerds expose newer, cooler (and, yes, even more perverse) ways to do things with global information networks. I'm not one of those people, but I know a few of them. I'm frightened by what might be coming. But, on the whole, I'm hopeful that human beings will find a way to maintain humanity in their interactions, and I'm convinced that heads-up-display hardware giving people relevant information when they need it is at least a better way forward than everyone walking around staring at tiny screens, playing Angry Birds and browsing Facebook in their hands all the time.

"I mean, FFS, you paid $1500 for it... go to town on it. Show me something cool."

++

Glad to see common sense prevail.

Except they didn't. Google is still #$^#^#$ up enough to try to limit you from selling the Google Glass YOU bought ... If Google was a person, they would have been diagnosed with multiple personality disorder.

"I mean, FFS, you paid $1500 for it... go to town on it. Show me something cool."

++

Glad to see common sense prevail.

Except they didn't. Google is still #$^#^#$ up enough to try to limit you from selling the Google Glass YOU bought ... If Google was a person, they would have been diagnosed with multiple personality disorder.

I read that as them trying to make sure that the limited Glass units were purchased by people actually wanting to develop for it, not people buying them to try to scalp them. That might just be my naivete, but I don't really see some big conspiracy around Google Glass where they're trying to strip you of your rights. From what I recall, the terms were made clear before people signed up and paid $1500 for them.

They only released the kernel? That's a little underwhelming. Google is under obligation to release the kernel, since it's licensed under GPLv2. Not exactly a charity move on their part, since they are required to release at least that much.

Still, Glass does seem to be pretty hacker friendly, and having the kernel source is still super useful. Just not some big kindness by Google.

Smart money says Google is keeping an eye out for the most awesome projects out there to offer the creators professional development contracts to bring their ideas to the big leagues. That's what I would do!

To all the younger people getting excited by this: us older ones (well, me and anyone else I know) don't think 'THIS WILL CHANGE THE WORLD'. It's a toy, a novelty. Watching too much science fiction.

I disagree. The more integrated technology becomes into your daily life, the more it affects things.

The ability to call upon Google trivially in any conversation is pretty handy.

i hope it's a lot more than that. i have no idea what this iteration of augmented reality will become, but the field as a whole has the potential to be the turning point in human/machine interaction. up till now, a computer is something that we use or look at or work on - another world that we go to and interact with. this is the first step towards bringing machines into our world and possibly having them shape our experience and perception.

with the right programming and art design, a simple HUD text overlay could morph into adding objects into our world. an easy example would be how videoconferencing could turn into holograms of the people you're talking to "standing" right in front of you. you don't have to figure out how to project holograms in the real world when the glasses can add anything to your vision

with a 3d camera that gains depth perception and using object recognition software probably powered by google's search engine, the entire world could become a touchscreen. if we have face recognition software now, finger recognition can't be that difficult. holding something and saying "search" or using some gesture will result in information overlay next to the object. hell, the gesture could be a double click using your finger on an object

Quote:

i hope it's a lot more than that. i have no idea what this iteration of augmented reality will become, but the field as a whole has the potential to be the turning point in human/machine interaction. up till now, a computer is something that we use or look at or work on - another world that we go to and interact with. this is the first step towards bringing machines into our world and possibly having them shape our experience and perception.

Computers already do this. Computers have completely warped our perspective of reality. The ability to instantly communicate with dozens of people from around the world, simultaneously, and to access all of human knowledge is far more important than any sort of petty AR. AR is just a means of accessing said knowledge while wandering around, so you could look at the Eiffel Tower and bring up its history or whatever, leave virtual notes for people, etc.

Quote:

with the right programming and art design, a simple HUD text overlay could morph into adding objects into our world. an easy example would be how videoconferencing could turn into holograms of the people you're talking to "standing" right in front of you. you don't have to figure out how to project holograms in the real world when the glasses can add anything to your vision

Thing is, videoconferencing rather sucks. Its primary use is actually sharing PowerPoints; looking at people talking is pointless. And people communicating on Glass can't see each other anyway: each of you would see what the other is looking at.

Quote:

with a 3d camera that gains depth perception and using object recognition software probably powered by google's search engine, the entire world could become a touchscreen. if we have face recognition software now, finger recognition can't be that difficult. holding something and saying "search" or using some gesture will result in information overlay next to the object. hell, the gesture could be a double click using your finger on an object

Figuring out where an object is in space is about a million times more difficult than facial recognition. Facial recognition is actually easy compared to figuring out how far out your fingers are, which is why Kinect postdates facial recognition software by ages and ages.
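To make that gap concrete: the geometry of depth is the easy part. Given a calibrated stereo pair, depth falls out of one division; the genuinely hard, Kinect-grade problem is computing a reliable disparity (the pixel offset of the same point between the two views) in the first place. A minimal sketch, with made-up focal length and baseline values rather than any real device's parameters:

```python
# Depth from stereo disparity: depth = focal_length_px * baseline_m / disparity_px.
# The formula is trivial; the hard, expensive part (which Kinect-era hardware
# sidestepped with structured light) is finding a reliable disparity per pixel.

def depth_from_disparity(disparity_px, focal_length_px=700.0, baseline_m=0.06):
    """Return distance in meters for a given pixel disparity.

    focal_length_px and baseline_m are illustrative values for a small
    stereo rig; they are not real Glass parameters (Glass has one camera).
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive (zero means infinitely far)")
    return focal_length_px * baseline_m / disparity_px

# A nearby fingertip produces a large disparity, a far wall a small one:
print(depth_from_disparity(60))   # roughly arm's length
print(depth_from_disparity(3))    # across a room
```

Note how precision collapses with distance: at 3 px of disparity, a 1 px matching error changes the estimate by meters, which is one reason fingertip tracking in free space lagged so far behind face detection.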