Google Glass spyware app is cute, but not the end of the world

Sneaky app takes photographs without informing the user.

Google's stated policy for apps on its Google Glass head-mounted hardware is that apps aren't allowed to take photographs when the display is turned off. But it turns out there's nothing actually enforcing this policy. Two California Polytechnic students built an app that converts Glass into a spy camera, taking a photo every 10 seconds without any visible indication to the user, reports Forbes.

The app, built by graduate researchers Mike Lady and Kim Paterson, masquerades as a legitimate piece of note-taking software, albeit with the decidedly illegitimate name of Malnotes. It captures images of whatever the Glass wearer is looking at and uploads them to the Internet. The pair notes that although this violates the Glass terms of service, those terms of service have no actual enforcement in the Glass software.

They aren't sure if they could get the app into Google's curated MyGlass app store. They did manage to get it into the relatively wild Google Play app store, but when their professor tweeted about their work, they decided not to bother trying to submit it to the more restrictive storefront. Google has subsequently removed the app.

Talking to Forbes, Paterson expressed dismay that this was possible, noting that many current Glass apps are sideloaded, as developers are still experimenting with the platform.

In a statement given in response, Google said, "Right now Glass is still in an experimental phase and has not been widely released to consumers. One goal of the Explorer program is to get Glass in the hands of developers so they can hack together features and discover security exploits."

This isn't the first time this kind of exploit has been used on Glass. Last year, we reported on the way rooted devices could be used to spy, again with no indication to the wearer that anything untoward was happening.

We're inclined to agree with Google's response: this kind of attack isn't a big deal—at least, not yet. Glass is not a mainstream device. While there are all manner of dubious individuals trying to sell the device, officially it's only available to people accepted into Google's Glass Explorer program. As such, it's meant for experts—predominantly developers and other tech-savvy individuals—and isn't billed as production-ready final software.

This is the kind of problem that needs to be addressed prior to Glass eventually becoming a mass-market consumer device, but until that happens, buyers simply need to beware. Third-party software, whether malicious or, as in this case, merely experimental, is liable to make the hardware do things you don't want it to.

Promoted Comments

Hey guys, I'm one of the researchers mentioned in the article. What I found when I was playing around with Glass's camera API is that it does require the camera to have a SurfaceHolder object inflated on the display to show a preview of what the camera is looking at.

What is not enforced by the API is the size of that preview SurfaceHolder object and that the display needs to be on. The developer can currently make the preview SurfaceHolder object 1x1 pixels, nullifying its intended purpose. The developer can also make a check to see if the display is on or off.

What I imagine Google is doing to fix these two problems is to add extra conditions to their Camera.takePicture(...) method:

1. They can check the size of the SurfaceHolder that was passed to them (via Camera.setPreviewDisplay(...)) to make sure that it is greater than some minimum size or takes up the whole screen.
2. They can do the same check that I do to ensure that the display is on.

If either of these conditions is not met, the Camera should throw an exception, much as it already throws an exception for not having that SurfaceHolder object inflated on the display.

These are conditions that MUST be enforced by the Camera API before Glass is widely available. Otherwise, it could enable an entire ecosystem of nefarious apps that never make it into the Glass store but can still be downloaded from third-party sites and sideloaded.
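The two checks proposed above can be sketched in plain Java. This is only an illustration, not real Android framework code: the class `CapturePolicy`, the methods `isCaptureAllowed` and `checkCapture`, and the minimum-size constants are all invented names, and the actual enforcement would have to live inside Android's `Camera.takePicture(...)` implementation.

```java
// Sketch of the two proposed API-side checks: the preview surface must be a
// meaningful size, and the display must be on. All names here are hypothetical.
public class CapturePolicy {
    // Minimum preview dimensions, chosen arbitrarily for illustration.
    // A real fix might instead require the preview to fill the screen.
    static final int MIN_PREVIEW_WIDTH = 320;
    static final int MIN_PREVIEW_HEIGHT = 240;

    /** Returns true only when the preview surface is large enough to serve
     *  its intended purpose (defeating the 1x1-pixel trick) and the display
     *  is on (defeating display-off capture). */
    public static boolean isCaptureAllowed(int previewWidth, int previewHeight,
                                           boolean displayOn) {
        boolean previewVisible = previewWidth >= MIN_PREVIEW_WIDTH
                              && previewHeight >= MIN_PREVIEW_HEIGHT;
        return previewVisible && displayOn;
    }

    /** Mirrors how the Camera API already throws when no preview surface is
     *  set: reject the capture instead of silently taking a photo. */
    public static void checkCapture(int previewWidth, int previewHeight,
                                    boolean displayOn) {
        if (!isCaptureAllowed(previewWidth, previewHeight, displayOn)) {
            throw new IllegalStateException(
                "Capture rejected: preview too small or display off");
        }
    }
}
```

The point of the sketch is that both bypasses described above are trivially detectable at the moment of capture; the framework simply doesn't look.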

To get away from the discussion of how awful this is: what could Google do to prevent this?

I guess they could check API calls to determine if the app is using the camera, or they could require apps to get confirmation before using the camera (like Apple does with the microphone). This, of course, could be circumvented by the app masquerading as a legit camera app.

I'd prefer it if there were some kind of hardwired signal whenever the camera is in use. This is hard to do correctly, as the recent firmware hacks of MacBook cameras showed, but it's probably the only useful solution. The signal should also be visible to others, so they know they're being recorded.

If anything, I would think it would be built into the API: if it takes a picture, the picture shows up on the display, no exceptions. Google's wink mechanism already does that, so just move that into the API call (if it isn't already).

Apple has security flaw == End of World.
Google product has security flaw == No big deal.

Great neutral reporting guiz.

The point I got from the article is that Google Glass is a limited-release product, and therefore its security issues are less likely to be a problem for the general public. That's different from the point you seem to have taken, which is that Apple is somehow more important than Google.

I don't think that the malware issue is a big deal; but this does show - as everyone pretty much suspected all along - that a GG user can take your picture without there being any visible indication. More support for the idea that restaurants and bars that ban GG are on the right track.

Apple has security flaw == End of World.
Google product has security flaw == No big deal.

Great neutral reporting guiz.

Ars has long delivered neutral but hard-hitting coverage of Google security. Indeed, Android users have several times accused me of "sensationalizing" articles to make Android vulnerabilities appear worse than they really are. In just the past month, here are two examples of the kind of scrutiny Ars has given to security in Android:

I write more than 300 articles per year. Spend a few minutes searching and you'll find many, many more.

As an aside, comments like "Great neutral reporting guiz," even if intended to be playful, aren't constructive. Frequently, they add more noise than signal to a discussion. Rather than peppering your comments with sarcasm, please consider adding facts that support the argument you want to make.

And in case it's not clear, the argument I'm making is that Ars delivers tough but fair security coverage on a whole spectrum of products and platforms, including iOS, Windows, open source, Android, and the Internet of Things. It may be tempting to claim we favor one thing over another, but those arguments aren't supported by the articles we publish. People upvoting dawesdust_12, please think again.

Except the last "giant" security article that was written about Apple/iOS was absolutely terrible (and the flaw is now fixed as of iOS 7.1). It was based off of a removed blog post (the iOS screen-recording one), with nothing but vague information about how such a flaw existed, other than that it did.

The information missing from the article caused the masses to speculate about whether such an app could exist on the App Store, and the Android fanboys came in and tried to blow it so far out of proportion that it was four-dimensional.

There was so much speculation, because of so much missing information, that no article at all would have been better. It would have been better to wait for the exploit reporter to write a post-mortem AFTER the flaw was fixed, and base an article on that. At least that way the article could have been factual, rather than speculative and fear-mongering to the more gullible.

1. If you would like to comment on the coverage of Apple/iOS issues (e.g. calling it "terrible"), please do so in the comment threads for those articles, rather than derailing unrelated comment threads.
2. If you're dissatisfied with Ars writers' fairness, you can start a thread (maybe in the main forum area) comparing coverage of iPhone/iPad issues to coverage of Android phone/tablet issues.
3. What the "fanboys" say in the comments is not the responsibility of the Ars writers, and there are fanboys for every platform, in approximate proportion to each platform's market share.
4. Comparing the impact of flaws in a globally popular Apple device with huge market share to a flaw in a limited-release, experimental Google device is not useful. If Apple had limited-release, experimental products like Google Glass, the absence of software-level enforcement for terms of service would receive a similarly unconcerned reception, so long as the flaw was fixed before a mass-market release.

I'm actually more concerned about the lack of anonymity and civil liberties being violated with this technology than any hacking concerns.

My state (Victoria, Australia) has just approved new anti-protest laws. As part of the laws, if you're asked to move on 5 times in 12 months, it's 12 months' jail time. While they seem aimed at protesters, they could also be used against homeless people and other "vagrants". Having police walking around with technology that can easily perform facial recognition just makes it infinitely more open (and likely) for it to be abused.

I'm definitely the target market for something like Glass, and I think the technology is great. But I just think it's much more likely for the technology to be abused by authorities than not.

Uh... Not that I don't think that law is awful, but I don't see the connection between it and anything to do with Glass.

Well, you could wear Google Glass and document law enforcement telling you to move along. A few towns in California are now requiring police to wear earpieces with cameras, which I guess you would call half a glasshole. It turns out civilian complaints drop when law enforcement is on the record.

So isn't that an argument in favor of something like Glass? If police know their actions are being recorded, they're less likely to beat the hell out of someone for walking while black.

I agree that having their actions recorded might reduce the violence they cause. I just don't like the idea of them being able to use facial recognition wherever they go. It just adds a whole other level of ability to profile and harass. And I'm pretty sure law enforcement agencies won't bother adhering to Google's policies about what apps should and shouldn't be developed for Glass. It's not like they'll be publishing the apps for other people to use.

It'll be impossible for Google to dictate what apps get deployed to these devices and by whom. They might be able to control what goes on their app store, but not what gets deployed by developer devices. And there are almost unlimited ways a device like this can be abused.

Weird reaction. Dan Goodin defended Ars' record for fairness and pointed to other posts, throwing the discussion wide open. Dawesdust reacted to that, and now you say he/she should stay on topic!

I'd prefer it if there were some kind of hardwired signal whenever the camera is in use. This is hard to do correctly, as the recent firmware hacks of MacBook cameras showed, but it's probably the only useful solution. The signal should also be visible to others, so they know they're being recorded.

A hardwired signal is not difficult to do; that would be basic electrical engineering. Laptop manufacturers just don't think it's important.

This is the kind of problem that needs to be addressed prior to Glass eventually becoming a mass-market consumer device, but until that happens, buyers simply need to beware.

Buyers need to beware? It sounds to me like Glass can easily be set up to take pictures of anyone and everyone around it. It's not taking pictures of the buyer (not while they're wearing it, anyway). It's the people around Glass(es) who need to beware.

It's amazing how much goodwill was wrung out of the "don't be evil" slogan. I have to believe if Microsoft was putting products on the market that surreptitiously monitored those around them, it would result in significant backlash. In fact, I seem to recall numerous people saying they didn't want Kinect for that very reason. But with Google, it's always fairy tales and rainbows.

I think the real takeaway from this is that while Google Glass may be an experimental product, glassholes have already revealed major flaws in the product.

This vulnerability IS small because of the small number of devices. However, that does not mean it won't be a problem should Glass become more widely available.

No, it's a HUGE problem. It just has a small attack surface right now - so the threat is large, but the contagiousness of it is small. Once Glass is available publicly, it's game over.

Couple this with Google's heavy investment in facial recognition and once Glass is publicly available, it's a pervasive surveillance device that completely identifies everyone everywhere.

All the Glass trials show is that it's the perfect device for creeps and stalkers, and for every useful purpose it has, there are 10 more nefarious ones. And all those doomsday scenarios envisioned with Glass are proving accurate just by watching glassholes.

In fact, the glassholes and the nefarious types have pretty much killed any form of social acceptance for the device - instead of using the devices for good and winning over the public, they're being misused and the public is even more wary than ever.

I genuinely don't understand the fear and loathing expressed towards Glass, and more generally, the idea of ubiquitous cameras.

The advent of smartphones (and iPods, and everything else with a tiny camera) has, on the whole, empowered ordinary citizens far more than the police state. While social change is unavoidable, it's only really a problem if authorities have it and ordinary folks don't. The conduct of police on the scene of an incident will never be as unwitnessed and undocumented as it used to be, and thank goodness. This technology enables a balance of power that ensures a naive "1984" will never come to pass.

Facial recognition will be everywhere in the near future. But it won't just be cops using it, it will be protestors identifying undercover cops causing trouble in protests. It will be you and me never forgetting another person's name. This trend cannot be stopped, because anyone with an Arduino, a camera, and an algorithm can create a facial recognition tool. And an Arduino is gigantic compared to the size of an equivalent platform ten years from now. Cameras will become effectively invisible and undetectable.

This change is coming. It's not a question of restaurants banning geeky headwear. When we have augmented reality contact lenses, or even implants where the programming platform is now in your head and using your meat eyeballs, banning will not be possible.

While there are creepy uses of any technology, as a technophile I genuinely believe that the benefits far, far outweigh the costs.

The key difference between Glass and a mobile phone, iPod touch or anything else with a camera is that you have to hold those up to take a picture. It's very clear to people around the person with the device that they're doing so. So they're "warned" that they're being recorded.

Glass on the other hand is something that's worn, and expected to be worn in a way that does not necessarily imply that it's being used to take a photo or video.

In essence, it's the difference between having a company inform you that they'll record your call and having the NSA (and everyone else) do it without your knowledge. One involves informed consent, the other doesn't.

I'm a technophile as well and also think the positive aspects will far outweigh the negative aspects. But I do want the negative aspects dealt with before it hits mainstream. If they're not, the public backlash against this sort of product will be significantly worse.

The key difference between Glass and a mobile phone, iPod touch or anything else with a camera is that you have to hold those up to take a picture. It's very clear to people around the person with the device that they're doing so. So they're "warned" that they're being recorded.

With respect, you are focusing on Glass, not the theme or where technology will be in the near future. The idea that the obtrusive nature of the technology itself will always 'warn' you that you are being recorded is already almost obsolete, and will no longer be the case. Like it or not, you are going to be recorded all the time in the future and not know it.

Quote:

I do want the negative aspects dealt with before it hits mainstream. If they're not, the public backlash against this sort of product will be significantly worse.

Again, I urge you to take a wider view than this one product. This change is due to technology itself, and not any single product being taken to market.

Hey guys, I'm one of the researchers mentioned in the article. What I found when I was playing around with Glass's camera API is that it does require the camera to have a SurfaceHolder object inflated on the display to show a preview of what the camera is looking at.

What the API does not enforce is the size of that preview SurfaceHolder object, or that the display needs to be on. A developer can currently make the preview SurfaceHolder 1x1 pixels, nullifying its intended purpose, and can also check whether the display is on or off.

What I imagine Google is doing to fix these two problems is to add extra conditions to their Camera.takePicture(...) method:

1. They can check the size of the SurfaceHolder that was passed to them (via Camera.setPreviewDisplay(...)) to make sure that it is greater than some minimum size or takes up the whole screen.

2. They can do the same check that I do to ensure that the display is on.
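A minimal sketch of those two guard conditions in plain Java (the `CaptureGuard` class, the `PreviewInfo` holder, and the minimum-size thresholds are all hypothetical stand-ins; the real fix would live inside the framework's own takePicture path, not in app code):

```java
// Sketch of the two checks proposed above, expressed as a standalone guard.
// PreviewInfo models the relevant state of the SurfaceHolder that an app
// passed via Camera.setPreviewDisplay(...), plus the display's power state.
public class CaptureGuard {
    // Hypothetical minimum size for a preview to count as "visible".
    static final int MIN_PREVIEW_WIDTH = 320;
    static final int MIN_PREVIEW_HEIGHT = 180;

    static class PreviewInfo {
        final int width;
        final int height;
        final boolean displayOn;

        PreviewInfo(int width, int height, boolean displayOn) {
            this.width = width;
            this.height = height;
            this.displayOn = displayOn;
        }
    }

    // Check 1: the preview surface must meet a minimum size, so a
    // 1x1-pixel preview no longer slips through.
    // Check 2: the display itself must be on.
    static boolean mayTakePicture(PreviewInfo preview) {
        boolean bigEnough = preview.width >= MIN_PREVIEW_WIDTH
                && preview.height >= MIN_PREVIEW_HEIGHT;
        return bigEnough && preview.displayOn;
    }

    public static void main(String[] args) {
        // The Malnotes-style setup: a 1x1 preview with the display off.
        System.out.println(mayTakePicture(new PreviewInfo(1, 1, false)));
        // A full-size preview on an active display.
        System.out.println(mayTakePicture(new PreviewInfo(640, 360, true)));
    }
}
```

Both conditions have to hold before a capture proceeds; either one alone still leaves an obvious bypass (a big preview on a dark display, or a tiny preview on a lit one).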

I don't think software is the way to do this, really; I think it's still going to be vulnerable to attack (you'd also need to check the Z-order of the surface, ensure that any overlays don't obscure it too much, ensure that it's not being composited off-screen, etc.).

I think that Google Glass should have two hardwired LEDs to indicate camera usage; one facing the wearer, and another for everyone else.

Apple has security flaw == End of World
Google product has security flaw == No big deal.

Great neutral reporting, guiz.

Glass is an experimental product that isn't available to the mass market, and it may never be anything other than an experimental product with limited availability. You can't just walk into a store, buy a Glass, and get pwned.

I totally agree that hardwired LEDs would be the best, most visible solution to this issue, much like how webcams have an "on" light. I'm just suggesting low-hanging-fruit fixes to the API that could force developers to be more consistent with its security policies.

You're totally right that there are going to be more sophisticated ways in software to get around camera requirements. I'm just surprised that this relatively simple method was still possible after the device has been in beta for a year, given that its most controversial feature is the camera.

Yes, please wait until this is everywhere, then, when it's too late to do anything about it, just accept it.

My point was that that time has passed. It is already everywhere. You have been captured on spy cameras and are just not aware of it.

We don't need to accept the situation, but we must accept that it is already here. Many posters appear to think that by doing something about Glass, we're all good. We're not. This is a larger issue than a single product, and the problem stems from the advance of technology, not a policy decision by a corporation.