Google’s “Project Glass” is an Augmented Reality (AR) heads-up display (HUD) glasses offering that Google is designing for a near-future interactive Internet experience.

(Video credit: Google)

From watching their demonstration video, I certainly have some questions and observations. Google’s vision (no pun intended) of the future is a place where people ignore women except as witnesses to their achievements, talk with their mouth full, and put their live friends on hold to interact with a machine (oh wait, that’s what people do now); and is one without ads (wait…what?). Thankfully, rebelliouspixels mixed them in:

(Video Credit: Google + rebelliouspixels)

As I wrote earlier in Connected cAR: Becoming the Cyborg Chauffeur, if Google has their way, we are about to be overwhelmed with synchronous (connecting in real time) and asynchronous (connecting in shifted time) messaging and communication while we walk.

If you’d like to read that piece, I’ll wait.

Not much difference. This time, the idea is that Augmented Reality (AR)-driven, multiple-input information will apply to everyone, not just those behind the wheel of cars. In the Connected cAR, the conclusion was that we were the intermediaries training the robots to drive. This time, we won’t be training the robots to walk for us. It’s more that our usage of Project Glass glasses will be training Google’s AI to learn about the world. This AI is still for the robots, but most likely for Google’s robots, which aren’t here…yet.

Applin and Fischer’s (2012) recent paper, PolySocial Reality: Prospects for Extending User Capabilities Beyond Mixed, Dual and Blended Reality, discusses the evolution of software development and how the mobile-local-social-geo-web has changed the game with regard to human interaction and participation. PolySocial Reality (PoSR) is described as a conceptual model of the space that contains individuals’ multiplexed, synchronous and asynchronous individuated data creations. An instance of PoSR might contain behaviors such as walking while talking on a phone, while texting, while the phone checks into foursquare and sends an update via Twitter and/or Facebook, while replying to incoming friends’ status updates and/or other messages at the same time. Each instance of PoSR can contain a lot of action, which can cause distraction, and people may or may not be walking or (we hope not) driving while all this is happening. Because the actions of individuals may overlap, the potential for distraction in each case compounds.

Applin and Fischer identify stages in the history of software development, suggesting that we are moving from a homogeneous model of “one user, one machine,” or “one type of user, many machines,” towards a heterogeneous model of “many users (all different) and many machines (also all different).” Google’s “Project Glass” video shows a case of “one user, one machine,” but the actual reality of using the glasses will likely hover around “many homogeneous users, many machines” and, when fully deployed, most likely “many heterogeneous users, many heterogeneous machines,” where fully functioning PoSR becomes disruptive. In this last instance, designers and developers of programs must take into account a number of factors, including the fact that:

“Details about the context of others are missing and may be difficult for individual users to infer or [contain] details that cannot be inferred; Highly complex elements of differentiated environments are combined into structures that appear different from each user’s point-of-view; and Users [act] as distributed dynamic unique agents.”

This means that people inside the Google Project Glass bubble are going to have some serious navigation problems.

When someone is walking down the street using a cell phone or, as the Google video illustrates, a cell-phone-like device, the negative consequences from instances of PoSR can become even more problematic. The 2009 study, “Did You See the Unicycling Clown? Inattentional Blindness while Walking and Talking on a Cell Phone” (Hyman, Boss, Wise et al. 2009), examined the effects of walking while engaged with music players or cell phones, or while walking with others in a pair. The results showed that cell phone usage can cause “inattentional blindness” even during an activity as simple as walking. Cell phone users were found to be less likely to notice something different on a normally travelled path when engaged with their phones. The study found that individuals talking on a cell phone “experienced more difficulty navigating through a complex environment….walked slower, weaved more often, and made more direction changes.” The observed individuals who were engaged in cell phone conversations for the most part missed seeing a clown riding a unicycle in their immediate vicinity.

I’m going to write that again:

“The observed individuals who were engaged in cell phone conversations for the most part missed seeing a clown riding a unicycle in their immediate vicinity.”

This example illustrates problems for individuals engaged in ordinary cell phone conversations. The Google Project Glass demo showed simple cases of interaction in a nearly synchronous environment. The reality of today’s messaging is much more along the lines of multiple instances of PoSR, tilting towards asynchronous rather than synchronous messaging. This means more messages, more responses, and less attention to the “unicycling clowns” on our paths and in our lives.

In short, while the idea of shifting our attention from looking down at a device to looking up at the world is appealing, the research still suggests that we have cognitive problems that keep us from acknowledging, understanding, or even seeing events in our immediate proximity. If we multiply that by the aggregate of people wearing Google Project Glass glasses, I fear we are in for a bumpy ride.

This response to the Google demo pretty much sums it up:

(Video Credit: TomScott.com)

Sally Applin is a Ph.D. Candidate at the University of Kent at Canterbury, UK, in the Centre for Social Anthropology and Computing (CSAC). Sally researches the impact of technology on culture, and vice versa.

Comments 8

PJ Rey — April 10, 2012

Haha, hadn't seen the remix of the Google Glasses vid with the ads. That is spot on.

Trak — April 11, 2012

Hi Sally,

Great article. Nice to see the other side of the coin presented when everyone is blindly (npi) raving about the future and the wonderful possibilities of these not-yet-released or even close to finalized glasses.

With regard to: "The observed individuals who were engaged in cell phone conversations for the most part missed seeing a clown riding a unicycle in their immediate vicinity."

I've been thinking about this ever since I read your post yesterday trying to determine if this was a decidedly negative result. A clown riding a unicycle seems like a combination of elements that cannot or would not be ignored, normally. But think of all the spectacles we face on a day-to-day basis, the things that advertisers and ARGs and businesses are willing to do just to get our attention for a millisecond. I think my cynical mind would immediately believe something like that to be a reality tv show, publicity stunt, or promotion. I think it's even possible that as a culture, some of us have become so inured to these spectacles that our minds may just shut them out because they're unimportant. But a phone conversation or checking emails or even social networks allows us to interact on both a professional and personal level.

So the question is- what does the "unicycling clown" even represent? And why should we give it our attention?

Curious to hear your thoughts. Once again, loved the article.

Best,

-Trak

sally — April 11, 2012

Hi Trak,

Thank you for your kind remarks! Were you able to read the study? The unicycling clown represents an object out of the ordinary, not usually on the quad that was studied. It isn't a question of whether or not people chose to give it their attention, it was a question of whether or not they saw it in their vicinity and remembered seeing it. The authors did a control study with music only, and people walking in pairs as well as a person using a cell phone.

If you weren't able to read it, these were their results:

"We found evidence of inattentional blindness among cell phone users. Only 25% of the cell phone users had noticed the clown and many turned around at that point to see what they had missed. In essence, 75% of the cell phone users experienced inattentional blindness to the unicycling clown. In contrast, over half of the people in the other conditions reported seeing the clown (51% of single individuals, 61% of music player users, and 71% of people in pairs). Table 2 presents the percentage of each group that stated they had seen the clown in response to the general and direct questions.

Individuals in pairs were the most likely to have seen the unicycling clown. Their rate of seeing the clown is essentially equal to what one would expect by combining the performance of two individuals. This indicates that pairs may improve performance by having more observers engaged in monitoring the environment (see Crudell et al., 2005; Strayer & Drews, 2007). Unfortunately, we cannot be sure that all individuals looked in the direction of the clown. Thus the differences could be caused by cell phone users being less likely to look around the environment. However, the clown was not far off the basic path and Strayer et al. (2003) showed that although cell phone users were as likely to look at objects in a driving simulator, they were less likely to remember the objects..."

My point is that combining a good chance of "inattentional blindness" with more objects within the visual field will create highly complex instances of PolySocial Reality, and all that that will entail for Google's Project Glass glasses -- or any other graphics in front of the eye with simultaneous sound, haptic vibration, whatever. The combination of all of these adds to the load.

What someone is doing on the phone in this regard, as long as it is interactive or cognitive in some way, has no bearing on whether it's personal or professional--the results of "inattentional blindness" are still a high probability.

Hope this helps!

Trak — April 11, 2012

Thank you for your response Sally. I had taken a cursory glance at the study, and I appreciate the clarification on the results.

I was really looking for a value judgment. The study is great, and the results are fascinating, but the study alone seems to ignore what is the most imperative discourse- is the deliberate / forced / chosen ignorance of the clown on the unicycle in fact a "good" or "bad" thing for humans? I mean it as a question of utility rather than morality, but does this inattentional blindness improve or impede our progress or efficiency as a species? If we were to, perhaps, move beyond the "weaving" and difficulty with navigational changes within a few years (and I believe that as younger generations are exposed to this type of technology, they will indelibly learn to adapt to it better than any of us "retrofitters"), would that be an overall improvement?

Increased distraction is dangerous while driving (as you mentioned in your Chauffeur post), but I would be curious to see the same study conducted on people walking and interacting with a phone (speaking, texting etc.), but instead of a clown on a unicycle, potentially traumatic or harmful events. Cars squealing their brakes to a stop just behind the user; bicyclists jingling or shouting "On your left!" as they race by; an attempted mugging; sirens; a child crying out in pain after falling, etc.

I wonder how similar/different the response would be, and how quickly someone would react to these events as opposed to the unicycling clown.

I'm fascinated by this entire discourse, but I find there are few people that wish to talk about it :) Thanks for entertaining my ideas in these comments, looking forward to your next post.

-Trak

sally — April 11, 2012

Ah, ok.

Value judgement: at the moment we are not well adapted to do this.

Forecast: It is unclear whether or not we will be able to adjust to this. That is what we, and other researchers in this space, are examining.

As Fischer says, "The present state of affairs is dangerous, and population + danger = injury and death. There would have to be a significant payoff for this to be an adaptation on the phone."

As an aside, every single example you suggest is audible. There isn't one in your list that isn't fundamentally a sound-based distraction, which would act as an "alarm." Though interesting, that wasn't the target of the study. The study wasn't asking "how will people respond if there is an emergency?" or "if we make a loud noise, will they hear it and be shocked out of their conversations?" The point of the study was to note that if you take the world the way it is and add something different, under what conditions will people notice it or not if they are on a cell phone, walking in a pair, or listening to music?

The results showed that people on a cell phone would notice it less, or not at all, even if they ostensibly appeared to be looking where they were going.

These are two entirely different arguments.

Then again, people seem to get hit a lot by traffic when they are on their phones and the cries of people on a burning boat didn't seem to get the attention of that barge operator, so perhaps it doesn't register either way.

I'll be at ARE2012, likely we can get a discussion group together. In the short term, why not come to TtW2012? We'll be discussing this there and more!

