The critics are missing just how deeply Google and Motorola are changing the game with sensors and new contextual features. The new voice feature isn’t just a small thing: it’s a whole new sensor package, and it represents a new approach to hardware and software.

This is a big competitive advantage for Google. Just last week, the new Nokia phone was looking pretty cool with its 41-megapixel sensor. Today? Not so much. This refocuses the mobile competition, once again, on Apple and Google.

Only Motorola’s devices let you talk to them at any time, without opening an app or touching anything, and they do it in a way that doesn’t use much battery life (even more remarkable because the screen on the Moto X is kept on full time, too). It’s contextual, too, because it remembers what you were just asking it. Ask “OK Google Now, what is Michael Jordan’s shooting percentage?” and then “OK Google Now, how tall is he?” and the Moto X will remember to answer with Michael Jordan’s height. Look for more contextual features to be built in over time.
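To make that follow-up behavior concrete, here is a toy sketch of the idea in Python. This is purely illustrative, not Google’s actual pipeline: the assistant remembers the last named subject and substitutes it for pronouns in the next question.

```python
# Illustrative only -- a toy stand-in for the contextual memory described
# above, not how Google Now actually works. The assistant remembers the
# last-mentioned entity so "how tall is he?" can be resolved against it.

PRONOUNS = {"he", "she", "it", "they"}

class ContextualAssistant:
    def __init__(self):
        self.last_entity = None  # most recent named subject

    def ask(self, question, entity=None):
        """Record the entity (if one is named) and rewrite pronoun follow-ups."""
        if entity:
            self.last_entity = entity
            return question
        # No entity named this time: swap pronouns for the remembered one,
        # preserving any trailing punctuation on the word.
        words = []
        for w in question.split():
            core = w.strip("?.,!")
            if core.lower() in PRONOUNS and self.last_entity:
                w = self.last_entity + w[len(core):]
            words.append(w)
        return " ".join(words)

assistant = ContextualAssistant()
assistant.ask("what is Michael Jordan's shooting percentage?",
              entity="Michael Jordan")
print(assistant.ask("how tall is he?"))
# -> how tall is Michael Jordan?
```

A real system resolves references statistically rather than with a pronoun table, but the shape is the same: carry state forward between utterances.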

http://www.youtube.com/watch?v=xXyCbrdQEyA

The joy of context

These new sensors bring a cost, though, that counteracts the promise of contextual features (and I’m hearing from inside Google that Google is working on a contextual OS that might see the light of day in 2015). That cost? I call it the “freaky line.” It means giving up a little more privacy. Whether or not we’re really giving up privacy doesn’t matter. Many people have the perception that their privacy is being given up by these “always on” and “always listening” devices and that is a cost that Google will need to deal with.

Moto X is just one in a string of products and services that will bring radical new functionality to users. Examples? Google Now, Google Glass, and the new Moto X phone that keeps the microphone open full-time. The Xbox One, coming this winter, will have a 3D sensor on it so sensitive it can see how fast your heart is beating just by watching your skin.

These new contextual, sensor-based features are game changers and I’m hearing Google has a raft of other product announcements lined up that will turn on even more freaky features. Why? Because the more Google can get you to communicate with your phone, the more context it can slurp up.

The more sensors it can turn on, or put on you, the more it can learn about your intent and your context. Today your phone doesn’t really know that you’re walking, running, skiing, shopping, driving, or biking, but in the future, Google will know that and will be able to build wild new kinds of systems that can serve you when doing each of those things.

Contextual thinkers (I was just at PARC, the famous research lab that brought us Ethernet, GUIs, and Object Oriented Programming and they are working on a ton of contextual stuff) are already building all sorts of systems that will assist you in your life. Let’s just take driving, for instance.

In California it is illegal to touch a smartphone while driving. So, most people keep their phones off. Not me. I have a dashboard-mounted holder where I keep mine and I keep Waze or Google Maps on it almost full-time. I’ve been playing a lot with ways to turn on the voice features without touching the phone. I have a little button for Siri, for instance, for when I use an iPhone (which I haven’t for months because Google Glass got me to switch to Android).

The more I can talk to my device in the car, the more data I’ll give it. If Google figures out where my favorite gas stations are, and where I like to stop on the way home for flowers, food, groceries, etc., the more monetizable I become, and not just through ads, either. In the future everything will work like Uber: you’ll ask your phone “OK Google Now, bring me a chicken sandwich” and someone will deliver one to where you are sitting, and the system will charge you for it.

Today my phone could know I’m driving (my Samsung S4 has a setting that lets it sense my car’s Bluetooth radio) but, in practice, that doesn’t do all that much and, thanks to Samsung’s poor UI, I ended up turning it off. In the future, though, Google’s system could hold my non-important messages, tweets, etc., until I’m not driving, and could tell me the next place on my route where I could get gas or a quick meal.
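The hold-until-I’m-not-driving idea is simple enough to sketch. Here’s an illustrative Python version, where the `driving` flag stands in for whatever real signal (like the car’s Bluetooth radio) the phone would use:

```python
# Illustrative sketch: queue non-urgent notifications while driving,
# then flush them once the "driving" signal drops. The driving flag is a
# stand-in for a real detector such as the car's Bluetooth connection.

class NotificationHolder:
    def __init__(self):
        self.driving = False
        self.held = []

    def notify(self, message, urgent=False):
        """Deliver immediately if urgent or parked; otherwise hold quietly."""
        if urgent or not self.driving:
            return [message]          # delivered right now
        self.held.append(message)     # queued until the drive ends
        return []

    def set_driving(self, driving):
        """Update driving state; flush the held queue when the drive ends."""
        self.driving = driving
        if not driving:
            delivered, self.held = self.held, []
            return delivered
        return []

holder = NotificationHolder()
holder.set_driving(True)
holder.notify("New tweet from @Scobleizer")   # held, returns []
print(holder.set_driving(False))              # drive over: tweet delivered
```

The interesting product question isn’t the queue, it’s deciding what counts as urgent; that’s exactly the kind of call a contextual OS would have to make.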

For instance, I want to tell my phone something like:

“OK Google Now, note to self, buy more milk before I get home tonight.” (It could remind me when I drive through a geofence on the way home, in time for it to navigate me to the local grocery store).

“OK Google Now, find me the cheapest gas on the way home.” (Gas Buddy already does something like that but it requires poking around on your phone, which is illegal).

“OK Google Now, make a restaurant reservation at a nice steak restaurant in Half Moon Bay for my wife and me tonight at 8 p.m.” (It should already know, contextually, what kinds of restaurants we both like, based on our reviews of past restaurants).

“OK Google Now, find a place to buy flowers on the way home.”

“OK Google Now, find a babysitter for tonight.”

“OK Google Now, send a message to Maryam Scoble and say ‘I love you, I’m taking you out to dinner tonight at 8.’”

Why list all of these? Because the Moto X’s system can only handle singular commands, so you gotta repeat “OK Google Now” for each one (and many of these don’t work yet, because the infrastructure to, well, find a babysitter tonight just isn’t built out and isn’t connected to Google’s systems).
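The “remind me when I drive through a geofence” piece of the milk example is the most buildable part today. A minimal sketch in Python (illustrative only; the coordinates and radius are made up, and a real phone would get fixes from its GPS stack):

```python
# Illustrative geofence reminder: fire a message once when a GPS fix
# enters a circle around a stored point. Coordinates below are hypothetical.

import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in meters."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

class GeofenceReminder:
    """Fires its message once, the first time a GPS fix enters the fence."""
    def __init__(self, lat, lon, radius_m, message):
        self.lat, self.lon = lat, lon
        self.radius_m = radius_m
        self.message = message
        self.fired = False

    def on_gps_fix(self, lat, lon):
        if self.fired:
            return None
        if haversine_m(lat, lon, self.lat, self.lon) <= self.radius_m:
            self.fired = True
            return self.message
        return None

# Hypothetical fence near a grocery store on the way home.
fence = GeofenceReminder(37.4636, -122.4286, 500, "Buy more milk")
fence.on_gps_fix(37.0, -122.0)          # far away: nothing happens
print(fence.on_gps_fix(37.4640, -122.4280))  # inside the fence: reminder fires
```

Distance math is the easy part; the hard part, and the part Google is positioned to own, is knowing which store is on your route in the first place.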

The voice commands, though, are only the beginning of Google’s contextual OS. Google Glass is unique because it is a sensor platform. It’s the first device I’ve owned that knows both where I’m aiming and where I’m looking (there is a 16×16 infrared eye sensor that watches my eye full-time). Your smartphone doesn’t know where you are aimed or looking. So this world will get even freakier next year.

Add to that the Xbox One, plus wearable computers like the Basis device I often wear on my wrist. It monitors my heart rate and activity levels. If I don’t get enough exercise it reminds me and gently prods me to go for a walk.

Google hasn’t yet explained how all the pieces of this contextual ecosystem will work together, nor has it explained how it will convince people to get over the freaky line. For instance, at its Google I/O conference earlier this year I asked Larry Page, Google’s CEO and co-founder, what the eye sensor is and what it will be used for. He instead made a joke that he didn’t like me wearing Google Glass into the shower.

Maybe that joke was right on point: did he have a live feed of what I was looking at and seeing while wearing Glass? Of course not, right? But in this post-Snowden world, where we all know the government is spying on us nearly all the time, that sort of thing makes all of us uneasy.

Which gets to the freaky line.

Freaky

Yes, Google is more transparent than most companies about what it is collecting on us, but it is still asking an awful lot of us. Some won’t go into the contextual age easily. Lots of my friends are getting freaked out by just how much Google is collecting. I think that’s a marketing opportunity, even while those of us who like this kind of stuff adopt it.

We’ll see a new kind of digital divide open over the next couple of years. Not a divide between people who can and can’t afford technology but a divide between people who are willing to go over the freaky line and those who aren’t.

It’ll be interesting to see who goes over the freaky line, who doesn’t, and what that will mean for each group. It is the biggest social shift since the Web came on the scene in 1994.

How about you? Are you ready to give companies like Google, Facebook, Amazon, and Microsoft even more data? Are you willing to buy a phone that listens to you? A video game console that listens AND watches you?

Me? I’m in 100%. I’ve already seen how this stuff will serve me and will help me have a better experience in life. I’m living the “over the freaky line” experiment full tilt and I like it. Will you?