As the name suggests, it works with a tap-and-hold on the Home button. The clever part is that Google Now can scan whatever's on screen—whether a chat conversation or a Web page—and bring up relevant information.

One example shown on stage was a movie mentioned in an email: Now on Tap brought up reviews, information and a trailer with one push. In another example, tapping on a photo of Hugh Laurie on a website brought up background information about the actor.

Now on Tap was also able to scan a thread inside a third-party chat app and create a reminder to pick up dry cleaning, because that's what the conversation was about. Another demo showed a Skrillex song in Spotify (not a Google app of course); asking "what's his real name?" brought up the correct answer.

In short, it makes Google Now better able to identify key items of interest within apps, parse natural language and prompts, and then take action on them. This is on top of other recent improvements to Google's digital assistant: it now works with third-party apps and is expanding its reach into more areas.

"Too often, you have to leave what you’re doing just to look for what you need somewhere else on your phone," explains the official blog post. "With Now on Tap, you can simply tap and hold the home button for assistance without having to leave what you're doing — whether you’re in an app or on a website." No effort is needed on the part of developers provided their apps are indexed by Google.

Aparna Chennapragada, Google Now's product director, emphasized the three pillars of Google Now: Context, Answers and Actions. It's clear that in the digital assistant race, Google is keen to stay ahead of the competition. Now on Tap is going to arrive with Android M in the third quarter of 2015.

iOS already has its own personal assistant app in the form of Siri, but it seems Apple wants a more direct competitor to Google Now: 9to5Mac reports that a new service called Proactive is on the way.

With deep ties to Siri, Contacts, Calendar, Passbook and third-party apps, Proactive would reportedly surface timely and relevant information in the same way that Google Now does. This could be hugely useful on the Apple Watch as well as the iPhone and iPad.

While Google Now and Siri have several features in common, Apple's app concentrates on controlling devices and running searches using voice input. Google Now focuses more on being an intelligent assistant, mining collected location, search and email data to automatically show alerts (like flight delays) when they're needed.

And that appears to be what Proactive is targeting.

"Proactive will automatically provide timely information based on the user's data and device usage patterns, but will respect the user's privacy preferences, according to sources familiar with Apple's plans," says 9to5Mac.

The roots of Proactive can be traced back to Apple's 2013 acquisition of personal assistant app Cue, which enabled users to "know what's next" based on calendar and email information. With the notification and search capabilities inside iOS growing, Proactive is a logical next step.

Battle Of The Digital Assistants

Cortana on Android.

Google Now has become the major component of stock Android—the unmodified version of Android that Google is increasingly pushing on phone makers—and is available on iOS devices and inside the Chrome browser too. With Microsoft's Cortana assistant spreading out to Windows 10, iOS and Android, it's time for Apple to make a move.

Siri has always been seen as Apple's Cortana or Apple's Google Now, but it lacks the smart, pre-emptive elements found in Microsoft and Google's products. Proactive would plug that gap—9to5Mac says it will show estimated travel times to scheduled events in exactly the same way that Google Now does.

It's another sign of the growing importance of these digital assistants and the ecosystems they tap into: Will we be choosing our next phones based on the digital assistant we get on best with? Or the one that knows most about us from our emails and searches?

9to5Mac says Proactive could even rearrange apps based on the time of day and usage patterns, and that third-party app integration will be an important element of the new service. If Proactive arrives with iOS 9, as is expected, we'll be hearing about it at WWDC.

We'll have to wait and see just how comprehensive the new app ends up being, and how well it competes with Google Now and Cortana. One thing we can predict with a good degree of certainty: It will only be available on Apple's platforms.

Google's whimsically named Cardboard virtual-reality effort has a new chief, and possibly a new vision. Jon Wiley, who formerly headed up design for the company’s search division and—among other things—came up with the Cards user interface on Google’s mobile platforms, will be taking over a division that started off as little more than a joke a year ago.

No one's laughing now. Wiley’s new position, first reported by Fast Company on Monday, could signify Google’s growing commitment to virtual reality as way more than a cardboard curiosity.

Google’s Growing Cardboard Commitment

Google’s decision to shuffle Wiley out of search and into Cardboard occurred sometime in mid-May, although neither the company nor Wiley himself has said much about it. Fast Company says Google confirmed Wiley’s new position, and I’ve asked the company to elaborate on the move. Even without hard details, it’s not difficult to imagine that Google has big plans for its DIY virtual reality headset.

Jon Wiley, formerly the lead designer of Google Search, now principal designer of Google Cardboard and VR (image via Twitter)

Back in April, Google launched “Works with Google Cardboard,” a new certification system for hardware and software makers to help ensure their own cardboard VR designs are consistent with Google's ideas. Even more telling, the Wall Street Journal reported in March that Google had plans to build a new, virtual reality-focused operating system based on Android, presumably in concert with its Cardboard initiative. Those two details show Google’s interest in pushing Cardboard into new, more ambitious territory; Wiley’s addition all but confirms it.

As Fast Company points out, Wiley’s big claim to fame is the creation of Google Now Cards, which anticipate mobile users’ needs based on their search history and Google services. If you have flight details sent to your Gmail inbox, a Google Now card will appear on your Android phone or Android Wear device to tell you whether or not it’s on time. Likewise, if you search for a particular movie you’re curious about, Google Now will often hook you up with nearby showtimes.

Google Now's Card UI was Wiley's brainchild

It’s not clear how Wiley’s experience with Cards can translate to Google’s virtual reality plans, but there’s no question that Cardboard (and most other virtual reality platforms) could use a more user-friendly interface. Cards reduce the distance between a user and relevant information, so it stands to reason that Wiley’s task for VR might be similar.

Meanwhile, the VR field is growing more crowded every day. Samsung recently released its second iteration of the Gear VR, this one made for the hugely popular Galaxy S6 handset. HTC and Valve’s headset, the Vive, is set to launch later this year, while Sony and Facebook-owned Oculus have plans to launch their own headsets in 2016. Whatever Wiley’s going to do with Cardboard, he’ll have his work cut out for him.

Guest author Peter Yared is co-founder and CTO at the push-notification startup Sapho.

Since its inception in the 1960s, the modern computer has offered humans the same “pull computing” paradigm: make a query, get a response. Or, as we often experience it: Go to the haystack, try to find the needle.

But that’s quickly changing. As software grows more intelligent and learns more about our preferences and behavior, it seemingly gets to know us. That knowledge makes software more valuable because it can deliver things to us, perhaps even before we know we want them. We are at the start of the era of push computing.

Pushmi-Pullyu

With push computing, a computer is no longer just a question-and-answer service; it’s expected to proactively figure out what’s interesting to you and deliver that data. On mobile, that’s often an actionable stream of cards and timely notifications of important items.

Push computing represents a major shift in architecture from the pull relationship computers have long maintained with users. Computing interfaces have evolved from green screens to GUIs to HTML5 to apps, but most applications have the same workflows and address the same needs in a pull-based fashion.

Outside the view of users, however, software delivery has steadily evolved toward a push-type model. Just consider how far we've come, from the hosted timesharing of mainframes and minicomputers to dedicated Unix servers to the PC floppy disk and CD and finally to the increasingly prevalent “software-as-a-service” we see today.

Over the past few years, push computing has also begun to infiltrate the interfaces of key consumer apps. Of course, as Chris Dixon recently pointed out, some Internet services are further along than others. Facebook, for instance, has mastered intelligent news feeds of cards and relevant notifications while Twitter delivers a straight temporal stream that grows more overwhelming the more accounts you follow.

Don’t Push Me

Not all pushes are the same, after all, and companies have to think carefully about the information that is important to push, when and why it's pushed, and how they expect users to react.

Major players are also trying to figure out how to make push a central part of the mobile OS. As I wrote a few months ago, Google is aggressively recasting itself as a push player with Google Now and answer cards in search. Apple is decidedly in the pull camp, as Siri is rarely proactive, although the iOS notification manager is well ahead of Android’s. Push has also become the backbone of successful mobile apps powered by real time infrastructure such as PubNub and Amazon’s Simple Notification Service.

Machine learning is key to the success of contemporary push-based services. Notifications and cards should only be presented to users if they deliver relevant information users can act on easily.

Previous attempts to provide user notifications via email failed because email notifications are typically irrelevant and spammy. We’re all well trained to avoid spam like the plague, so users typically dumped all notifications into an email folder and never looked at them at all. Email is also inherently less actionable because a user has to click on a link, log into an application, and then perform an action.

For push to work, it's crucial for applications to make their notifications actionable, friction-free, and rooted in sophisticated machine learning. Early push efforts like PointCast were too static and overloaded networks with continual updates.
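The filtering idea can be sketched concretely. The scoring signals and weights below are invented for illustration; a real service would learn them from user feedback (opens, dismissals) rather than hard-code them:

```python
# Hypothetical sketch: deliver only notifications whose predicted
# relevance clears a threshold. Signals and weights are invented.

def relevance_score(event, user):
    """Combine a few invented signals into a 0..1 score."""
    score = 0.0
    if event["topic"] in user["interests"]:
        score += 0.5
    if event["actionable"]:          # user can respond with one tap
        score += 0.3
    if event["time_sensitive"]:      # stale info is worthless
        score += 0.2
    return score

def filter_notifications(events, user, threshold=0.6):
    """Push only what clears the bar; drop the rest silently."""
    return [e for e in events if relevance_score(e, user) >= threshold]

user = {"interests": {"flights", "sports"}}
events = [
    {"topic": "flights", "actionable": True, "time_sensitive": True},   # scores 1.0
    {"topic": "promo", "actionable": False, "time_sensitive": False},   # scores 0.0
]
pushed = filter_notifications(events, user)
print(pushed)  # only the flight alert survives
```

A learned model would replace `relevance_score`, but the shape stays the same: score, threshold, deliver.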

Getting Pushy At Work

While push got its start in the consumer realm, the case for business-based push is in many ways much stronger. Enterprise systems manage discrete events that often require urgent action. For example, a sales opportunity might be closing in a CRM system, a complaint from a customer you cover could pop up in the service system, or the HR database could flag you about a new hire you need to onboard.

Conversely, the relative importance of events in consumer apps is much more nebulous. To deliver a superior experience to users, Google Now must continually learn, confirm and re-confirm details about where you live, where you work, your calendar, your travel arrangements, your preferences. People's lives and environs are constantly shifting, making it hard for the new generation of consumer apps to keep up.

What is more difficult about enterprise events is that they must be handled securely, and the data is often locked away in a variety of data silos.

As users increasingly expect their services to be intelligent and proactive, push computing is making its way not just to mobile, but also to desktops and laptops by means of browser notifications. The new generation of push software is ushering in a new way for humans to interact with technology, and in the case of the Internet of Things, for technology to interact with itself in the form of networks of “smart” devices.

But as digital data becomes more voluminous, our systems have to get more intelligent. They have to filter, analyze, and deliver information to users only when they need to know it or act on it. The goal should always be simple: for the haystack to bring you the needle—whatever it is—before you even start to look for it.

Google's personal digital assistant is about to get a whole lot more powerful. Google Now—which recommends websites, keeps track of reminders and appointments and acts as every Android user’s personal digital butler—will soon provide developers with an open API, product director Aparna Chennapragada told an audience at SXSW on Saturday.

Just about every kind of app would be able to communicate with Google Now to provide even more tailored notifications and recommendations to users. As a result, Google might leave competitors from Apple and Microsoft choking on its digital dust.

User Activity To Guide Recommendations

Chennapragada explained that Google Now’s predictive abilities have gotten much smarter since the service first launched in 2012. At first, the Google Now team simply guessed what notifications and apps would be most useful. But after extensively polling users and their activity, the team has refined Now’s recommendations.

When Google offers the open API, however, there will be even more apps and notifications vying for Google Now’s affections. It’ll determine each user’s most pertinent notifications based on their app usage patterns.

Examples of how Google Now already works with over 30 third-party apps.

That sounds smart enough, though how it’ll work in practice remains to be seen. I still swipe away notifications on my Android Wear watch that I’ve told Google Now I don’t need. I’d put my Google Now success rate somewhere between 65 and 75 percent on a daily basis. Bringing even more competing apps to the party may make it even more difficult for users to cut through the noise to find their most important signals.

If there’s one area in which Google excels, it’s handling lots of data and giving users the best results. As Microsoft pursues plans to launch Cortana on other platforms, one of its biggest hurdles will be playing catch-up to Google Now’s huge head start. Forget Microsoft providing developers with an open API—Cortana actually needs to end up on people’s devices before the company can start providing the same depth of functionality.

Siri, meanwhile, isn’t leaving Apple’s iOS devices at all. Apple hasn’t done much to open Siri up to other apps, either. With Google Now also available on iOS, the forthcoming open API could give the service a huge edge on Apple’s platform, to say nothing of the millions of happy Android users getting more out of Google Now than ever before.

Microsoft's Cortana app will make its way to iOS and Android devices in the near future, a new report from Reuters says, quoting "people familiar with the matter." The rumor has in fact been doing the rounds for some time: Last November, Microsoft executive Julie Larson-Green hinted that such a move was on the cards in a briefing with reporters.

Getting the app on other devices is one thing; getting anyone to use it is quite another. Assuming Cortana jumps out of Windows, can it thrive elsewhere?

Digital Assistants Go To War

It's the latest move in a fascinating battle between the digital assistant apps—Microsoft's Cortana, Apple's Siri and Google Now—that are becoming more and more integral to the mobile platforms they represent. (Indeed, stock Android 5.0 is little more than Google Now plus some extra wrapping.)

All these apps offer voice control, intelligent searches, and varying levels of personalization. Siri, which was wrapped into iOS in 2011, puts the emphasis on voice commands: Users can do anything from playing all the rock songs on their iPhone to having their most recent email read aloud. It can call up any information from a device or the Web quickly and easily.

Google Now isn't so concerned with voice input (though it is available). Here the focus is on personalized cards of information that pop up at the right time and the right place, with no user interaction required. Sports scores, travel times, movie recommendations, and so on, all prompted by data mined from your history on Google's various services.

Cortana, the most recent of these apps to launch, tries to combine the best of Siri and Google Now: advanced voice control and smart predictive responses all rolled up into one. Considering the low market share enjoyed by Windows Phone across the world, it may have little choice but to spread its wings.

Cortana's Breakout

Google's own digital assistant is Google Now.

Siri, Google Now and Cortana are busy vying for position. They're all based on knowing as much about us as possible, and that means extending to as many devices as possible: Phones, tablets, laptops, browsers, consoles, smartwatches and the rest.

Siri, of course, is never going to make it to Android or Windows Phone; Apple apps run on Apple hardware and that's that. But Microsoft may want to take some pointers from Google Now, which has made the jump over to iOS—it's embedded in the Google app that offers Web search and various other features on Apple's iDevices.

The iOS version of Google Now is fine for displaying cards, but it misses the deep hooks into the operating system that it enjoys on its home turf. It's not one swipe away from the home screen, for instance, as it is on Android 5.0 Lollipop, and it can't be called up with an "OK Google" unless the app is already running. It feels slightly watered-down, walled-in and read-only, and it's likely that an iOS Cortana would experience the same fate.

Google Now can perform some tricks on iOS, though. It can monitor your location, display updates in the Notification Center, and tap into other Google services. Because so much of the data it mines lives on the Web—from Gmail to Google Calendar—it doesn't necessarily need access to iOS or its native apps to function well.

That brings us back to Cortana. Over time, iOS has opened up a little, allowing third-party apps to run and refresh in the background, and have deeper access to the operating system (note the introduction of third-party keyboard support in iOS 8). Yet for Cortana to make headway, it's going to need some top-notch Microsoft apps on iOS, and a strong cloud system behind that.

Android is an easier proposition, as it gives more control to any third-party apps that want to take it. You can completely replace the Google Now launcher with another skin, if you want to—Facebook Home is one high-profile app that does this, and that's a path Cortana could theoretically go down. (Just look at the lock screen replacement Microsoft has already built for Android.)

The Next Phase

Siri's voice control is an important facet of Apple CarPlay.

Right now, Microsoft has "nothing to share" about Cortana coming to iOS and Android, according to a spokesperson I contacted. But if Office for iOS and Android are anything to go by, it seems that Microsoft is going to follow Google's lead (get your apps and services to as many people as possible) rather than Apple's (let the people come to the apps instead).

Bear in mind that from this fall, millions of new Windows 10 PCs are going to come with Cortana installed, giving Microsoft millions of new opportunities to collect and display data. Google's Chrome browser and Chrome OS offer rudimentary support for Google Now, which you can expect to see improve over time. Again, Google Now's aim is to get everywhere, and Chrome is a vital cornerstone of that.

Siri feels like the odd one out here. Its focus has always been on easy, hands-free voice access to your data on mobile devices, rather than watching and predicting your every move, and it's not yet on Mac OS X. If the patent applications are to be believed, it might not be long until that changes, but right now it seems Apple is happy to ease off on the spooky pre-emptive notifications and the privacy implications that go along with them.

When you weigh all of these factors up, it's about services as well as platforms. Siri only knows who your brother is if there's a matching entry in your Apple contacts; Google Now only knows about your next flight if there's a confirmation in your Gmail; and Cortana only knows you need to be across town in an hour if you've marked it on your Outlook calendar.

If these digital assistants are going to be truly smart, then they need to know as much about their users as possible, and that goes beyond iOS and Android to the cross-platform services underpinning them—it's an issue that extends across mobile, desktop, wearables, the smart home, the car dashboard and the Web.

In that light Cortana has no choice but to jump to as many devices as it can—ubiquity is key for an ambitious all-encompassing digital assistant. At that point, the only question is: Who do you want running your life for you?

Google has Google Now, Apple has Siri and Microsoft has Cortana—these personal digital assistants are playing an increasingly dominant role on mobile and desktop. Now researchers from the University of Michigan have unveiled an open-source alternative called Sirius, the latest of several open-source efforts along the same lines.

Mobile phone makers, wearable startups and app developers could all potentially use Sirius to bring some instant smarts to their projects in the not-too-distant future, said Jason Mars, one of the co-directors at U-M's Clarity Lab, where the system was developed.

Unlike Google Now, Siri and Cortana, Sirius is free to use and can be customized as required by anyone interested in the technology. "Now the core technology is out of the bag, and we all have access to it," says Mars in a press statement. "Instead of making an app to run on the Apple Watch, for example, maybe I could make my own watch. We're very excited to see what the world comes together to build and learn with Sirius as a starting point."

The Sirius system comprises speech recognition, image matching, natural language processing and a question-and-answer mechanism powered by the cloud. It could, for example, answer the question "when does this place close?" when shown an image of a restaurant.
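The pipeline shape can be sketched with stand-in stages. Every function, place name and knowledge entry below is a stub invented for illustration; the real system plugs full speech-recognition, image-matching and NLP engines into these slots:

```python
# A minimal sketch of the Sirius pipeline shape: speech and image
# inputs are reduced to text, then handed to a question-answering
# stage. Every stage here is an invented stub.

def recognize_speech(audio):
    # stub: a real ASR engine would decode the waveform
    return audio["transcript"]

def match_image(image, landmark_db):
    # stub: a real matcher would compare visual features
    return landmark_db.get(image["id"], "unknown place")

def answer(question, entity, knowledge):
    # stub QA: look the entity up in a static knowledge base,
    # much as the demo looks facts up in a static Wikipedia copy
    facts = knowledge.get(entity, {})
    if "close" in question:
        return facts.get("closes", "unknown")
    return "unknown"

knowledge = {"Luigi's Pizzeria": {"closes": "11 pm"}}
landmarks = {"img42": "Luigi's Pizzeria"}

place = match_image({"id": "img42"}, landmarks)
question = recognize_speech({"transcript": "when does this place close?"})
print(answer(question, place, knowledge))  # -> 11 pm
```

Swapping the `knowledge` dictionary is the analogue of swapping the Wikipedia database for repair manuals or medical data, as described below.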

Being able to ask questions about what you're seeing is one of the unique features of Sirius, according to Clarity Lab doctoral student Johann Hauswald, and helps to differentiate it from similar open-source projects in the same area.

Those projects include Jasper, which can bring voice control to almost any device (such as the Raspberry Pi), and JuliusJS, which focuses on adding Siri-style commands to Web apps. Sirius goes further than both, though, building on the basics of voice control to add natural language interpretation, intelligent responses and image recognition.

Data Center Architecture

Unlike Jasper or JuliusJS, Sirius does more than identify a voice command and act accordingly. It's also designed to push the boundaries of data center system architecture, providing an open source approach that other digital assistants can make use of in the years ahead as they become more knowledgeable, more capable, and more widespread.

The demo version of Sirius, which the team will show off on March 14 at an international technology conference, is based around a static version of Wikipedia. Users can ask factual questions and get answers back. The researchers will release the software code shortly after.

That database of Wikipedia entries could be swapped out for anything else a company likes: Academic research, auto repair manuals, cooking tips, a database of medicines, and so on.

The system was built by stitching together open source projects from various institutions and companies, including Microsoft Research and Qualcomm. Other firms and agencies—among them Google, ARM, the Defense Advanced Research Projects Agency (DARPA) and the National Science Foundation—provided funding.

Although Linux may not have much of the desktop PC market, it has the honor of running many of the world's servers and mainframes, and it's the foundation of Android. In the same way, the team behind Sirius wants their project to act as the bedrock of the digital assistants of the future.

Now if you’re chilly, you can tell your Nest thermostat to warm up your home by simply saying, “OK Google, change the temperature to 75 degrees.”

On Monday, the Google-owned Nest smart home device gained integration with the company's Google Now voice service, letting users issue spoken commands from their mobile devices. Once a command is spoken, a Google Now card pops up in the app, letting you know Nest is making the change.

Since June, Nest has been promising that users would be able to control their devices with Google one day. Droid Life spotted signs on Friday that the company was on the verge of implementing the change. Now, it should be live for all users.

Examples of commands you can tell the Nest:

change temperature to 20 degrees

set the temperature at 75 degrees

turn the thermostat to 73 degrees fahrenheit

change the thermostat to 68 fahrenheit

tweak my temperature to 68

modify my nest temperature to 23 degree celsius

alter the nest thermostat to be 76 degrees

turn up the temperature to 80

turn the thermostat down to 72

Other command triggers: fix (the temperature to ...) and dial (the thermostat to ...), as well as increase, decrease, put, switch, raise, raise up, lower, lower down and please change. The range suggests that Google Now's ability to understand casual language extends to Nest, so people won't have to memorize artificial-sounding commands.
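For illustration, a toy parser for phrasings like those above might look like this. The verb list mirrors the published triggers, but the pattern itself is an invented sketch, nothing like Google's actual language understanding:

```python
import re

# Illustrative only: extract the target temperature and unit from the
# kinds of casual phrasings listed above. The verbs mirror the
# published command triggers; the parsing approach is invented.

VERBS = r"(?:change|set|turn|tweak|modify|alter|fix|dial|put|switch|raise|lower|increase|decrease)"
PATTERN = re.compile(
    VERBS + r"\b.*?\b(\d{1,3})\s*(?:degrees?)?\s*(fahrenheit|celsius)?",
    re.IGNORECASE,
)

def parse_command(text):
    """Return (temperature, unit) or None if the phrase doesn't match."""
    m = PATTERN.search(text)
    if not m:
        return None
    unit = (m.group(2) or "fahrenheit").lower()  # assume Fahrenheit by default
    return int(m.group(1)), unit

print(parse_command("turn the thermostat to 73 degrees fahrenheit"))  # -> (73, 'fahrenheit')
print(parse_command("modify my nest temperature to 23 degree celsius"))  # -> (23, 'celsius')
```

A pattern this loose accepts all the listed variants precisely because the verb and the number carry the meaning; the filler words in between don't matter.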

In order to use the thermostat with the Google service, you must authorize both the app itself and Google voice control separately. For more information on how to set it up, check out the Nest tutorial.

Amazon's new device takes the personal assistant features of Siri, Google Now, and Cortana and bundles them into one big speaker that sits in your living room. Or wherever else you'd like to tell it what to do.

Called Echo, the voice-controlled device can tell you news and weather information; play your favorite music from Amazon Music Library, Prime Music, TuneIn, and iHeartRadio; and set to-do lists and alarms to remind you of important details later—plus it learns your speech and behavior to adapt to your vocabulary and preferences. The device is always on, so users need to say "Alexa" to control it. (It presumably draws on Amazon's own Alexa Web Information Service.)

Echo has seven microphones around the top of the device and can hear commands from any direction. The device can supposedly hear your voice from across the room, even over music.
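The wake-word gating can be sketched in a few lines. This is purely illustrative, a stand-in for Amazon's far-field audio processing, with the wake-word check reduced to simple text matching on an already-transcribed stream:

```python
# Sketch of always-on wake-word gating: audio is transcribed
# continuously, but nothing is treated as a command until the
# wake word appears. Invented illustration, not Amazon's design.

WAKE_WORD = "alexa"

def extract_command(transcript):
    """Return the text after the wake word, or None if it never occurs."""
    words = transcript.lower().split()
    if WAKE_WORD not in words:
        return None  # ambient speech: ignore entirely
    idx = words.index(WAKE_WORD)
    return " ".join(words[idx + 1:]) or None

print(extract_command("alexa play my favorite music"))  # -> play my favorite music
print(extract_command("what's the weather like"))       # -> None
```

The point of the gate is privacy as much as usability: everything before the wake word is discarded rather than acted on.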

The difference this time is that Echo isn't something its competitor companies have. Google, Microsoft and Apple might have built-in "assistants" on their mobile devices, but there's currently no standalone connected-home system controlled by voice. Though Microsoft is working with smart-home manufacturer Insteon to create Cortana-controlled systems, users still need a Microsoft smartphone to operate them.

Echo is available by invitation only, and if you're an Amazon Prime member lucky enough to get an invite, you'll also get a discount for a limited time. Amazon says Echo costs $199, but $99 for Prime members.

Inbox looks like a mobile-focused cross between Gmail, Google+ and Google Now. It scans your mail for important information such as flight times, appointments and emailed photos or documents, highlighting them with images, tags and buttons that draw your attention and let you take action (for instance, by confirming a flight).

Travel reminders pop out of your regular email stream

Google's new email app steals a helpful feature from Dropbox's Mailbox app that lets you "snooze" emails and reminders for a day, a week or any other interval you like, effectively postponing less urgent messages.

Hit the snooze button on email
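Mechanically, snoozing is just a priority queue keyed by wake time. A minimal sketch, with invented message values and no persistence:

```python
import heapq
from datetime import datetime, timedelta

# Rough sketch of the snooze idea: postponed messages sit in a
# priority queue ordered by wake time, and anything whose time has
# come pops back into the inbox. Details are invented illustration.

class SnoozeQueue:
    def __init__(self):
        self._heap = []

    def snooze(self, message, until):
        """Hide a message until the given wake time."""
        heapq.heappush(self._heap, (until, message))

    def due(self, now):
        """Return every snoozed message whose wake time has passed."""
        woken = []
        while self._heap and self._heap[0][0] <= now:
            woken.append(heapq.heappop(self._heap)[1])
        return woken

q = SnoozeQueue()
start = datetime(2014, 10, 22, 9, 0)
q.snooze("Quarterly report", start + timedelta(days=1))
q.snooze("Lunch invite", start + timedelta(hours=2))

# Two hours later, only the lunch invite resurfaces.
print(q.due(start + timedelta(hours=2)))  # -> ['Lunch invite']
```

A real client would persist the queue server-side so snoozes survive across devices, but the ordering logic is the same.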

It also lets you set reminders at the top of your screen that you can dismiss with a swipe. (It may also set some of these automatically.)

Finally, the new app reimagines Gmail's current "tab" structure, which automatically sorts email into your main inbox and separate bins for social, promotions, updates and forums. In Inbox, related clusters of email become "bundles" you can open up and save or archive. Supposedly you'll be able to teach Inbox how to group email over time.

There is, of course, a catch: You might not be seeing Inbox yourself very soon. Google is once again rationing access to the app—much the way it originally did with Gmail back in 2004—so you need to request an invitation by emailing inbox@google.com. (Presumably Google won't keep Inbox an invite-only service for three whole years.) Existing users will get to invite their friends as well.

Editor's note: This post was originally published by our partners at PopSugar Tech.

Apple's Siri, Google's Google Now, and Windows Phone's Cortana are ready to answer your questions. The voice-activated artificial intelligence already built into your smartphone gets better and better every day—but how good is it right now? One consulting company, Stone Temple, put together a study to determine, once and for all, which tech was best.

Google Now, Siri, and Cortana were each given 3,000 voice queries—everything from "What is the tallest mountain in the world?" to "What does the fox say?" Android's Google Now was the clear winner, having answered 88 percent of its questions completely. Next best was Siri with a not-even-close 53 percent, and in last place was the newest AI, Microsoft's Cortana, with 40 percent.

Having tested the Google Now-centric Moto X myself, I've witnessed the incredible power of Google Now firsthand. The smartphone is tailored to recognize only my voice, so when I say, "OK, Google," the Moto X will turn on, even from across the room.

My friends have tried to unlock the phone by imitating my voice, but to no avail. Google Now's voice-recognition capabilities are simply better. When you ask, "What is focaccia?" it understands the Italian word and offers a Web search definition. It also combs your Gmail account and can tell you when your next flight is. Siri and Cortana have a lot of catching up to do.

Cortana, a new feature in Microsoft’s Windows Phone operating system, is both a search engine and a helper, just like its counterparts: Apple's Siri and Google Now for Android. Cortana—who says she's female, though not a woman—is Microsoft’s attempt to counter Google's domination of Web search on smartphones while also serving as its counterpoint to the cheeky and informative Siri on the iPhone.

In this way, Cortana—like almost everything in Windows Phone—emerges as a combination of iOS and Android features embellished with some of Microsoft's own unique elements.

Cortana Leans On And Learns From Bing

The first thing to know about Cortana for Windows Phone is that it is, at heart, Microsoft’s Bing search engine. At Microsoft Build 2014, one press session bore the title “The Bing Platform”—and it was all about Cortana.

Bing is no longer its own separate app, nor are there any specific Bing features like news or weather. It's now all Cortana, all the time. On Windows Phone, the two are basically indistinguishable.

By using Bing as the backbone of Cortana, Microsoft has made it a lot like the Google Now assistant on Android. Cortana recognizes your interests and uses Bing to mine various information categories to deliver news and contextual information that you are supposed to find particularly useful.

During setup, you can choose among pre-defined interests like health, sports, technology or headline news. You can set your favorite sports teams or neighborhoods where you like to eat and explore. Cortana will then deliver information based on what you like and where you are, using both Bing and the smartphone's sensors, which help keep track of what you do and where you do it. The information is delivered in Cortana's notebook, roughly the equivalent of the Google Now card stream on an Android homescreen.

Where Google Now differs is that it uses a variety of factors to determine what information it delivers to users. If you sign in to your Google profile, you can have it access Gmail, search, navigation, calendars … all of Google's core services. It will also note what websites you visit when you are signed into Chrome and surface those in the Google Now feed as well.

Cortana's Notebook (left) vs. the Google Now news stream.

Developers can tap Bing to power their apps as well, which can then bring third-party customization to Cortana. Only five third-party apps had been built for Cortana at the time of launch: Flixster, Hulu, Twitter, Facebook and Skype (which is owned by Microsoft). Cortana has an open software development kit for interested app makers that want to integrate it into their products.

Cortana's voice-control and language interpretation functions rely on a hybrid of on-device and cloud computation. When you speak to Cortana, your phone will use key speech patterns to interpret what you've said. If Cortana doesn’t understand a particular word, it will reach out to its neural network in the cloud to filter for possibilities. This hybrid approach is designed to let Cortana learn better speech recognition over time.
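That hybrid flow can be sketched in a few lines of Python. This is an illustrative model only—the vocabulary, confidence scores and function names are invented, not Microsoft's implementation:

```python
# Illustrative sketch of a hybrid recognizer: try a fast on-device
# model first, and fall back to a cloud service when confidence is low.
# All values here are made up for demonstration.

LOCAL_VOCAB = {"call": 0.95, "text": 0.90, "remind": 0.85}

def recognize_local(word):
    """Return (guess, confidence) from a small on-device model."""
    conf = LOCAL_VOCAB.get(word, 0.2)  # unfamiliar words score poorly
    return word, conf

def recognize_cloud(word):
    """Stand-in for a round trip to a larger cloud model."""
    return word, 0.99

def recognize(word, threshold=0.8):
    guess, conf = recognize_local(word)
    if conf >= threshold:
        return guess, "device"
    # Low confidence: defer to the cloud for a better interpretation.
    guess, _ = recognize_cloud(word)
    return guess, "cloud"
```

The point of the split is latency: common commands resolve locally in milliseconds, while only the hard cases pay for a network round trip.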

An Assistant Like Any Other

Cortana straddles the line between what Google Now provides as a search engine and how Siri acts as a personal assistant.

Google Now is an assistant without a personality. It is essentially Google delivering information you might want or need and allowing you to control your phone through voice actions. It wants to tell you stuff before you think you want to know about it. The other day, for instance, Google Now told me I had to leave at 1:57 p.m. to get to a meeting by 3 p.m.

You can set reminders, tasks, timers, send texts or emails through Google Now as well, just like you would with an actual assistant. But for a variety of reasons, Google decided not to make Google Now a search experience driven by a particular character the way Siri and Cortana are.

Siri doesn't provide the precognitive abilities that Google Now or Cortana do, because it's fundamentally different under the hood and doesn't have a search-engine spine the way the Microsoft and Google offerings do. Instead, Siri hooks into both partner databases and search engines, relying on Wolfram Alpha and Microsoft's Bing (to a certain extent) for computational search power.

Siri provides contextual, relevant information like stocks or sports or weather by creating hooks to third-party databases Apple has partnered with. Siri can also set reminders and alarms, open apps, post to Facebook or Twitter and navigate. Siri set the standard of personal assistants on smartphones, which Google Now and Cortana have now largely matched in different ways.

Cortana has a couple of additional capabilities that set it apart from its rivals—for instance, by personalizing your communications with trusted people. If you establish someone as a member of your “inner circle” within the app, you can then use Cortana's voice control to set reminders by name.

So you could tell Cortana to “remind me to read Rebekah’s essay this evening,” and it would understand who you're referring to. Siri and Google Now have similar capabilities, but Cortana takes it a step further.

Cortana also has a personality all its own. The assistant is named after an artificial-intelligence character in the game series Halo—a guide that gets you through missions and helps along the way. On Windows Phone 8.1, Cortana (voiced by Jen Taylor, the same actress who plays her in Halo) will respond to Halo-related questions. For instance, if you ask where Master Chief (the main character in Halo) is, Cortana will give a variety of answers.

Where is Master Chief?

Cortana also knows that it is a computer. Yes, it will identify as female, but will also give answers such as “I contain multitudes” (a Walt Whitman reference) and “Is there a third option?”

Cortana: Still A Beta

Microsoft’s goal was to imbue Cortana with a personal touch. It combines the semantic search of Google with the personality of Siri while still being fun and dorky in a Microsoft kind of way. Which you may or may not like, depending on your view of Windows Phone and whether you play Halo.

That said, Cortana is still in beta. After using it for a little more than a week, it's easy to see that the assistant is still coming into its own. Cortana's voice recognition is good but often requires precise enunciation (Cortana often confuses itself with Cortado, apparently a city in Italy); it doesn't always connect contacts with data correctly, and its navigation sometimes misfires.

It also doesn’t have a touchless command, the way Google Now on Android devices activates when a user says “OK Google.” These types of problems are fairly easy to fix, so Microsoft can presumably work them out ahead of the formal launch of Windows Phone 8.1 later this year.

Microsoft's smartphone operating system now has its own personal assistant. Just like Apple's Siri and Android's Google Now, Microsoft's "Cortana" is a voice-activated personal assistant that helps users control their devices, send messages and search apps and the Web.

Cortana is powered by Microsoft's Bing search engine. It will live in a Live Tile on Windows Phone and completely replaces the default Bing search experience in Windows Phone. It learns what you search for and what you do over time to deliver more relevant results. The assistant is named after an artificial-intelligence character in the popular Halo game series.

Cortana comes with a notebook that organizes all of the functions it performs such as interests like news, traffic and weather and learns to keep track of those interests over time. Cortana knows who your best friends are and the places you frequent on a regular basis.

If you are familiar with Apple's Siri or Google Now, Cortana will seem very familiar. It is voice-activated and helps users keep track of events and common behavior, reminders and calendar updates.

Cortana can also open apps in Windows Phone and perform actions through voice controls. For instance, a user can open Skype and call a contact by telling Cortana to open the app with the contact. It will work with third-party apps, allowing you to add TV shows to a queue in Hulu or to look up a certain person on Facebook. This is a unique feature for personal assistants, as neither Google Now nor Siri are very good at opening third-party apps and controlling them.

Microsoft is launching Cortana as a beta version as it continues to iron out the wrinkles of the new assistant. The beta designation follows the precedent of Google Now and Apple's Siri, both of which launched in beta before becoming full-fledged consumer products roughly a year later.

Here are eight other things you need to know about the new Windows Phone 8.1 update.

Windows Phone 8.1 introduces a new "Action Center" that is a drop-down menu from the top of the home screen. It provides quick access to settings and connectivity such as Wi-Fi, Bluetooth and Airplane Mode. The new drop-down menu brings Windows Phone 8.1 in line with Android and iOS, both of which have settings menus that can be dragged from the top of a locked homescreen.

Speaking of the lock screen, Microsoft is now letting users customize their locked homescreens with Windows Phone 8.1. This setting was not previously available to users of Windows Phone and is an important update that will help manufacturers and users differentiate the user experience of the device.

The Start screen on Windows Phone will now also be customizable, letting users pick color themes or set a background image on which the Hubs and Tiles interface is overlaid.

Microsoft also has new enterprise virtual private network features that will help Windows Phone 8.1 securely connect to corporate networks. A S/MIME messages setting will help Windows Phone 8.1 enterprise users send secure messages.

The Windows Phone Marketplace has been updated to put a focus on apps, a departure from the previous iterations of the store that would also feature books, music and media next to apps. This should make the Windows Phone Marketplace easier to search and browse.

A new calendar app offers a redesigned interface that lets users swipe to the right to change the day view, along with new week views. Developers can build against the Windows Phone 8.1 calendar with a new public API.

Microsoft added a new Wi-Fi booster it calls "Wi-Fi Sense" that will help users stay off their cellular data connections and switch to free Wi-Fi hotspots when in range. Wi-Fi Sense will automatically sign into free public networks with credentials saved within the operating system. Microsoft also improved all its other "Sense" features in Windows Phone, including Battery Sense.

Windows Phone 8.1 brings enhancements to the "Word Flow" keyboard that improve accuracy and add a Swype-like gesture feature, bringing the keyboard much more in line with the capabilities of default keyboards on Android phones.

Windows Phone 8.1 will start rolling out to consumers within the next few months. It will be available on brand-new Windows Phones by late April or early May.

Today Google announced Android Wear, its platform for smartwatch and wearable technology development. While Google has not yet released the full software development kit (SDK) for Android wearables, we can get a good sense of what Android smartwatches will be capable of by digging into the principles in the developer preview.

Pick A Card

The Android Wear user interface will be based on cards. Cards are applets (smaller versions of full smartphone or tablet apps) that deliver only the most relevant information for that app. Different app cards will be stacked on top of each other on an Android Wear device, and users navigate between them by swiping up and down on the watch.

To navigate to actionable items within a card app, users will swipe horizontally. For instance, if I am taking an American Airlines flight from Boston to San Francisco, the card may pop up telling me that my flight is ready for check-in. To perform the check-in, I will swipe right on the Android Wear device and tap check-in.

Cards will have images in the background to differentiate between which applets are in use and what actions are being performed. So, if I get a message from my boss about a meeting in one card, I can have an image associated with messaging in that card. If I swipe down to my calendar, I can have a time related image in that card. If I swipe right within that calendar, I can confirm the meeting and so forth.

Contextual, Ambient & On Demand

Android Wear devices will be aware of their users' surroundings and able to deliver two types of notifications through apps: contextual and on-demand. Google calls these “Suggest” cards.

Contextual apps use an Android wearable's sensors combined with those of a smartphone to deliver information based on what the user is doing. This is totally congruent with Google Now, the Google service that attempts to anticipate what a user is doing, wants to do or intends to search for in the near future.

For instance, today I went to a meeting and I looked up the address for it before I left the house. On my Android smartphone, Google knew that I searched for the address and already had a Google Now card queued up with directions and navigation to the meeting.

The contextual stream in Android Wear will be able to perform a lot of these same types of functions by reading the user's location and state and delivering information that just shows up on the watch without necessarily creating a vibrating notification. The information is just there ready to be glanced at on the watch.

On-demand cards are the opposite of contextual cards. These cards are present on the device, but have to be called up by the user, either by touching the device or speaking instructions to it. They can include Android “intents” that call up a specific action, like making a phone call, sending a text message, getting specific directions, listening to music and so forth. These cards don't necessarily deliver ambient information like the contextual cards do, but are intended to support something specific the user presumably wants to do.
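The intent idea can be illustrated with a toy dispatcher in Python. Everything here—the phrases, handlers and registry—is hypothetical, meant only to show how a spoken instruction might map to a registered action, not how Android intents are actually implemented:

```python
# Hypothetical sketch: spoken instructions dispatch to registered
# actions by matching the longest known phrase prefix.

INTENTS = {}

def intent(phrase):
    """Register a handler for a spoken phrase prefix."""
    def register(fn):
        INTENTS[phrase] = fn
        return fn
    return register

@intent("call")
def place_call(arg):
    return f"calling {arg}"

@intent("navigate to")
def navigate(arg):
    return f"directions to {arg}"

def handle(utterance):
    # Longest matching prefix wins, so "navigate to" beats shorter matches.
    for phrase in sorted(INTENTS, key=len, reverse=True):
        if utterance.startswith(phrase):
            arg = utterance[len(phrase):].strip()
            return INTENTS[phrase](arg)
    return "no matching intent"
```

For example, `handle("call Mom")` routes to the call handler, while an unrecognized phrase falls through to the default.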

Notifications, Pages & Actions

The easy part about Android Wear is that developers don't necessarily need to create entirely new applications, or even to reconfigure how their existing Android notifications work on a wearable device. Notifications in Android Wear are based on Android’s existing notification system, and will be shared between the smartphone and the wearable.

By contrast, Samsung, which chose to use its Tizen operating system for its new Gear smartwatches, offers developers a much more complicated task. Gear requires developers to create two separate applications (one for Android, one for Tizen) and then share information back and forth through a Samsung-specific protocol.

In Android Wear, developers can build additional functionality into their notifications, such as the ability to respond via voice input or add additional pages. The notification can let the user take specific actions—for instance, by pressing “Reply” after receiving a message.

Google's design principles call for all notifications to be glanceable—that is, short and to the point—by default, but also give developers and users the option to expand the message. So, an email may pop up on the Android Wear device showing only a truncated version of the subject line; you may say “read message” or tap on the screen to expand the rest of the email.

Stacking Cards

By definition, a smartwatch has limited space. But all those notifications that you normally get to your smartphone need to go somewhere. In Android Wear, they will get stacked on top of each other. Say you're conducting several different conversations in WhatsApp. Those notification cards will stack on top of each other in Android Wear, and you'll swipe to dismiss them or respond.

If you're in an email conversation, multiple messages from one email thread will go into the same stack as opposed to creating entirely new cards. This could also be done with voice input if the developer has set up that option in Android Wear.

Each app will have its own stack. That way users shouldn’t get overwhelmed by a jumble of disorganized messages sitting on top of their smartwatch. This is one of the shortcomings of the Qualcomm Toq interface, where notification cards are stacked on top of each other regardless of which applet is sending them.
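The per-app stacking behavior amounts to a simple two-level grouping: first by the app that sent the notification, then by conversation thread within that app. This is a conceptual sketch of that model, not the Android Wear API:

```python
# Conceptual model of per-app notification stacking: notifications
# group into one stack per app, and messages from the same thread
# collapse into a single entry within that stack.

from collections import defaultdict

def stack(notifications):
    """notifications: list of (app, thread, message) tuples."""
    stacks = defaultdict(lambda: defaultdict(list))
    for app, thread, message in notifications:
        stacks[app][thread].append(message)
    return stacks

cards = stack([
    ("WhatsApp", "Alice", "lunch?"),
    ("WhatsApp", "Bob", "running late"),
    ("Gmail", "flight-123", "check in now"),
    ("Gmail", "flight-123", "gate changed"),
])
# Each app gets one stack; the two Gmail messages share a thread.
```

The two WhatsApp conversations stay as separate entries inside the WhatsApp stack, while both Gmail messages collapse into one thread—exactly the behavior the Toq interface lacks.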

Technology has grown increasingly personal over the years, but can it ever be a "friend" in the way we think about human friends?

The movie Her, directed by Spike Jonze, envisions a future in which operating systems have evolved to learn from our behaviors and proactively look out for our best interests every day. They're our personal assistants, but they've become nuanced to the point that we have no problem calling them our friends. And when a person says they're in love with their operating system, it's not particularly weird.

The star of Her is OS1, a new operating system that, when you first launch it, creates a unique persona to best accommodate its user's personality and communication needs. For the film's lonely protagonist, OS1 takes on the name "Samantha" and acts as a personal assistant to control connected technologies like computers, smartphones and TVs. Voiced by Scarlett Johansson, she is also the most human-sounding non-human ever built.

Samantha talks and responds naturally like a human, but she can also "like" things like colors, faces and stories. She can "see" her surroundings via webcam, laugh at jokes, make her own jokes, and even exhibit feelings of joy and sadness. She can also recognize and analyze patterns in her owner's recreational habits, relationships and career, and offer beneficial advice without the user needing to ask for it—just like a friend would.

If AI's goal is to emulate human behavior, OS1 might be the ultimate realization.

The closest modern approximation to the fantasy depicted in Her is the virtual personal assistant, which can be found in desktop clients like Nuance’s Dragon Assistant and smartphone apps like Apple's Siri or Google Now. While it's highly unlikely that any of these products will turn into anything like OS1, many natural language developers believe it won't be long before our AI assistants get much more personal than they are now.

More Than Human

Nuance CMO Peter Mahoney says his company’s been spending more time building out virtual assistant capabilities due to the “groundswell of interest in making more intelligent systems that can communicate with humans more fluidly.”

Since computing technology has reached the point where it can now access huge amounts of data in the cloud, sift through that data and make real-time decisions about it in just seconds, Nuance has worked hard to transition its solutions from solely transcribing audio to actually extracting meaning from the text.

“Dialogue is really important,” Mahoney told me. “In the original systems that came out, it operated like a search engine. You say something and something comes back, but it may or may not be the right thing. But that’s not how humans work. Humans disambiguate. We clarify.”

Creating “natural-sounding” systems that can dissect speech and read between the lines, though, is just as difficult as it sounds.

Martijn van der Spek is the co-founder of Sparkling Apps, a startup that owns nine different speech recognition services including Voice Answer, which the company calls its "next-generation personal assistant." According to van der Spek, virtual personal assistants require massive amounts of server power, and smaller startups with AI solutions—like Sparkling Apps’s Voice Answer—simply can't afford to power a truly smart assistant with expertise across a broad number of domains, as opposed to just a few.

“The amount of data stored results in performance issues for our servers,” van der Spek told me. “This together with the concern of privacy has made us clear Eve’s database every 24 hours. So she suffers from acute amnesia and any long-term relationship is doomed to fail.”

Luke Muehlhauser, CEO of the Singularity Institute for Artificial Intelligence, also noted that AI is advancing more slowly than it might because many researchers aren't sharing their information. Large private companies like Google and Facebook are keeping their AI-related research under wraps, whereas academic researchers constantly publish their progress in journals.

Getting To Know You

Digital assistants may never evolve to love us like OS1 does in Her, but maybe they'll at least eventually remember what we've told them in previous conversations.

Today’s personal assistants are helpful with solving problems that are happening right now (“play a song,” “text Joe,” “launch Skype,” “find a Chinese restaurant nearby,” etc.). But if AI ever wants to approximate human behavior, its systems will need to be a little more thoughtful. And that means pushing intelligent systems to store more data and consider more contextual information when making decisions.

“A human who is thoughtful understands your needs, wants and desires—he or she understands you and can contextualize that,” Mahoney told me. “One of the things you talk about is having all the information. The more online information and the more great services out there that exist, the more we’ll be able to connect our intelligent systems that can understand everything that’s going on.”

What drives a recommendation engine isn't just information, but learned combinations of relationships, classifications and genres. “Structured content will happen first versus things that are less structured—those will be more complicated to figure out,” Mahoney said. In other words, today's personal assistants know a lot about what's playing in theatres, but those less-structured concepts—like remembering previous conversations about favorite movies to proactively recommend a new movie the user may like—are going to take more time to develop.

Ray Kurzweil, the noted inventor and futurist currently working with Google on its X Lab projects, believes that Google will build a computer that can understand natural language and human emotion by 2029. But as he told tech blogger Jimi Disu in December, an amped up digital assistant could be in our pockets in as little as four years:

Right now, search is based mostly on looking for key words. What I’m working on is creating a search engine that understands the meaning of these billions of documents. It will be more like a human assistant that you can talk things over with, that you can express complicated, even personal concerns to. If you’re wearing something like Google Glass, it could annotate reality; it could even listen in to a conversation, giving helpful hints. It might suggest an anecdote that would fit into your conversation in real-time.

Making Friends In iPlaces

Over time, the intelligence of personal assistants will expand as the online catalogue of information grows deeper and better-connected. And lots of big companies are investing heavily to make the best use of those vast information stores.

But collecting massive libraries of information isn't enough to power a true personal assistant. Companies like Apple and Google also need to perfect the "dialogue" factor, since there is all too often a noticeable lapse in time between the user's question and the personal assistant's answer.

The key might be to disconnect from the cloud entirely—or at least to minimize the number of times the system must connect to the cloud. But even though personal assistants would benefit from as much local processing as possible, the ideal personal assistant—think "best friend that knows everything about you"—needs access to the deep catalogues of online information. Companies are working on anticipating users' needs to have the most relevant information ready to deliver, but there's a lot of information to consider and many moving parts.

Google is experimenting with a few solutions to make personal assistants work faster, namely with offline voice recognition in Android, while Intel's new Edison computer might make it possible for voice recognition over mobile devices or even wearables to work near-instantaneously. The key, according to most companies, is to minimize the number of round trips over cellular-data signals to make processing—and in turn, conversations—more snappy.

Intelligent personal assistants will become more valuable as they get better at understanding the subtleties in communication, but researchers and developers will eventually be forced to grapple with the issue of ethics. If we can program a computer to function like a brain in order to like or even love us, there’s nothing stopping developers from fine-tuning those powerful systems to personal or corporate interests as opposed to a true moral compass.

Movies like Her make us fantasize about personal assistants that can be true friends, but the state of today's AI technologies leads one to believe this won't be happening anytime soon. Personal assistants are nifty features, but they need to improve their listening skills, knowledge bases and memory banks before they can be our trusty sidekicks.

In time, AI assistants may grow smart enough to learn our habits and advocate for our best interests, but the odds are against personal assistants ever leaving the friend zone to become something "more." And there's nothing wrong with that.

Founded in 2012, DeepMind uses “general-purpose learning algorithms for applications such as simulations, e-commerce and games,” according to its website.

Google has become increasingly focused on projects that rely on artificial intelligence, including the company's personal assistant platform "Google Now," as well as its highly-publicized self-driving cars project. Google also purchased several other artificially intelligent and robotics companies recently, including Boston Dynamics, Meka Robotics and Redwood Robotics (all in December), plus the home automation company Nest on January 13. Last March, Google also acquired a startup based out of the University of Toronto working on the future of deep neural networks, speech recognition, computer vision and language understanding.

Google Now, the search giant’s intelligent personal assistant for mobile devices, will soon be coming to your desktop according to the latest test version of the company's Chrome browser.

Google Now is being tested in Chrome Canary, the test sandbox that Google uses for new builds, features and functions in forthcoming versions of its browser. (To enable Google Now in Chrome, visit this page and set the Google Now experiment to “Enabled.” Once you do so, a prompt will pop up at the bottom to relaunch your browser.)

You will need to be signed into Chrome for Google Now to work, but the experimental desktop version of Google Now will still pull information from your iOS or Android smartphone to determine your location. You can set location settings for Google Now for multiple devices by changing your system preferences within iOS and Android.

Once you are signed in, you can see Now-based notifications in the notifications bar on Windows or the top toolbar on Mac OS X. The little grayed-out bell is for Chrome and Google Now notifications.

If you use Google Now on your mobile device, you can see certain Now cards on your desktop computer if you're signed into Chrome, including weather, sports scores, commute traffic, and event reminders cards. Some of these cards may be based on the location of your mobile device.

Google Now is an important part of Google's vision for the future of search. One of the reasons Google wants to know so much about you, get you to use Google+, Android smartphones, Chrome, YouTube and Google Play et al. is so it can serve contextually relevant information to you—yes, sometimes this includes advertisements.

Google Now is the company’s attempt to serve you information it knows you probably wanted anyway, such as the weather and sports or interesting news, before you can even search for it. Are you a sports fan living in Boston? Google will send you a Google Now notification to your smartphone—and soon on your desktop—about the score of the game.

When it comes to a variety of its consumer features, Google is an equal opportunity developer. It wants to be on your Windows and your Mac, your iPhone and your Android. Chrome is now one of the most popular browsers in the world because Google has spread it to nearly every single operating system, first on mobile, and now on the desktop. Google Now is a very important piece of the feature set that Google wants to spread everywhere.

So, while Google Now for Chrome is officially in experimental mode, there is no reason to think that Now won’t eventually make its way to every piece of computing that touches Google.

ReadWriteReflect offers a look back at major technology trends, products and companies of the past year.

As the explosive trajectory of smartphone adoption approaches an asymptote, mobile apps are riding high. Once an unassuming term for a curious, smallish sort of phone program, the app is now king. It’s now almost impossible to imagine otherwise.

In this mobile-first era, apps make headlines, precipitate stock slumps and altogether define an industry that didn’t see them coming a mere six years ago when Apple released the App Store. Here is our list of the most important apps of 2013. These are not necessarily the fan favorites, but they were the headline drivers, movers and shakers that helped define the app economy in 2013 and beyond.

Love it or hate it (or love to hate it), Snapchat captured one of the mobile Web’s most fascinating pivots: the shift from archiving toward intentional ephemera. Snapchat forgoes Facebook’s reign of the cohesive narrative in favor of brief, chaotic social snapshots that literally self-destruct. The app is enough of a threat (or a fascination) to have piqued Facebook’s interest to the tune of $3 billion, after all. Snapchat’s moment may fade as quickly as one of its frenetic missives, but 2013 will always be remembered for the rise of the Snap.

Comparing Google’s predictive data brain to Siri is to sell it short. Google Now had its humble beginnings on Android back in 2012 (the Google search team was working on it in 2011 as well), but in 2013 the service sneaked onto the iPhone through the Google Search app. With progressive updates, Google Now just gets better and better, serving up flight updates, package tracking info and local suggestions before you even know you need them. Google thinks that Now is going to be the future of search, delivering you information before you realize that you need it. Adding that capability to just about every smartphone could be pretty big.

The tide of wearable devices is nowhere near its crest, but apps aren’t waiting around for hardware to catch up. Many mobile fitness mavens have already put the iPhone 5S’s M7 motion coprocessor to good use. For more casual use, Nike+ Move provides an excellent snapshot of your daily habits. The app counts “NikeFuel,” Nike’s own sort of fitness currency, rather than calories or steps, which makes it the perfect gateway app into fitness tech. More serious athletes should check out Strava Run, Nike+ Running, Argus, MyTracks by Google and RunKeeper to take it to the next level, no buggy wearable accessory required.

Arguably the killer app for Google Glass, Word Lens translates foreign text into your native language right before your eyes—literally. For anyone brave enough to wear Glass on trips abroad, this app could revolutionize travel. (The rest of us can stick to Word Lens for our smartphones.)

It might just be for checking the weather, but Yahoo’s reinvented app manages to encapsulate everything about the big Y!’s Mayer-era makeover. Bright, fluid and playful, it’s everything the old Yahoo’s mobile presence wasn’t. Remember back when Yahoo had literally 70 different apps? Yeah, those days are long gone.

Uber’s been around, but it really only exploded into proper verb territory this year. (“Are you going to Uber home later?”) Uber allows users to summon a private car directly to their location. Uber translates digital ease into three dimensions in a way that only truly disruptive technology can. As controversial as it is useful, Uber has battled city governments and unions across the United States ... and won. Uber is helping to change the definition of urban transportation. Its surge pricing is borderline scandalous and ReadWrite never got the kitten it was promised, but there’s no denying that Uber made major waves in 2013.

It might not be a household name (yet), but IFTTT is the multitool of mobile. A playground for productivity nerds, IFTTT invites users to craft simple formulas that text you when it’s about to rain, call you when the rent is due, back up your Instagram photos to Dropbox … and just about anything else you can think up.
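The recipe pattern IFTTT popularized—pair a trigger condition with an action—can be sketched in a few lines of Python. The rain recipe and field names below are hypothetical, purely to illustrate the pattern:

```python
# Toy sketch of the "if this then that" recipe pattern: each recipe
# pairs a trigger predicate with an action to run when it fires.

recipes = []

def recipe(trigger, action):
    recipes.append((trigger, action))

def run(event):
    """Fire every recipe whose trigger matches the incoming event."""
    return [action(event) for trigger, action in recipes if trigger(event)]

# Hypothetical recipe: text me when rain is in the forecast.
recipe(
    trigger=lambda e: e.get("forecast") == "rain",
    action=lambda e: f"SMS: bring an umbrella ({e['city']})",
)

messages = run({"forecast": "rain", "city": "Boston"})
```

A sunny-day event simply matches no recipe and produces no actions, which is what makes the model so easy to reason about.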

Forget Candy Crush Saga and the Zyngaverse. Badland bucks the mindless, hyper-addictive model of generic mobile games in favor of an artful, strategic approach that oozes indie. Badland is a beautiful, brutal, exemplary entry that blows lesser-minded games out of the water.

Honorable Mentions:

These apps may not have stirred the pot in 2013, but they kept improving on their already excellent groundwork. The picks on this list kept up with the quick clip of mobile in 2013 without straying too far from what makes them great.

Google Now fans have been hoping for a desktop version of the intelligent mobile personal assistant, and now ... well, it still isn't quite here. But something similar is coming to desktops, smartphones and tablets in the U.S. this week.

The company is rolling out a new update to its search engine that gives users "quick answers" to personal queries—for instance, when their next flight leaves, or when a package is due to arrive—made in the search bar. And if the info is in your Gmail, Google Calendar or Google+ accounts, the relevant answers pop up at the top of the search results.

The typed or spoken queries use natural language and cover common scenarios such as:

Flight tracking: Type or say "Is my flight on time?" or "What's my gate number?" to see current or upcoming flights, their status and other details.

Reservations: Input "my reservations" and it pulls up upcoming hotel or restaurant reservations and addresses. It also provides driving or transit directions to the destination with one tap.

Purchases: Query “my purchases” and shopping orders spring up, so you can view available order and shipping information.

Plans: Ask “What are my plans for tomorrow?” and any appointments or relevant combination of the above may appear.

Photos: The feature can also search and display your Google+ images upon command. Just ask something like, “Show me my photos from Thailand” or “my photos of sunsets.”
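Each of the scenarios above amounts to mapping a natural-language query onto a category of personal data before falling back to ordinary web search. A toy sketch of that routing step (the keyword lists here are invented for illustration; Google's actual language parsing is far more sophisticated):

```python
# Toy router mapping a personal query to a data category, illustrating
# the "quick answers" idea. Categories and keywords are invented.

PERSONAL_CATEGORIES = {
    "flights": ("flight", "gate"),
    "reservations": ("reservation", "hotel", "restaurant"),
    "purchases": ("purchase", "order", "package"),
    "plans": ("plans", "appointment"),
    "photos": ("photos", "pictures"),
}

def route_query(query):
    """Return the personal-data category a query falls into, if any."""
    q = query.lower()
    for category, keywords in PERSONAL_CATEGORIES.items():
        if any(keyword in q for keyword in keywords):
            return category
    return None  # fall through to a regular web search

category = route_query("Is my flight on time?")  # -> "flights"
```

Only when a query lands in one of these buckets does the engine go looking in Gmail, Calendar or Google+ for the answer; everything else is handled as a normal search.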

In truth, all that information is already available to you, so the new service isn't providing access to anything you don't already have. But it does aim to make the Google search bar a convenient way for you to call up your own information in addition to the rest of online human knowledge.

According to Google, "We’ve been offering this kind of info—flights, reservations, appointments and more—for more than a year in Google Now. We’ve gotten great feedback on how convenient it is, especially when you’re on the go." Although there are similarities in the types of practical data this and Google Now provide (which, in both cases, is for one user account at a time—there's no multiple account support), the services are not quite the same.

Google Now anticipates the information users need, while the new feature relies on users proactively searching for specific info. Another distinction: The search-based service works in desktop browsers, as well as on tablets and smartphones. It works in any browser via text input, and via speech wherever Google search by voice is available—i.e., desktop Chrome, Google search mobile apps and the native Google search on Android.

The personal data Google serves up is secured via encrypted connections, the company says. Google conducted a limited field trial of the feature last year. Now it's ready for the public, so it's rolling out in the U.S. over the next few days in U.S. English. No word yet on when it might come to other countries or languages.

If you're interested, you'll be able to take it for a spin soon and see if you find it useful. If not, don't fret—there's a handy kill switch. You can turn it off temporarily by clicking on the globe icon at the top of the search results page, or shut it off completely in the "Private results" section in the settings.

Today, Google's Googliest project makes the leap from Android to iOS. Google Now, announced last June at the company's I/O 2012 conference, is part smart search and part personal assistant — but don't call it Siri. The service will make its debut on iOS through an update to Google's core Search app, available in the App Store.

According to Google's blog post on the release, "Today, with the launch of Google Now on iPhone and iPad, your smartphone will become even smarter. Google Now is about giving you just the right information at just the right time. Together, Google Now and voice search will make your day run a little smoother."

Google Now for iOS will be nearly identical to the Android release, though it won't enjoy the same deep integration as it does on Google's own mobile platform. That means no homescreen widget, of course, and no "swipe up" gesture for instant, fluid access. The iOS version will also be missing a few of the cards you'd find on Android: For now, cards for boarding passes, nearby events, Fandango and Zillow will remain an Android exclusive.

A 20% Project That Took Off

We spoke with Google's Baris Gultekin, co-creator of Google Now, about the product's migration to that other platform. According to Gultekin, Google Now is the latest product home run with humble beginnings as a year-long 20% project (Google encourages employees to dedicate 20% of their time to a pet project that interests them).

"In the early days it was all about keywords," Gultekin explains. "With Google Now, you don't even have to search. We're really interested in having computers do all the hard work."

For Google Now, the heavy lifting comes easy. A smart search app on steroids, it provides instant access to a spread of useful information, delivered via "cards". The cards are wholly dependent on context. As Gultekin puts it, "The product is different given the situation you're in." You might see a card for commute traffic around rush hour, or a card for your flight reservation the morning before you head to the airport.

Google Now Is Google, Now

Google Now is an umbrella project of sorts, tying Google's vast web of products together. Naturally, the product is also right at home on Google Glass, the company's futuristic eyewear that also aims to make this whole business of carrying the Internet less interruptive.

Google is betting big on Google Now, so it will be interesting to see if the service takes off in Apple's ecosystem. Google iOS ports like Google Maps are wildly popular, but will iPhone users take notice of Google Now?

From its perfect morsels of context-dependent info to its uncanny knack for knowing what you needed to know before you knew you needed to know it, Google Now is a powerful tool — and a fun one.

Try it out today in the App Store and have fun pitting it against Siri in voice-powered search time trials.

I get asked what the next big thing is a lot. I haven't had a good answer in a while. So much of what I see in technology feels iterative, or worse, derivative, especially in the social Web. All the interesting niches have been mapped out.

But we're starting to see glimmerings of something genuinely new: anticipatory systems. They're showing up in everything from check-in services like Foursquare to calendar apps, advertising and even online-personals services. Increasingly, rather than waiting for us to tell them what we want, in the form of a search query or command, they'll prompt us with suggestions.

What Is An Anticipatory System?

Here's a simple definition of anticipatory systems. Think of them as artificially intelligent services that are aware of external context — including ambient inputs like time of day, social connections, upcoming meetings, local weather, traffic and more. Taking all of that into account comes naturally to humans. But for computers, it's hard.

The big challenge in artificial intelligence isn't that computers are stupid. It's that they're ignorant. We haven't given them enough data, nor the tools and rules to process it all. But that's rapidly changing.

That's a bit vague, and the practical application of anticipatory systems has proven accordingly tricky. But all of the trends we're kind of bored with now — social, local, mobile, big data — have laid the groundwork for the realization of anticipatory systems' promise.

Foursquare, for example, has been collecting years of data about where people are and what places they're interested in — not just their explicit check-ins, but their local searches, tips and likes. So far, that's allowed Foursquare to offer personalized recommendations. But now the company is taking the next step into anticipating users' needs, Foursquare's head of search, Andrew Hogue, told Fast Company. Hogue gave the example of giving users recommendations for lunch spots at 11 a.m., rather than requiring users to type "lunch" into a search.
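Hogue's lunch example captures the basic shape of an anticipatory rule: combine an ambient signal (the clock) with accumulated preference data to surface a suggestion before the user asks. A hedged sketch of that logic, with invented visit data and thresholds rather than anything Foursquare actually ships:

```python
from datetime import time

# Invented user history: venue -> number of past lunchtime visits.
LUNCH_HISTORY = {"Joe's Tacos": 7, "Soup Spot": 3, "Cafe Nine": 1}

def anticipate_lunch(now, history=LUNCH_HISTORY):
    """Suggest a lunch spot around midday, without being asked."""
    if time(11, 0) <= now <= time(13, 0):
        # Rank venues by how often the user has actually gone there.
        return max(history, key=history.get)
    return None  # outside the lunch window, stay quiet

suggestion = anticipate_lunch(time(11, 15))
```

The interesting part isn't the ranking, which is ordinary recommendation work; it's that the system decides on its own *when* a suggestion is worth pushing.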

That kind of ambient awareness is at the center of the latest version of a mobile dining guide made by Ness Computing. Older versions of Ness sucked in data from Facebook, Foursquare, Twitter and other sources to offer personalized dining recommendations based on friends' tastes. The next step Ness is taking is to tailor those recommendations based on context — time of day and location. Currently in beta, the new version should come out later this month.

Merely analyzing social data isn't enough, says Ness CEO Corey Reese: "Just because a computer is aware of what you're doing doesn't mean it will add value to your life."

Anticipating Your Schedule

Schedule-management apps are another field getting reinvented by anticipatory computing, as startup consultant Semil Shah recently noted in TechCrunch. Apps like Twist and Leave Now alert people we're meeting with to our real arrival times. That's a welcome, computer-assisted acknowledgement of the reality that calendars are a perpetual act of optimism, subject to real-time revision by factors we can manage — like self-discipline — and factors we can't, like traffic and transit delays.

Even our social lives are getting transformed. Consider Facebook's "People You May Know" feature, which draws on both its own social graph of our connections and external cues like our email inboxes to recommend friends. That's perhaps the most widely distributed and used anticipatory system in the world. Dating sites are getting smarter, too, relying on the implicit cues of self-presentation as well as explicit data in users' searches to match up people. That's what online daters are already doing, more or less manually as they sort through profiles — the trick is for personals sites to start doing the work for them.

The biggest bet on anticipatory computing at present is Google Now, Google's intelligent mobile assistant that's built into Android. Drawing on all the data Google has, from flight confirmations in your Gmail to upcoming events in Google Calendar to your history of Web searches, Google Now attempts to give you what you might search for without making you search.

Apple's Siri, though more of a voice-command system, also has anticipatory elements. But it is hobbled by the thinness of the data Apple has on tap. If it wants Siri to anticipate our needs, Apple will have to partner more deeply with Facebook, Yelp and a host of other services so it knows more about us.

The true challenge for Apple, Google and Facebook is how to design a great anticipatory service around a specific need — without feeling creepy or, worse, clumsy. So much of what makes an anticipatory system great lies in the nuances of the service. Written prompts and design cues will play a huge role in getting people comfortable with computers that know a lot about us and make eerily accurate guesses.

But if people can get it right and design anticipatory systems that feel human and respond to our needs — well, I can only shiver with anticipation.

Google's cloud-based Chromebook never really went anywhere, selling and performing poorly. But that hasn't stopped Google from chasing its browser-as-OS future. With the recent introduction of Google Now to Chrome, Google looks set to install a Trojan Horse on Microsoft's and Apple's desktop home turf.

This might seem far-fetched if you're unfamiliar with Google Now. But if you've used it, you've experienced the almost magical foresight it has to anticipate the kinds of data you need before you ask for it.

As incredible as it is, Google Now is nowhere near reaching its full potential. Acknowledging its current limitations, Botnik CEO Michael Brill argues that "Sure it only does 30 things now ...[b]ut let's say it does 3,000 things and [Google adds] more interactivity to cards [...N]ow they have a way to add value to the thousands of everyday decisions we make and in that process introduce sponsored content (i.e., ads) that can be monetized."

The Quiet Spread

Monetization is the "why" of Google Now. Much more interesting is the "how" of its proliferation.

Tilde co-founder and former Apple employee Tom Dale craves just one feature in Apple's upcoming iOS 7: "sufficiently powerful hooks that Google could implement Google Now for iOS."

Fat chance.

Given that Google Now is, as he continues, "the vector by which Google has figured out how to weaponize the stack of PhDs it has been accumulating for the past decade," it's unlikely that Apple is going to let Google Now onto its playground anytime soon. Except that it already has.

At least, that is, on the desktop. Spotted and described by François Beaufort, Google has quietly introduced Google Now notifications into its latest Chrome desktop browser. This new Chrome Notification Center, while still only available in a pre-release Chromium build, is a clear indication that Google Now-style intelligence and notifications are coming to a Chrome browser near you, whether you're running it on your Mac, Windows or Linux machine.

The World Of Google

I already spend most of my day interacting with the world through Chrome. In addition to vanilla websites, I also run Google Drive, Google Calendar, Gmail, Google Maps and other Google services. With Google Now, I'm not sure I'll have much incentive to ever leave the realm of these services.

This is Google's great genius. Unlike Apple, which tries to create an optimal user experience by controlling every aspect of that experience, from hardware to software to web services, Google is happy to build for others' platforms. Every user that interacts with Google, whether on a Blackberry or iPad or Windows desktop, is one step closer to embracing Chrome or Android or Google Talk or any number of other Google products. The more they use, the more advertising revenue they drive to Google.

In the battle between Android and iOS for mobile supremacy, Google is clearly winning, with IDC reporting that Android represented 75% of all smartphones shipped in Q3 2012, hooking users on the value of Google Now and other Google services. By adding Google Now to Chrome, with its roughly 20% of the desktop browser market, according to Net Applications, Google is planting a Trojan Horse on its rivals' platforms, one which leads them to put the convenience of Google Now into their pockets, as well.

It's a brilliant strategy, and it derives from Google's exceptional ability to put Big Data to work, coupled with its willingness to be open to other platforms.