Last week Google hosted its annual developer conference, I/O. There were a number of announcements to unpack, all of which revolve around Google doubling down on AI as a core competency to win search, no matter the interface.

Before I get into the I/O announcements, it's helpful to remember Google's end game: ad spend. In the aughts, Google defined and won the digital ad spend pie by becoming the default runtime for consumers' web search. When those users started leaving for mobile, Google started looking for ways to place itself within the mobile path to content.

Google's mobile efforts have had varied success, none anywhere near its web search monopoly, and so last week's conference seemed to be a combination of two things:

Google continuing its pursuit of search share on the current runtime. (Mobile.)

Google starting its pursuit of search share on the future runtime. (Voice.)

Let’s start with how Google pursues user attention on mobile.

Messaging

With mobile applications fragmented across the OS, many have looked to messaging (and the model built around it in Asia) as the runtime to rule them all. In reality, mobile users in the U.S. have grown too accustomed to the current state of affairs (search for restaurants on Yelp, search for friends on Facebook, search for random tidbits of knowledge in a mobile browser) to shift the entirety of their search behavior to a messaging app. Still, tech companies are trying, and Google is no exception.

Allo was the big messaging announcement last week. It's a standalone messaging app with contextual response suggestions based on image recognition and machine learning. The Google-specific add-ons are cool from a technology perspective, but Allo faces two challenges.

First, it’s another messaging app.

Second, its machine learning will need to be really great to make Allo worthwhile relative to users' current messaging apps. Google says predictive responses make for a messaging app that is "simple and more productive," but it's hard to see how anyone can make messaging simpler than it already is today. The addition of predictive responses actually makes Allo a more complex user experience: now the user has to decide whether to use Allo's suggested responses or do it the old-fashioned way (i.e., typing).

In my mind, the smarter messaging play from Google is Gboard, an OS-agnostic keyboard extension that lets users run a Google query within any messaging app. Easier GIFs and emoji are the advertised consumer benefit, but it's putting search directly inside consumers' messaging apps that makes Gboard valuable to Google.

Instant Apps

The other big news in mobile was Instant Apps, an Android update that lets apps run instantly, no installation required. Though it isn't directly related to the AI-driven Google Assistant news, Instant Apps is still key to Google's core objective in mobile: it removes the primary barrier consumers face in mobile search, the need to install an app before reaching its content.

It's great news for Android users and a good reminder that making mobile content easier to access matters enormously to Google. The other reminder is AMP (Accelerated Mobile Pages), Google's HTML framework that optimizes web pages for mobile.

In summary, Google wants to make it easier to find mobile content by making search contextual, predictive, and widely available across the OS (browser app, proprietary messaging app, keyboard extension for other apps), all while making it easier to access that content (no matter its host app) once it's been found.

Google Home

The last major announcement was Google Home, a voice-activated speaker that puts the Google Assistant in the living room, squarely against Amazon's Echo. It's good to see Google moving in the direction of things to come, but Home as a concept seems to fall short in the same way that the company's mobile efforts have to date. Google's web search engine was a hardware-agnostic, OS-agnostic search runtime monopoly that dominated the digital advertising industry for the last fifteen years. Google Home is not that.

Primarily, it's not platform-agnostic. Amazon won't cede the query runtime of its own hardware to Google, even if the latter has a smarter, more predictive search experience. There is too much at stake in the direct purchases that Echo users make through the device.

For all of the R&D Google has poured into NLU and AI, when it comes to contextual runtime in the home, they will still find themselves competing with another piece of hardware. That is a problem www.google.com never had to consider.

The good news is that Google is pushing forward and trying to find the next big thing. They've made a big bet on AI and its ability to serve contextual, predictive queries, but for all the innovation (and Google has done incredible things with AI), it's still not obvious that proactive responses are what consumers need in a messaging app, or a home assistant.

And so, for all the announcements at I/O, I was still left wondering: do Google and consumers want the same thing?