Star Trek got it right: In the future, we'll use computers by talking to them.

Google held an event this week, and the new hardware got most of the attention. Google unveiled a pair of Google-built Pixel phones, Google Wifi home mesh routers, the Google Home virtual assistant appliance, the new 4K-capable Chromecast Ultra streaming device and the Daydream View VR headset.

Critics say that with the Pixel phones Google is copying Apple, and that with Home it's copying Amazon. But this misses the point. The real story of the event was the conversational interface that ties all the hardware together.

Simply put, a conversational interface means you talk to a computer. It "understands" what you say no matter how you say it. Then the computer does things for you based on those conversations.

Google is betting the company on its own version of this interface, called Google Assistant. And it's a bet I think they'll win.

The last user interface

The artificially intelligent conversation agent is the last user interface. The entire history of human-computer user interfaces has been about applying ever-increasing amounts of compute power to making the machines work harder to interact in a way that's easier for people.

Humans are hardwired for talking to each other, so the most human-compatible interface is one that has conversations with us.

The trouble is that people can converse the way we do only because of a complex mixture of human psychology and knowledge. AI will need to simulate that psychology and gather massive amounts of knowledge to hold even the most basic conversation.

But wait! What about Google Now?

Google Now can be safely categorised as a non-AI virtual assistant. It's like Siri, Cortana, Alexa and other virtual assistants: nice, but not AI.

Assistant is more adaptive. It learns, both to become more generally competent and also to personalise.

You can use the command "Remember" to make Assistant your own repository of handy information. For example, you can say: "Remember my bike combo is 397." Five years from now, you can ask Assistant what your bike combo is, and it will tell you.
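To see what such a repository involves, here's a minimal, purely illustrative sketch -- a toy key-value store with regex phrase matching, not Google's actual implementation:

```python
import re

class MemoryStore:
    """Toy memory for 'Remember ...' commands (hypothetical sketch)."""

    def __init__(self):
        self.facts = {}

    def handle(self, utterance):
        # "Remember my bike combo is 397" -> store {"my bike combo": "397"}
        m = re.match(r"remember (.+?) is (.+)", utterance, re.IGNORECASE)
        if m:
            self.facts[m.group(1).lower()] = m.group(2)
            return "OK, I'll remember that."
        # "What is my bike combo?" -> recall the stored fact
        m = re.match(r"what(?:'s| is) (.+?)\??$", utterance, re.IGNORECASE)
        if m and m.group(1).lower() in self.facts:
            return f"{m.group(1)} is {self.facts[m.group(1).lower()]}."
        return "Sorry, I don't know that."

assistant = MemoryStore()
assistant.handle("Remember my bike combo is 397")
print(assistant.handle("What is my bike combo?"))  # -> my bike combo is 397.
```

The real system would sit behind speech recognition and far more flexible language understanding, but the shape is the same: store a fact once, recall it indefinitely.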

The important thing to know about Assistant is that it learns and uses facts about you, personally. It accesses Google's Search and Knowledge Base information. And it will do things such as control your home appliances and make your dinner reservations.

That omnipresent AI will do it all.

Unlike Google Now (or, for that matter, Siri, Alexa and Cortana), Assistant should be able to "figure things out," even with a cryptic or vague request. The ability to do this should improve over time as users rank responses.

Assistant should also make minor decisions. So if you're in your car and say: "Play 'Gold' from Kiiara," that song will play through your car's sound system. But if you say the same thing at home, it will play through either your Home device or your Chromecast, depending on which you tend to prefer.

Another Assistant skill is that it pays attention to (or, depending on your views on privacy, "spies on") your conversations and interjects helpful information and links, such as restaurant recommendations. If you're doing this in Google's Allo, both parties see the recommendations.

Assistant gets even more contextual with a Pixel phone. A long press on the home button while you're looking at a photo, for example, returns personalised search results based on the content of the photo (it's basically the Google Now On Tap feature, extended to Assistant and the new phones).

We'll always have other user interfaces -- virtual, mixed and augmented reality, for example. But these are for experience. For information, the interface will be conversation. You'll be able to stop worrying about devices, apps, platforms and all the rest. You just talk, and Assistant makes it happen. That's the vision, anyway.

When AI chooses bots for you

Google is planning to open up Assistant to developers via a platform called Actions on Google.

Developers won't build "apps" or even "bots," according to Google's lingo, but "Actions." These "Actions" can be either Direct Actions or Conversation Actions.

Direct Actions are simple query-response events. So an airline might build Direct Actions so that a question about when a flight lands returns the estimated landing time.

Conversation Actions are harder to build but easier to use. They involve back-and-forth. An airline's Conversation Action might let you ask: "Which weekend in October has the cheapest price for flying to Vegas?" In response, the Action might first gather more information, asking, for example: "Do you mind a stop-over?" With Conversation Actions, the "bot" not only gives answers and does things for you (such as booking those tickets) but also asks you questions to get more information.
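Google hadn't yet published developer documentation for these at press time, so the following is a hypothetical Python sketch (all names and data are invented, and this is not the Actions on Google SDK) of the structural difference: a Direct Action is a one-shot handler, while a Conversation Action keeps state and can ask follow-up questions:

```python
# Hypothetical sketch -- not the real Actions on Google SDK.

def direct_action_flight_status(query):
    """Direct Action: one query in, one answer out, no state."""
    # In reality this would call the airline's flight-status API.
    return "Flight UA 123 is estimated to land at 4:42 p.m."

class CheapWeekendAction:
    """Conversation Action: multi-turn, asks for missing information."""

    def __init__(self):
        self.slots = {"destination": None, "month": None, "stopover_ok": None}

    def respond(self, utterance):
        # Crude slot filling; a real agent would use NLU, not substring checks.
        text = utterance.lower()
        if "vegas" in text:
            self.slots["destination"] = "Las Vegas"
        if "october" in text:
            self.slots["month"] = "October"
        if text in ("yes", "no"):
            self.slots["stopover_ok"] = (text == "yes")
        if self.slots["stopover_ok"] is None:
            return "Do you mind a stop-over?"  # ask for the missing slot
        # Placeholder answer; a real Action would query a fare-search API.
        return "The weekend of October 14-16 is cheapest for Las Vegas."

action = CheapWeekendAction()
print(action.respond("Which weekend in October is cheapest for Vegas?"))
# -> Do you mind a stop-over?
print(action.respond("no"))
# -> The weekend of October 14-16 is cheapest for Las Vegas.
```

The key difference is state: the Direct Action is a pure function, while the Conversation Action tracks unfilled slots and keeps asking until it can answer.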

Google last month bought a two-year-old Silicon Valley startup called API.ai, whose technology will be offered to developers for building Assistant Conversation Actions. (Awkwardly, API.ai has its own app called "Assistant," which reportedly has some 20 million users.)

A smattering of companies have already jumped on board to offer third-party additions to Assistant. These include news sources such as CNN, CNBC, The Huffington Post, ABC News Radio, CBS Sports, CBS Radio News and others; music services such as TuneIn, Pandora, iHeartRadio and Spotify; food-related brands like Food Network, Vivino and OpenTable; and home automation companies like SmartThings. Intriguingly, the crowdsourced information services Quora and Jelly have also signed up.

Today, Amazon's Alexa is far ahead of other virtual assistants in the integration of third-party apps or bots -- or "skills," as Amazon calls them. Alexa now has more than 3,000 "skills," according to Amazon.

While "skills" are great, Amazon's system doesn't scale to mass adoption. "Skills" have to be discovered and installed using the Alexa phone app, and users must memorise a specific phrase to call up each one. This works well for a tiny number of skills. But once that number exceeds a dozen or two, most people will likely forget which "skills" they've installed, as well as the words that launch them.

Because Assistant uses AI, it should understand the result you want, then often choose -- or at least offer -- third-party integrations for you. Google hasn't announced this feature, and the company was unwilling to comment on it when I reached out to them. But I think the success of Assistant will depend on it.

AI-selected third-party bots enable you to use thousands of bots, not just a few. You won't have to memorise anything. Just talk to Assistant like you would a person, and Assistant's AI should figure it out and make decisions about how to respond.
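What might that bot selection look like? Here's a deliberately crude sketch that reduces the "AI" to keyword overlap -- a real system would use learned intent models, and all the bot names here are invented:

```python
# Hypothetical sketch of AI-selected bot routing, reduced to keyword scoring.
REGISTERED_BOTS = {
    "fare_finder": {"flight", "fly", "airfare", "vegas"},
    "table_booker": {"restaurant", "table", "dinner", "reservation"},
    "tune_player": {"play", "song", "music", "album"},
}

def route(utterance):
    """Pick the registered bot whose keywords best match the utterance."""
    words = set(utterance.lower().split())
    scores = {bot: len(words & kws) for bot, kws in REGISTERED_BOTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None  # None -> fall back to search

print(route("play some music"))  # -> tune_player
```

The point is that the user never names the bot; the assistant infers which integration fits the request, so thousands of bots can coexist without anyone memorising invocation phrases.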

I think it's clear that, despite coming from behind, Google's Assistant will gain far more and better support than Amazon's Alexa, in part because Google's reach across hardware, software and online services is far wider. Assistant will be everywhere.

Why Home is so important

AI virtual assistant apps and bots are free and easy. But when consumers buy a virtual assistant appliance like the Amazon Echo or Google Home, they're essentially expressing their brand preference, and that preference is likely to stick.

That's why lots of companies will ship such devices.

Samsung this week announced that it would acquire a startup called Viv Labs, which was founded by three members of the original Siri team after Apple bought Siri in 2010.

Viv is a Siri-like virtual assistant, but with multifaceted agency, meaning that it can combine services. You should be able to say things like "I'd like to go to a good Italian restaurant." In response to that simple, natural language phrase, Viv could take note of your location, use a service that ranks restaurants, make a reservation at the nearest good one and call you an Uber. In other words, like Google's plans for Assistant, Viv can use AI to choose the third-party integration that makes sense.

Viv is the closest thing we have to Google's Assistant. Samsung unveiled an Amazon Echo-like virtual assistant appliance in April called Otto. I think Samsung will ship Otto, and it should be powered by Viv -- as should Samsung phones, TVs and other products.

It's a near certainty that Microsoft will eventually ship a Cortana virtual assistant appliance.

Facebook is a prime candidate to create a "social" virtual assistant appliance, one that might work with Messenger and Facebook's M virtual assistant bot.

And Apple at some point will probably ship a virtual assistant appliance of some kind that runs Siri.

Why Google Assistant should win

With all this future competition, it's clear that Google is in the best position.

The effectiveness and success of an AI virtual assistant depends on three factors. First, the quality of the AI. The smarter the software -- and the better able it is to understand you no matter how you talk -- the more useful and usable it will be. Two years ago, Google acquired an AI startup called DeepMind Technologies, whose AI has beaten humans at Atari Breakout and at Go.

The second factor is personal data. By leveraging Gmail, Search, Contacts, Calendar, YouTube and other services, Google can easily have the best and most personal user data to apply to personal-assistant relevance.

The third factor is ubiquity. Consumers won't want to switch between virtual assistants. They'll want the one that's always present. Google is uniquely positioned to be everywhere, from phones to watches to TVs and even Apple's iOS.

Google Assistant is already the most widely distributed. Alexa isn't available as a text-chat bot in a messaging app. Facebook's M is only available in Messenger. Siri isn't available on Android phones.

Finally, there's a level of habit and trust with Google. To look something up on the Internet is to "Google" it. Assistant will change that to "OK Google" it.

Talking to AI is the last user interface. The race is on. So far, it looks like Google is going to win.