An always-on Siri: MindMeld listens to you talk, finds what you need

"Anticipatory computing" will do all your work without you even asking.

Tim Tuttle intends to give Siri and Google Now a run for their money with MindMeld, an iPad app slated to launch in the App Store this fall, with an iPhone app soon to follow. While both existing services have quickly become essential features on new smartphones, MindMeld will take the idea of a digital personal assistant a step further by making it almost prescient.

Though it will launch in the App Store under the guise of a video conferencing tool, a capability it does provide natively, MindMeld is actually an information-driven application that listens in on your conversation and attempts to understand what's being said. Once it figures out what you're talking about, MindMeld will try to create a model of the conversation's context, and from that it will attempt to locate and display relevant information from many different sources. "We're listening to the last ten minutes to predict what you need in the next ten seconds," Tuttle told Ars. "We're trying to make it so you never have to explicitly search for something you've already talked about."

MindMeld's iPad app will be able to simultaneously video conference and deliver real-time search results.

Tuttle and his team began working on MindMeld with the intention of creating technology which would be considered essential for meetings, phone calls, and any other type of collaborative spoken working environment. Unlike Apple's Siri and Android's Google Now, you don't actually have to address the MindMeld agent to get any results. Instead, it is constantly listening during a call so that it can hear everything and make sure it delivers the right information based on what you're talking about. It's like having a friendly robot eavesdropping on your conversation and looking things up for you.

Before it presents anything, the MindMeld friendly robot—more properly known as the "anticipatory computing engine"—will extract information from search engines, news articles, videos, the user's social networking profiles, and even locally stored documents. It will then attempt to correlate all of that data and rank it by its importance to the conversation. For instance, if someone mentions that Becky is coming to the Bay Area and she’d like to check out wine country, MindMeld will recognize those keywords and begin to display links to wine tours and a map of the Napa Valley in real time. The idea is for the app to present you what you're about to look for before you even start looking for it, hence "anticipatory computing."

MindMeld’s engine has been designed to do three things: it can decipher a multi-party conversation and pick out vital keywords from concurrent streams of dialogue in real time; it can do continuous, predictive modeling, which essentially means it listens to the conversation to understand what has already been said and what might be talked about next; and it can perform proactive information discovery based on what it's hearing so that it’s constantly finding and retrieving things for you.
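The three capabilities above can be loosely sketched as a rolling-window pipeline: buffer the most recent utterances, pull out salient keywords, and turn them into a proactive query. This is only an illustrative toy under invented names, not MindMeld's actual implementation:

```python
from collections import Counter, deque

# Hypothetical sketch of an "anticipatory" pipeline. Every class and
# method name here is invented for illustration; real keyword
# extraction and ranking would be far more sophisticated.

STOPWORDS = {"the", "a", "to", "and", "is", "she", "that", "like", "out"}

class AnticipatoryEngine:
    def __init__(self, window_size=10):
        # Rolling window of the last N utterances
        # ("listening to the last ten minutes").
        self.window = deque(maxlen=window_size)

    def hear(self, utterance):
        """Ingest one transcribed utterance from the conversation."""
        self.window.append(utterance.lower())

    def keywords(self, top_n=3):
        # Naive salience: the most frequent non-stopwords in the window.
        counts = Counter(
            word.strip(".,!?")
            for line in self.window
            for word in line.split()
            if word.strip(".,!?") not in STOPWORDS and len(word) > 3
        )
        return [word for word, _ in counts.most_common(top_n)]

    def anticipate(self):
        # A real system would fan this query out to search engines,
        # social profiles, and local documents, then rank the results.
        return " ".join(self.keywords())

engine = AnticipatoryEngine()
engine.hear("Becky is coming to the Bay Area")
engine.hear("She would like to check out wine country")
print(engine.anticipate())
```

In this toy version, mentioning Becky's wine-country trip would surface keywords like "wine" and "country" as query terms; the continuous, predictive part of the real engine would go further and model what is likely to be said next.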

Though the idea of a robot constantly listening in may raise those "Big Brother is watching" flags, Tuttle explains that the application only does so when all parties have explicitly allowed it. MindMeld will only process a conversation if another user has the app installed on their iOS device—it will not listen in to telephone conversations or decipher the speech of someone who has not given it specific permission.

Tuttle does foresee a few issues still looming for the product's launch. Most notably, he wants to ensure the technology works flawlessly on the backend and that the team can scale it out as the user base starts to grow. "I expect we're going to be working very hard to make sure [the launch] goes smoothly," he said.

This isn’t an entirely new concept, and if MindMeld is a success, it could help shift the focus of apps like Siri and Google Now in a new direction. There’s also some speculation that this technology could be married with Google Goggles, since Google's Ventures branch has reportedly backed the company behind MindMeld with $2.4 million. Regardless, it seems to be only a matter of time until technology like this becomes mainstream. As Tuttle says, if your device understands everything that you see and hear, it can use that information to better help you.

38 Reader Comments

I could see this being helpful, but if the processing is on the backend, then they've already failed at making "meetings" more productive. Because of privacy laws, or just paranoia in general, it would be difficult for any large company, or any entity handling sensitive information responsibly, to use this.

Would be funny to see some guy that was talking dirty with a co-worker have a conversation with her in front of his wife, "Honey, why is your tablet searching for S&M equipment when you talk to your secretary?"

So this will listen to everything we say, look through our phones/tablets for documents, look through our Facebook/Twitter/LinkedIn/other social pages, and do it all through MindMeld's computers held in MindMeld's facilities. Not sure I would trust this on a work phone. Not sure it would be useful on my personal phone.

I'd be interested in having this as a desktop application running on a second display while I play games. Having it pull up maps, stats, prices, whatever. I don't know how well encouraging me to talk to myself MORE would work out though.


Skynet! Run for your lives! Ahhhh! All kidding aside, this actually sounds pretty cool. I hope it doesn't get shot down because of privacy laws, because that would be a shame. Goodness knows we need something that understands you possibly better than you can yourself, setting reminders so you don't have to.

While I find this fascinating, the privacy implications for private use and the security ones for enterprise use make it a non-starter, even if you are more imaginative than I am and find a use for it in the first place...

Well, at least it doesn't sound as contrived and stupid as Siri...

On the other hand, people seem to love Siri and social networking sites seem to be a thing, so I'm sure it will be a roaring success...

Reaction 1: "Does it come with a plutonium RTG to power your phone?"
Reaction 2: "The backend is going to use what, the entire Andromeda galaxy to compute this?"
Reaction 3: "Oh hell no."
Reaction 4: "Get off my goddamned lawn!"
Reaction 5: "Shit, this is inevitable, time to retire to my cabin in the woods."

All kidding aside, it's a really, really neat concept, and I could see how it would be helpful in meetings and that type of thing. I'd love to have my computer find a document or article I read based on it knowing my history and me saying something like, "I know I read an article about such and such." But I just don't know about it being smart enough to draw clues from speech, especially if it's heavily jargoned speech.

The concept is really interesting. I've been attempting to convert my Android to a geniusphone -- essentially a digital replacement for a personal assistant (contextual awareness being the most useful project). I've made pretty good strides via Tasker, but for query situations, you still have to input data yourself, since my Android can only understand passive information (location, calendar events, etc.). Voice-based need prediction is a very evolved step from this.

Obviously, there are some major technological and sociological hurdles, battery and privacy being the two biggest. Siri already records and saves your discrete inquiries. I personally won't trust this until all the computing can be done on the phone itself. Plus, the 23-minute battery life is totally going to rock.

As neat an idea as this is, though, I'm tremendously skeptical. For all of the information the internet has supposedly collected about me, they sure as hell are terrible at selecting relevant advertisements, even before do-not-track. Facebook's ads are just laughable. "Oh, you like high-end wines? Here, let me post something about Sauza tequila!" And heaven forbid you play video games, because there is no other career choice ever in the world than to become a video game tester. Even Amazon, which is supposedly the king of all metric tracking, just can't quite figure out what I need. Even if you get past the technological hurdle of always-on, always-analyzed information, I don't know that we're going to have decent programming algorithms to really predict what you need. If discrete inputs often don't yield the result I need, passive inputs are going to be that much worse.

Very ambitious. It seems like integration would need to be extremely tight for something like this to work well. I wonder how this will work across devices and the possible account variations a given individual might have for the various services it attempts to pull from.

Cool! This is how I've always envisioned using a device that follows me around and automatically documents certain aspects of my life and helps out by reminding me of little things. For example, "remember that you wanted to do x, y, and z the next time you were driving near this area of town," or "remember you wanted to ask so-and-so about this topic the next time you called them." It's a long way from that, and obviously there are great privacy concerns, but it's progress.

Oh man, I HATE making decisions! The more software that makes decisions for me, the better! If I give MindMeld permission to use the credit card data it dug up, is it capable of making the purchasing decisions for my household too?

You surely meant to say software with the capability to listen to conversations. I didn't check it thoroughly, but I'm pretty sure it does server-side processing. And customer data protection in the U.S. seems to be more a guideline than a law: companies are expected to self-regulate, and government access usually seems to be free and broad. Now I'm sure everyone wants this thing on their devices.

So, this company did some market research and found that some people are okay with letting a machine record, transmit to a third party, store, and analyze every word that comes out of their lips in exchange for the convenience of not having to type their Google searches themselves.

WHAT. THE. F**K. Do those people also need the assistance of a machine to wipe their own ass and lace their shoes?

We're not talking about an assistive technology for disabled people here; we're talking about a technology for people who find that intentionally asking a machine a question is too cumbersome.

Florence Ion is a former Reviews Editor at Ars, with a focus on Android, gadgets, and essential gear. She received a degree in journalism from San Francisco State University and lives in the Bay Area.