Think of a meeting full of iPhones and someone says, "Text my wife I'm leaving her for another woman," and the whole room sends the text.

As I mentioned, I don't think it'll happen until Siri can recognize individual voices. In fact, "personalization" like this is going to add to the accuracy of the recognition heuristics, which in turn will strengthen the Apple lock-in.

I don't have any technical data to back this up, but I will guess that the major impediment to "always on" is more one of power consumption than algorithms.

To me, always-on would be more of a desktop or television set-top-box (Apple TV) thing, where power is less of an issue (e.g. a movement sensor that can wake the machine up to pay more attention). Using something like Kinect, it could tell if you're facing it, and perhaps read lips (even through a hatch window in the pod bay).

She should just know, and remind me in a timely fashion. Similarly, if I then turn around and start telling Sally about the great lunch that Frank and I just had, Siri should chime in with a helpful, "Bob, sir."

And if I'm reminiscing with my wife about that torrid night in Las Vegas, Siri should chime in with a helpful, "Err, that was your mistress, sir."

Some local processing. It shouldn't require a network connection to process "Yes/Send/OK" vs. "No/Cancel."

More back-end processing power. The present outages are embarrassing and should have been preventable. Apple knew exactly how many iPhones it had manufactured, and should certainly have anticipated selling out.

These!

For phoning my Contacts, a 4S with Siri is often far less efficient than an iPhone 4 with Voice Control, due to flaky local 3G. Surely Apple doesn't need to know every time I call my wife (or do they ... )

Quote:

Think of a meeting full of iPhones and someone says, "Text my wife I'm leaving her for another woman," and the whole room sends the text.

As I mentioned, I don't think it'll happen until Siri can recognize individual voices. In fact, "personalization" like this is going to add to the accuracy of the recognition heuristics, which in turn will strengthen the Apple lock-in.

I don't have any technical data to back this up, but I will guess that the major impediment to "always on" is more one of power consumption than algorithms.

Recognizing individual voices (speaker identification) is an easier problem than recognizing what's being said (speech recognition).

I don't think Apple will do arbitrary 3rd-party Siri integration. I think it will be more of a background-processing style: Apple provides certain voice services (e.g. turn-by-turn navigation, "Siri, take me to 42nd and Lex") that the app, if installed, can respond to.

I also think on-board processing is a must. Not sure how feasible this is, because the databases it needs to store are probably tremendous, but I could imagine a feature where it records your voice all day long and communicates with the server once a day to download a newer, improved version of itself (with the feedback limited to aspects related to your voice), based on the server listening to your day's or week's worth of voice snippets.
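The once-a-day loop described above can be sketched roughly like this (a hypothetical Python sketch; the class names, the fake server, and the versioning scheme are all invented for illustration):

```python
# Hypothetical sketch of the nightly sync: queue voice snippets locally,
# upload them in one batch, and pull down a per-user model update.
# One round trip per day instead of one per utterance.

class DailySync:
    def __init__(self):
        self.snippets = []        # audio recorded during the day
        self.model_version = 0    # version of the on-board recognizer

    def record(self, snippet: bytes) -> None:
        self.snippets.append(snippet)

    def nightly_sync(self, server) -> None:
        server.upload(self.snippets)          # server learns *your* voice
        self.model_version = server.latest_model_version()
        self.snippets.clear()                 # nothing kept after upload

class FakeServer:
    """Stand-in for the back end; retrains whenever new audio arrives."""
    def __init__(self):
        self.received = []
        self.version = 1
    def upload(self, snippets):
        self.received.extend(snippets)
        self.version += 1                     # "retrain" on the new audio
    def latest_model_version(self):
        return self.version

sync, server = DailySync(), FakeServer()
sync.record(b"hello siri")
sync.record(b"call bob")
sync.nightly_sync(server)
print(sync.model_version, len(sync.snippets))  # 2 0
```

The point of the design is the batching: the phone only ships audio off-device once a day, and the only thing it gets back is a voice model tailored to its owner.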

If Apple truly believes in Siri, I wouldn't be surprised to see a dedicated button until always-on listening is a reality. But always-on listening is coming; it's just a matter of time (since on-board processing is a prerequisite for it, and battery life concerns are also an issue).

Quote:

I don't think Apple will do arbitrary 3rd-party Siri integration. I think it will be more of a background-processing style: Apple provides certain voice services (e.g. turn-by-turn navigation, "Siri, take me to 42nd and Lex") that the app, if installed, can respond to.

I don't think it'll be arbitrary; I think it will be a limited set of carefully curated apps that are permitted to integrate. But Apple does acknowledge that it can't do everything by itself. Though their mapping purchases lead me to agree that they're likely to do turn-by-turn on their own. I just hope they use locally-stored maps.

This, along with possibly a faster Siri, will require a lot of on-board storage, which is expensive. Is there a cheaper fast-read but slow-write storage technology that Apple could use, separate from its regular storage, to hold all this data without increasing costs too much? Since both large DBs probably wouldn't need to be updated much, the slow writes would not be an issue.

Quote:

This, along with possibly a faster Siri, will require a lot of on-board storage, which is expensive. Is there a cheaper fast-read but slow-write storage technology that Apple could use, separate from its regular storage, to hold all this data without increasing costs too much? Since both large DBs probably wouldn't need to be updated much, the slow writes would not be an issue.

TomTom USA uses only 1.3 GB, and the USA has a _lot_ of roads. So I don't think storage is much of an issue. I've got a 64 GB iPhone, and I have lots of movies that take up more than 1.3 GB.

I doubt Apple would go through the effort of having two different kinds of storage on the device, as it would greatly increase the complexity for little benefit.

Quote:

I don't think Apple will do arbitrary 3rd-party Siri integration. I think it will be more of a background-processing style: Apple provides certain voice services (e.g. turn-by-turn navigation, "Siri, take me to 42nd and Lex") that the app, if installed, can respond to.

Quote:

I don't think it'll be arbitrary; I think it will be a limited set of carefully curated apps that are permitted to integrate. But Apple does acknowledge that it can't do everything by itself.

That's what I was originally thinking. They can't just have an API and let it be a free-for-all, because it's a centralized interface. If I ask for the showtimes of Avatar 2: Electric Boogaloo, there's going to be more than one app that wants to answer. So I thought Apple would not release an API, but would instead partner with Flixster to provide showtimes, ESPN to provide sports scores, etc.

But actually, this sounds like a much better idea, and one that I hope to see in Siri 1.0:

serversurfer wrote:

For the API, I think Apple will need to define types of queries. For example, Restaurant Query, Movie Query, etc. Developers will then need to register the query type(s) they can handle. For example, Yelp can handle Restaurant Queries, while Flixster can handle Movie Queries, and perhaps Loopt can handle both. Then the user will go into the settings for Siri and choose which app handles which type of result.

And this is a functionality that hadn't even occurred to me:

Quote:

At the same time, they should also provide a voice command library to devs, so they can leverage the tech to use voice control within the apps themselves.

Within an app, there are no conflicts, so Apple should provide some way to let apps execute their own commands while the app is running, using Siri's voice recognition and natural-language processing. Good idea. I don't expect to see this in 1.0, but I can totally see it down the road.
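A minimal sketch of what such a per-app command library might look like (hypothetical Python; the class, decorator, and phrases are all invented for illustration, and a real system would use Siri's NLP rather than exact-phrase matching):

```python
# Hypothetical sketch of a per-app voice-command library: while an app is
# in the foreground, it registers its own phrases with no risk of
# conflicting with any other app's commands.

class VoiceCommands:
    def __init__(self):
        self.handlers = {}

    def command(self, phrase: str):
        """Register a handler for an exact phrase (real NLP would be fuzzier)."""
        def wrap(fn):
            self.handlers[phrase.lower()] = fn
            return fn
        return wrap

    def dispatch(self, utterance: str) -> str:
        fn = self.handlers.get(utterance.lower().strip())
        return fn() if fn else "unrecognized"

app = VoiceCommands()

@app.command("next track")
def next_track():
    return "skipping to next track"

@app.command("pause")
def pause():
    return "paused"

print(app.dispatch("Next track"))  # skipping to next track
print(app.dispatch("Pause"))       # paused
```

Because the command table lives inside one app, there is no "who answers this?" ambiguity; the hard parts Apple would supply are the recognition and language understanding underneath.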