“Siri is one of the most popular features of iPhone 4S,” Cook said. “But there’s more that it can do, and we have a lot of people working on this. And I think you will be really pleased with some of the things you’re going to see over the coming months. We have some cool ideas about what Siri can do. We have a lot going on on this. … Sure, it can be broader, and so forth, but we see unbelievable potential here. We’re doubling down on it.”

His reference to “the coming months” is interesting, given that, on June 11, he’ll be giving the keynote address at Apple’s Worldwide Developer Conference. Most observers expect iOS 6 to be unveiled there, and the rumor mill has Siri APIs on the agenda. They would allow developers to use Siri to interact with apps, which could be a game changer for Apple.

Imagine, for example, asking Siri to work with the OpenTable app to make a 7 p.m. reservation at your favorite restaurant. Or asking it to add to your Contacts list the person you just chatted with on Skype. Or asking it to send a friend a link to the Web page you’re currently reading.
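None of this exists today, but as a purely hypothetical sketch of what a third-party Siri API might look like, consider something along these lines in Swift. Every type and method name here is invented for illustration; Apple has announced nothing of the sort.

```swift
import Foundation

// Hypothetical: a protocol an app might adopt to register itself as a
// handler for a category of spoken requests. Nothing here is a real API.
protocol SiriIntentHandler {
    // The spoken phrases this app wants to handle.
    var supportedPhrases: [String] { get }

    // Called when Siri matches a user's request to this app;
    // `parameters` holds the slots Siri extracted from the utterance.
    func handle(intent: String, parameters: [String: String]) -> String
}

// A sketch of how a reservations app might plug in.
struct ReservationHandler: SiriIntentHandler {
    var supportedPhrases: [String] {
        ["make a reservation", "book a table"]
    }

    func handle(intent: String, parameters: [String: String]) -> String {
        let time = parameters["time"] ?? "7 p.m."
        let place = parameters["restaurant"] ?? "your favorite restaurant"
        // A real integration would call the app's own booking backend
        // here; this sketch just confirms the request.
        return "Booked a table at \(place) for \(time)."
    }
}
```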

You get the idea. As Cook said, Siri could do more. A lot more. And it would be pretty darned cool.

But before Siri can live up to its potential, it has to work better at a more fundamental level, and there’s no sign that Siri on the iPhone 4S is getting better at what it does. If Apple is learning how to improve the feature’s accuracy and reliability, the company doesn’t appear to be using that knowledge to improve the product now.

That’s in contrast to Google, which pushes improvements to its products rapidly. The Chrome browser you’re using today is much better than the one you used just a few months ago. But Siri today is pretty much the same unreliable product Apple shipped on the iPhone 4S back in October.

Improving Siri on the fly should not be hard to do, because most of the heavy lifting happens on Apple’s cloud servers rather than on the phone. Granted, part of the problem appears to be that those servers are overwhelmed (which may be why Siri isn’t available on the new iPad), but still, the code that executes on them could be improved without waiting for an iOS update.

If Apple is “doubling down” on Siri, it should let current iPhone 4S users know it by making the feature better right now, not “over the coming months.”

I think what I would prefer is “not seen yet” rather than “not happening.” Yeah, semantics… but I do understand your point. I had thought that Siri was a compelling reason to upgrade, but after all the bad press I decided to hold off. Now I’ve held off long enough that I’ll likely go Android instead when I replace the 3GS.

If Siri had lived up to the hype and its potential, I might have upgraded sooner, prior to there being any reasonable competition, and stayed the iCourse longer.

I agree, Dwight: Siri needs to work better, at least at the fundamental level. Google’s voice recognition on my phone does a better job on accuracy. To this day I still hardly ever use Siri (because it’s useless), and the cool factor has faded into the sunset.

I have to agree with Dwight on this one. Each time Siri is used, the audio is sent to Apple’s servers, and the result is sent back to the phone. With the amount of data that’s been sent to their servers, I would expect them to be able to analyze it and find where to make improvements. With all the “I’m sorry, <your name here>, I don’t know what you mean by <request>. Would you like me to search the web?” responses I get, I would think Apple would say, “Oh, they mean this,” and create the connection for Siri. While I understand it’s not just that easy, I would imagine that when they created and implemented Siri, they set up a department or a person to knock this out.
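To make that point concrete, here is a minimal sketch in Swift, with invented sample data, of the kind of server-side analysis the commenter is describing: tally the utterances Siri failed on and surface the most frequent ones as candidates for a hand-built mapping.

```swift
// A minimal sketch of that analysis: count failed utterances and
// surface the most frequent ones as candidates for a hand-built
// mapping. The sample data is invented.
let failedUtterances = [
    "ring home", "ring home", "text the wife",
    "ring home", "text the wife", "ring home",
]

// Tally how often each failed phrase occurs.
var counts: [String: Int] = [:]
for utterance in failedUtterances {
    counts[utterance, default: 0] += 1
}

// Sort by frequency so the most common failures rise to the top.
for (phrase, count) in counts.sorted(by: { $0.value > $1.value }) {
    print("\(count)x \"\(phrase)\" -> candidate for a hand-built mapping")
}
// An engineer could then map "ring home" onto the existing
// "call home" command.
```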

For people like me who don’t type, Siri does a fine job translating speech to text, but you’d best not have the accent, vocabulary, or sentence structure of an Appalachian backwoods moonshiner.

I created a special Contact for Siri to use when it can’t figure things out (Settings > General > Siri > My Info – select from Contacts), so when it prompts me further it says, “I’m sorry, I don’t know what you mean, Pinhead.” Makes the failures amusing.

“What is the price of Apple stock?” “Apple is down today. The current price is $568.75.”

“Play ‘Ain’t No Sunshine.’” “‘Ain’t No Sunshine’ now playing.”

There is a bit of a learning curve with Siri. For instance, I once tried demonstrating Siri to my dentist, and commands I used frequently weren’t working. I finally figured out that his office has a background-music system that was confusing Siri. There are also great differences among male and female voices, the speed and pronunciation of speech, accents, and so on. There is a lot for Siri to process and sort out. You do need to speak slowly and clearly, carefully pronouncing each word.

You also need to learn a bit about Siri’s capabilities. For example, you can ask her, “Please give me directions to New Orleans,” and she’ll promptly produce a map with directions. However, you can’t ask her, “Please give me directions from Houston to New Orleans.” That won’t work; she’s programmed to provide directions from your current location. If that happens to be Houston, you’ll get the directions you want, but you have to ask the question correctly. She’ll let you know about that, which is how you learn.

@ lil ol me – Stop it please with the nonsense. I gave examples of the commands I give Siri almost every day. I’m sure your experience with Siri is zero to the tenth power. In other words, you have no experience using Siri, and therefore you aren’t qualified to comment.

I am a long-time user of voice recognition. The biggest fundamental issue with Siri is its lack of individualized training.

The human mind is a wondrous thing. It can hear a human voice and sort out what was said and how it was said, even though every single human uses slightly different accents, inflections, dialects, and even different pacing and rhythm. And when we can’t hear every last detail, our brain has the imagination to fill in the missing bits, giving us what we think we hear… and, amazingly, our brain is right more often than it is wrong.

Computer processors and algorithms depend on getting coherent and consistent input. Complete input is mandatory, much like getting the syntax exactly right when programming or when entering the old DOS/CP/M commands. Computers have no imagination. Most apps like Siri don’t even handle context well enough to figure out whether a word is two, to, or too.
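As a toy illustration of what “handling context” would mean here, a recognizer can score each homophone against the surrounding words. This Swift sketch, with a hand-made bigram table standing in for a real language model, picks among “two,” “to,” and “too” based on the preceding word.

```swift
// Toy homophone disambiguation: pick "two", "to", or "too" by
// scoring each candidate against the word that precedes it. The
// bigram counts are invented stand-ins for a real language model.
let bigramCounts: [String: [String: Int]] = [
    "went":   ["to": 900, "too": 5,   "two": 1],
    "bought": ["two": 400, "to": 30,  "too": 2],
    "me":     ["too": 300, "to": 250, "two": 10],
]

func disambiguate(after previousWord: String) -> String {
    let scores = bigramCounts[previousWord] ?? [:]
    // Choose the candidate the context makes most likely;
    // fall back to "to", the most common form overall.
    return scores.max(by: { $0.value < $1.value })?.key ?? "to"
}

print(disambiguate(after: "went"))   // to
print(disambiguate(after: "bought")) // two
```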

Everybody uses language and voice differently. Apps like Siri just aren’t up to the task, yet, of sorting out the millions of minor variations each of us uses in our speaking voices.

Eventually, given enough processing power and bandwidth, apps like Siri may begin to work as advertised, but until then they are but a glimpse of the future and a flaky sideshow.

BTW, I use Dragon NatSpeak. I started with ViaVoice (discrete-word recognition) on OS/2. I dictate and write with it almost every day, and through use and software training I have gotten its accuracy up to 98%. And yet it still stumbles on the littlest things sometimes. Like two, to, and too. lol

The problem with Siri is that you can give it a simple request (“call home”) and get utterly different results from one attempt to the next. When I used BlackBerry’s voice-dialing feature, it didn’t always work the first time, but it never utterly failed to even parse the request. I’d say a simple request to Siri to call home meets with stony silence at least 25% of the time, and another 10% of the time it results in a colossal mis-parse.

If you can’t even do the basics reliably, you shouldn’t be looking to widen the net and utterly fail to do more advanced things.

I make moderate use of Siri, and it works well for me. I set the timer, have her read or send text messages, and get information from the net (atomic weights and numbers, for instance).

As to why Siri has not shown improvement: I think it has improved somewhat, in that the recognition part seems better than it was a few months ago. Beyond that, Apple tends to make major releases rather than incremental ones, so I expect to see big changes and improvements in iOS 6.