Posts tagged with "siri"

This is a good video by Marques Brownlee on where things stand today between Siri (iOS 10) and the Google Assistant (running Android Nougat on a Google Pixel XL). Three takeaways: Google Assistant is more chatty than old Google Voice Search; Google still seems to have an edge over Siri when it comes to follow-up questions based on topic inference (which Siri also does, but not as well); and Siri holds up well on most types of questions asked by Brownlee.

In my daily experience, however, Siri still falls short on basic tasks too often (two examples) and deals with questions inconsistently. There is also, I believe, a perception problem with Siri in that Apple fixes obvious Siri shortcomings too slowly or simply isn't prepared for new types of questions – such as asking how the last presidential debate went. In addition, being able to text with Google Assistant in Allo for iOS has reinforced a longstanding wish of mine – the ability to converse silently with a digital assistant. I hope Siri gets some kind of textual mode or iMessage integration in iOS 11.

One note on Brownlee's video: the reason Siri isn't as conversational as Google Assistant is due to the way Brownlee activates Siri. When invoked with the Home button (or by tapping the microphone icon), Siri assumes the user is looking at the screen and provides fewer audio cues, prioritizing visual feedback instead. If Brownlee had opened Siri using "Hey Siri" hands-free activation, Siri would have likely been just as conversational as Google. I prefer Apple's approach here – if I'm holding a phone, it means I can look at the UI, and there's no need to speak detailed results aloud.

Apple’s high-level goal here should be to include responses that increase your faith in Siri’s ability to parse and respond to your question, even when that isn’t immediately possible. Google Search accomplishes this by explaining what they’re showing you, and asking you questions like “_Did you mean ‘when is the debate’?_” when they think you’ve made an error. Beyond increasing your trust in Siri, including questions like this in the responses would also generate a torrent of incredible data to help Apple tune the responses that Siri gives.

Apple has a bias towards failing silently when errors occur, which can be effective when the error rate is low. With Siri, however, this error rate is still quite high and the approach is far less appropriate. When Siri fails, there’s no path to success short of restarting and trying again (the brute force approach).

The comparison between conversational assistants and iOS' original user interface feels particularly apt. It'd be helpful to know what else to try when Siri doesn't understand a question.

Walt Mossberg, writing for The Verge, shares some frustrations with using Siri across multiple Apple devices:

In recent weeks, on multiple Apple devices, Siri has been unable to tell me the names of the major party candidates for president and vice president of the United States. Or when they were debating. Or when the Emmy awards show was due to be on. Or the date of the World Series. When I asked it "What is the weather on Crete?" it gave me the weather for Crete, Illinois, a small village which — while I’m sure it’s great — isn’t what most people mean when they ask for the weather _on_ Crete, the famous Greek island.

Google Now, on the same Apple devices, using the same voice input, answered every one of these questions clearly and correctly. And that isn’t even Google’s latest digital helper, the new Google Assistant.

Like Mossberg, I think Siri has gotten pretty good at transcribing my commands (despite my accent), but it still fails often when it comes to doing stuff with the transcribed text. Every example mentioned by Mossberg sounds more or less familiar to me (including the egregious presidential debate one).

Five years on, Siri in iOS 10 is much better than its first version, but it still has to improve in key areas such as consistency of results, timeliness of web-based queries (e.g. Grammys, presidential debates, news stories, etc.), and inferred queries (case in point). Despite the improvements and the launch of a developer platform, these aspects are so fundamental to a virtual assistant that even the occasional stumble makes Siri, as Mossberg writes, seem dumb.

Last week, I mentioned how Airmail – my favorite email client for iPhone and iPad – would soon receive Siri integration on iOS 10. Today, Airmail 1.3 has hit the App Store with a variety of iOS 10 features in addition to SiriKit, including support for rich notifications and iMessage.

Steven Levy has a fascinating inside look at Apple’s artificial intelligence and machine learning efforts on Backchannel. Levy spent most of a day with Eddy Cue, Phil Schiller, Craig Federighi, Tom Gruber, and Alex Acero in a wide-ranging discussion of the products impacted by those efforts. Perhaps the most interesting parts of the interviews revolved around what Levy refers to as the Apple Brain inside the iPhone:

How big is this brain, the dynamic cache that enables machine learning on the iPhone? Somewhat to my surprise when I asked Apple, it provided the information: about 200 megabytes, depending on how much personal information is stored (it’s always deleting older data). This includes information about app usage, interactions with other people, neural net processing, a speech modeler, and “natural language event modeling.” It also has data used for the neural nets that power object recognition, face recognition, and scene classification.

And, according to Apple, it’s all done so your preferences, predilections, and peregrinations are private.

Levy also covers the replacement of Siri’s smarts with a neural-net system on July 30, 2014. According to Eddy Cue, the impact was immediate:

This was one of those things where the jump was so significant that you do the test again to make sure that somebody didn’t drop a decimal place.

Many people have commented that Siri has improved over time, but without the context beyond one’s own experience or metrics from Apple, the perceived change has been largely anecdotal. According to Acero, however:

The error rate has been cut by a factor of two in all the languages, more than a factor of two in many cases.… That’s mostly due to deep learning and the way we have optimized it — not just the algorithm itself but in the context of the whole end-to-end product.

Levy also delves into whether Apple’s stance on privacy hobbles its ability to effectively implement AI and machine learning. According to Apple, it does not. The most personal information remains on-device in the ‘Apple Brain.’ Other data that is transmitted to Apple is protected with techniques like differential privacy (coming in iOS 10), which obfuscates a user's identity.
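For readers unfamiliar with the idea, the classic building block behind differential privacy is randomized response: each user's report is perturbed with noise, so any individual answer is deniable, yet the aggregate statistic can still be recovered. To be clear, Apple has not published its implementation; this is a generic, minimal sketch of the concept, and the function names are my own.

```python
import random

def randomized_response(truth: bool, p_truth: float = 0.75) -> bool:
    """Report the true answer with probability p_truth;
    otherwise report a fair coin flip. No single report
    reveals the user's true answer with certainty."""
    if random.random() < p_truth:
        return truth
    return random.random() < 0.5

def estimate_true_rate(reports, p_truth: float = 0.75) -> float:
    """Invert the noise across many users:
    E[reported] = p_truth * true_rate + (1 - p_truth) * 0.5"""
    observed = sum(reports) / len(reports)
    return (observed - (1 - p_truth) * 0.5) / p_truth
```

With enough users, the server can estimate, say, what fraction of people use a feature, without being able to say whether any particular person does.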

The entire article is worth a read to get a sense of the breadth and depth of Apple’s AI and machine learning efforts and the impact on its products. It’s also fascinating to see Apple continue to open up on its own terms as a way to rebut recent criticisms leveled against it.

Useful site by Sandro Roth (via Six Colors) to browse every command supported by Siri in Apple's apps. I almost wish iOS had a similar interface to explore commands. I wonder if we'll start seeing more sites like this pop up after iOS 10 and SiriKit.

Imagine if, on a weekly basis, you saw or heard "Xinghua" being compared to Siri. But "Xinghua" was available only in China and only to people who spoke Mandarin. How meaningful would those comparisons really be to you in the U.S.? That's about as meaningful as headlines comparing Amazon's virtual assistant, Alexa, to Apple's Siri are to the vast majority of the world's population.

Right now Alexa is solving only for people in America who speak English. That's an incredibly small subset of what Siri, which just recently added Hebrew and several other languages in several other regions, solves for.

With all due respect to Rene, I think this is a disingenuous way of defending Siri from the comparisons to the Amazon Echo's Alexa.

It is, of course, a fair complaint that the Amazon Echo is not available in countries outside the United States, and that it can only understand US English.1 But I do not think it is legitimate to imply that the Echo's geographic and linguistic limitations somehow undermine the advances that the Echo offers in other areas, such as its integrations with services, which have earned it praise from all corners of the industry in recent months.

A large part of the praise for the Amazon Echo stems from the fact that in 18 months it has gone from a product that didn't exist to one that many in the US find incredibly useful. Also significant is that in those 18 months it has evolved rapidly, adding great new features that make it even more useful. That is why people compare it to Siri, which launched in 2011 and has undoubtedly improved, but at a much slower pace and in less substantial ways (multilingual support aside).

I'm an Australian and I don't think this Siri vs Alexa debate is "laughably US-centric", I think it's important, even if I can't personally use Alexa. Just last week, Google announced that it will be releasing a very similar product later this year, and credited Amazon for their pioneering work with the Echo. I am certain Apple has taken similar notice of Amazon's (seemingly successful) efforts with the Echo, and if Apple acts on those observations, then everyone with access to Siri will benefit.

So I'm not laughing, I'm grateful, if a little envious that my friends in the US are (yet again) getting a taste of the future before me. But I know it'll reach me soon enough, whether it's via Apple, Google, Amazon, or even Microsoft.

I regularly make these kinds of observations/complaints about various products and services. Two years ago I even spent days researching and putting together this extensive examination of just how far ahead Apple was in terms of the availability of media content in countries around the world, so I understand this frustration very well. ↩︎

It is not a secret that Siri has not kept up with the pace that just about all of us expected, including some of the Siri team. The passion that Steve had seemed to have waned deep inside of Apple, and as a result Dag Kittlaus and Adam Cheyer moved on and formed Five Six Labs (a play on V VI in Roman numerals) and Viv.

Tom Gruber, one of the original team members and the chief scientist who created the Siri technology, stayed on and continued his work. During most of 2016 and 2017 we will begin to see the results of this work. I call it Siri2, and am very certain Apple will call it something else.

Roemmele has been following all this for a long time, and he adds:

If Apple utilizes just a small subset of the technology developed by VocalIQ, we will see a far more advanced Siri. However, I am quite certain the amazing work of Tom Gruber will also be utilized. Additionally, the amazing technology from Emotient, Perceptio, and a number of unannounced and future Apple acquisitions will also become a big part of Apple’s AI future.

The Verge reports today that Siri has been upgraded with a load of baseball facts, just in time for Opening Day:

Siri now has some more baseball smarts: it can answer questions about more detailed statistics, according to Apple, including historical stats going back to the beginning of baseball records. You can also get information on career statistics, and there's now specific information for leagues other than the Majors — there are 28 other leagues, including the Minors, that are covered now.

I tested out a number of questions with Siri and, like Dante D’Orazio of The Verge, found that certain questions, like “Who hit the most home runs ever in baseball?”, tended to return either Google search results or, in the case of that home run question, results for the 2016 season rather than all time.

In case you were wondering, right now Troy Tulowitzki and Corey Dickerson are tied for the lead with one home run each.