There are many counterintuitive ‘rules’ in product design; these two are among the most intractable:

• The more successful a product, the harder it is to upgrade.

• The more users say they want a product update, the more they complain when the change arrives.

It wouldn’t be unkind to apply both to the iOS platform: spectacularly successful and at the crossroads of the mother of all upgrades for both hardware and software, now commandeered for the first time by a single person who’s not named Steve Jobs. The financial impact of these design decisions is easily the 64-billion-dollar question at Cupertino.

What has changed?

Having already sold over 120 million iPads in less than two years, Apple’s now making the sales pitch to hundreds of millions of potential post-PC consumers that iPads may be ‘OR’ devices, not just ‘AND’ adjuncts to their desktops and notebooks of yesteryear.

The iPhone in 2007 and the iPad in 2010 created their respective industry segments, then went on to dominate what was mostly virgin territory with a simple proposition: One Device > One Account > One App > One Window.

Several years after their introduction, and with many competitors now in the field, Apple is under pressure to examine every link in that chain of platform definition. And the one most contested is the last: One Window. While it’s true that iOS apps can contain two (and sometimes even more) ‘views’ in one screen, like the standard Master-Detail views, two different apps cannot share the same window. A blog writing app on an iPad can, for example, dedicate portions of its single window to video, map, search engine results or tweet displays, but not specifically to Vimeo, Google Maps, Bing or Twitter apps. In the sandboxed territories of iOS, ‘One Device > One Account > One App > One Window’ is still the law of the land.

As iPads move into business, education, healthcare and other vertical markets, however, expectations of what iPads should do beyond audio, video, ebook and simple app consumption have gone up dramatically. After all, users don’t just inertly read in one app at a time but write, code, design, compose, calculate, paint, clip, tweet, and, in general, perform multiple operations in multiple apps to complete a single task in one app.

In iOS, this involves double-clicking the Home button, swiping in the tray to find the other app, waiting for it to (re)load fully, locating the app view necessary to copy, double-clicking the Home button, finding the previous app in the tray and waiting for it to (re)load fully to paste the previously copied material. That’s just one operation between two apps. Composing a patient review for a doctor or creating a presentation for a student can easily involve many such operations among multiple apps.

Indeed, among the major post-iOS mobile platforms like Android, Metro and BlackBerry, iOS is the most cumbersome and slowest at inter-app navigation and task completion. There have been a few mitigating advances: gestural swipes, faster processors and more memory certainly help, but the inter-app task sharing problem is becoming increasingly acute. Unfortunately, solving iOS’s multitasking problem in general involves many other considerations, including the introduction of UX complexity and thus considerable user re-education, to say nothing of major architectural OS changes. It may thus take Apple longer than expected to find an optimal solution. What can Apple do in the interim then?

Is ‘Multi’ the opposite of ‘One’?

Systems designers know all too well: when you just don’t have the time, money, staff or technology to solve a given problem, there are ways to cheat. Steve Jobs would be the first to tell you: that’s OK. A well-executed cheat can be indistinguishable from a fundamental architectural transition.

From a design perspective, the weakest link in the one-task-many-operations-in-different-apps problem is the iOS clipboard. The single-slot clipboard. The one that forces the user to shuffle laboriously among apps to collect all the disparate items one. at. a. time.

But with a multi-slot clipboard, if you were writing a report, for example, you could go to a web page and copy the URL, a paragraph, maybe a photo and a person’s email address in one trip. Now a single trip back to the initial app and you have four items ready to be pasted into appropriate places, with no more inter-app shuffle necessary. Simply put, a four-slot clipboard could instantly quadruple your productivity; a ten-slot clipboard, 10X!

Well, obviously, it’s not that easy. First of all, Apple doesn’t believe in multi-slot clipboards and doesn’t even ship one with Mac OS X. Also, you couldn’t really have an ‘infinite-slot’ clipboard, for iOS would run out of memory quickly. Finally, a multi-slot clipboard would require a visible UI for the user to select the right content, thereby introducing some cognitive complexity.

None of these objections seem insurmountable, though. iOS already has similarly useful ‘option selectors’, like the recent ‘share sheets’ from which a user can send stuff to Twitter, Facebook, email, etc. Limiting the clipboard to four slots would enable at least 250-pixel-square previews of each slot’s contents for easy identification. The clipboard could pop, move up, slide in from the right or perform some other clever animated appearance. Yes, there could be a cognitive penalty for having to be concerned about system-memory management, but a bit of user training for the concept of ‘First In, First Out’ or a little alert to the user indicating memory-intensive copying would go a long way.
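
The ‘First In, First Out’ behavior of a four-slot clipboard can be sketched as a tiny data structure. This is purely an illustration of the concept, not Apple’s API; the class and method names are invented:

```python
from collections import deque

class MultiSlotClipboard:
    """A fixed-capacity clipboard: once full, the oldest item is
    silently evicted -- First In, First Out."""

    def __init__(self, slots=4):
        # deque with maxlen discards the oldest entry automatically
        self.items = deque(maxlen=slots)

    def copy(self, content):
        self.items.append(content)

    def paste(self, slot=0):
        """Slot 0 is the most recent copy, slot 1 the one before it, etc."""
        return self.items[-(slot + 1)]

clipboard = MultiSlotClipboard(slots=4)
for item in ["URL", "paragraph", "photo", "email address", "caption"]:
    clipboard.copy(item)

print(clipboard.paste(0))     # "caption" -- the newest item
print(list(clipboard.items))  # "URL" was evicted when the fifth copy arrived
```

The eviction is exactly the alert-worthy moment the paragraph describes: the user copies a fifth item and the oldest one quietly disappears.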

It’s not my job to suggest to Jony Ive how this might be implemented in UI and UX. But until Apple has a more general solution to multitasking and inter-app navigation, the four-slot clipboard with a visible UI should be announced at WWDC. I believe it would buy Ive another year for a more comprehensive architectural solution, as he’ll likely need it.

It’s hard to like Apple. To the dismay of conventional thinkers everywhere, the fruit company sambas to its own tune: makes the wrong products, at the wrong prices, for the wrong markets, at the wrong time. And, infuriatingly, wins.

Some of Apple’s ill-advised moves are well known. When other PC companies were shuttering their retail stores, Apple opened dozens in the most expensive locales. During the post-dotcom crash, instead of layoffs, Apple spent millions to hire and boost R&D. To the “Show us a $500 netbook, now!” amen corner Apple gave the un-netbook iPad, not at $999 but $499. The App Store and iTunes are still not open. Google hasn’t been given the keys to iOS devices yet…Clearly, this is a company that hasn’t learned the market-share-über-alles lesson from the Wintel era and is repeating the same mistakes, again. Like these:

• Media company — The slick design of Apple gadgets wouldn’t be nearly enough if it weren’t for the fact that Apple has quietly become the world’s biggest digital content purveyor. The availability of a vast library of media, coupled with the ease of purchase and the lock-in effect these purchases create, could easily tempt a lesser outfit to fashionably declare itself a “media company”. After all, Macromedia tried that with its AtomFilms purchase in 2000. Real and Yahoo dabbled in various forms of media creation, acquisition and distribution. Microsoft fancied itself a part-media company with investments in publishing (Slate) and cable (MSNBC, Comcast). Amazon has several imprints of its own. Netflix is now an episodic TV producer. Google is investing hundreds of millions in original material for YouTube. Apple, on the other hand, has always resisted creating and owning content, because…

• Indies — … Apple plays for the fat middle part of the bell curve. Once a bit player in computers and consumer electronics, Apple’s now a giant. Whether it’s music, TV shows, movies or ebooks, Apple targets the mainstream, and the mainstream demands the availability of mainstream content from top labels, studios and publishers. It’s very tempting to urge Apple to sign deals right and left with independent producers in entertainment and publishing, to bypass traditional gatekeepers and ‘disrupt’ their respective industries, on the cheap. Unfortunately, beyond modest promotional efforts with indies, it doesn’t look like Apple’s likely to upset the mainstream cart from which it makes so much money.

• Multitasking — “One device. One account. One app. One window. One task.” seems to be Apple’s current approach to Post-PC computing. If iPads are going to cannibalize PCs in the workplace or schools, iOS workflow patterns will have to evolve. Bringing multiple user accounts to the same device, showing two windows from two different apps in the same view with interaction between the two or letting all/most apps work in the background would necessitate quite a bit of user re-education in the iOS camp. It’s not clear for how long Apple can afford not to provide such functionalities.

• PDF replacement — Apple’s tumultuous love affair with PDF goes back nearly 25 years to Display PostScript during its NeXT prequel. PDF may now be “native” to Mac OS X and the closest thing to a standard exchange format for visual fidelity, but it’s become slow, fat, cumbersome and not well integrated with HTML, the lingua franca of the web. While PDF is too entrenched for the print world, ePub 3.0 seems to be emerging as an alternative standard for interactive media publishing. Apple does support it, even with Apple-created extensions, but composing and publishing polished ePub material is still a maddeningly complex, hit-and-miss affair. iBooks Author is a great start, but its most promising output is iTunes-only. If Apple has big ideas in this space, it’s not obvious from its available tools or investments.

• HTML 5 tools — While iBooks Author makes composing app-like interactive content possible without having to use Xcode, Apple has no comparable semi-professional-caliber tool for creating web sites/apps for the browser. Apple has resisted offering anything like a HyperCard-level tool for HTML that sits in between the immense but disjointed JavaScript/CSS open ecosystem and the powerful but hard-to-master Xcode. It has killed iWeb and still keeps iAd Producer mostly out of sight. Clearly, Apple doesn’t want more apps but more unique apps to showcase the App Store. HTML isn’t much of a differentiator there and until the ROI in HTML 5 vs. native apps becomes clearer to Apple, such tools are unlikely to arrive anytime soon.

• Discovery tools — Yes, Apple has Genius, but that’s a black box. Genius is simple and operates in the background silently. It doesn’t have a visual interface like Spotify, Aweditorium, Music Hunter, Pocket Hipster, Groovebug or Discovr Music, allowing users to actively move around a musical topology visually, aided by various social network inputs. With its Ping attempt and Twitter and Facebook tie-ups, Apple has shown it’s at least interested in the social angle, but a more dedicated, visual and fun discovery tool is still absent not just for music but also for TV, movies, books and apps.

• Map layers — Over the last few years Apple has acquired several map-related companies, one of which, PlaceBase, was known for creating “layers” of data-driven visualizations over maps. Even before its messy divorce from Google, Apple had chosen not to offer any such map enhancements. When properly designed, maps are great base-level tools over which lots of different kinds of information can be interactively delivered, especially on touch-driven mobile devices where Siri also resides.

• iOS device attachments — One of the factors that made iPods and iPhones so popular has been the multi-billion dollar ecosystem of peripherals that wrap around or plug into them. However, besides batteries and audio equipment, there’s been a decided dearth of peripherals that connect to the 30-pin port to do useful things in medicine, education, automation, etc. Apple’s attention and investment in this area have been lackluster. Perhaps the new iPad mini coupled with the tiny Lightning Connector will rekindle interest by Apple and third parties in various domains.

• Wearables — Google Glass is slated for production in a year or so, while Apple’s known assets in wearable computing devices amount to a few patents. There’s much debate as to how this field will shape up. Apple may choose to augment iPhones with simpler and cheaper devices like smart watches that work in tandem with the phone, instead of stand-alone but expensive devices like Google Glass. So far ‘wearables’ doesn’t even register as a hobby in Apple’s interestgram.

• Stylus — Apple has successfully educated half a billion users in the art of multitouch navigation and general use of mobile devices. That war, waged against one-year-old babies and 90-year-old grandmas, has been decisively won. However, until Apple invents a more precise method, taking impromptu notes, sketching diagrams and annotating documents with a (pressure-sensitive) stylus remains a superior alternative to the finger. Some may consider the notion of a stylus (even one dedicated only to the specialized tasks cited above) a niche not worthy of Apple’s interest. And yet not too long ago 5-7 inch mobile devices were also considered niches.

• Games — Apple’s on course to become the biggest gaming platform. This without any dedicated game controllers or motion-sensing input devices like the Xbox 360 Kinect, and despite half-hearted attempts like Game Center. Apple has been making steady progress on the CPU/GPU front on iOS devices and now the new Apple TV is also getting an A5X-class chip, capable of handling many console-level games. It remains unclear, however, if Apple has the desire or the dedicated resources to leapfrog not just Sony and Nintendo but also Microsoft in the games arena, with a strategy other than steady, slow erosion of the incumbents’ base.

• iOS Pro devices — Apple has so far seen no reason to bifurcate its iOS product line along entry/pro models, like MacBooks/MacBook Pros. iOS devices sell in the tens of millions every quarter into many complex markets in over 100 countries. Further complicating its SKU portfolio with more models is not the Apple way. Even more so than with iPhones, an iPad with a “Pro” designation and specs to match has so far not been forthcoming. And yet several hundred million of these devices are now sold to business and education, where better security, provisioning, app distribution, mail handling, multitasking, hardware robustness, cloud connectivity, etc., will continue to be requested as check-mark items.

• Money — Apple hasn’t done much with money, other than accumulating about $140 billion in cash and marketable securities for its current balance sheet. It hasn’t yet made any device with NFC, operated as a bank, issued AppleMoney like Amazon Coins or Facebook Credits, offered a branded credit card or established a transactional platform (ignoring the ineptly introduced Passbook app). It has a tantalizing patent application for a virtual money transfer service (like electronic hawala) whereby iOS users can securely send and receive cash anywhere, even from strangers. With close to half a billion credit card accounts, the largest such base in the world, Apple has the best captive demographics for some sort of transactional sub-universe, but it’s anybody’s guess what it may actually end up doing with it or when.

Half empty or more to fill?

It would be easy and fun to spend another hour to triple this list of Things-Apple-Has-Not-Yet-Done. While not all of these would be easy to implement, none of them would be beyond Apple’s ability to execute. Most card-carrying AAPL doomsayers, however, would look at such a list and conclude: See, Apple’s fallen behind, Apple’s doomed!

There’s, of course, another way of interpreting the same list. Apple could spend a good part of the next decade bundling a handful of these Yet-To-Be-Done items annually into an exciting new iOS device/service to sell into its nearly half billion user base and beyond. Apple suffers from no saturation of market opportunities.

Apple will inevitably tackle most of these, but only in its own time and not when it’s yelled at. It’ll likely introduce products and services not on this or any other list that will end up rejiggering an industry or two. Apple will do so because it knows it won’t win by conventional means or obvious schedules…which makes it hard — for those who are easily distracted — to like Apple.

Earlier in “Is Siri really Apple’s future?” I outlined Siri’s strategic promise as a transition from procedural search to task completion and transactions. This time, I’ll explore that future in the context of two emerging trends:

Internet of Things is about objects as simple as RFID chips slapped on shipping containers and as vital as artificial organs sending and receiving signals to operate properly inside our bodies. It’s about the connectivity of computing objects without direct human intervention.

The best interface is no interface is about objects and tools that we interact with that no longer require elaborate or even minimal user interfaces to get things done. Like self-opening doors, it’s about giving form to objects so that their user interface is hidden in their user experience.

Apple’s strength has always been the hardware and software it creates that we love to carry, touch, interact with and talk about lovingly — above their mere utility — like jewelry, as Jony Ive calls it. So, at first, it seems these two trends — objects talking to each other and objects without discernible UIs — constitute a potential danger for Apple, which thrives on design of human touch and attention. What happens to Apple’s design advantage in an age of objects performing simple discreet tasks or “intuiting” and brokering our next command among themselves without the need for our touch or gaze? Indeed, what happens to UI design, in general, in an ocean of “interface-less” objects inter-networked ubiquitously?

Looks good, sounds better

Fortunately, though a star in her own right, Siri isn’t wedded to the screen. Even though she speaks in many tongues, Siri doesn’t need to speak (or listen, for that matter) to go about her business, either. Yes, Siri uses interface props like fancy cards, torn printouts, maps and a personable voice, but what makes Siri different is neither visuals nor voice.

Despite the knee-jerk reaction to Siri as “voice recognition for search,” Siri isn’t really about voice. In fact, I’d venture to guess Siri initially didn’t even have a voice. Siri’s more significant promise is about correlation, decisioning, task completion and transaction. The fact that Siri has a sassy “voice” (unlike her competitors) is just endearing “attitude”.

Those who are enthusiastic about Siri see her eventually infiltrating many gadgets around us. Often seen liaising with celebrities on TV, Siri is thought to be a shoo-in for the Apple TV interface Oscars, maybe even licensed to other TV manufacturers, for example. And yet the question remains, is Siri too high maintenance? When the most expensive BOM item in an iPhone 5 is the touchscreen at $44, nearly 1/4 costlier than the next item, can Siri afford to live outside of an iPhone without her audio-visual appeal?

Well, she already has. Siri Eyes Free integration is coming to nine automakers early this year, allowing drivers to interact with Siri without having to use the connected iPhone screen.

Given Siri Eyes Free, it’s not that difficult to imagine Siri Touch Free (see and talk but not touch), Siri Talk Free (see and touch but not talk) and so on. People who are impatient with Apple’s often lethargic roll out plans have already imagined Siri in all sorts of places, from aircraft cockpits to smart wristwatches to its rightful place next to an Apple TV.

Over the last decade, enterprises have spent billions to get their “business intelligence” infrastructure to answer analysts’ questions against massive databases, shrinking turnaround from months to weeks to days to hours and even minutes. Now imagine an analyst querying that data by having a “natural” conversation with Siri, orchestrating some future Hadoop setup, continuously relaying nested, iterative questions funneled towards an answer, in real time. Imagine a doctor or a lawyer querying case histories by “conversing” with Siri. Forget voice, imagine Siri’s semantic layer responding to 3D gestures or touches on glass or any sensitized surface. Set aside active participation of a “user” and imagine a monitor with Siri reading microexpressions of a sleeping or crying baby and automatically vocalizing appropriate responses or simply rocking the cradle faster.

Scenarios abound, but can Siri really afford to go fully “embedded”?

There is some precedent. Apple has already created relatively successful devices by eliminating major UI affordances, perhaps best exemplified by the iPod nano ($149) that can become an iPod shuffle ($49) by losing its multitouch screen, made possible by the software magic of Genius, multi-lingual VoiceOver, shuffle, etc. In fact, the iPod shuffle wouldn’t need any buttons whatsoever, save for on/off, if Siri were embedded in it. Any audio functionality it currently has, and much more, could be controlled bi-directionally with ease, in all instances where Siri were functional and socially acceptable. A 3G radio plus embedded Siri could also turn that tiny gadget into so many people’s dream of a sub-$100 iPhone.

Grounding Siri

Unfortunately, embedding Siri in devices that look like they may be great targets for Siri functionality isn’t without issues:

Offline — Although Siri requires a certain minimum horsepower to do its magic, much of that is spent ingesting and prepping audio to be transmitted to Apple’s servers, which do the heavy lifting. Bringing that processing down to an embedded device that doesn’t require a constant connection to Apple may be computationally feasible. However, Apple’s ability to advance Siri’s voice input decoding accuracy and pattern recognition depends on constantly sampling and adjusting to input from tens of millions of Siri users. This would rule out Siri embedded into offline devices and create significant storage and syncing problems with seldom-connected devices.

Sensors — One of the key reasons why Siri is such a good fit for smartphones is the number of on-device sensors and the virtually unlimited range of apps it’s surrounded with. Siri is capable of “knowing” not only that you’re walking, but that you’ve been walking wobbly, for 35 minutes, late at night, in a dark alley, around a dangerous part of a city, alone… and of silently sending a pre-designated alert on your behalf. While we haven’t seen examples of such deep integration from Apple yet, Siri embedded into devices that lack multiple sensors and apps would be severely limited in its potential utility.

Data — Siri’s utility is directly indexed to her access to data sources and, at this stage, third parties’ search (Yelp), computation (WolframAlpha) and transaction (OpenTable) facilities. Apple does and is expected to continue to add such partners in different domains on a regular basis. Siri embedded in radio-lacking devices that don’t have access to such data and processing, therefore, may be too crippled to be of interest.

Fragmentation — People expect to see Siri pop up in all sorts of places and Apple has taken the first step with Siri Eyes Free where Siri gives up her screen to capture the automotive industry. If Siri can drive in a car, does that also mean she can fly on an airplane, sail on a boat or ride on a train? Can she control a TV? Fit inside a wristwatch? Or a refrigerator? While Siri — being software — can technically inhabit anything with a CPU in it, the radio in a device is far more important to Siri than its CPU, for without connecting to Apple (and third party) servers, her utility is severely diminished.

Branding — Siri Eyes Free won’t light up the iPhone screen or respond to commands that would require displaying a webpage as an answer. What look like reasonable restrictions on Siri’s capabilities in this context shouldn’t, however, necessarily signal that Apple would create “subsets” of Siri for different domains. More people will use and become accustomed to Siri’s capabilities in iPhones than any other context. Degrading that familiarity significantly just to capture smaller markets wouldn’t be in Apple’s playbook. Instead of trying to embed Siri in everything in sight and thus diluting its brand equity, Apple would likely pair Siri with potential NFC or Bluetooth interfaces to devices in proximity.

What’s Act II for Siri?

In Siri’s debut, Apple has harvested the lowest hanging fruit and teamed up with just a handful of already available data services like Yelp and WolframAlpha, but has not really taken full advantage of on-device data, sensor input or other novel information.

As seen from outside, Siri’s progress at Apple has been slow, especially compared to Google, which has had to play catch-up. But Google must recognize a strategically indispensable weapon in Google Now (a Siri-for-Android, for all practical purposes) as a hook for those Android device manufacturers that would prefer to bypass Google’s ecosystem. None of them can do anything like it for some time to come, Samsung’s subpar attempts aside.

If you thought Maps was hard, injecting relationship metadata into Siri — fact by fact, domain by domain — is likely an order of magnitude more laborious, so Apple has its work cut out for it with Siri. It’d be prudent not to expect Apple to rush into embedding Siri in its non-signature devices just yet.

Siri is a promise. A promise of a new computing environment, enormously empowering to the ordinary user, a new paradigm in our evolving relationship with machines. Siri could change Apple’s fortunes like iTunes and App Store…or end up being like the useful-but-inessential FaceTime or the essential-but-difficult Maps or the desirable-but-dead Ping. After spending hundreds of millions on acquiring and improving it, what does Apple expect to gain from Siri, at once the butt of late-night TV jokes but also the wonder of teary-eyed TV commercials?

Everyone expects different things from Siri. Some think the top five wishes for Siri should include the ability to change iPhone settings. The impatient already think Siri should have become the omniscient Knowledge Navigator by now. And of course, the favorite pastime of Siri commentators is comparing her query output to Google Search results while giggling.

Siri isn’t a sexy librarian

The Google comparison, while expected and fun, is misplaced. It’d be very hard for Siri (or Bing or Facebook, for that matter) to beat Google at conventional Command Line Interface search, given its intense and admirable algorithmic tuning and enormous infrastructure buildup over a decade. Fortunately for competitors, though, Google Search has an Achilles heel: you have to tell Google your intent and essentially instruct the CLI to construct and carry out the search. If you wanted to find a vegetarian restaurant in Quincy, Massachusetts within a price range of $25-$85 and you were a Google Search ninja, you could manually enter a very specific keyword sequence: “restaurant vegetarian quincy ma $25…$85” and still get “about 147,000 results (0.44 seconds)” to parse from. [All examples hereon are grossly simplified.]

This is a directed navigation system around The Universal Set — the entirety of the Internet. The user has to essentially tell Google his intent one. word. at. a. time and the search engine progressively filters the universal set with each keyword from billions of “pages” to a much smaller set of documents that are left for the user to select the final answer from.
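
That progressive filtering can be sketched as repeated set intersection over a toy inverted index. The documents and keywords below are invented for illustration:

```python
# Toy inverted index: keyword -> set of document ids containing it.
index = {
    "restaurant": {1, 2, 3, 4, 5, 6},
    "vegetarian": {2, 3, 5, 7},
    "quincy":     {3, 5, 8},
}

# Start from "the universal set" -- every document known to the engine.
results = set.union(*index.values())

# Each keyword the user types narrows the candidate set further.
for keyword in ["restaurant", "vegetarian", "quincy"]:
    results &= index[keyword]

print(sorted(results))  # [3, 5] -- a smaller set, still left for the user to pick from
```

Each `&=` step is one keyword’s worth of filtering; the user supplies the intelligence, the engine supplies the intersection.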

Passive intelligence

Our computing devices, however, are far more “self-aware” circa 2012. A mobile device, for instance, is considerably more capable of passive intelligence thanks to its GPS, cameras, microphone, radios, gyroscope, myriad other in-device sensors, and dozens of dedicated apps, from finance to games, that know about the user enough to dramatically reduce the number of unknowns…if only all these input and sensing data could somehow be integrated.

Siri’s opportunity here to win the hearts and minds of users is to change the rules of the game: from relatively rigid, linear and largely decontextualized CLI search towards a much more humane approach where the user declares his intent but doesn’t have to tell Siri how to do it every step of the way. The user starts a spoken conversation with Siri, and Siri puts an impressive array of services together in the background, starting with precise location, time and task awareness derived from the (mobile) device:

“Remind me when I get to the office to make reservations at a restaurant for mom’s birthday and email me the best way to get to her house.”

Siri already knows enough to integrate Contacts, Calendar, GPS, geo-fencing, Maps, traffic, Mail, Yelp and OpenTable apps and services to complete the overall task. A CLI search engine like Google’s could complete only some of these, and only with a lot of keyword and coordination help from the user. Now let’s change “a restaurant” above to “a nice Asian restaurant”:

“Remind me when I get to the office to make reservations at a nice Asian restaurant for mom’s birthday and email me the best way to get to her house.”

“Asian” is easy, as any restaurant-related service would make at least a rough attempt to classify eateries by cuisine. But what about “nice”? What does “nice” mean in this context?

A conventional search engine like Google’s would execute a fairly straightforward search for the presence of “nice” in the text of restaurant reviews available to it (that’s why Google bought Zagat), and perhaps go the extra step of doing a “nice AND (romantic OR birthday OR celebration)” compound search to throw in potentially related words. Since search terms can’t be hand-tuned for an infinite number of domains, such curation comes into play only for highly searched categories like finance, travel, electronics, automobiles, etc. In other words, if you’re searching for airline tickets or hotel rooms, the universe of relevant terms is finite, small and well understood. Goat shearing or olive-seed spitting contests, on the other hand, may not benefit as much from such careful human taxonomic curation.
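
The compound-search step amounts to looking a fuzzy term up in a hand-curated synonym table, which exists only for heavily searched domains. A sketch, with an invented expansion table:

```python
# Hand-curated expansions, maintained only for high-traffic domains
# (restaurants here); the mappings are illustrative, not Google's.
EXPANSIONS = {
    "nice": ["romantic", "birthday", "celebration"],
}

def expand(term):
    """Turn a fuzzy term into a compound boolean query string,
    or pass it through untouched if no curation exists for it."""
    related = EXPANSIONS.get(term)
    if related is None:
        return term  # goat shearing gets no curated help
    return f"{term} AND ({' OR '.join(related)})"

print(expand("nice"))           # nice AND (romantic OR birthday OR celebration)
print(expand("goat shearing"))  # falls through unexpanded
```

The fall-through branch is the point of the paragraph: taxonomic curation scales with search volume, not with the number of domains.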

Context is everything

And yet even when a conventional search engine can correlate “nice” with “romantic” or “cozy” to better filter Asian restaurants, it won’t matter to you if you cannot afford it. Google doesn’t have access to your current bank account, budget or spending habits. So for the restaurant recommendation to be truly useful, it would make sense for it to start at least in a range you could afford, say $$-$$$, but not $$$$ and up.

Therein comes the web browser vs. apps unholy war. A conventional search engine like Google has to maintain an unpalatable level of click-stream snooping to track your financial transactions to build your purchasing profile. That’s not easy (and likely illegal on several continents), especially if you’re not constantly using Google Play or Google Wallet, for example. While your credit card history or your bank account is opaque to Google, your Amex or Chase app has all that info. If you allow Siri to securely link to such apps on your iPhone, because this is a highly selective request and you trust Siri/Apple, your app and/or Siri can actually interpret what “nice” is within your budget: up to $85 this month and certainly not in the $150-$250 range, and not a $25 hole-in-the-wall Chinese restaurant either, because it’s your mother’s birthday.

Speaking of your mother, her entry in your Contacts app has a custom field next to “Birthday” called “Food” which lists: “Asian,” “Steak,” and “Rishi Organic White Tea”. On the other hand, Google has no idea, but your Yelp app has 37 restaurants bookmarked by you and every single one is vegetarian. Your mother may not care, but you need a vegetarian restaurant. Siri can do a proper mapping of the two sets of “likes” and find a mutually agreeable choice at their intersection.

So a simple search went from “a restaurant” to “a nice Asian vegetarian restaurant I can afford” because Siri already knew (as in, she can find out on demand) about your cuisine preference and your mother’s, and your ability to pay.

Mind you, this entire series of data lookups and rule arbitrations among multiple apps happens in milliseconds. Quite a bit of your personal info is cached at Apple servers, and the vast majority of data lookups in third-party apps are highly structured and available in a format Siri has learned (by commercial agreement between companies) to directly consume. Still, the degree of coordination underneath Siri’s reassuring voice is utterly nontrivial. And given the clever “personality” Siri comes with, it sounds like pure magic to ordinary users.

Consider a compound request like:

Check the weather at, and daily traffic conditions to, an event at a specific location, only if my calendar and my wife’s shared calendar are open and tickets are available for under $50 for tomorrow evening.

Siri would parse it semantically and translate it into an execution chain carried out by apps and services.

Further, being an integral part of iOS and having programmatic access to third party applications on demand, Siri is fully capable of executing a fictional request like:

Transfer money to purchase two tickets, move receipt to Passbook, alert in own calendar, email wife, and update shared calendar, then text baby sitter to book her, and remind me later.

by translating it into a transactional chain, with bundled and third-party apps and services acting upon verbs and nouns.
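Such a transactional chain can be pictured as an ordered list of app actions. A toy sketch, with invented app names and verbs rather than Apple’s actual plumbing:

```python
# Hypothetical decomposition of the fictional request above into an
# ordered chain of (app, verb, object) actions. All names are invented.

chain = [
    ("Bank app",  "transfer_funds", "ticket vendor"),
    ("Ticketing", "purchase",       "2 tickets"),
    ("Passbook",  "store",          "receipt"),
    ("Calendar",  "add_event",      "own calendar"),
    ("Mail",      "send",           "wife"),
    ("Calendar",  "add_event",      "shared calendar"),
    ("Messages",  "text",           "baby sitter"),
    ("Reminders", "schedule",       "remind me later"),
]

def execute(chain):
    # Each step must succeed before the next runs; a real system would
    # have to unwind earlier steps on failure (transaction semantics).
    log = []
    for app, verb, obj in chain:
        log.append(f"{app}: {verb}({obj!r})")
    return log

for line in execute(chain):
    print(line)
```

The interesting design problem is not the happy path but the unwinding: if the babysitter declines, does the calendar event get revoked and the money refunded?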

By parsing a “natural language” request into structural subject-predicate-object parts, Siri can not only find documents and facts (like Google) but also execute stated or implied actions with granted authority. The ability to form deep semantic lookups, integrate information from multiple sources, devices and third-party apps, perform rules arbitration and execute transactions on behalf of the user elevates Siri from a schoolmarmish librarian (à la Google Search) into an indispensable butler, with privileges.

The future is Siri and Google knows it

After indexing 40 billion pages and their PageRank, legacy search has largely run its course. That’s why you see Google, for example, buying the world’s largest airline search company ITA, restaurant rating service Zagat, and cloning Yelp/Foursquare with Google Places, Amazon with Google Shopping, iTunes and App Store with Google Play, Groupon with Google Offers, Hotels.com with Google Hotel Finder…and, ultimately, Siri with Google Now. Google has to accumulate domain specific data, knowledge and expertise to better disambiguate users’ intent in search. Terms, phrases, names, lemmas, derivations, synonyms, conventions, places, concepts, user reviews and comments…all within a given domain help enormously to resolve issues of context, scope and intent.

Whether surfaced in Search results or Now, Google is indeed furiously building a semantic engine underneath many of its key services. “Normal search results” at Google are now almost an afterthought once you go past the various Google and third party (overt and covert) promoted services. Google has been giving Siri-like answers directly instead of providing interminable links. If you searched for “Yankees” in the middle of the MLB playoffs, you got real-time scores by inning, first and foremost, not the history of the club, the new stadium, etc.

Siri, a high-maintenance lady?

Google has spent enormous amounts of money on an army of PhDs, algorithm design, servers, data centers and constant refinements to create a global search platform. The ROI on search in terms of advertising revenue has been unparalleled in internet history. Apple’s investment in Siri has a much shorter history and a far smaller visible footprint. While it’d be suicidal for Apple to attack Google Search in the realm of finding things, can Apple sustainably grow Siri to its fruition nevertheless? Few projects at Apple survive unless they manage at least to pay for their own upkeep. Given Apple’s tenuous relationship with direct advertising, is there another business model for Siri?

By 2014, Apple will likely have about 500 million users with access to Siri. If Apple could get half of that user base to generate just a dozen Siri-originated transactions per month (say, worth on average $1 each, with a 30% cut), that would be roughly a $1 billion business. Optimistically, the average transaction could be much more than $1, or the number of Siri transactions much higher than 12/month/user, or Siri usage more than 50% of iOS users, especially if Siri were to open to third-party apps. While these assumptions are obviously imaginary, even under the most conservative conditions, transactional revenue could be considerable. Let’s recall that, even within its media-only coverage, iTunes has now become an $8 billion business.
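One way to read that back-of-the-envelope arithmetic (a sketch of the stated assumptions, nothing more):

```python
# Back-of-the-envelope math for the hypothetical Siri transaction figures.
users        = 500_000_000   # iOS users with Siri access by 2014 (assumed)
active_share = 0.5           # half of them transact via Siri (assumed)
tx_per_month = 12            # a dozen transactions per month (assumed)
avg_tx_value = 1.00          # dollars per transaction (assumed)
apple_cut    = 0.30          # Apple's share (assumed)

monthly = users * active_share * tx_per_month * avg_tx_value * apple_cut
print(f"Apple's cut: ${monthly / 1e9:.1f}B per month")  # $0.9B per month
```

On these inputs the 30% cut alone lands around $0.9 billion a month; even reading the dozen transactions as per year rather than per month still leaves roughly a billion-dollar annual business.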

As Siri moves up the value chain from its original CLI-centric simplicity prior to Apple acquisition to its current status of speech recognition-dictation-search to a more conversationalist interface focused on transactional task completion, she becomes far more interesting and accessible to hundreds of millions of non-computer savvy users.

Siri as a transaction machine

A transactional Siri has the seeds to shake up the $500 billion global advertising industry. For a consumer with intent to purchase, the ideal input comes close to “pure” information, as opposed to an ephemeral ad impression or a series of search results that need to be parsed by the user. Siri, well-oiled by the very rich contextual awareness of a personal mobile device, could deliver “pure” information with unmatched relevance at the time it’s most needed. Eliminating all intermediaries, Siri could “deliver” a customer directly to a vendor, ready for a transaction Apple doesn’t have to get involved in. Siri simply matches intent and offer more accurately, voluntarily and accountably than any other method at scale that we’ve ever seen.

Another advantage of Siri transactions over display and textual advertising is the fact that what’s transacted doesn’t have to be money. It could be discounts, Passbook coupons, frequent mileage, virtual goods, leader-board rankings, check-in credits, credit card points, iTunes gifts, school course credits and so on. Further, Siri doesn’t even need an interactive screen to communicate and complete tasks. With Eyes Free, Apple’s bringing Siri to voice-controlled systems, first in cars, then perhaps to other embedded environments that don’t need a visual UI. With the largest and most lucrative app and content ecosystem on the planet, and half a billion users with as many credit card accounts, Apple could make the nature of Siri “transactions” an entirely different value proposition to both users and commercial entities.

Siri, too early, too late or merely in progress?

And yet with all that promise, Siri’s future is not a certain one. A few potential barriers stand out:

Performance — Siri works mostly in the cloud, so any latency or network disruption renders it useless. It’s hard to overcome this limitation since domain knowledge must be aggregated from millions of users and coordinated with partners’ servers in the cloud.

Context — Siri’s promise is not only lexical, but also contextual across countless domains. Eventually, Siri has to understand many languages in over 100 countries where Apple sells iOS devices and navigate the extremely tricky maze of cultural differences and local data/service providers.

Partners — Choosing data providers, especially overseas, and maintaining quality control is nontrivial. Apple should also expect bidding wars for partner data, from Google and other competitors.

Scope — As Siri becomes more prominent, so do expectations of its accuracy. Apple is carefully and slowly adding popular domains to Siri coverage, but the “Why can’t Siri answer my question in my {esoteric field}?” refrain is sure to erupt.

Operations — As Siri operations grow, Apple will have to seriously increase its staffing levels, not only for engineers from the very small semantic search and AI worlds, but also in the data acquisition, entry and correction processes, as well as business development and sales departments.

Leadership — Post-acquisition, two co-founders of Siri have left Apple, although another, Tom Gruber, remains. Apple recently hired William Stasior, CEO of Amazon’s A9 search engine, to lead Siri. Siri, however, needs as much engineering attention as data-partnership building, and Stasior’s A9 is an older search engine quite different from Siri’s semantic platform.

API — Clearly, third-party developers want and expect Apple someday to provide an API to Siri. Third-party access to Siri is both a gold mine and a minefield for Apple. Since the same or similar data can be supplied by many third parties, access arbitrage could easily become an operational, technical and even legal quagmire.

Regulation — A notably successful Siri would mean a bevy of competitors likely to petition the DoJ, FTC and FCC here, and their counterparts in Europe, to intervene and slow down Apple with bundling/access accusations until they can catch up.

Obviously, no new platform as far-reaching as Siri comes without issues and risks. It also doesn’t help that the two commercial online successes Apple has had, iTunes and App Store, were done in another era of technology and still contain vestiges of many operational shortcomings. More recent efforts such as MobileMe, Ping, Game Center, iCloud, iTunes Match, Passbook, etc., have been less than stellar. Regardless, Siri stands as a monumental opportunity both for Apple as a transactional money machine and for its users as a new paradigm of discovery and task completion more approachable than any we’ve seen to date. In the end, Siri is Apple’s game to lose.

Unimaginable to the users of that Genoese world map from 1457, today’s maps are used daily by hundreds of millions of ordinary people around the globe to accomplish what’s now regarded as pedestrian tasks, like 3D flyovers.

Indeed, in the post-PC era maps have ceased to be cartographic snapshots of the Earth’s terrain and become spatial portals to a vast array of virtual services:

Wayfinding — In the not-too-distant future, the principal feature of maps may no longer be wayfinding. Yes, we still want to go from A to B and know where B is and what’s around it. Today, however, we also want to know not just what, but who among our social network is around B, even before we get there. And we want to know not just how to get to B, but by what modalities of transportation, when, how fast, etc.

Discovery — Knowing “what’s where” is clearly no longer enough. We want to see 2D, 3D, panoramic and synthetically composited photographs taken yesterday (and 50 years ago, to compare) sprinkled around the map. We want to see strangers’ virtual scribblings and audio recordings left at particular coordinates. We want to know where our favorite brand of coffee or most scenic bike path may be located. We want to read all the news and tweets around a dropped pin. We want local facts surfaced on a map even before we ask for them.

Commerce — Today we want our maps to not only show us places, but also let us shop directly. We want to tap a restaurant on a map, get its ratings and book a reservation right then and there. We want geo-fencing to alert us automatically as the GPS tracker on our map gets near a commercial point of interest, show us a discount coupon even before we walk in and get ready for a quick transaction. We want a virtual check-in on a map converted into real-life currency discount at a retailer.

Integration — We’re no longer satisfied with “cartography as content”. Our maps are now intention harvesting funnels. When we ask Maps via Siri a very simple question like “How is the traffic?” we’d love for her to know 1) we have a trip planned to see 2) our mother in 3) two hours, and we usually take 4) a particular route and we’re really asking about the traffic conditions 5) at our mother’s, so that we can get 6) alternate routes if it’s unusually busy. Maps without deep semantic correlation (public: directions, routing, traffic, and private: calendar, contacts, navigation history, etc.) are not part of the future we want.

Entertainment — This is no longer the 14th century or the 20th, so we want to experience places without being there. We want our maps to talk to us. We want to virtually fly over not just our neighborhood, but places we may never visit. We want to tour inside malls, stores and offices without moving off our couch. We want to submerge and commune with sea turtles — all within a “map” on a tiny computer we hold in our hand.
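The Integration chain above can be made concrete with a toy sketch. All names, data and the traffic lookup are invented stand-ins for private calendar/contacts data and a public traffic feed:

```python
# Sketch (all data invented) of "deep semantic correlation": answering
# "How is the traffic?" by joining private data (calendar, contacts)
# with a public traffic service and the user's navigation history.

calendar = {"next_event": {"title": "Visit Mom",
                           "starts_in_hours": 2,
                           "contact": "Mom"}}
contacts = {"Mom": {"address": "14 Elm St, Nyack"}}
usual_route = "Palisades Pkwy"        # from navigation history

def traffic_report(route):
    # Stand-in for a live traffic service.
    return {"Palisades Pkwy": "heavy", "Route 9W": "light"}.get(route, "unknown")

event = calendar["next_event"]
destination = contacts[event["contact"]]["address"]
status = traffic_report(usual_route)

if status == "heavy":
    answer = (f"Traffic on {usual_route} to {destination} is heavy; "
              f"consider an alternate like Route 9W.")
else:
    answer = f"Traffic on {usual_route} is {status}."
print(answer)
```

The point of the sketch is the joins: the bare question “How is the traffic?” is unanswerable without the calendar event, the contact’s address and the habitual route to correlate against.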

Tinker, Tailor, Soldier, Not A Mapmaker

We could go on listing just how profoundly our expectations of what a map is have changed and will continue to expand. Apple understands this more than most companies, but Apple hasn’t been a mapmaker and knows it. Five years ago, Steve Jobs himself favored partnering with companies like Google for search and mapping backend services.

Jobs wasn’t likely thrilled to have to rely on Google or any other company for these services, but others had a significant head-start in digital maps, and Apple had its hands full reinventing the phone at the time. The intervening five years brought Apple unprecedented success with the iPhone but also a well known systems design problem: it’s very hard to change user habits and expectations once set in. Due to contractual circumstances now being breathlessly analyzed in the media, Apple finds itself having to part with an app powered by a one-time partner, now its chief rival. Regardless, users are rarely comfortable with the loss of what’s familiar, no matter how rationally justifiable the reasons might be. Enter Mapgate.

Tim Cook’s open letter on Maps explained the change:

We launched Maps initially with the first version of iOS. As time progressed, we wanted to provide our customers with even better Maps including features such as turn-by-turn directions, voice integration, Flyover and vector-based maps. In order to do this, we had to create a new version of Maps from the ground up.

In other words, Apple seems to be not so much reinventing Maps as evolving it into parity with its now more feature-rich cousin on Android, and, this time, without Google’s help — a tall order, given Google’s eight-year head start. In this catch-up game, rapidly increasing accuracy and coverage appear to be Apple’s first order of business.

Mapbusters, who you gonna call?

A company in Apple’s current predicament could have followed a number of curative paths. It could have hired the equivalent of the more than 7,000 mapping-related personnel Google is said to employ to gather, analyze and correct data. However, outside of its retail stores, Apple has no history of hiring so many people (8% of its entire head count) for such a narrow operation.

Unlike Google (Bigtable), Facebook (Cassandra), Yahoo (Hadoop), Amazon (Dynamo) and others that have accumulated big-data storage, processing and operational expertise, Apple’s not been a magnet for data scientists or web-scale infrastructure, automation, real-time analytics and algorithm design professionals. Facebook, for example, can bring online tens of thousands of servers in less than a month in an automated fashion, while Apple continues to lag, underperform and underwhelm in this area.

Instead, Apple could acquire a mapping company. Unfortunately, there aren’t a lot of those around. Nor does Apple have a history of buying companies just to get process-oriented employees. It’s telling that Apple hasn’t bought any of the companies it currently gets mapping data from, like TomTom or Waze. Further, Apple uses multiple map data sources abroad, such as AutoNavi in China.

Apple could augment its accuracy efforts by crowdsourcing corrections through OpenStreetMap, which it’s already been using elsewhere. But OSM may not scale as fast as Apple would like and, more importantly, may pose licensing issues in the future. Another avenue for Apple is to get much more aggressive and encourage a hundred million iOS 6 Maps users to actively send map corrections and suggestions to earn accumulating incentives such as App Store or iTunes credits, iOS device prizes, free trips and so on.

But these acquisition- or incentive-based approaches are ultimately process-oriented remedies not in Apple’s DNA. You can expect, say, Microsoft to want to code for, test and manage thousands of models, peripherals, drivers and legacy bug exceptions for Windows, as it has done for a couple of decades…Apple, not so much.

Of course, good map data by itself is not enough. Apple has to decide if it really wants to clone, feature by feature, what has become very familiar to its own Maps 1.0 users. That is, would Apple really want to spend all its time, resources and focus cloning Google Maps (on Android) because some of its most vocal users are asking back what was degraded in iOS 6 Maps?

Playing by its rivals’ rules hasn’t been Apple’s modus operandi. Apple often enters a semi-mature industry underserved by system design and rearranges constraints and possibilities to create unique products. More recently, for example, everyone has been busy adding NFC chips to smartphones to rejigger the entire payment industry, full of entrenched players. Apple remained agnostic on payment modalities and ignored NFC, but reimagined everything else around payments: rewards, promotions, cards, tickets, coupons, notifications…all wrapped in a time-and-location based framework, thereby opening up new avenues of growth, integration and deeper ties to users in the form of its new app Passbook. In the same vein, can Apple reimagine and redefine what mobile “mapping” ought to be?

Horizontal

Fortuitously, Apple has another systems design problem in its hands, not unlike Maps. If Maps has become the gateway to mobility, iTunes has been Apple’s portal to content. iTunes started as a single-screen song organizer. On the desktop, it’s still a single-screen app, but has grown to cover the storage, backup, authentication, transaction, discovery, promotion, browsing, preview, playback, streaming, conversion and sharing of every imaginable modern media format. It’s the focal point of a quarter-trillion-dollar media empire. In the process, the cognitive load of using iTunes has become considerable, not to mention the app’s performance, reliability and myriad other problems. Many users complain about it. Apple’s response has been to separate various functions into individual apps: Podcasts, iTunes U, Music, Videos, Trailers, iBooks, App Store, etc.

Developing and delivering map services as separate apps would prevent the immaturity of one or more components from bringing down the overall perception of the entire Maps platform. Can the Maps app be sliced into 8-10 separate apps: satellite, roads/traffic, mass transit, turn-by-turn directions, 3D/flyover, search, discovery, points of interest and so on? While this may make logical sense, not all users will be happy to exchange an integrated app, where the novice user especially can find everything in one place, for several single-purpose apps. It can get complicated. For example, millions of people commute daily to New York City from many smaller cities and at least four states, some driving through three states twice a day. Would they want to manage various aspects of that process in multiple apps?

Vertical

Clearly, Apple has already started to think about and experiment with unorthodox displays of map data, as exemplified by its “Schematic Maps” patent application. So, for instance, instead of slicing Maps into separate apps horizontally, might another option be to display metadata vertically as layers, like the tissue-paper overlays of yesteryear?

Conceptually, this can be an effective way to deal with complexity, data density and integration issues within one app. PlaceBase, one of the mapping companies Apple has acquired in the last couple of years, was known precisely for such layering of data through its PushPin API.

Apple could even parallelize and greatly accelerate its Maps development by starting a mini “Map Store” and actively enlisting third parties to develop layers on top of Apple’s “base-map”. Users could make nominally priced in-app purchases (think $0.99) to add custom layers to replace and/or augment Apple’s own layers. It would be very attractive for third parties, as their layers, floating on Apple’s default base-map, could quickly capture a massive user base in the millions (think getting 70% of $0.99 from 10-20 million users). Wouldn’t you pay $0.99 for a Google Search layer that pre-processes a local point-of-interest overlay?
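The parenthetical arithmetic is easy to check; a quick sketch using the hypothetical $0.99 price and the App Store’s standard 70% developer share:

```python
# Hypothetical revenue for a third-party map layer priced at $0.99,
# with the developer keeping the App Store's customary 70%.
price, developer_share = 0.99, 0.70

for buyers in (10_000_000, 20_000_000):
    revenue = buyers * price * developer_share
    print(f"{buyers // 1_000_000}M buyers -> ${revenue / 1e6:.1f}M to the developer")
```

So a single popular layer could plausibly be a $7-14 million product for its developer, which is the pull that would populate such a store.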

Layered maps don’t, of course, come without issues. It would be sensible to make layers adjustably translucent so multiple layers can be co-visible and interact with each other, such as traffic and mass transit. However, too many layers could become hard to manage for novice users; simple show/hide checkboxes may not suffice. Memory management of layers in terms of pre-caching, loading and state-saving, as well as intra-layer version compatibility and licensing, could be problematic. Apple would have to carefully select and test a rather limited number of vitally important layers to go on top of its base-map.
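One way to picture such a scheme: a minimal, hypothetical model of a layer stack with per-layer visibility and translucency. None of this reflects an actual Apple API; it only illustrates the compositing idea.

```python
# Hypothetical model of translucent map layers stacked over a base map.
from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    opacity: float = 1.0   # 0.0 (fully transparent) .. 1.0 (opaque)
    visible: bool = True   # the show/hide checkbox

class MapView:
    def __init__(self):
        self.layers = [Layer("base-map")]   # Apple's base always at the bottom

    def add(self, layer):
        self.layers.append(layer)           # later layers draw on top

    def composite_order(self):
        # Only visible, non-transparent layers participate in rendering,
        # bottom to top.
        return [l.name for l in self.layers if l.visible and l.opacity > 0]

m = MapView()
m.add(Layer("traffic", opacity=0.5))        # translucent, so transit shows through
m.add(Layer("mass transit", opacity=0.6))
m.add(Layer("search results", visible=False))
print(m.composite_order())  # ['base-map', 'traffic', 'mass transit']
```

Even this toy version surfaces the hard parts the paragraph above lists: ordering, per-layer state to save and restore, and what to pre-cache when a hidden layer is toggled back on.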

And yet a mini Map Store could help Apple catch up and pass Google Maps or other players in the field much faster than Apple alone could, as well as open up significant opportunities for Apple’s developer ecosystem.

Does Apple have to lose for Google to win?

Not if it doesn’t play by the same rules. After all, it’s Google’s game to lose. Tim Cook telling customers to use other companies’ mapping products must have taken some guts at Cupertino. It’s perhaps a conviction on his part (with his inside knowledge) that Apple can and over time will do better than the competing apps he so forthrightly listed. That’s the confident view Google would likely prefer to fly over.