Some people have called me an Apple fanboy. I stood in line for hours for the first iPhone and then again for almost every new iOS device that came out. I own more Apple devices than t-shirts (I’m not a big t-shirt fan, but still, it’s a lot of devices…). And I have convinced quite a few people over the years to switch to Macs, iPhones and iPads.

Two weeks ago, I switched to Android. I bought a Galaxy S4 (Google Play stock Android version) and a Samsung Note 8.0 tablet. They’re now my primary mobile devices.

Why? My theory of frosted glass effects: When a technology vendor can’t keep up with the speed of innovation anymore, it resorts to incrementally copying others’ innovations and starts adding pointless visual gimmicks, such as frosted glass effects. Such effects are cool, set your product apart, make it look modern, but unfortunately they are also entirely useless and just consume system resources without really improving the user experience.

Case in point: Windows Vista.

I switched to Macs in 2006, after almost two decades using Microsoft OSes. Microsoft in the 90s and early 2000s was much more innovative than people give it credit for. Tablet computing, simple Internet programming, productivity software, the first really powerful PDAs — those were all sectors where Microsoft was the leader.

But then Windows Vista came out, almost five years after XP. And what was the most remarkable new feature of this next-gen OS?

Very cool-looking frosted glass effects in window titles.

The rest was unremarkable at best, copying many of the features Mac OS X had had for years and fixing some old problems. It was clear at that point that Microsoft had lost its way; that was the beginning of its current malaise. And for me it was time to switch to the platform where the real innovation was happening: Mac OS X with mostly web-based applications, and a bit later iOS instead of Windows CE.

A few weeks ago Apple introduced iOS 7, the first major overhaul since the iPhone came out. It has a ton of new features, almost all of which Android users have enjoyed for years. It fixes a lot of old problems, and…

…it has frosted glass effects. Lots of them.

A few minutes after installing the iOS 7 beta I just knew I needed to switch to Android.

I have found myself using more and more Google apps on my iPhone over the past 18 months or so. Google Maps replaced (surprise) Apple Maps, Gmail replaced Apple Mail, Chrome replaced Safari. I did voice search through Google’s search app, not Siri. And so on. Why? Because Google’s services are not only much more powerful but also very neatly integrated. The amazing Google Now is the best example of that.

In fact, I was already using a Google phone running iOS underneath, and there’s simply nothing in iOS 7 that makes me think I might switch back to Apple’s stock apps.

There’s just frustration that iOS 7 has barely caught up with Android’s current state.

Android fanboys tend to think that this has always been the case. Not so. When Apple came out with the original iOS, the first real browser on a mobile device, multi-touch, the original app store, photo stream, AirPlay, etc., it was years ahead of everybody else. But that was then, and iOS 7 is now. Unfortunately, it feels a lot like Windows Vista.

Just to be clear: I’m not feature obsessed when it comes to mobile devices. I spend much of my working day wrestling with technology, and the last thing I need is a high-maintenance phone or tablet. But Android has matured to a point where it’s (almost) as easy to use and polished as iOS, with a ton of added flexibility.

And without question, the real innovation now happens in the Android ecosystem. Smartphone functionality has matured to the point of saturation, and real progress can only be made in Internet-based services that integrate seamlessly with a smartphone OS. Google’s superiority in information services and Android’s general openness are perfect for that.

I’m not an Apple fanboy, I’m a fanboy of great products. And as much as I hate to see Apple lose its lead, it’s time to switch.

Every subsection of society has its myths. Social groups need unifying beliefs that often are designed to shield their members from harsh realities. On the political stage we are currently experiencing what bad things can happen when myths (the Tea Party’s beliefs in how government should work) run wild.

The big myth of the tech industry is that there is such a thing as a black box technology company, one that doesn’t have to deal with customers — people — directly. The technology will handle that, says the myth. Everything is self-service. No need to handle stupid questions and annoying phone calls. No need to sell, because people will find you on Google and then just buy.

Tech startup founders (who are often introverts and don’t like to deal with people that much) and VCs love this idea: The perfectly scalable company, designed around algorithms and websites and network effects, not depending on icky personal interaction and unpredictable human behavior. No need to hire a lot of service agents and salespeople, we just need a few super smart engineers. And customers are just data points swimming through the sales funnel.

The poster child for this myth is of course Google. Some people actually still believe that Google makes all its money from a pure self-service model. Funnily enough, Google seems to believe this myth itself and treats customer service — at least for the unwashed masses — as an unnecessary distraction, to be avoided if possible. Even though, according to some sources, at least half of Google’s employees work in sales and service (hidden away in the lesser buildings on campus with sharply reduced perks), Google still projects an image of a pure technology company.

There’s of course nothing wrong with trying to achieve as much self-service as possible. Customer service is expensive. The terrible quality of the service that most companies provide is not due to their evil nature, but to the fact that customers want to pay less and less, which leaves less money for good service.

But the tech industry has an aversion to customer service that goes beyond this economic aspect. The latest case in point is Airbnb’s “Ransackgate”. The apartment of a customer was completely destroyed by a renter who was referred by Airbnb. According to the customer’s description, Airbnb was somewhat slow to react to this terrible situation, and even went into adversarial mode when the customer blogged about her experience. A co-founder explained to the customer that Airbnb was currently raising money and could suffer from this negative publicity, so could she please blog something positive? Now even the respected Paul Graham, one of Airbnb’s investors, suggests online that the customer is probably lying.

This just shows how far removed the tech elite is from the reality of normal people. When your apartment gets destroyed, you’re not happy being treated with the normal standardized customer service process (“Please include your ticket number…”). It’s a destroyed apartment, a traumatic experience, not a browser compatibility issue.

It’s pretty telling that Airbnb’s young founders and even its senior investors (who really should act as the adult supervision in such cases) don’t recognize why this could be a problem that resonates emotionally with many people and therefore needs the unconditional attention of the company’s leadership. Oh, a customer has a problem? Yuck, it’s emotionally charged. Let’s just wait and it might go away.

Customer service is more than just necessary, it can be a fundamental differentiator in a crowded market. But interestingly, it’s kind of a taboo topic for most investors. A while ago a VC told me that one of his very successful portfolio companies has this fully automated self-service sales process that needs almost no direct customer interaction. I was very surprised because I know the company fairly well, and I know that probably more than half of its employees spend most of their day on the phone with customers, selling to them and then helping them get up and running. You could even say (and another VC confirmed this later) that their personalized sales process and customer service is the secret of their success. But this VC wanted to believe that it’s all about the technology. He really, really wanted to believe in the black box.

As long as tech companies and their investors can’t go beyond their belief in black boxes, they will struggle to reach the mass market and build profitable businesses. You could even think that the awful performance of the VC industry over the past decade was partially caused by this naive view of reality. Everybody is chasing the magic black box, but nobody wants to build businesses that work in the real world.

The “consumerization of IT” is a trend that started about 5 years ago and that has been reshaping the world of information technology quite radically. Consumer technology such as smartphones, lightweight web-based applications and now tablets has invaded more and more enterprises, to the shock of IT managers everywhere.

The biggest winner of this trend is of course Apple, which now has a market cap close to that of the old Wintel (Microsoft/Intel) monopoly. Apple basically owns the high-end laptop market, even though MacBooks are still not considered “enterprise technology” by most IT departments. It totally dominates tablets, and it makes more than 50% of all profits of the mobile phone industry, even though its market share is still small.

Consumerization is what took Apple from a barely surviving also-ran to the dominant technology company of our time. Is it surprising that Steve Jobs and his lieutenants are focusing all their resources on this successful strategy? For instance, Apple recently killed its pro-level server business (Xserve), effectively exiting the data center market.

The latest victim of this strategy is the Final Cut Pro (FCP) line of video editing applications. FCP Studio is probably the most popular suite of video production software on the market. It started small a decade ago as a cheap alternative to Avid, but it is now the choice of many high-end editors in broadcast TV and even Hollywood. Nowadays even editing legend Walter Murch uses FCP, which once was ridiculed as a toy by the movie tech intelligentsia.

The new version of Final Cut, FCP X, caused a major sh*tstorm in the editor community when it was released two weeks ago. It gets only a 3-star rating in the App Store, attracting comments such as “FCP X = Windows Vista” (which probably is not meant as a compliment). Countless articles complain about all the missing features that professional editors can’t do without, not least the baffling fact that FCP X can’t import projects from older versions of FCP.

So what’s going on here?

First of all, FCP X is a great product, if still a bit 1.0. I’ve been playing with it for a couple of weeks now, and I certainly won’t go back to the old FCP or any of its clones (such as Adobe Premiere Pro). FCP X reinvented quite a few things in how editing is done, and most of the changes are really great, speeding up the editing process considerably.

But FCP X also asks you to relearn a lot of things. It can do practically everything FCP 7 could do and a lot more, but many tasks are just done very differently. There are a lot of “WTF?” moments when you switch to FCP X, but once you discover what the new way of doing things is, it all makes a lot of sense. I’ve only encountered one or two things that I still find more elegant in the old FCP.

To use a metaphor from my other field of work: it’s like learning a new programming language. When you switch from something like C++ or Java to Python or Ruby, a lot of things look strange or even ridiculously simplistic. But after a while, you don’t miss the overhead that the old tool required you to deal with. You recognize that the irritating, seemingly amateurish simplicity is actually productivity-enhancing elegance.

That’s great for prosumers and lone-wolf freelancers, but it’s no consolation for the high-end editing pros who depend on sophisticated, highly specialized workflows. Relearning everything and reorganizing your corporate workflow is not a great proposition for somebody who constantly works under tight deadlines.

So is Apple trying to consciously scare off the high-end pro market? In some ways, yes. Every successful business has to decide what its focus is, who its customers are. Even for a giant company like Apple it’s incredibly difficult to serve entirely different target markets.

High-end video production houses and broadcast stations often run their video production infrastructure like traditional enterprise IT: A central department decides which platform to use. Then internal technical people and consultants implement the system, endlessly tweaking every detail, and the maintenance of the whole system takes considerable resources. Individual workers don’t get to choose what tools they want to work with, but have to adapt to the rules of the organization (Don’t like our Avid system? Go look for another job).

Apple is great at selling stuff to people who make their own purchasing decisions, be it consumers, freelancers or even employees of larger corporations who have enough authority to choose their own tools. Apple is not very good at dealing with IT departments and at adapting its products to the myriad specialized requirements that larger organizations have.

The old FCP clearly suffered from feature creep that was dictated by larger customers, and that made the product difficult to use for the broader prosumer market. It looks like Apple made a clear decision with FCP X: It’s going after the big mass market, and if that means it’s going to lose the high-end segment, so be it. There’s really no other good explanation for the fact that Apple released FCP X without some crucial pro-level features.

Always remember that software is a tiny piece of Apple’s business, and the pro segment is even tinier. But pros are a tough crowd to please, and Apple probably just decided that this can’t be a priority anymore. It looks like it will deliver some of the missing features, but probably not on the scale the pros hoped for. Tough for the professionals who invested a lot in FCP, but this kind of gut-wrenching change is the reality of technology markets. Remember IBM selling off its PC business? Didn’t please a lot of people either.

Without a doubt Apple will lose a lot of fans in the video editing community. But it now has an editing product that is years ahead of everything else, perfect for the big and growing market of serious hobbyists, freelance editors (particularly in online media), independent filmmakers and corporate marketing users. It’s a big bet, but it could pay off.

I can’t help but think that the past few years of social media market development have been a bit of a disappointment.

What once was a bustling ecosystem of blogs, forum sites, multiple social networks, video and photo sharing sites, social bookmarking services and so on has more or less turned into a boring duopoly of Facebook and Twitter.

MySpace? Delicious? Digg? Dead, or almost dead. Blogs? The frequently updated ones are mostly run by professionals. YouTube? Very successful, but only for passive viewing and not social interaction. The many, many forum sites that run on vBulletin or phpBB? Only relevant for tiny niches. Location-oriented services like Foursquare? Feel increasingly like a short-lived fad, easily copied by the big players.

Helped along by the always oversimplifying mainstream media, Facebook has become the only social media channel most consumers really use; at best, they have heard of Twitter and use it passively.

It’s a pretty sad state. How can the complexity of human interaction only take place in two venues? It’s like having only a choice of two restaurants, maybe McDonald’s and Olive Garden (which, come to think of it, might even be the reality in some small towns). And don’t get me started on the walled-garden nature and constant privacy issues of Facebook and the lack of innovation at Twitter.

The launch of Google+ finally brings back some hope for more interesting times in the social space. Google has botched all its previous attempts at going social, but G+ feels surprisingly right. It’s not only powerful and flexible, it also offers a great, mainstream-compatible user experience, definitely on the level of Facebook. Oh, and Google finally figured out that it should leverage the heck out of its search dominance. Putting G+ front and center in Webmaster Tools and Google Analytics seems like a no-brainer in retrospect, but Google somehow missed out on this aspect with earlier social projects.

After just one week, it looks like much of the Internet elite has moved on from Facebook to G+. Even Twitter seems to see a noticeably reduced post volume from the usual early adopters. It’s very understandable. Twitter’s 140 character restriction has its charm, but it’s extremely limiting when you want to share deeper content. Facebook’s overcrowded feed that mixes relevant content with puppy pictures and Farmville invites is just too distracting for serious content sharers. People who liked FriendFeed are probably already on G+ now.

So, is G+ a Facebook/Twitter killer? No, and it doesn’t need to be. But Facebook probably has already lost the elite, and if Google makes some pretty straightforward improvements (API, anyone?) it could easily take away much of Twitter’s fanbase.

G+ is a huge step towards a more diversified, use-case oriented social media environment. McDonald’s doesn’t go broke just because there’s a hip new restaurant in town. Different social environments attract different people. There’s no reason why social media should be any different.

My father-in-law is a doctor, an anesthesiologist. He’s not only an extremely experienced professional, but also very entrepreneurial.

That’s why he’s currently starting his own company in the medical field. I can’t tell you what it is about because he’s in “stealth mode”, a deplorable, old-fashioned way of starting a company quietly. Nobody should do this. As we know from all successful Internet companies, you should always immediately tell as many people as possible about your plans in order to build your “ecosystem”. Don’t worry about competition, let people give you feedback. So what if somebody steals your idea, there’s enough room for everybody, right? That’s something that non-Internet people don’t seem to understand. Just look at the guys Mark Zuckerberg stole his idea for Facebook from, they still did pretty well. I think.

Anyway, I was thinking about what kind of advice I could give him for his new medical company. I’m an entrepreneur in the Internet industry, so I’m very familiar with the latest and greatest thinking in how to start a company in our digital age.

This is what I would tell him:

First and most importantly, you need to get traction. People have to get to know you and use your service. The best way to do this is to give away your stuff for free for a couple of years. So just do anesthesia for free. You will be surprised how many people will want to use you as their anesthesiologist for their surgery. They will tell their friends, and that’s great free advertising for you.

Of course, since you do everything for free, you need to spend as little money as possible. As we say in the Internet business, you need to be capital-efficient. The best way to do this is to cut unnecessary expenses. Rent, don’t buy your equipment, and just get the cheapest type of medication. That’s good enough for a free service. Also, save on personnel costs. Don’t hire trained nurses, you don’t need that initially. Just get somebody who recently read a book about medicine or likes to watch “Grey’s Anatomy”. Most Internet startups don’t hire experienced engineers either, and that works just fine.

Due to these savings, it’s of course possible that things go wrong during a surgery (what we call a “Fail Whale”), but don’t worry. It’s a free service, so nobody will complain. Right?

At some point, you need to think about making money, or monetization. Don’t just start charging money, because that would break your momentum. You can either use the Freemium concept (“The basic anesthesia is free, but you also want pain killers later? That’ll be $$$$”). Or just use advertising. Show people ads while they’re preparing for surgery. Wake them up occasionally during the surgery to show some more ads, because that’s when they will pay attention, and this will give you high CPMs. You can also use “affiliate marketing” and have them sign up for freecreditreport.com while they’re still groggy. That’s a great way to get juicy affiliate fees.

It’s really important to leverage your service through your own ecosystem. Your patients are a valuable audience, so you should give partners access to these people. For instance, let insurance salespeople come to your OR. People will feel a need for security when they’re in bad health, so providing insurance offers adds value for everybody. And you will get a cut. Plus, the insurance guys will recommend you to their own customers. It’s a win-win-win proposition!

Any type of service today has to be social. People always make a fuss about privacy, but isn’t it much more fun when your patients’ friends can come to the OR and watch their surgery? There might be an increased risk of infection, but that’s a small price to pay for the feeling of community and friendship. Plus, you as the service provider might get somebody interested in having their own surgery. Great sales opportunity! Oh, and people will be interested in how the patient is doing afterwards. So make sure that any change is immediately published to their Twitter and Facebook accounts. Particularly the really personal, slightly embarrassing things (Incontinence? Like!) are fun for everybody.

Finally, the latest and greatest trend is “virtual goods”, which is basically money for made-up premium stuff that people irrationally put a value on. You can use this wonderful concept, too. For instance, why always use these boring old syringes? If your patient is a rap fan, just charge him $20 extra to get his injection with a limited edition Jay-Z syringe. He can’t keep it, obviously, for hygienic reasons, but it will still make him feel great! And your little patients will pester their parents to no end to rent this really cute Hannah Montana bed pan. Just imagine the profit opportunities!

You see, every type of startup can be made better and more dynamic with the latest strategic thinking from the Internet industry. Even boring, trivial stuff like medicine.

Some parts of the tech world reacted very harshly to Apple’s recent iPad announcement. The most frequently voiced criticism, apart from lacking features, was that Apple is trying to force consumers into the closed iTunes ecosystem. Apple’s hardware and content delivery platform are tightly coupled, so people who buy an iPad realistically can only buy music, books, movies and apps that are approved by Apple.

But if the success of the iPod and iPhone are any indication, consumers actually seem to like this closed system that just works and removes complexity. I actually believe that Apple is just the most aggressive, but by no means only company that is pushing the trend towards simpler, consumer-friendly IT.

Consumer products have to be simple. Consumers buy products, not systems. But that also means that consumer products are typically not “open” in the way a Linux machine is. The way of the future in IT could be semi-closed systems that are tightly controlled by a vendor, but tap into open platforms for some key functionality.

There are many examples for this in the non-IT world. A BMW 7 series runs on the same fuel as a Toyota Prius and has a “user interface” that is largely similar, but apart from that, these two vehicles are completely different. Nobody would expect BMW spare parts to work in a Prius. So cars share a common, open infrastructure (the system of filling stations that sell standardized types of gas) and common user interface conventions (a steering wheel, accelerator pedal etc.), but the actual products are strongly proprietary. Another example: A Miele dishwasher runs on the same electricity and fits into the same standardized kitchen slot as a GE product, but otherwise, these two products are very different.

My favorite quote about technology is typically attributed to Antoine de Saint-Exupéry: “Technology always develops from the primitive via the complicated to the simple.”

The early personal computers were clearly primitive. They were closed systems (forget about running C64 games on a TI 99/4a) and didn’t do much.

But then came the IBM PC, which more by accident than by design turned into an open platform. Suddenly people were able to run the same software on PCs made by many different manufacturers, and even peripherals and extension cards could be used on pretty much any “IBM-compatible” PC. This sounds great, and it was, but it also led to a lot of complexity. Almost three decades after the introduction of the IBM PC, users still struggle with driver issues, compatibility problems and bloated systems because Windows has to support so many variations of hardware and software. Openness also turned out to be a bad business decision for the actual PC manufacturers, since it commoditized PC design. The winners were the two companies who were able to control the remaining proprietary elements, Microsoft and Intel.

Are we now entering an era of simplicity in IT that gives up some of this openness for other gains? That’s entirely possible. And Apple is by no means the only example. Other successful products and services in the consumer market follow a similar approach:

Facebook is by far the most successful web platform currently, but also very closed. It actively strives to extend its ecosystem to other parts of the web, for instance by promoting its Facebook Connect login service.

The other rapidly growing smartphone platform next to the iPhone, RIM’s BlackBerry, is not any more open than Apple’s ecosystem.

Game consoles by Sony, Nintendo and Microsoft have always been closed, tightly controlled systems.

Google is an interesting special case. Although the company emphasizes openness in many of its activities, it is very closed when it comes to its central money-making machine, Google Search. Both Yahoo and Bing offer relatively open APIs, while Google strongly restricts what developers can do with its search results.

What these examples have in common: They all use open infrastructure (the Internet) and share some common approaches with their competitors, but apart from that, they’re all tightly controlled systems. And the interesting thing is that consumers seem to like it that way.

A well-designed, simple, reliable product trumps openness in the consumer market. There are plenty of social networks that are more open than Facebook, but guess who dominates social networking? You can buy a Linux-based smartphone that is completely open, but most people go for iPhones and BlackBerries. Most games are available on open PCs, but the closed consoles still dominate the gaming market.

So is this trend away from openness in IT a bad thing? Not necessarily.

The effectiveness of IT actually increases with more simplicity. People working in IT departments might not like it because it jeopardizes their job security, but a simpler computer is actually a better computer. Fewer technical problems mean that resources can be spent on something that actually creates value instead of just fixing shortcomings. That’s just economically efficient.

But doesn’t a lack of openness kill innovation? Again, not necessarily. The last few decades suggest that a focus on openness is actually bad for business. For instance, Sun Microsystems has always been one of the proponents of open systems. It opened its Java platform, but never turned it into a business. The result: Sun was just devoured by Oracle, which doesn’t win any awards for openness. Another example: Red Hat built a sizable business on open Linux technology and invests a lot in the open source community. That’s great, but it sells as much software in a year as Microsoft does in four days.

The point is: Innovation needs capital. Capital can only be obtained if there is a way to protect investments and turn them into a lucrative business. And full openness doesn’t really help with that. The open source movement has achieved many great things and creates a lot of value. But it has not come up with true breakthrough innovation yet. Openness is a strategy that makes things cheaper, not one that brings entirely new things into the world.

In the end, total openness vs. closed systems is a false dichotomy. As in the examples of cars and dishwashers, there will always be open standards and a public infrastructure that is needed to make products useful. And that’s exactly where consumer IT is going. The iPad is way more open than early home computers. It can access the entire web, using open standards. But at the same time, it restricts other things for the sake of simplicity and reliability (and commercial feasibility, such as the DRM unfortunately still required by movie studios and book publishers).

Very likely, that’s a blueprint for the future of IT. Technology should serve a purpose for its users, not follow some theoretical philosophy.

Remember October of 2001? A long-awaited product announcement out of Silicon Valley caused a lot of disappointment in the world of tech. The reactions were not kind: “I still can’t believe this! All this hype for something so ridiculous!” “Break-thru digital device? The Reality Distortion Field is starting to warp Steve’s mind if he thinks for one second that this thing is gonna take off.”

Or how about January of 2007? Another much hyped product caused quite a bit of frustration. Was this supposed to be all? Nice design, sure, but this lackluster feature list, the closed platform, all these technical restrictions — and this had been hyped as a revolutionary product?

Well, the iPod and the iPhone went on to become big successes anyway. To be more precise, they revolutionized their respective industries, even though many geeks and tech experts predicted their inevitable failure when these two products were originally announced.

Sounds familiar, right? The reactions to Apple’s iPad announcement were strikingly similar. Expectations had run so high previously that the actual product was almost certain to disappoint many. And as with Apple’s previous major new products, tech geeks and “experts” of all kinds were particularly critical.

But exactly as last time, these opinions will not really matter. Apple doesn’t play the same game as the rest of the gadget industry. To understand why, let’s have a look at how high-tech consumer products are typically marketed.

The definitive book on this topic is still Geoffrey Moore’s “Crossing the Chasm“. Moore explains how new products are gradually adopted in the market and which hurdles they have to clear.

New technologies are initially bought by “innovators”. These tech enthusiasts are ready to deal with immature, complex products and pay high prices, just as long as they can get their hands on the latest tech toys. In the next phase, the “early adopters” take over. These are people who want to see a certain degree of usefulness in a new product, but are still willing to pay substantial amounts of money and tolerate problems.

After that, according to Moore, new products have to overcome a “chasm”. The next segment of consumers, the “early majority”, is not really crazy about technology. These people first want to see that a Blu-ray player is really better than their old DVD machine, or that a wireless LAN at home is really useful. They want to pay a reasonable price, not spend all their disposable income on technology. Typically, they follow recommendations from their early adopter friends, but with a healthy degree of skepticism.

That’s why technology vendors target innovators and early adopters first when they want to sell new products. Once these early target groups adopt a new technology, vendors hope to reach the mainstream market. Early adopters are crucial as technology advocates. And since they pay high prices, they are important to refinance development costs, even if a product doesn’t turn out to be a mainstream hit.

There are many technologies that have crossed the chasm successfully: MP3, WiFi, smartphones, DVRs, IPTV, GPS systems. Others are still waiting for their big break: netbooks, Internet appliances like the Chumby, or home servers. And many, many other products have failed to cross the chasm into the mainstream market: UMPCs, the Segway or the repeatedly failed video phone come to mind.

With the introduction of the iPod, Apple started to ignore this traditional marketing playbook, and the iPad is the latest and most dramatic example of an entirely different strategy. Steve Jobs’ company doesn’t make products for geeks, but targets the mainstream market from the very beginning.

The iPod obviously wasn’t the first MP3 player, and it didn’t offer anything that would have particularly interested the early adopter segment. On the contrary: Its closed architecture made it unattractive to serious MP3 fans. Instead, the iPod made strong technology available to normal users who didn’t have the patience to deal with the complicated players of the day. Same thing with the iPhone: No technical feature was outstanding, but the superior usability and simplicity of Apple’s phone targeted average consumers who were frustrated with overly complex smartphones.

Apple’s particular approach can’t be easily copied by its competitors. There are three preconditions for this to work. First of all, massive ad spending in mass media is essential. Apple spends a lot on TV ads, but very little on alternative new channels like social media marketing (which still mainly reaches early adopter target groups). Steve Jobs’ reputation as a gadget wizard and “CEO of the decade” helps a lot, since personified marketing works particularly well in mainstream markets.

Secondly, there have to be clear differentiators that are important to the target group. And for mainstream electronics products, it’s not about the latest tech features, but things like beautiful design and ease of use — things that no other tech company does as well as Apple. Simplicity is essential, and that’s why Apple’s closed approach with iTunes is exactly right for the mainstream market. Early majority customers want to buy content as easily as possible. They don’t really care if the songs or movies bought on iTunes are protected by DRM, simply because they don’t know what DRM is and don’t care to learn about it.

Thirdly, mainstream distribution channels are important. Apple’s stores in malls and great downtown locations are exactly the right way to sell tech products to non-technical consumers. Most people want to touch a relatively expensive product before they buy it, and many will be influenced in their buying decision by the nicely designed store and the friendly staff, not so much by long feature lists.

With the iPad, Apple is driving this strategic approach to new extremes. All the elements of its proven formula are there, but one thing is really new: For the first time, Apple doesn’t simply try to sell a product category that is stuck in early adopter land to mainstream consumers. It’s trying to define a new category out of the fragments of several niche markets.

The iPad is a bit like a portable media player, a bit like a netbook, a bit like an e-book reader and a bit like a tablet PC. All these device types have found a niche market, but none of them have really crossed the chasm. Apple is now trying to enter the mainstream market with this recombination of existing, but not yet mainstream-compatible devices.

That’s a bold move that could easily go wrong. But Apple is one of the very few companies in the world that has the marketing power and unique capabilities to pull this off.

(This article originally appeared on netzwertig.com, the leading German tech blog)

Yesterday, Google finally showed off some of the details of its new Chrome operating system. The new OS should be available by Q4 2010. Google most likely didn’t show everything that will be in the final product, but it’s safe to assume that the basic concepts will stay the same.

Some things turned out as previously expected: Google’s OS fully revolves around its Chrome browser, is extremely web-centric and will be based on Linux and other open source packages. But there were also some surprises: Chrome OS will only be available on special hardware that is compliant with Google’s specifications. It will not support traditional hard disks and not run any locally installed applications outside of the browser. Chrome OS devices will not do everything that a PC does, but they will be cheap and easy to use.

This sounds like a fairly typical disruptive strategy (see Clayton Christensen’s books). A new entrant (Google) tries to disrupt the incumbents’ (Microsoft, Apple) business by offering a significantly cheaper and simpler product that will only appeal to the very low end of the market. Over time, as the new product category gets better, the incumbents’ products retreat more and more into the very high-end of the market, increasingly losing relevance.

The big question is of course if Google’s approach has a serious chance to disrupt the OS market. There’s more than enough reason for skepticism.

First of all, the OS is not a major cost point anymore at the low end of the PC spectrum. According to some sources, a Windows license only adds $15 to $20 to the price of a netbook. It’s unlikely that people will go with a very limited OS just to save a few bucks on a $300-$400 purchase. Disruptive price points have to make a 5x-10x difference to really move a market. Witness the fate of Linux-based netbooks. After a few months, the whole netbook market moved to Windows XP, because most buyers were willing to pay the difference for a more familiar OS.
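The price argument is easy to make concrete. Using the figures quoted above (the $15-$20 license estimate, the $300-$400 netbook range, and the 5x-10x rule of thumb), a quick back-of-the-envelope calculation shows how small the savings really are:

```python
# Back-of-the-envelope: how much does dropping the Windows license
# actually change a netbook's price? Midpoints of the figures above.
netbook_price = 350        # midpoint of the $300-$400 range
windows_license = 17.5     # midpoint of the $15-$20 estimate

savings_share = windows_license / netbook_price
price_ratio = netbook_price / (netbook_price - windows_license)

print(f"license as share of price: {savings_share:.1%}")          # ~5%
print(f"price with vs. without license: {price_ratio:.2f}x")
# Nowhere near the 5x-10x difference a disruptive price point needs.
```

A roughly 1.05x price difference is noise to most buyers, which is exactly why the Linux netbook experiment fizzled.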

Secondly, Google’s vision of a purely cloud-based computer (everything in Chrome OS is stored in the cloud, the local storage just serves as a cache) could turn out to be too cutting-edge for the low end of the market. In order for this to work, you need a pretty fast broadband connection and you have to understand and trust the concept of storing your digital stuff on somebody else’s servers. I’m not sure that most consumers are really comfortable with that just yet.

Finally, there’s little reason to believe that the incumbents couldn’t offer a stripped-down version of their OSes for low-end machines in order to defend their market. Microsoft has already been toying with the idea of a limited Windows 7 version for netbooks, but did not release it after complaints from its OEM partners. Apple is rumored to be working on a tablet device that would probably run a stripped-down version of Mac OS X and could compete with web-centric netbooks.

It seems fair to say that Chrome OS will likely not succeed as a traditional, straightforward disruptive product in the PC OS space.

But Google probably hopes for a much bigger, much more fundamental shift. Most people today have a primary computer that they spend most of their computing time on. The massive shift from desktops to laptops in the consumer market over the last few years shows that people want to take their primary machine everywhere, and that makes a lot of sense in the traditional model of personal computing. However, the increasing availability of cheap web-capable devices (like netbooks, smartphones, tablets, even game consoles) could potentially break this 1:1 relationship between user and PC. The more people get used to accessing the Internet from a variety of devices, the more they will want to seamlessly access their data from any of these channels. The consequence is that people will move more of their data into the cloud, and local storage and applications will lose much of their importance.

Chrome OS is probably a bet that prices for web-enabled devices will drop far beyond today’s $300-$400 netbook price point and that people will have not one, but several of these devices that they can use interchangeably for most (though not all) of their computing needs. Google is not trying to win Microsoft’s game. There will be no new PC OS war. Google is trying to start an entirely new game, where it could easily turn out to be the dominant player from the outset. Or to put it another way: Google is probably not interested in a short-term disruption of Microsoft’s dominance, but in winning the next game — which it hopes to be a fundamental shift in how people use computers.

The only problem is that nobody knows yet if and when this game will take place. Dominant designs in technology, like today’s PC, can be pretty hard to displace. Remember the Segway? Looked like a great idea, a fundamentally new way to provide transportation, much more efficient than the tired old car. But it didn’t go anywhere because people tend to be happy with a “good enough” solution that they already know, even if it’s more expensive and complicated. And that’s why Chrome OS could turn out to be the Segway of computing in the end. Maybe today’s PCs are just not flawed enough to open an opportunity for an entirely new approach. Time will tell, but Google is certainly not fighting an easy battle here.

The idea seems strange at first: Do people actually pay real money to buy virtual “stuff” that only exists in an online game or on a social networking site? Virtual goods, the latest hype in the world of digital business, can take on many forms: digital flowers that you can send to your Facebook friends, better weapons and equipment for online games, or a new outfit for your Second Life avatar.

According to some estimates, Facebook could make about $75M in revenue this year from virtual goods. Social game maker Zynga is even bigger in this space, with most of its estimated $250M in sales coming from the game add-ons it sells to its users. The total U.S. market for virtual goods could reach $1 billion this year. But that’s almost chump change compared to the Asian market, where one Chinese social network alone apparently sold $1B in virtual goods last year. Chinese authorities now even have to regulate this exploding sector.

So why would anybody in their right mind pay hard-earned money for something that is basically just a pile of pixels? Well, most probably for the same reason that makes people pay $450 for a regular pair of jeans just because it says “Gucci” on the label: To impress other people.

It’s no secret that people spend more and more time online, be it on social networking sites, playing online games or even in virtual worlds like Second Life (which is doing better than most people think) or teenage girl hangout IMVU. Online interactions with other people (under real names or nicknames) are an increasingly significant part of many people’s lives. And obviously, the natural need to define your social status carries over into the online world.

Most of these online platforms have managed to establish something like a social hierarchy that strongly depends on virtual status symbols. If you want to be cool in Second Life, you need to own a fancy island and have a nicely equipped avatar, which will cost you a bunch of “Linden dollars” (you can get that virtual currency for real U.S. dollars, of course). If you want to avoid being humiliated by a monster in an online role playing game in front of your virtual friends, you’ll need good weapons, which are available for cash. Oh, and all these annoying “Mafia Wars” and “Farmville” updates that you get on Facebook? Just shows you how many of your friends are already trapped in one of these games. And yes, they’re probably spending money on that stuff.

It’s easy to see why platform owners like virtual goods: Margins on this stuff are amazing. Once the software is written, the costs are minimal. And for that reason, the idea of virtual goods spreads to more and more platforms. For instance, Apple now allows iPhone developers to sell virtual “things” right inside an app.

Probably nobody will claim that virtual goods will make the world a better place, but people get what they want, and sellers make a killing. So everybody should be happy about this rapidly growing new market, right?

Well, there is a dark side to the virtual goods market. First of all, some game vendors don’t make their users pay directly for all goods. Instead, people can sign up for “offers” (e.g. a Netflix trial subscription) and get some in-game currency in return. The problem is that many of these offers are scams or at least use unscrupulous business practices like hidden subscription sign-ups. After getting a wrist slap from TechCrunch, Zynga and other game makers just announced that they will police their vendors more strictly and weed out questionable offers.

The other problem is that almost all virtual goods only work inside a particular game or platform. That’s a restriction that is even more extreme than the hated DRM of digital music, which allows you to play your (legally purchased) music only on compatible devices. What happens if you get fed up with a game? In some cases, you can sell your virtual goods, but that’s not always possible. Even worse, if the game company goes bankrupt, you probably lose your “investment”. But the most significant problem is that platform owners can suddenly change the rules. Second Life maker Linden Labs for instance banned virtual banks in its system a while ago, costing several people very significant amounts of money. It’s therefore not surprising that some governments are already considering regulating virtual currencies and virtual goods markets.

We will probably see quite a bit of growth in virtual goods over the next few years. But the real danger for this market is probably none of the issues mentioned above, but the possibility that people simply could lose interest after a while. To some extent, most forms of virtual goods have the typical characteristics of a fad. It’s a frequent cultural phenomenon that large groups of people spend a lot of money on a seemingly pointless (or at least not particularly remarkable) product or activity, only to lose interest after a few months. Not surprisingly, virtual goods are most popular with very young people, a target group that is particularly susceptible to fads of all kinds. Remember Tamagotchis? Virtual pets looked like THE next big thing at the time. Now they’re just embarrassing.

The economic problem with a fad-driven business is of course that fads are entirely unpredictable. There are entire industries (like the toy industry) that try to produce fad after fad, but it’s very difficult to consistently come up with something that will take off in the mass market. It’s therefore really hard to build a sustainable business on this foundation. That’s bad news for investors and startups in this sector.

So there are good reasons not to believe the hype, even if some industry analysts already provide the usual hockey-stick growth curves. For instance, Piper Jaffray predicts that the global market for virtual goods will grow from $2.2B this year to $6B in 2013. Maybe so, but almost all analysts overestimated the growth of social networking revenues a few years ago. And just to put things in perspective: $6 billion is a big number, but for Google, this would just be a nice single quarter of ad sales. And for Microsoft, it’s less than half a year of net profits.
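For perspective, the growth rate implied by that hockey stick is easy to compute (assuming, as the forecast suggests, four years of growth from this year’s $2.2B to $6B in 2013):

```python
# Implied compound annual growth rate (CAGR) of the Piper Jaffray
# forecast quoted above: $2.2B growing to $6B over four years.
start, end, years = 2.2, 6.0, 4
cagr = (end / start) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.1%}")  # roughly 28-29% per year
```

Sustaining nearly 30% annual growth for four years in a fad-prone market is exactly the kind of assumption that made the social networking revenue forecasts of a few years ago look so wrong in hindsight.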

One of the greatest — or, depending on your perspective, nastiest — effects of the Internet is that it tends to drive prices to zero in almost every market it touches. This effect has been extensively described in books and countless blog posts, so there’s little need to reiterate all the many well-known industry cases (in news, music, movies, software, etc.).

However, almost every week brings a new example of dramatic price erosion through the power of the Internet. The latest case: Google now offers a free turn-by-turn GPS navigation application on Android smartphones (and, subject to negotiations, probably soon on other platforms). Until now, the established vendors charged more than $100 for applications with the same functionality. Poof, there goes another profit pool.

Why can Google do this? It now owns all the necessary map data thanks to its Street View project, and reusing this wealth of data for another application is very cheap. Giving away the navigation software probably won’t cost Google much in incremental costs, since it’s already giving away the entire smartphone OS anyway. Of course Google hopes to make money in the only way it really excels at: Through advertising, in this case built-in ads in the GPS application.

Tough luck for TomTom, Garmin, Navigon and all the other vendors of GPS products, right? They will just need to adapt and match Google’s business model. Well, unfortunately, they will have a hard time doing that. Only TomTom owns a maker of map data, but probably can’t easily afford to just give this data away, because it doesn’t have an unrelated source of profits the way Google does. Things look really bad for the other vendors, because they have to buy their map data from external providers. The only company that could match Google’s free offer is Nokia, which bought map data provider Navteq a couple of years ago. But Nokia has no experience selling advertising, and it’s increasingly losing momentum in the smartphone market.

This case shows very nicely how whole industries can be turned upside down without warning by a new player who leverages the Internet to completely change the economics of a market.

Let’s recap: Google takes advantage of

free distribution of its software (the app will simply come pre-installed on smartphones or can be downloaded)

free marketing (no need to convince people to use a free, pre-installed app)

free support and maintenance infrastructure (software and data updates will be distributed through the wireless data plans that smartphone users already pay for — at no cost to Google)

The missing link that prevented anybody from offering a free GPS app so far was the map data. And Google is in the unique position to have collected the necessary data (at least for the United States) for its Google Maps service. This huge effort has probably already been paid for by the local ads that Google shows in Maps.

Of course there are still years of life left in the traditional GPS device market, since not everybody owns a smartphone and Google still has to overcome obstacles with data availability. But the writing is on the wall: Another fundamentally disrupted market, with incumbent players that soon will fight for survival.

History shows that no business is ever really safe from disruption. But the Internet with its particular economic characteristics speeds up disruptive processes and makes them much more dramatic. The classic case studies of disruption show how disruptive competitors enter a market with products that are less sophisticated than the “state of the art”, at much lower price points. This over time forces the incumbents to offer simpler, cheaper products (if it’s not too late by then). But having somebody come into your market who simply gives away a sophisticated, in some aspects even superior product for free is an entirely different matter.

Sure, Google cross-subsidizes its new GPS service heavily from its traditional business. It will probably not make money from this product for years, if ever. It’s a long-term bet and an investment into a whole ecosystem. The fundamentally important point is that this kind of extreme strategic move is only possible in the digital world, and only thanks to the ubiquity of the Internet, which provides nearly free distribution. The marginal costs of digital goods are very close to zero, and this enables an entirely new set of deeply disruptive strategies that most managers (and academics) have probably not even begun to understand.

If you sell a product or service that can be replaced by an entirely digital product or service, all that is needed for a fundamental disruption is somebody who is willing to invest some money into the initial development of such a product. The money can come from an existing profit pool (as in the case of Google) or from investors who believe that there could be profits in the future. The barriers to entry are extremely low in almost all digital markets, and the speed of disruption can be breathtaking. Just ask a newspaper executive.

The characteristics of digital markets attract competitors who behave irrationally in the short term (by putting money into a free, profitless product) in the hope of somehow winning control of a market in the long run. The low barriers to entry (thanks largely to free digital distribution) make many digital markets look very disruptible and hence attractive to potential disruptors.

Unfortunately, once a player gains significant market share and starts to make profits, it will attract other competitors with similarly aggressive strategies. The result is almost constant disruption, which makes it hard to ever build a consistently profitable business. Commoditization is of course nothing new and happens in the physical world too. But the clock-speed of Internet-based disruption is much, much higher. Remember, just two years ago, MySpace looked like the unassailable king of social networks. Then it got replaced by Facebook, which burned through $716 million in capital to build its current position and is still not profitable. It’s safe to assume that somebody with even deeper pockets and/or better ideas could eat Facebook’s lunch in the near future.

This increasingly frequent pattern will have pretty profound consequences. For instance, the traditional venture capital model is built on the assumption that a tech startup can achieve a strong market position and profitability within 4-7 years and is then ready for a major exit, preferably an IPO. The attractive past returns of venture funds were fully based on this model, not the profitless “Please, Google, buy us” model of most Web 2.0 startups. But when even very well capitalized, market-leading startups have trouble reaching profitability before yet another disruptor attacks them, the whole system of tech investments as we know it breaks down.

Are there any strategies that protect a company against this kind of disruption? There are probably only two: One is to have extremely strong platform-based lock-in effects, as in the case of Microsoft Windows. For most businesses, switching from Windows to even a free OS like Linux would be so costly that it’s almost impossible to justify. However, this is not the same as having just some superficial network effects, which are often weaker than expected. Facebook is not safe from disruption just because a lot of people have built their friend lists on it. Yes, that is a network effect, but the decline of MySpace shows that it’s not a very strong protection against a better competitor. Building a deep lock-in is very difficult. Apple and Amazon are trying to do this for digital media, but their lock-in doesn’t look nearly as strong as Microsoft’s.

The other strategy is to consistently be at least as good as all competitors and invest heavily into a protective ecosystem. Google is of course the poster child for this strategy. No company so far has come up with a search engine that is really significantly better than Google’s. And through services like Maps, Gmail, Google Docs, Wave etc., Google is slowly building a network of small lock-in effects that collectively can build a pretty strong wall against attackers. The free GPS app is of course part of this plan, because Google wants to play in most parts of the mobile Internet market in order to defend its core business against competitors.

But it is clear from many recent examples that the Internet will change how we think about competition, strategy and business plans. And this change will probably be more fundamental than we think.