I’ve long been thinking of doing a series of postings on both Microsoft’s and the industry’s use of telemetry, and was about ready to start when I realized I’d rather put the cart before the horse. Many have scratched their heads about Windows RT, and in particular its lack of support for third-party “desktop” apps. Ultimately I think Windows RT is the result of heavy reliance on telemetry. Those who have bones to pick with Windows RT will, of course, think of the adage “there are three kinds of lies: lies, damn lies, and statistics,” since it is statistical analysis of telemetry that we’re really talking about. On the other hand, reliance on statistical analysis may explain why the end-user reaction to Windows RT and Windows 8 overall seems much better than that of pundits and power users. It’s hard to be positive about something when you are looking at it from a perspective more than a couple of standard deviations from its design center.
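
The “standard deviations” framing can be made concrete with a quick back-of-envelope calculation. Assuming, purely for illustration, that usage patterns are roughly normally distributed, the share of users sitting more than k standard deviations from the design center is:

```python
import math

def tail_fraction(k: float) -> float:
    """Fraction of a normal population lying more than k standard
    deviations from the mean (two-sided tail)."""
    # P(|Z| > k) = 2 * (1 - Phi(k)), with Phi built from the error function
    return 2 * (1 - 0.5 * (1 + math.erf(k / math.sqrt(2))))

for k in (1, 2, 3):
    print(f"beyond {k} SD: {tail_fraction(k):.1%}")
```

At two standard deviations that is under 5% of users: a small minority overall, but one that plausibly contains most pundits and power users.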

Lots of inputs go into every decision, most importantly data. In virtually all decisions data is in short supply (thus the adage that management is the art of making decisions with incomplete data). One might imagine that if you had perfect data then decision-making would become easy, because the answer would be obvious. In practice you are forced to fuse data from multiple sources, and the more sources, the more potential error is introduced into the analysis. Often those sources are themselves the output of analytical processes rather than raw data, introducing their own errors. Thus you get some of life’s truly head-scratching bad decisions, like New Coke.

But what if you had near perfect data? What if instead of small sample sizes, limited sampling techniques, reliance on anecdotal data, etc. you had a sample of both overwhelming size and accuracy, one that you knew without a doubt represented reality? Could you make better decisions? In particular, could you make better decisions when high risk and complexity were involved? Well, that is something the grand Windows 8 experiment will eventually tell us.

When Windows Phone 7 was being designed the App Platform team would look at the top 100 iPhone apps to make sure that the platform could do a good job of supporting them. Customer usage patterns were thus driving decisions, but it was an indirect data set. The Windows 8 team could look at this same kind of data, but it could also look at the massive amount of telemetry Microsoft collects from those who opt-in to its Customer Experience Improvement Program (CEIP).

Anyone who followed the Building Windows 8 blog could see how seriously Microsoft used the CEIP telemetry in making decisions about the Windows 8 user experience. Dropping the Start Menu in favor of having even desktop-focused users jump into the Start Screen is an example of a decision driven by what telemetry told Microsoft about average user usage patterns. Of course if you are in the minority of users with a usage pattern far from the average, then you aren’t happy about what Microsoft did. And further, just because the data told Microsoft how people actually did things doesn’t mean the resulting design decisions Microsoft made are the right ones. The data didn’t say users wanted a jarring transition out of the desktop whenever they needed to start a new application. Microsoft could have made some design tweaks that stuck with the Start Screen idea but made those transitions less jarring.

It may be a little off topic but let me give you a simple design change that might have made the Windows 8 experience smoother. I don’t think it violates anything one would learn from the telemetry, but rather on a statistical level would make even those out around two standard deviations happier. Provide a snapped view of the Start Screen that could be invoked from the desktop. I’m guessing that half of the negative Windows 8 reviews would switch to neutral or positive.

So let’s go back to Windows RT and the Surface and how telemetry might have figured into the key design decisions there. Recall Netbooks. Five years ago the notion took hold that most computing was moving to the web, and thus all users needed was an inexpensive web browsing device. Moreover it was recognized that these devices would often be secondary devices that complemented rather than replaced existing PCs. Since Netbooks didn’t have to run existing Windows-based apps, Netbook manufacturers initially focused on Linux as the Netbook OS. The rapid unit growth in Netbooks forced Microsoft to pay attention, initially offering a lower-cost Windows XP and then introducing Windows 7 Starter. Despite costing (from my observation) about 15% more than their Linux-based counterparts (both from OS costs and the need for slightly beefier hardware), Windows-based Netbooks eventually captured over 90% of the Netbook market.

Netbook market share growth then topped out, becoming a sizable niche within the PC space, before being hit by a triple whammy. By 2009 Apple’s introduction of, and rapid growth in, the iPhone’s App Store had provided an alternate model to the movement of all apps to simply being web sites and it had decent web browsing capability as well. Then the distinction between Netbooks and “Thin-and-Light” Notebooks blurred as user demand for both focused more on the 11″ screen size. Finally Apple’s introduction of the iPad provided a much better alternative to the Netbook for a device focused on web browsing while at the same time bringing the iPhone’s App Store along for the ride. Netbooks all but disappeared.

Windows 8 design started before the introduction of the iPad. Even as late as the iPad 2 launch many analysts still considered Tablets to be no more than a Netbook-like niche. And until very recently the impact of Tablets on PC sales has come entirely from the shift of the previous Netbook market to Tablets. So from a hard data perspective what Microsoft mostly had to go on in late 2009 and 2010 in designing Windows 8 and Windows RT was the CEIP Telemetry.

Let’s step further away from overall Windows 8 and focus just on Windows RT. It was clear by 2009 that a huge ecosystem was growing around ARM processors. Microsoft had been tracking, and even working on, porting Windows to ARM since early that decade. Deciding to port Windows to ARM was the easy part. Deciding what to do with it is where the telemetry probably came in. And let me be clear this isn’t based on any knowledge of what actually happened, but I’d put money on it being within one standard deviation of the truth.

With the Windows Phone team focused on phones, and ARM processors clearly not being up to the task of (nor having any advantage in) powering full Notebook or Desktop PCs, it was pretty clear that the Windows on ARM design center was the classes of devices between the two. At the time only one class of device had substantial market share between phones and notebooks: the Netbook. Again, the iPad didn’t exist yet, so neither its precise characteristics nor user acceptance of them was known. That tablets with user experience characteristics similar to the iPhone’s would appear could be predicted (especially since Microsoft’s internal push on Natural User Interface, as well as its own Tablet PC experience, suggested as much). So it was pretty obvious that Windows on ARM needed to target Netbooks, touch-enabled Netbooks, and tablets with similar characteristics to Netbooks. How, in the absence of what we know today (disappearance of the classic Netbook and rise of the Tablet), could Microsoft make design decisions? Telemetry.

Why did 90+% of users choose to pay more for a Windows-based Netbook than to go with a Linux-based Netbook? If these devices were simply used for web browsing then the user behavior doesn’t make sense. We can speculate on this of course. Familiarity of the UI, compatibility with devices such as printers, ability to run Windows applications (even though that is counter to the original idea behind Netbooks), etc. As I said, we can speculate. And analysts can survey customers and make their claims. But Microsoft? Microsoft has precise data from the CEIP.

Microsoft could look at data and see how much users printed and what printers they used. Microsoft could see how often they used the USB port and what they did with it. Microsoft could see how often they docked the netbook to make use of larger monitors and better keyboards and mice. Microsoft could see how often they used WiFi, hardwired Ethernet, or 3G. Microsoft could see what percentage of the time they used the web browser and what types of web sites they visited. Microsoft could see what other applications they ran and how much time they spent using them.

And what do you think Microsoft got from the CEIP telemetry? I’m guessing that they saw the vast majority of Netbook usage was for web browsing, with use of Microsoft Office representing a much smaller but still substantial portion. And then I’m guessing they saw a dramatic fall-off with no apps really registering as significant. Netbooks were basically web browsing plus Office machines. Then they looked at the web usage and saw that a great deal of it matched the kinds of “consumption” apps that were popular on the iPhone and that they were going to target with the new Windows 8 “Metro” app model. And they saw heavy use of traditional Windows features like broad peripheral support, network connectivity, etc. Combine the actual usage data on Netbooks with the emergence of Natural User Interface and the re-invigoration of local apps that was demonstrated by the Apple App Store and you have Windows RT.
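
The kind of analysis described above can be sketched with a toy event log. The record shape, application names, and numbers here are invented for illustration; they are not CEIP’s actual schema or data:

```python
from collections import Counter

# Hypothetical per-session telemetry records: (application, minutes of use)
events = [
    ("browser", 95), ("office", 20), ("browser", 60),
    ("photo_editor", 3), ("office", 15), ("browser", 110),
    ("media_player", 5),
]

# Tally total minutes per application
usage = Counter()
for app, minutes in events:
    usage[app] += minutes

# Report each application's share of total usage time, largest first
total = sum(usage.values())
for app, minutes in usage.most_common():
    print(f"{app:>14}: {minutes / total:.0%} of usage time")
```

A steep fall-off after the top one or two entries is exactly the “web browsing plus Office” pattern described above.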

Some have asked why Windows RT doesn’t have the ability to run arbitrary x86 applications via emulation. Well, first, that doesn’t seem all that technically viable. DEC’s Alpha ran x86 apps via emulation, but recall that in any given semiconductor generation the Alpha was faster than the equivalent x86. That allowed it to run emulated apps with reasonable performance. In any given semiconductor generation ARM processors are notably slower than the equivalent x86 (though to date they’ve been more power efficient). So emulating x86 apps on ARM would make most apps unusable. But perhaps more importantly, if data from Netbooks shows that users didn’t run such apps even on a native x86 machine in this class, why would you need to emulate them on ARM?
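
A rough, purely illustrative calculation shows why the arithmetic works against emulation. Both factors below are assumptions for the sake of the sketch, not measured numbers:

```python
# Suppose an ARM core runs native code at roughly half the speed of a
# contemporary x86 core (assumed), and dynamic binary translation of
# x86 code adds a further 3-10x penalty (assumed range).
arm_vs_x86 = 0.5
for emulation_penalty in (3, 10):
    effective = arm_vs_x86 / emulation_penalty
    print(f"penalty {emulation_penalty}x -> app runs at "
          f"{effective:.0%} of native x86 speed")
```

Even at the optimistic end of that range, emulated apps would run at a small fraction of the speed users experienced on even a cheap Netbook.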

Emulation isn’t attractive, but why not have supported third-parties who wanted to port their x86 “desktop” applications to ARM by providing the tools and allowing installation of third-party desktop apps? Well of course there are those issues around power consumption, memory use, security, etc. that the new app model addresses but desktop apps would largely still suffer from. Microsoft could have made third-party developers pay attention to those issues, as Microsoft Office did. One issue is that it would have detracted from efforts to get third-parties to write to the new app model. Moreover, why bother if the data from Netbooks showed that users didn’t actually run those apps on this class of machine?

It is unlikely many users ran Photoshop on Netbooks. If they used Netbooks for photography then they likely used lighter weight apps of the type that were appearing on the iPhone and Microsoft expected would quickly appear in its own Windows Store. As they analyzed the telemetry from Netbooks I think they found this to be the pattern, with the netbook experience proving that there was little actual customer usage of arbitrary desktop applications on a device in this class.

So take a look at Windows RT, or even better the Microsoft Surface, and realize what it is. The Surface is the intersection of Netbook and iPad. It brings exactly what most users liked about Windows on Netbooks into the modern era while dispensing with much of the Windows world that Netbook users simply didn’t take advantage of. It is exactly what users, via their actual usage data extrapolated from the historical Netbook world into the modern device world, told Microsoft they wanted.

Want another possible proof point? Domains. On one hand it seems odd that you can’t connect a Windows RT device to a domain, yet on the other how many Netbooks were Domain-joined? Microsoft may have had many reasons for not including the ability to join a Domain in Windows RT, but whatever those were they could look at the Netbook data and conclude that the ability to join a Domain was not critical to this class of device. Take a look at many of your own questions about Microsoft’s Windows RT decisions and you’ll likely find the answer is in Netbook usage patterns.

The use of Telemetry may explain why Windows 8, Windows RT, and the Surface seem to do better with average users than with the pundits and power users out around and beyond two standard deviations. Windows RT and the Surface are designed around actual usage data from a segment of the computing spectrum that was also derided by many pundits and power users. A segment that garnered (as I recall) about 20% of PC unit volume before being obliterated in the “post-PC” shift. If Microsoft has used its wealth of telemetry to build something that nails the real world usage scenarios that originally made Netbooks popular, while also being roughly as good as the iPad for the scenarios Apple optimized for, then they have a huge winner. Even if pundits and power users don’t seem to like what they’ve done.

And if Windows RT fails? Well it could be the result of pundits and power users convincing the target audience not to give it a chance. Or it could be the result of poor design decisions being made despite having excellent data. Or it could be a series of marketing, sales, and partner missteps that have little to do with the product itself. Or it could be that particularly vicious form of lies known as statistics.

31 Responses to “Is Windows RT the ultimate example of using Telemetry?”

The problem I see is that it’s data about current and past usage. And that doesn’t necessarily reflect the ideal future direction. Nor does it guarantee people won’t willingly embrace something completely different if it holds the promise of providing that future. I don’t know if they got it right or wrong here. I think you’re right that they could have minimized the hate reaction with some minor desktop changes. And I also agree that average users may react much differently than power users, unless sentiment gets too bad, in which case they’ll never try it.

If you look at any technology adoption curve, the first set of customers (say 15%), falling in the innovator/early adopter categories, may have clearly different usage patterns than the rest of the folks. Which means that even if 85% of the data indicated a valid pattern, it could be different from what the first 15% are using the product for. And then they don’t like the product and say bad things, clearly slowing the diffusion of the product to the masses, who just read the negative press even though the product was designed to fit their scenarios well.

And perhaps keeping the Start button (or handling this a little differently), I agree, could have completely eliminated one of the most common sources of gripes and distraction in the Win8 story.

It has been interesting looking at the running commentary by tech bloggers and reviewers on the flaws and issues with the Surface compared to regular consumers. It seems that many reviewers wanted Microsoft to make an Intel-based iPad clone and leave ARM alone. At times it feels like some go out of their way to make sure you know how bad Windows RT is and how unnecessary the Surface is as a product. Compare that to a guy I ran across looking at buying a tablet; when talking about Windows 8, the only tablet he thought was a solid buy was the Surface.

Actually I think the whole difference between tech enthusiasts and regular users can be boiled down to one question I’ve been asked a few times: “Is the tablet like a computer?” To most tech enthusiasts and bloggers, tablets are this unique stand-alone thing. They are not computers and they run restricted programs. To a lot of people tablets = PCs; that’s why you see so many get keyboard docks for their iPad. And that’s what I think many miss; the Surface is made the way many people actually use tablets. Sometimes I think regular users have a better perspective on technology than experts.

I only know one person who bought a netbook. They bought a Linux-based netbook because they are penny pinchers in some areas, including tech. They wanted an appliance they could just use. Unfortunately, the Linux file system and application for navigating the file system on the netbook was just too foreign for someone used to Windows and Windows Explorer. They returned the Linux-based netbook and paid extra for a Windows-based netbook. They still use the netbook today, but are using their Android smartphone with dedicated apps for the tasks that don’t require a real keyboard.
Incidentally, they complained about the missing Windows features on the netbook. When told that they should have purchased a laptop, they said they didn’t want to spend the money for a laptop and didn’t want to carry a laptop around with them due to the size and weight. There is just no pleasing some people.

Great article! Completely agree with Paul above: past performance is no indication of future returns. If you truly want to lead, it’s not enough to know what people used yesterday. It’s about anticipating where they want to go tomorrow. Hence the unlikely success of the original Xbox (social gaming) and the failures of Zune and, to a degree, Windows Phone 7: both were solid products that followed rather than led. Neither provided an earth-shattering innovation or a complete paradigm shift*.

Which begs the question: how is all this design-by-telemetry different from the much-maligned design-by-committee? Isn’t it orders of magnitude worse? Because your committee is effectively your entire audience, and they don’t provide any insight or suggestions, just patterns of arbitrary behavior. Sure, telemetry as part of A/B testing is probably a different thing. Was that something Microsoft did, though?

* With Windows Phone 7, I strongly believe the story would have been vastly different if Mango were what they had shipped in October 2010. But it’s pointless second-guessing history.

Data isn’t the design, it informs the design. Netbook data could tell you the ability to run arbitrary desktop app was unimportant and the ability to print your airline ticket on any popular printer was important. But what you do with that data, including how you extrapolate it from a backward looking view into a potential paradigm shift, hasn’t changed.

Data is one element in forming good decisions. Other elements include insight as to where the world is going; understanding your key advantages and playing to them while addressing new uses; good design judgement; and so on. When your key assets are familiarity and compatibility — and then you blow them up — then be prepared for a lot of pushback. Microsoft made some decisions with Windows 8 that made the transition between the two worlds unnecessarily jarring, and now they are paying the justified price. Eg, start menu. When you’ve built the foundation of the company, Windows, on a notion of making it easy for people to transition from one version to another, and now, instead of bridges, you build cliffs and walls, it’s not surprising there is a lot of controversy and resistance. All of which was unnecessary. Whatever the “data” may say — poor understanding of what people use Windows for or why and what they are looking for from Windows. IMHO :)

Yes. Poor understanding of how to make products successful, what your assets are, and how to respect and leverage your assets.

Win8 is just not very good for desktop PCs, and this will be a huge inhibitor to Win8 adoption. Win8 forces you to go back and forth between very different and incompatible UIs. Who asked for that? Who likes that and feels good about it? People are uncomfortable, don’t understand it, and don’t want to understand it.

And it was unnecessary. They could have chosen an approach that bridges the two worlds better. They could have made the desktop more self-contained, included the Start menu and boot-to-desktop, so that desktop users could live in the desktop, get all the benefits of how Win8 has improved in desktop mode, and then use the Metro environment as they liked. They could have put Metro in a “DOS box.” This way there is very little price to adopt Win8, and then it gets adopted 10x faster than it will be. But now, because of making it unfamiliar and forcing people to learn something new, and because the vast, vast majority don’t want to learn something new (which the Windows team would know if they really understood their users), Win8 will be adopted very slowly, it’s getting a bad reputation, and further, Apple has a once-in-a-generation opportunity to switch users. If you have to learn something new and all your existing apps won’t run (RT), then why not see what else is out there? Indeed that is exactly what’s happening. I read a story this week that fully 1/3 of all existing Windows users plan to switch to Apple (Mac or iPad) rather than go to Win8. It’s far easier to learn MacOS coming from Win7 than it is to learn Win8. They could not be happier in Cupertino, thanks to Win8 and the decisions Microsoft made.

“Win8 is just not very good for desktop PC’s …. Win8 forces you to go back and forth …”

I use Win 8 with 3 large monitors, and rarely even see the Metro UI while I am working at my desk. Win 8 IS very good for desktops. Is it possible that MS has not driven this point home enough? Or could it be that the reviews have driven home, even harder, the idea that it can’t be done? Maybe MS could have made it more obvious to folks that Metro is the tablet interface, and not necessarily to be relied upon for getting a certain level of real work done. It was pretty obvious to me that I was going to be staying in the desktop to get my work done, and that I would be using Metro quite effectively on smaller screens.

I’m going to guess that your comments are from personal experience. I would be curious to know how much trouble you really have after pinning your desktop apps to the taskbar, staying in the desktop environment? Perhaps the telemetry that displayed that people did not use the start menu works for the majority, and not for those that routinely used the start menu to launch apps. And as the inevitable usage of Metro apps on tablets increases, wouldn’t it be better to have it available to the user than not at all?

I definitely would agree that MS took some unnecessary risks in not allowing the user to choose which environment they land in by default. But when you consider that the desktop is a click away…

I used a Surface for a day and gave up in frustration. Of the 6 friends of mine who bought Surfaces, 4 have returned them.

Microsoft could have kept the start menu and allowed people to boot to desktop, and not forced them to the completely different Metro UI to get to Start. They could have made a far more comfortable experience for people who use a desktop and have no need or interest for a touch based UI without Windows.

As a result they are spending the vast majority of their time explaining and on the defensive, rather than being able to promote the virtues of Win8.

I and over 10,000 of my coworkers have upgraded to Windows 8. No I don’t work for Microsoft. I use my laptop and an external monitor. I log on, hit Windows-D to get to the desktop, and about the only time I find myself back in the Metro interface is when I need to do a search. I really like the improved multi-monitor support in Windows 8.

Telemetry would have validated that users use features which are surfaced and visible; i.e., developers are going to have to add their own search box outside the charms sidebar.

Telemetry would have given users what Ford calls the “faster horse”.

Telemetry would not tell the people at Microsoft that reinstalling software is a chore. That this is why I run web applications. Microsoft executed a half-hearted app store for the Windows desktop, without a complete application/sandbox model. It was pulled shortly after.

If marketeers relied on telemetry there’d be no low cost airlines.

The real value of telemetry lies not in usage data, but finding out why people are not using a piece of technology. The answer isn’t that people don’t need to, but a question of pain vs benefit.

But marketeers do (and did) have telemetry that dictated low cost airlines. They could look at the purchasing behavior of consumers and see that price was OVERWHELMINGLY the deciding factor in ticket purchases. Particularly with leisure travelers. They could see that people would forgo comfort, trip length, hours of departure/arrival, number of stops, quality of meals, shorter checkin lines, etc. to save a few (often literally few) bucks.

Why do you think Telemetry would have led to a “faster horse”? You don’t think that the data on how much people listened to the radio, how often they put in a third-party entertainment system, what they did with that system, how often they added aftermarket alarm systems, etc., is precisely what drove automobiles from their Spartan origins to today’s luxury transports? I hate to say it, but I lived through enough of the transition to know that it was extremely data driven.

Telemetry obviously doesn’t stand alone; there are many sources of data and you have to fuse them. Telemetry gives you more accurate information (not just how often windows are opened, but what percentage, at what speed drivers and passengers close the window, at what dB level, etc.) than you can get any other way.

There’s a great deal of research showing that people overwhelmingly prefer whatever is within easy reach. Eye tracking of websites shows the magic triangle where people’s eyes run. Products stacked at eye level enjoy better sales. Headlines placed above the fold get read.

While telemetry data is useful, direct observation offers far more bandwidth. Empathy can be more effective than a wall of numbers.

I have performed my own Windows 8-type telemetry study on automobile usage. I have noticed that drivers and passengers never use the vehicle’s doors during trips. My conclusion is that car doors are an unnecessary complication and expense, and should be eliminated. I also noticed that the driver spends all the time turning the steering wheel while driving, therefore it should be very much bigger. Also, only a tiny percentage of trips end in an accident, so air-bags and seat-belts should be discarded. Welcome to the future of the automobile – the Microsoft way.

The big problem with Windows 8 Metro is that it is only aimed at the peak of the bell curve. The old desktop was not perfect, but it could do anything and everything – it did not exclude anyone and it scaled to manage massive amounts of data and extremely complex scenarios. Metro seems, at present, to be a decision to exclude all power user scenarios from the future of Windows.

How is it excluding all power user scenarios??? I haven’t understood the complaints about Windows 8. Most people (I bet) will likely spend the vast majority of their time happily living in the new Win 8 start screen because it serves their purposes (mail, internet, store apps, etc.)…even on a traditional PC. But the desktop and all its functionality is right there with one click. When working, I spend most of my time in the desktop using programs like Visual Studio and SQL Mgmt Studio. But when not specifically working, I don’t often go to the desktop…it’s just not necessary. The one exception to this is when I visit a website using a version of Flash or Silverlight…but that should become less of an issue as time goes on.

I agree with dafowler. Reviewers are comparing the RT to a device that does not exist. Which is not very useful or helpful. I think real world users are actually much smarter about these devices than most reviewers. Real world users are asking questions like: Compared to what? What is the cost/benefit ratio? Can I view or edit the occasional Office document? Can I read PDFs on it when on the road? Will it keep my stuff safe from iTunes? Will it nickel and dime me over time? Can I exchange information with coworkers or friends via thumb drive when we are offline? If I desperately needed a mouse for something, could I use one? Etc. Real world users understand that the Surface RT is simply a useful device, in a very convenient, portable, light, quiet, cool form factor. It is not an iPad for playing Angry Birds, it is not a PC for writing software (though I did that recently in a restaurant…). Many Surface RT owners probably already own one or more PCs and just need a portable companion device. The only people who seem shocked that it is not either a full PC or an iPad are some of the reviewers.

If you want to do more than play Angry Birds, especially if you are already using a PC at work, then there simply are no other good options out there today. Complaining about stuff that everyone already knows, when there are no alternatives, is unhelpful.

I just hope all these naive reviewers do not end up destroying the market.

That said, I think the fact that Surface RT has a full desktop means there’s an opportunity to create more keyboard / mouse-centric apps (which most users have in a typepad) that might be simpler than traditional desktop apps, but not aimed purely at people using their finger to manipulate things.

I watched a video recently by some of the WinRT engineers which spoke about their desire to bring some of the good things in the WinRT realm (such as XAML for UI) to desktop developers. Why not extend WinRT to be usable for desktop apps? That seems to make more sense than bringing WinRT features down to the parallel stack that is traditional Windows development. Better to extend WinRT into the desktop…which would have the advantage that those apps would be available via the Windows Store.

An advantage of the Surface RT is that you DO have a full desktop. Since Surface is more touch-oriented, it’s fine that it doesn’t have Photoshop. But a Photoshop-lite, which IS better with a mouse and keyboard? Seems like just another activation state.

“That said, I think the fact that Surface RT has a full desktop means there’s an opportunity to create more keyboard / mouse-centric apps (which most users have in a typepad) that might be simpler than traditional desktop apps, but not aimed purely at people using their finger to manipulate things.”
Only Microsoft knows whether the Surface RT has a full desktop and they aren’t letting anyone else have access to it.

“I watched a video recently by some of the WinRT engineers which spoke about their desire to bring some of the good things in the WinRT realm (such as XAML for UI) to desktop developers.”
XAML has been available to desktop developers for several years in WPF and Silverlight. Unfortunately, the future of those products is now very much in doubt.

Bob – yes, SL is in doubt, although I think for the wrong reasons…everyone said SL is dead, and as a customer-facing browser app it is. But SL became just as much a good LOB technology choice as WPF and WinForms in the recent years – because it is essentially a .NET app easily delivered through the browser. But too much negativity about its restricted use as a UI for customer facing websites makes CIOs (mine, at least) not want to touch it for any reason.

But I don’t know that you can state that WPF is “very much” in doubt, because it is essentially XAML/.NET. And XAML is front and center in Win 8. It is one of the three UI technologies available (XAML, HTML5, DirectX). Microsoft spent a lot of time touting HTML5/JS as an option, and I don’t blame them – it is a new option that could attract a lot of JavaScript/web developers to the world of Windows and increase their dev base. But even if WPF as a brand doesn’t live long, XAML almost certainly will. Unless, of course, developers abandon it in favor of HTML5/JS…and I don’t see that happening for the scores who use .NET for creating LOB apps and love its productivity benefits.

This is an example of how a decision can be well thought out but totally miss the mark. From the end-user perspective, Windows RT does not provide compelling reasons to switch from the iPad. Even a perfectly executed Windows RT product is an iPad without the apps. A Wintel tablet provides the possibility of carrying only one device when traveling; you don’t need to bring both a laptop and an iPad. The reason Apple is so successful is because Steve Jobs did not think this way. What he had was common sense, and he saw everything from the end-user experience perspective.

I remember having read somewhere on the Windows blog that the decision to take away the Start menu was made because 95% of users almost never used it, and that was directly derived from the CEIP telemetry. When I read that, my first look was at my taskbar, and it actually made sense to me: I’m almost part of the 95%. On the rare occasions I open it, it is mostly to type the first letters of an application I would not find with my mouse in my long list of applications… I can still do that on Windows 8, and I pinned my key applications to the taskbar anyway. I added back the key shortcuts I had in Windows 7 on the desktop, and I’m done, at home. I don’t feel that much change… I only discovered Metro a few days later, looking at a few videos, and there I understood it was a bit more than a gadget for some uses… These days, on my laptop, I’ve started to use snap view with Skype, IM, or Music with the desktop displayed on the side… And once you start getting used to it, you are not going to go back.
Somehow, I’m quite sure you are right: the key decisions they took are directly derived from CEIP data. It would probably be interesting if they communicated more about those, as when you get the rationale for a decision, it is much easier to make sense of it.

I strongly suspect that it said people are bewildered and having trouble with discoverability. I suspect they said that rather than being Janus-like, Win8 was more Frankenstein.

The goals of Win8 are laudable. “Best of both worlds.” But the devil is in the details.

Kind of like OS/2 — “Better Windows than Windows.” Except it wasn’t. It wasn’t better than Win95 for what Win95 did well, and wasn’t better than Win NT for what Win NT did well. And it failed.

The danger with Win8 is that it’s not better than Win7 for what Win7 does well and not better than iPad for what it does well.

Perhaps by version 3 it will be a good product, but as shipped there are too many gotchas, incompatibilities (with apps and machines and devices), and difficult transitions. I predict Win 9 will be to Win 8 what Win 7 was to Vista.

My question through this whole article was whether or not the telemetry data received was a representative sample. It seems like Microsoft and everybody else assumes that it is. Is that a true assumption, though? Many people I know would opt out of the CEIP. These people weren’t only technical people, but of the technical people that I know, most of them opted out of the CEIP and encouraged their friends and relatives to do so as well. I wonder if the technical folks shot themselves in the foot by not allowing Microsoft to sample their usage patterns…

I don’t know what the IT justification was. My employer is one of the larger consulting firms that specializes in Microsoft technologies, so the IT justification might have been a minor consideration as far as the big picture was concerned.