Posted by Soulskill on Friday March 09, 2012 @06:35PM from the features-fighting-features dept.

MojoKid writes "Metro, Microsoft's new UI, is bold; a dramatic departure from anything the company has previously done in the desktop/laptop space, and absolutely great. It's tangible proof that Redmond really can design and build its own unique products and experiences. However, the transition to Metro's Start menu is jarring for some desktop users, and worse yet, Desktop mode and Metro don't mesh well at all. The best strategy Microsoft could take would be to introduce users to Metro via its included apps and through tablets, while prominently offering the option to maintain the Desktop environment. Power users who choose to use the classic UI for desktops and laptops can still be exposed to Metro via tablets and applications without being forced to wade through it on their way to do something important."

All these form factors tie in with the vast Win32 ecosystem (except on ARM tablets) and with a single touch-first Metro ecosystem.

It's interesting how the comments on Apple/iPad/post-PC articles, the financials of Apple/Dell/HP, etc. state that "MS is dying in the post-PC era," but now, when they come out with a solution to make one OS run on different form factors and to have tablets that are not just consumption devices, the comments here are skewed towards "Why change something that works?" If PCs are really dying, why not attempt to fix that instead of standing by with their heads in the sand (like RIMM)?

There will always be people unhappy with anything you build or change. A company should just go with its vision of what it thinks is right, and that's what Microsoft did. They envision that with Windows 8, demand will make most new monitors touch-enabled, so that for some functions (like clicking on links) people can use touch.

You may disagree with the vision, but you can't disagree that there is a method behind the madness.

Let's see; I work on two 22 inch monitors. I can move from the far left edge to the far right edge with a three inch movement of my mouse. Now you want me to have to lean toward the monitors and move my arm over three feet to accomplish the same thing. How ergonomic! How NEW! How efficient!

I can move from the far left edge to the far right edge with a three inch movement of my mouse. Now you want me to have to lean toward the monitors and move my arm over three feet to accomplish the same thing.

Or, you know, you could just use the mouse in Metro.

It's actually designed to operate slightly differently on a device with a mouse than on touch-only ones like tablets. For example, in touch-only mode, most gestures are done by swiping in from beyond the edge of the screen, PlayBook-style. With a mouse, the same gestures are available by moving the cursor into one of the screen corners.

Then don't use touch? Remove all the Metro apps from the Start screen and pin only your desktop apps and you'll end up with something like Windows 7 with a glorified start menu.

That's the problem, though. Sure, you can reconfigure it to be like Windows 7... but WTF? If that's better, then why are they wasting everybody's time developing something that serves only to make everybody turn it back off?

Which is the same problem with the dichotomy between tablets and desktops. There is a reason that iOS is not Mac OS and Android is not Ubuntu or Mint or ChromeOS. What Microsoft is obviously trying to do is get everyone on the desktop used to its tablet UI so that it can sell tablets people are already familiar with. But that's total fail, because having a tablet UI on a desktop is crap. And if everybody changes it back right away, then people never become familiar with it and instead associate it with failure (on top of the failure of not running legacy apps on ARM), so the tablets get associated with failure and nobody buys them. Or, as is far more likely, nobody buys Windows 8 to begin with -- every business I'm aware of is planning to stick with Windows 7 indefinitely.

You obviously haven't used Ubuntu since they changed to the crappy new Unity UI, which is basically a touchscreen UI converted for use on a desktop. Their eventual goal is to have Unity on desktops, tablets, and phones alike. Of course, most Linux desktop users are rebelling and switching to Mint or other distros because of this.

You obviously haven't used Ubuntu since they changed to the crappy new Unity UI, which is basically a touchscreen UI converted for use on a desktop. Their eventual goal is to have Unity on desktops, tablets, and phones alike. Of course, most Linux desktop users are rebelling and switching to Mint or other distros because of this.

It took me maybe six months to get around to installing a version of Ubuntu with Unity, and all that time I heard nothing but bitching about it on /. Now that I have it installed, I don't really mind it. It is a little "Fisher-Price-ified," but it is nowhere near as bad as Metro.

In fact, though I may be a square for saying so, I really have no problem with Unity. The desktop is still visually pleasing and the Unity UI doesn't get in my way. It still feels a little bit awkward for me, because I don't really use a Linux desktop for my day-to-day work, so I haven't had much time to get used to it. But the important thing is that I feel like that awkwardness is my issue. I don't feel that way with Metro.

It probably feels awkward because it's not a very good UI, no matter how nice it may look at first glance. Traditional desktop environments have had 30 years to mature to the state you see in systems like OS X, Win7, KDE, and XFCE, and they work well on systems with big monitors, keyboards, and mice. If you don't feel "awkward" with one of those traditional UIs, but you do feel awkward with a new UI, it's not you; the UI is the problem. The name says it all: "Unity." They're trying to merge desktop and mobile touchscreen UIs into one, and it's a bad idea that won't work. Waving your arms around like in Minority Report simply isn't as efficient as moving a mouse, just like using a mouse on a cellphone would be an ergonomic nightmare.

Waving your arms around like in Minority Report simply isn't as efficient as moving a mouse

Er, then use the mouse. You can use it to move windows! Which you can't do on a phone/tablet, partly because of screen size, and partly because finger-dragging them would suck. Therefore, it's not a tablet UI. I think the source of confusion is that people saw some big icons, went "WHAAAA!! Where's the 1990s taskbar gone?!," and couldn't be arsed to try hitting the Windows key. Oh look, there's my "window list."

You're missing my point. The end goal of all these radical UI changes is to force everyone to use a touchscreen interface on their desktop PCs. These people aren't going to stop with having a touchscreen-esque UI on desktop PCs, they want to have a single UI for all devices. Where do you think they got the name "Unity"?

BTW, what's a "Windows" key? I don't see one of those on my IBM Model M keyboard. And besides, if you have to press some special key to see which applications are running on your PC, then there's something seriously wrong with the UI. A taskbar makes perfect sense here, as you can see all the things running without any special actions. WTF is the point of hiding that? To save some screen real estate? Why? I have two 24" monitors; screen real estate is not in short supply here. On a tablet or a phone, then yes screen real estate is in short supply, which is exactly why those devices need different UIs.

You can even hide the taskbar on all these UIs if you want the damn space back. I have a 1920x1080 screen on this laptop and leave the taskbar open and two rows tall, since I have a zillion terminals and copies of Evince running. On my netbook it's one row tall and I hide it sometimes. On a phone I'd probably want to hide it all the time, except when I specifically want it.

The taskbar is the most useful thing a UI can do. Don't muck it up by absorbing the action "launch a copy of X" into it (I do that far less often than I switch windows). Don't make me hit modifier keys to get it unless I want to.

Please hand your Ctrl, Alt, and SysRq keys in at the door. Clearly they are the product of bad UI design, and you should not have the ability to shortcut anything. Oh, that's right: running a separate program like top or ps is what people should be doing to see what is running.

Sarcasm aside, I could not disagree with you more. If anything we need MORE keys and more keyboard shortcuts to streamline common tasks. You dislike the Windows key, but when I start a program I hit the Windows key (that's Ctrl+Esc for the handicapped), type the first few letters of the program, and hit Enter. This is the pinnacle of UI design. It's fast, straightforward, and lets you keep your hands on the keyboard. In fact it's just as efficient as typing the first few letters of a program at the command line, hitting Tab, then hitting Enter.
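The launch flow described above (hit a key, type a prefix, Enter) boils down to simple prefix matching over the names of installed programs, which is also what shell tab completion consults. A minimal illustrative sketch (the program list here is made up for the example):

```python
def match_programs(prefix, programs):
    """Return the programs whose names start with the typed prefix,
    case-insensitively -- the core of both the Win-key launch flow
    and shell tab completion."""
    prefix = prefix.lower()
    return sorted(p for p in programs if p.lower().startswith(prefix))

# Hypothetical installed-program list, for illustration only.
programs = ["Firefox", "FileZilla", "Notepad", "Word"]
print(match_programs("fi", programs))  # -> ['FileZilla', 'Firefox']
```

With one unambiguous match the launcher can start the program immediately; with several, it shows the candidates, just as a shell does on a second Tab press.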

You're missing my point. The end goal of all these radical UI changes is to force everyone to use a touchscreen interface on their desktop PCs.

No it isn't, and never has been. Citation, or it's bollocks.

BTW, what's a "Windows" key? I don't see one of those on my IBM Model M keyboard. And besides, if you have to press some special key to see which applications are running on your PC, then there's something seriously wrong with the UI

But to do so you have to go AROUND the lousy Metro first to get to the desktop. Let's be honest, okay? It's a smartphone UI. YOU know this, I know this, hell, anybody with eyes knows this. The entire UI is designed for a small-screened smartphone running one app at a time, max. The reason MSFT is pushing Metro at all is that it's getting the living shit stomped out of it in the mobile space, and it knows that x86 desktops simply will never have the growth rate of smartphones, because people don't throw their desktops away every two years when a contract is up, which they do with their smartphones.

In the end THERE IS A REASON why Apple isn't running iOS on the new MacBooks, and that's because that UI just doesn't work in that form factor. For years MSFT tried to force the Windows desktop metaphor onto smartphones; remember WinMo, with the itty-bitty Start button? Well, after years of that being a mega fail, someone says "Hey... what if we reversed that? You know, like Willy Wonka? We'll put the smartphone ON THE DESKTOP, and then when people are used to using WinPhone because they get stuck with it on every new Dell... why, they'll buy the smartphone because it's familiar. It's bloody brilliant!" Only it's NOT brilliant; it's a failwhale. I'm sure several people in MSFT have tried to point out that it's a fail, and probably got fired or told to STFU, because Ballmer wants to be the head of Apple so bad it hurts.

Mark my words, this will join Zune, Kin, WinMo, and WinPhone on the giant fail heap that is MSFT's mobile efforts, proving once again they just don't get it. Personally I'll be glad when they just accept that they are the new IBM and save any innovation for the Xbox.

My fundamental problem with Unity and all of the hellspawn UIs that copy OS X is that they conflate the two concepts of "make window XYZ the active one, and maximize it if possible" and "launch an instance of program XYZ." These tasks really have nothing to do with each other, other than both resulting, at the end, in program XYZ being on top.

The most common window-manager task I do is switching from one window to another. GNOME, very sensibly, gives me a taskbar with all of the things running; I can click on the one I want and see at a glance what I've got open and how many instances of each. This is important -- the main fucking thing I want my window manager to do for me is fucking manage my windows.

I don't need a giant list of all the programs on my computer lying around on my screen waiting for me to click them, especially if it takes away from my ability to do the above. On the rare instance when I want to launch a new program, I'll fish around in a menu for it, or hit some magic keystroke (like alt-f2) and type its name. If I really want a big list of my common programs, I'll hit Ctrl-Alt-D and pick one from the icons that I've put on my desktop for that purpose. Or I'll even program a hotkey for it (ctrl-alt-T shits out a new terminal window on my system).

This is even worse for Unix folks, half of whose programs tend to be terminals. How people use OSX for scientific computing is beyond me. Just show me my fucking taskbar and get everything else that I didn't ask for out of my way.

Terminal in OS X supports tabs. I don't have to use the dock that often to switch terminals. I'm in a terminal constantly and it doesn't get in the way. I'll agree that iOS is not a good desktop UI, but OS X has not (yet) turned into that. Apple hasn't made the Microsoft mistake yet.

I think the real problem is that many people who design user interfaces have given up on desktops prematurely. There will always be a need for a real desktop for development and several other tasks. It might be the minority use case, but it has a place. If you think about it for a minute, it's obvious why all these UI idiots are champing at the bit to reinvent the wheel: it's something new. They can actually do something significant that might get them fame or credibility if it's a success. It's the biggest opportunity since the graphical user interface. Of course, the GUI never replaced everything, and we still have terminals. When the tablet people realize that the traditional desktop isn't going away, maybe they'll get better at accommodating people like me, who still want a desktop GUI and a terminal along with touch interfaces in the places where touch makes sense.

I don't need a giant list of all the programs on my computer lying around on my screen waiting for me to click them, especially if it takes away from my ability to do the above.

I... don't think that's mandatory. I'm pretty oblivious to all this Unity nonsense since I generally stick to the LTS releases (we'll see whether I decide to switch to Mint after 12.04 is final), but assuming it works anything like the Dock in OS X, you get the behavior you're asking for by just taking all the applications out of the dock. Then you open one and it shows up there, just like the taskbar.

But the default behavior is useful because you can put the half dozen programs you keep running 99.9% of the time there, which makes it easier for you to open them if you ever reboot your computer (like if the power goes off).

That said, I get what you're saying. It seems to me that half of Linux users are software developers and Canonical has decided they want the other half. The way you design a UI for a programmer is totally different than how you design a UI for your standard issue Farmville customer. Which has actually been one of the problems with Linux previously: The UIs have all been designed to work well for software developers, not so much for others. So I can appreciate what Canonical is trying to do here: They're trying to make something that could convert less computer savvy users to Linux. But now there are a lot of people who don't understand that they aren't the target market anymore, who don't like the new UI because it wasn't designed for them.

WTF? If that's better then why are they wasting everybody's time developing something that serves only to make everybody turn it back off?

Money. Microsoft probably doesn't give a crap what people actually want. Imagine you take all Windows users today, and push them into an environment where the primary access to everything is through an app store model you control. Now think about how Apple makes money on everything that happens within their iPad/iPhone devices, etc.

I don't think Microsoft wants you to use the desktop anymore. It probably doesn't matter how much you bitch about it; there is too much money on the table.

To the poster on Ubuntu and Unity: initially I was very much against it as well, but I have become very used to it. Though Unity != Metro... Unity is a search-based mechanism to find your app, which can then be pinned to the toolbar. Once you grasp that idea it is actually pretty clever. I really like it now.

Now, with respect to the tiles: I find them an absolute waste of time. My problem is this: tiles are there to show you live information from the app. So far so good. But the space available, and hence the information that can be shown, is tiny. Tiles work well on a phone because the information is targeted. But on the desktop I want more general information. Their example is a stock price. Sounds good, but as I trade the market I don't have just one stock, I have 250. How on earth will that be displayed? It will be a mess.

I think it is a failing on Microsoft's part. But that is to be expected; after all, Sinofsky is in charge, and he wants Windows everywhere. Remember, this goes back to Windows on the smartphone. The irony is that the Windows smartphone predated the iPhone in terms of functionality. What the iPhone did, and excelled at, is let you use a finger instead of a stylus. Now Microsoft is taking the opposite approach, but making the same mistake: everything is touch! Wankers! They have not grasped the basic point you speak of: a tablet != a smartphone != a desktop computer != a notebook.

Win8 stands a good chance of being the next Vista, but by the time Win9 rolls around, I imagine that most of the important (i.e., most-used) programs will have added Metro support.

They have to be careful that doesn't lead down the road to perdition. The trouble with expecting good results from developers redesigning for Metro is that if you give developers the impression they're making an app that will run on tablets, they're going to realize they ought to port it to iOS and Android, since those are the #1 and #2 tablet platforms. At which point you've got "the most important (i.e., most-used) programs" running on non-Windows operating systems. And then who cares about Win9 if I can get those same programs somewhere else?

Agreed; there is too much hot air from people who expect it to be similar to the transition from Windows 95 to Windows 98. It's not; it's a complete rethinking, like going from Windows 3.11 (for Workgroups!) to Windows 95.

Summary says:

The best strategy Microsoft could take would be to introduce users to Metro via its included apps and through tablets, while prominently offering the option to maintain the Desktop environment. Power users who choose to use the classic UI for desktops and laptops can still be exposed to Metro via tablets and applications without being forced to wade through it on their way to do something important.

That is exactly the strategy behind making the iPad a consumption-only device, and it will exclude many nice form factors like the Transformer, the Samsung Slate, or the ones where you just take a tablet with you and get a full-powered device when you attach a keyboard/mouse to it. Or when you ta

Disabling Metro on the desktop will lower the demand for touch monitors as well.

You've missed the point. Why do Microsoft believe that people want or need touch monitors? Why do Microsoft believe large-dimension touch interfaces are a better interface than a mouse?

They're just giving people the choice. Remember, a billion people use Windows. A significant percentage of them might want to use touch monitors. The rest can ignore that and move on. Did they remove the mouse support in Windows and I didn't get the memo?

Unlikely. The full-sized touch monitor has always been relegated to niche uses. For example, we see them in point-of-sale, ATM, and various vending or other kiosk-like setups. All of these devices, regardless of their internal components, are configured to run a single specialized application from which the user cannot, at least by design, deviate. Furthermore, these devices are almost always encountered in public places and are used by many different users for specific and time-limited operations. Compare that with more typical home or work use patterns, where sessions are longer and the keyboard is generally kept within easy reach of the fingertips, with the mouse in close proximity. This is an efficient setup for general computing use, whereas reaching across the desk to touch the screen repeatedly is not. Touch works in the hand-held and portable format because the use cases and ergonomics are almost completely different from those of the more traditional desktop. Will some people want to use touch screens as their desktop display(s)? Perhaps. Will those people represent a majority or even just a substantial minority of users? Almost certainly not.

The rest can ignore that and move on.

Based upon the reviews of the preview release, it's not that simple. The interface is designed to emphasize Metro, imposing itself at the expense of the traditional desktop and forcing users to wrestle with it in order to get their work done. This is particularly irksome in desktop usage scenarios, because few people would prefer a touch-based interface showing a single full-screen app at a time over the more traditional windowing system common in modern desktop operating systems. Indeed, the windowing systems now present in Windows 7, OS X, and the various Linux distros represent decades of accumulated experience and feedback from professional, business, scientific, and home users. A radical departure from this well-defined and honed interface, à la Metro, is the height of hubris and foolishness. The traditional desktop users, who are still Microsoft's bread and butter, will punish them severely for missteps or other nonsense, as they did with Vista (which was itself a less radical departure than Metro).

Microsoft would be well advised to tread cautiously here. It's alright to pursue new markets with new concepts. However, this must NOT be done at the expense of existing users, especially those with professional needs. If the new concepts have merit, they will stand on their own without forced attempts to get people into using them. Finally, to anyone from Microsoft reading this: Heed the warnings of the Win 8 / Metro reviewers and don't ignore them. Remember the lessons of Vista: users will NOT accept software that gets in their way and doesn't work how they want to work, no matter how innovative or cool you think it is. The desktop and mobile touch worlds are DIFFERENT and ought to be treated as such. Don't screw this up.

Did they remove the mouse support in Windows and I didn't get the memo?

They might as well have, with the new gestures. They're really quite fiddly, and I'm a veteran FPS player. Swipe into the hot corner - and it's a pretty narrow hotspot; you've basically got to 'overshoot' to hit it - then swipe up or down in a straight line toward the middle to bring up either the running-apps sidebar or the charms sidebar. Slide too far off line? Disappears. Not far enough, or too far? Disappears. Move off the 'Start' hot corner by a few pixels, to try to click the Metro popup that appears? It disappears, and you end up launching the far-left icon on the taskbar instead. It usually takes me two or three goes to bring up the charms bar, and I've been testing the CP since it came out, and the DP before that.

Also - have two displays? Run in a virtual machine or an RDP session in a window on another host? Now you can't 'overrun' into the corner; you have to hit it absolutely precisely and stay there, and trying to swipe down while staying in the narrow accepted line is ridiculously hard. I'm familiar with Windows from 3.1 up to current, OS X and its predecessors, KDE 2, 3, and 4 for years, GNOME for the last few, now Unity, plus CDE and XFCE and BeOS and god knows how many other UIs that have been and gone. None of them have made me want to throw my mouse through the screen. But Windows 8? God damn, it's awful.

A couple of pop quizzes: how do you shut down? Not in the Metro window - no icons, shortcuts, or squares. It's under the charms bar, then Settings, then there's a little power icon at the bottom. Log off? Ctrl-Alt-Del, or go to Metro and click your name picture. While we're on charms/Settings: half the stuff you need has moved there into a new arrangement, and half of it hasn't. Finding which bits are still under Control Panel and which are under Metro is basically guesswork, especially as some app stuff is not under charms; it's right-click on a blank area to get a new options bar at the bottom. In the Metro mail program, you have a little bit at the bottom of the accounts sidebar to add a new account, with a close button. Click that close button, and there's no way to bring it back. If you right-click and then click Accounts, it brings up the accounts sidebar, but not the button. You now need to go into charms and do it via Settings - but only when you're in the Mail app full screen; there's no other way to get to it.

It is a mess, it's completely illogical, and it feels like you've got the old and new interfaces half-bodged together, glued with gestures that don't make sense on a PC, especially if it's not a full-single-screen PC. Dual screen isn't that uncommon - all our teaching classrooms are set up as dual-screen, with one monitor on the desk and the second being the projector. Teachers drag windows to whichever one they want to display on, so they can put something up for the pupils while keeping a private desktop for reference while they're at the board or desk. Doing precise mouse gestures at the edge of the screen without wavering, possibly while standing and leaning over the desk? It's ludicrous. I cannot possibly see deploying Windows 8 anywhere on our network to replace 7. I'd get lynched.

I'm not even going to start on the insanity of the same interface, with its tricky gestures, on VM-hosted or RDP-managed Server 8 boxes; and while the remote admin tools from a client box work for, say, AD operations and file management, they don't work for third-party apps that use a local management app on a server, of which we have several.

And no, you can't turn it off. The registry-hack and file-replacement methods have been removed in the current versions. Now you need to fake it with something like Stardock's software, but there's nothing native to revert to Windows 7 / 2008 R2 behaviour.

Metro in and of itself is OK, if a little sparse; I'd actually quite like it as an OS X Dashboard equivalent, available on a hotkey/gesture for an overview of various live tiles. I could even live with it as a Start menu replacement if it handled a

Dude, a 17-inch touch monitor is $300 and a 24-inch non-touch is $130; do you honestly think MSFT is gonna magically change those figures? Last time I was looking around, I read that adding touch to a laptop added around $60-$80 to the cost depending on the screen size. Do you think the OEMs are gonna eat that cost? Or that users are gonna turn down a more full-featured laptop with better specs for one with a touchscreen?
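For what it's worth, the premium implied by the figures quoted above is easy to work out (the prices are the poster's claims, not verified, and the smaller touch panel makes the gap even starker):

```python
touch_17in = 300     # quoted price of a 17" touch monitor, USD
nontouch_24in = 130  # quoted price of a 24" non-touch monitor, USD

# Absolute premium and the percentage markup over the non-touch panel.
premium = touch_17in - nontouch_24in
pct_more = round(premium / nontouch_24in * 100)
print(premium, pct_more)  # -> 170 131
```

So by these numbers the touch monitor costs roughly 2.3x as much while offering 7 fewer diagonal inches, which is the poster's point about OEM economics.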

Let's call a spade a spade: this is a Hail Mary by MSFT, who has had such a woody to be Apple since Ballmer took over it ain't even funny, but it's not gonna work. They have already missed the boat; both Google and Apple completely pimp-smack them in mobile, and Win 8 won't change that. The way I see it they have TWO choices: 1) accept that they are the new IBM, and that their place is desktops/laptops and Office, along with gaming; or 2) spin off the mobile division so it can live or die on its own terms without having to drag along the desktop.

They are gonna have to do one or the other, because the market has so little confidence in Ballmer that when MSFT had its best quarter ever a little while back, their stock didn't even twitch. That tells me nobody believes Ballmer has got what it takes, and this giant failwhale, coming so soon after Vista failed, is gonna be all the proof the market needs. Spin off mobile, put the Office guys in charge of the desktop, and continue to support gaming and the living room by making Windows and Xbox work together seamlessly. This is just a giant disaster.

You're missing the point. The iPad's UI is significantly different to desktop Mac OS precisely because Apple managed to realise one simple thing: the traditional desktop-metaphor UI doesn't work when you hold the device in your hand and touch it.

The reason past tablets weren't successful was that they tried to cram a desktop UI onto a tablet.

Microsoft are making the exact same mistake again, only this time in reverse. They're trying to cram a perfectly acceptable tablet UI onto a desktop platform. Worse, they're doing it in a way that only half deprecates the old way of doing things. The result is that they have a tablet UI that doesn't work well because you're not using a tablet; and that when you actually try to do anything, you immediately get shifted into a different UI paradigm, because the apps haven't all been recoded.

It's a complete UI disaster, and it perfectly sums up Microsoft: copy the trend, do something they claim is new, and don't update anything at all to integrate well with it. The result: a kludge.

Can't agree more. This new UI has to be the most unintuitive GUI I've used on a desktop. Although I'm sure it's fine on a touch screen, it was painful to use with a mouse; it took me 20 minutes just to find common items, and a few minutes to find the login options.

This from a geek. I can't imagine what my folks would do with this, other than turn ape-like, beat on the screen, and make lots of jarring screeches in frustration.

Maybe the entire problem is that MS is introducing something radically new when the current OS is still new. It feels like just change for the sake of change. They really should have a Windows Tablet versus Windows for Real People and keep them separated. Integrating them is silly, and will result in silly things like people wanting to get touch monitors.

The big fallacy people seem to commit on this topic is that newer = better. This is not always the case. It's great that a new class of device has revolutionized certain tasks, but forcing its interface onto the existing systems that cover the rather large remaining use cases REDUCES choice for those who need or prefer those systems. This is being done entirely for marketing reasons. There is no benefit to the consumer at all.

Have a touch-tuned environment for touch devices, and a traditional desktop environment for desktops. Make it an install and/or Control Panel option, as it is something you'd know up front you want (you are installing on either a tablet or a full PC). What MS is doing is forcing touch environments onto traditional desktops by burying the traditional desktop as a Metro app, forcing you to go back to Metro to select new applications. Making application launch, task switching, and window management visually disruptive moves might make sense on a phone, but not on a desktop with a 23" monitor. Tacking mouse support onto this 'fullscreen Start menu' doesn't help its case either.

Finally, I don't consider giving touch platforms 'a boost' at the expense of desktops a good thing. They should both sink or swim on their own merits, as they serve different needs as surely as a Hummer H1 and a McLaren F1 do. (There's your car analogy!)

I travel a lot, all over Europe, North America and Asia, and I've come to realize that tablets are basically a myth. While there is a lot of hype around them, and many have been sold, almost nobody actually uses them!

During my travels, I see people using cell phones. I see people using smart phones. I see people using laptops. I see people using netbooks. I see people using desktops. But it's extremely rare to see anyone using tablets. I see literally thousands of other people using smart phones for every tablet user I see.

I visit all sorts of environments, from huge corporate offices, to parks, to restaurants, to planes, to universities and colleges, to airports, to train stations, to city squares, to government offices, to subways, to cafes, to so many other places. Given the amount of traveling I do and the huge number of people I see in any given day, and given how much we hear about tablets, I should be constantly seeing people use tablets. But I just don't.

I think that they're the kind of device that somebody buys because of the marketing hype or because they sound like they might be useful, but then in practice they turn out to be feeble and impractical. Then they sit there on a bookshelf or table top, completely unused, until they're all but forgotten about.

I'm sure a bunch of people are going to reply to this saying how they find tablets useful in some very niche situation, but these are indeed very niche cases. The widespread usage of tablets just isn't there, like it is with smart phones or even netbooks. The popularity of tablets is a marketing myth, I suspect, rather than a reality.

About the only places I've seen tablets are on trains. Even then, they're massively outnumbered by laptops and phones, but I do see a few. I actually own a tablet, and the only thing I use it for is watching films when I'm on a long trip - it can manage about 7 hours of video playback, which is more than enough for most journeys. With power sockets being common in trains now, there's less of a need, and my laptop has the nice advantage that I don't need to prop it up - the screen comes with a convenient

I pretty much only see people with e-book readers on planes, not "tablets" per se. There's a big difference between the two, even though it's quite possible to make an e-book reader work like a tablet; the use cases are quite different. Ebook readers are great for reading (esp. with the e-ink screens), but that's about it.

Actually, not a single niche: many, many, many niches. The same is true of computers: they suck if you ask "hey, what's the killer app?", but are great if you realise that they're useful for hundreds of little things.

Several of my friends are now considering them, simply because each time they're round the phrase "could you pass the iPad over" gets uttered a couple of times. Be it to check wikipedia, look up some random cat video someone mentioned, display the rules of the game we're playing,...

All of these individually are trivialities... but they add up to a really fucking useful device.

I travel a lot, all over Europe, North America and Asia, and I've come to realize that tablets are basically a myth. While there is a lot of hype around them, and many have been sold, almost nobody actually uses them!

Maybe that's because most tablets suck for getting any work done, and thanks to the corporate hostile takeover of all of our lives, people have to work a lot more than they used to. Thus, the tablet stays at home and the phone or the laptop comes out on the train, at lunch, etc etc.

If there were reasonably-priced tablets that could be used to do actual work, you'd see a lot more of them. On a plane, train, or bus, standard laptops can be very clumsy; you end up needing quite a bit of space to use them.

I watched the video of the Lenovo Yoga, and while there are things about it that don't look so great, it's a start toward a tablet-style form factor that can actually be used to accomplish something besides consumption. It's a step in the right direction: functional tablets that provide keyboards for the few billion people who would rather use keyboard input than anything else.

I'm sorry, but Siri-style voice commands are not going to be anything but a novelty until they can be used with sub-vocal sounds. I really don't want to be anywhere near a plane or train where everybody is talking to their computer. I can type faster than I can talk to Siri, anyway.

Now, I won't buy Windows 8 because I won't give Microsoft money (for reasons that don't have anything to do with the quality, or lack thereof, of their products; it's political). But I'm glad to see that somebody is thinking about computer interfaces that can be used to make something, not just buy something. Apple doesn't seem to be doing it; they're apparently too busy being the richest company in the world by making consumption-only products. That's not a knock on them. I just need tools more than I need the home shopping network on steroids.

Travel more. Tablets are all over Korea. I see tons every day: coffee shops, subway, bus, etc. Yes, lots of smart phones, but it's easier to whip out a smart phone than to pull a tablet out of your bag. Just because someone has a smart phone in their hand doesn't mean they don't have a tablet in their bag that they're going to use when they get somewhere more comfortable.

Having recently finished backpacking across Europe and South America for 6 months, my experience of wifi users in hostels is:

(a) iPads are more commonly owned by US travellers than other nationalities. More hipsters?
(b) It's rare to see Android tablets.
(c) Netbooks are more common than full-size laptops by a factor of 3:1 (portability).
(d) Netbooks outnumber iPads by 10:1.
(e) Smartphone users check a couple of things but jump on a full-sized desktop whenever a machine becomes free.

So netbooks are still popular with the traveller. Keyboards haven't gone the way of the dinosaur for those who want to type lengthy messages to folks back home. That netbooks predate the iPad craze, and the cost of buying a new machine, are also factors, obviously.

The future, for Apple competitors, is to reinvent the netbook as Asus have done with the Transformer. Tablet AND lightweight laptop in one device. Stick Win 8 on these things and MS have a touchscreen tablet that runs Word and Excel when docked.

The gap in the price of capacitive touchscreens over regular netbook displays just narrowed with the new iPad's retina display. As soon as MS have Office ready on ARM, it's game over for the Atom, as Transformer-like devices running Win8 retail at netbook prices (~US$300).

So MS is releasing this preview for x86 desktops, but the real prize is claiming a share of the tablet+keyboard market from Android and wooing business customers with the pitch that a Win8 tablet can run Office when you need to 'get real work done'.

It's interesting how the comments on Apple/iPad/Post-PC articles, financials of Apple/Dell/HP etc. state that "MS is dying in the Post-PC" era, but now when they come out with a solution to make a OS run on different form factors and to have tablets that are not just consumption devices,...

Tablets don't have to be just for consumption - people are already using iPads and the like for creative purposes. But, when you think about it, most of what the typical person uses even a full-blown computer for tends to be mainly consumption and communication - Netflix, YouTube, Facebook, email, chat, etc. Even for work, the most content creation they do involves making a Word document or an Excel spreadsheet.

As far as that "vast Win32 ecosystem" goes... remember that Windows tablets aren't exactly a new idea. Microsoft has tried - and failed - to leverage that vast ecosystem to make Windows-based phones and tablets a success before. Time will tell regarding Metro, of course; but while you seem to think their success is a foregone conclusion, recent history shows otherwise. It comes down to whether or not Microsoft learned from their previous failures, which is something I, for one, am not convinced of.

I don't agree with Thurrott's analysis that "the desktop is just an app." Oh really? The desktop is still there, with Explorer, the taskbar, the system tray, and every other feature the desktop has ever had, and Thurrott wants us to believe this is somehow just some little "app" that's running inside of Metro? Hardly. The desktop is still the desktop. It is Windows.

What Windows 8 has done is given us this new launcher application, called Metro, which accepts plug-ins, called apps, and which will now launch automatically when you login to the system and again every time you push the Start button. Metro feels like the ultimate terminate-and-stay-resident program from the 80s, where every time you push the hotkey it takes over your entire screen.

Also, try to spend a few minutes learning shortcuts etc. before dissing the experience. It's not a SP for Windows 7, it's a new OS.

No, it isn't. It really isn't. Keyboard shortcuts do not make an "OS." The fact that the device drivers for every weird hardware device on my laptops carried over from Windows 7 to Windows 8 without a hitch demonstrates that the two are essentially the same OS.

What Microsoft has done with Windows 8 is it has taken a UI that works and put a big curtain in front of it (Metro) so that every time you want to use the OS the way you're accustomed to doing, you have to push the curtain aside. And as soon as you push the wrong button (the Windows key) or you want to launch a new application, the curtain drops down again.

They envision that with Windows 8, most new monitors will be touch enabled because of the demand so that for some functions(like clicking on links), people can use touch.

Just because I can use touch doesn't mean I will want to. I am not going to be reaching across my desk to click on links when there's a mouse sitting in my right hand. I don't need a new repetitive strain injury [infoworld.com] and I don't want to smear my monitor with fingerprints. Poking around in midair with your fingers looks cool in movies, but in practice what we do now is more efficient, which makes it preferable. It's not logical to get rid of the more efficient way of doing things for the sake of something that looks cool.

You may disagree with the vision, but you can't disagree that there is a method behind the madness.

I don't disagree that there's a method. But that doesn't mean it's not madness. When your friend guns his engine and says, "Don't worry, I know what I'm doing -- we can make it across the canyon," it's time to get out of the car.

But OK, mister serious-pants: we have self-cleaning items now -- digital camera sensors, for instance -- but even granting that, the debris must go somewhere. Self-cleaning usually means making the materials non-sticky to whatever they're likely to come in contact with, and inducing a periodic vibration to shake off the particles that do adhere. I bet it doesn't work with peanut butter.

And Windows users accuse the *nix crowd of being arrogant because we say "rtfm" too often for their tastes.

A lot of people are flippin' lost without visual cues. Windows 8 has taken visual cues, turned them invisible, and buried them in hot corners and stupid shit like that.

Metro on the desktop is a goddamned failure. Microsoft is doing this simply because they can, and there is almost a cult-like movement within Microsoft around Metro: if you don't like it, then there is something wrong with you. This is exemplified by your statement here.

I downloaded and installed both the developer and consumer previews and ran them. I still have the consumer preview loaded.

It is a nightmare of stupid UI decisions. Switching between both metro and the "traditional" desktop is a whole level of stupidity not seen in UI failure since Microsoft Bob. The total lack of consistency is jarring.

But hey, obviously i don't know what I'm talking about because I've never used it and this is not a screenshot.

Have you paid attention at all to the Microsoft astroturfing that has showed up here in the past few years?

Articles even slightly critical of anything Microsoft are met with a large copy-paste of some pre-written text within minutes. This is especially apparent when it becomes the first post. Like in this thread, itself. Go look at the first post.

You may disagree with the vision, but you can't disagree that there is a method behind the madness.

The problem with Microsoft, and a lot of us have been around long enough to see it repeatedly, is that when they decide that something is shiny and new, they drop all ongoing development of everything else like it doesn't even exist any more, even if the new thing is not a suitable replacement for the old thing.

When Microsoft decided that XAML was the "Way Forward" for rich web applications, they moved all but one guy off the IE team and into the Visual Studio and Expression teams to develop things like the improved XML editor, the designers, etc. Now, as a developer, I find those improvements very useful, but meanwhile there was one guy left on the IE 'team' doing just security fixes for years. This is why there was such a huge gap between IE6 and IE7, and why IE7 was such a small improvement compared to the progress made by Firefox and WebKit in the same time period.

Now, if you're a XAML-only programmer, then Microsoft was being innovative and moving forward. To HTML-based web application programmers they were being stupid and counter-productive, dragging the entire Internet down to the lowest common denominator that was IE6. That made a lot of people very upset with them, and rightly so. There was no way XAML could ever replace HTML, because in practice it was tied to the .NET Framework, which is not cross-platform. An HTML replacement has to be cross-platform. That didn't stop them from ignoring HTML for half a decade.

With Windows 8, Microsoft is doing the exact same thing again. If you're a phone or tablet programmer, then Microsoft is innovative and moving forward. For desktop users -- Microsoft's biggest market -- they are being stupid and counter-productive, dragging the entire Desktop world down to a lowest common denominator with limited devices that don't have keyboards and mice. The walled garden of WinRT applications can never replace desktop applications, because the APIs are deliberately limited to suit the tablet environment. They have to be, otherwise apps would kill battery life or introduce vulnerabilities. A new framework has to be more than just a Tablet API or GUI, but that won't stop Microsoft from ignoring "classic" Windows applications for the next half a decade.

It's not just the GUI; Microsoft's other technologies have been suffering too, from a lack of newness and shininess. For example, their C++ standards compliance is woeful: the next release amounts to little more than some additional header files -- basically whatever one of their interns could whip up in a month, instead of a real revamp of the core compiler technology to add significant new features. This is because they were too busy coming up with yet another bastardised non-standard version of C++ so that they can call WinRT APIs efficiently. Don't even think of asking for C99 support!

Sure, nobody is being forced to use WinRT, or tiles, or tablets, but if you're not using them, then you're using APIs and systems that will basically stop dead in the water, which in the computer world is the same as going backwards. Microsoft is atrocious at "seeing things through", because of their short attention span. For example, did you know that both Vista and Windows 7 natively support higher color depths than 24-bit, and GUI scaling? Had Microsoft kept going with that, our desktops could have had "double resolution" just like tablets, 36-bit deep color, wider gamuts, 200 DPI, and a bunch of other things by now. But nooo... it was shiny for a couple of years, and then Microsoft got bored and dropped all ongoing development of it as a feature. They even have a JPG-like image format that supports all of those better color features, but they never had more than some demo code written. Meanwhile, Apple demonstrated the value of technology that Microsoft had been sitting on for years, and suddenly everybody wants an iPad 3 with a Retina Display. Sigh...

Apple and Microsoft have both failed to fully realize the technology needed for high-DPI displays. Apple isn't really doing the complicated scaling in iOS that OS X and Windows have been capable of. They're just doubling the pixels. Sure Apple has the first high-DPI displays, but they're going the easy route of simple doubling.

I don't blame Apple or Microsoft for not following through on the development of scalable UIs either. High-DPI displays haven't caught on because 1080p panels became incredibly cheap.

MS had, basically, two options: create a new brand for an OS tailored for post-PC devices, or continue with what they had. They chose to create a new (and pretty good, actually) interface in Metro, but then apply it to both post-PC devices and PCs and brand it as Windows in both places. I think that I would have gone the other way, creating a Metro brand to go with the interface, and tailoring it even more closely to post-PC systems, while keeping the Win7 interface on the desktop, and sharing the underlying kernel and as many APIs as possible between the two variants. Time will tell if that was a good decision or not; it was certainly a bold decision, given the success that Apple and Google have had with specific post-PC brands and interfaces.

I hope Win8's Metro is better than the sucky "ribbon" interface in Office. I just started using it last week, and today I couldn't even figure out how to undo a mistake I made in Excel. I'm looking at it right now, and all I see in front of me is a confusing mess of hieroglyphics. Grrr. If I wanted my computer menu to look like the wall of an Egyptian pyramid, I would have imported it from there.

I never thought I'd ever say this... but the Commodore GEOS was actually easier to use. I've hated Micros

Problem is they fucked up the setup on the desktop. On an embedded device, Metro is everything. Makes sense, it is the embedded GUI, and they can't run PC apps. So you fire up the device, Metro is what you get.

However on a PC, the desktop should be what you get, Metro should be something you open in it. That way you can run Metro apps if you want, which is cool, but on the terms of a desktop. You can let them run full screen, or not, put them in a window. It'll seem "full screen" to them, they'll just be told that window is their screen.

The reason is that the multi-window paradigm is what works for desktop computing. It is an efficient way to work with multiple programs, which is what almost everyone does, even non-tech types. It is efficient to be able to open up multiple things, arrange them as you like, switch between them easily, and so on.

The smart phone idea is not an efficient way to work, it is just a necessary one given the limitations of the platform. Trying to force it on the desktop is rather stupid.

I can see the benefits of sharing a codebase, but the fundamental interface is going to need to remain different.

However on a PC, the desktop should be what you get, Metro should be something you open in it. That way you can run Metro apps if you want, which is cool, but on the terms of a desktop. You can let them run full screen, or not, put them in a window. It'll seem "full screen" to them, they'll just be told that window is their screen.

While it can't do it out of the box, it seems that it can be hacked into working that way. Have you seen Stardock's Start8 [stardock.com]? There was a story about it the other day. What it does is bring back the Start button, but it doesn't pop up the traditional Win7-style Start menu. Instead, it shows [addictivetips.com] the new Metro home screen - except it does that inside a popup window that's about as big as the old Start menu was.

The next logical step for them would be to do just what you say - let people run Metro apps in movable, re

Yep, Metro apps being full screen is a deal breaker for anyone with large screens. I'm really not sure what they were thinking.

Back when computing started, all apps were full screen apps. The *instant* technology allowed it, we moved towards a windowed paradigm, because it's nice to be able to do and look at multiple things at once. Since then, displays have gotten larger with higher pixel density, making the windowed paradigm more and more useful.

My gut feeling is that Win 8 is going to be a spectacular failure like Vista. People who buy PCs with Win 8 loaded are going to throw a fit and demand a downgrade to Win 7. Microsoft will survive because no matter how much they screw up, the competition can't really take their place. So it's not necessarily a bad gamble for Microsoft. It might work. I doubt it, but I could be wrong. If I'm right, then after it fails and they get burned by the "not gonna buy it" and "I demand a downgrade from this crap" crowd, they'll quickly redesign Win 9 to look like Win 7 with some added features and put that out.

For now. There were talks a while ago of integrating iOS with OS X, or something along those lines; you can see the start of it with the Mac App Store.

I dunno. It's easy to put two and two together and suggest that a unified OS is the way Apple is heading. No doubt that's why I've read a lot of blog posts along those lines. But I've never seen a story speculating about an integrated Apple OS that includes a source from Apple.

Apple is, of course, notoriously secretive. And it certainly wouldn't surprise me if there were some hybrid OS concepts lurking somewhere in Cupertino. But my gut feeling is that, for the foreseeable future, "Apple OS" is a concept pr

The Apple implementation of touch on the Desktop is light years ahead of Windows 8. The entire enterprise can be summed up as: "We like Apple's unlimited control over a walled garden, we want that at all costs, desktop users be damned, they will come to like it or else"

Failed web "designers" are ruining GUI applications left and right. It doesn't matter if they're open-source apps or if they're closed-source commercial apps. These self-labeled "UI designers" and "usability experts" get involved with a popular project that had a usable UI, and they completely trash it.

This has happened to GNOME. This has happened to Firefox. This is now apparently even happening to Windows!

Somehow, these "designers" have managed to create UIs that are far worse than even non-artistic programmers came up with. Firefox is a perfect example of this. The earlier releases had very usable UIs. Then came Firefox 4, and the entire UI was shit upon. Each subsequent release has fucked up the UI more and more. Now we don't have menus by default, we don't have a status bar by default, and Firefox is damn near unusable without heavy tweaking to re-enable such basic UI elements!

The only appropriate thing to do is to shun these people. It doesn't matter which project it is, or what sort of application is being developed. Refuse their contributions. Refuse their ideas. Shoot down their suggestions in mailing list discussions. Don't allow them direct commit access to any source code. Ensure that bugs are logged regarding their horrible designs, especially when usability is impacted.

We need to go back to software developers creating UIs. Maybe they're not artists, and maybe the UIs they built weren't "pretty" (a.k.a full of curved corners and gradients), but at least they were intuitive and we could use them to get real work done efficiently. We can't do that any longer, now that "designers" are trashing every UI they come into contact with.

Failed web "designers" are ruining GUI applications left and right. It doesn't matter if they're open-source apps or if they're closed-source commercial apps. These self-labeled "UI designers" and "usability experts" get involved with a popular project that had a usable UI, and they completely trash it.

This has happened to GNOME. This has happened to Firefox. This is now apparently even happening to Windows!

I used to think that about Gnome, until I installed it and started using it.

True, I installed a couple extensions to help me out, but after spending some time with Gnome Shell, it does a really good job of just staying out of the way.

I'm very much a keyboard kinda guy though. To me, too much mouse use gets in the way.

I certainly won't argue against the examples that you've provided, as those truly are UIs that underwent destructive "UI improvements". With that being said, I do think that your rant against all designers and usability experts is misplaced. A lot of the time, developers are so intimately familiar with the product and code that it is difficult to discern when something isn't intuitive to a novice user.

Depending on the product, the developer may not even be too familiar with the actual user's job. Take for instance utility power management. There is a whole ecosystem of tools that are used by people who work at big utility companies. A developer on one of these applications is probably intimately familiar with the specific application, but the likelihood is that they've never worked for a power management company. Do they know what the user does during their normal day at work? Do they know what other applications the user uses? How does their application fit into the user's day?

In such cases, you actually have a situation where the developer may make bad decisions in UI design. The developer may not realize that the user doesn't sit in front of their application all day. Rather, the user may use the application for a sub-set of their work, and use information from one application in conjunction with other applications. The developer of the application may think "well, duh, why wouldn't the user know to look under menuX->optionY->wizardZ to do that?" The reality is, the user probably isn't interested in knowing the ins and outs of the application they are using. If the information they are looking for isn't apparently available, then it might as well not exist.

Is this the way things should be? Perhaps not. Perhaps the user should spend their time reading manuals and becoming intimately familiar with the product. However, this isn't the reality. This is where a good design team can come in. A product can deliver everything functionally, but still be considered a failure if the user isn't able to easily accomplish their goals.

I realize I've digressed from the topic at hand, which is the Metro UI (which I really don't like from what I've seen). However, I think it is worth challenging the assumption that developers are the best people to design UIs.

A lot of the time, developers are so intimately familiar
with the product and code that it is difficult to discern when something isn't intuitive to a novice
user.

Perhaps you didn't quite mean this, but it's a very one sided statement. There are novices and experts, and UIs shouldn't just be designed for novices. In fact, for software that gets used a lot, a user stays a novice only a small amount of time, before transitioning to advanced status. So a UI should be designed primarily for advanced users and experts first, and novices second, provided that doesn't interfere with advanced use too much.

The trouble with outside UI designers is that they think like novices, which they often are when they initially join a project. So their priorities are all wrong, and must be fought. Alternatively, they should prove that they already understand advanced usage inside out, and then argue that a change is going to improve novice usage without worsening advanced usage.

If there are no on screen visuals I'm lost. I rarely use keyboard shortcuts and I can see that increasing when there isn't a keyboard.

The problem I have with Metro is that it's so hard to organize things. I have over 1000 shortcuts on my Windows 7 machine; where are they supposed to fit in Metro? I'd need to scroll for a week to find what I'm looking for. "Oh, but you can just type the name of what you are looking for." But I don't remember the name, just what the icon looks like. Keep your Metro, give me a Start menu, and we can both be happy.

This is very true. It's a problem with a lot of touch-centric UIs: There are no onscreen hints or anything to explain to you how to use the UI.

Anyone who has ever used a word processor can sit down with Microsoft Word and write a letter. There will probably be things you don't know how to do, so you'll end up searching the Ribbon to find them. But that's just it -- you can find them. There will be icons there and the icons will have labels that say things like "Insert Date/Time."

Metro, on the other hand, has a few clever icons, but they don't necessarily mean anything to someone who has never seen them before. Some of the other functions involve gestures or moving the cursor to just the right part of the screen to activate a feature. I found I had to stumble around awhile before I knew how some of the most basic navigational controls worked.

Note: I didn't say search around, as you'd have to do with the Ribbon. I said stumble around, meaning I had to try mouse movements and push icons without knowing what they were actually going to do. Inevitably that meant I'd end up activating controls I hadn't meant to. I might luck out and find the thing I want, or I might immediately think "Undo, Undo, Undo"... but of course, Undo might have been the thing I was looking for in the first place. This is a lousy way to learn a UI. It's a step back from what we've grown accustomed to.

But are you retarded? You can't remember the name of something but you can recognize thousands of icons designed for 16x16 display?

You don't have to be retarded to be slowed down by a less-than-optimal interface. Every brain cycle you have to burn figuring out a sub-optimal GUI is one less brain cycle available for actually getting your work done.

Little things like this might seem trivial, and they are, but the cumulative effect can build up to the point where your productivity is significantly less than it could have been.

... Apart from Metro only being useful if you have a laptop/tablet/smartphone (touch screen Desktop/TV never worked!)

Was that both GUIs weren't linked; they were in essence two separate desktops. So if I opened IE (or any other program) in Metro and then switched to the "other" desktop, opening IE there started a totally new session. (I'd think it would be better, or at least nice, to ask the user whether he wanted to pull the session over from Metro.)

I hate to say this -- but this concept of 'Apps' that everybody is latching on to -- it is a huge pile of steaming buzzword. Yes, there are applications, but the idea that all of computing can be neatly tucked and packed into an easily marketable, single-purpose, flashy-shiny, big-round-button, GUI, software-as-a-service, plug-in API model full of synergism and one-click-wonder wow -- perhaps, but not for power users, not for enterprise. There may be a day, but it isn't this decade, IMO. I understand how consumers want this and blah blah rah grandma simplicity blah new age computing blah ease of use apple blah, but I'm here to comment about 'Apps' and how I hear that word used in the wrong places (IMO).

Where's my 'app' for DBA activity? Where's my simple one-click 'app' that monitors hundreds of servers, routers, and switches? Where's my 'app' that automates my build processes? Where's my app that gives me complex analysis of all my interconnected nodes? You won't find them -- not soon, and not on 'markets' -- because these are complex, intertwined, multi-APPLICATION (to use the full word) work-flows that require desktops or heavy use of scripting and consoles. Sorry, but for power use, that's just the way it is, in this decade and probably a few to come. These things can be done well and simply, but not without serious power-tools and planning.

Let's be honest: computing has been around for decades now, and even though 'apps' seem to reign supreme at the consumer level, there will always always always be power users who need more complex environments for the vast array of software suites, tools, languages, and utilities required to maintain and administer complex networks, build processes, or whatever. Perhaps there will be a day when it is all unified, but that would require vast cooperation across industries, standards bodies, companies, open-source houses, etc. Until some de facto design standard from layer 1 to 7, and from user space to kernels to whatever, is implemented across the industry, nothing will ever be 'simple apps' while separate, unique tools exist -- thus guaranteeing the lifetime of the terminal and the desktop. It seems we are now defining apps as 'GUIs that are flashy, sleek, use large rounded buttons, and have limited functionality' -- well, there are many of those out there. End rant. (The word 'app' just sets me off.)

To that I agree. What is really happening is a rethinking of interfaces--of what is actually wanted and needed by the unwashed masses for their computing: simple, streamlined, and single-task driven. I can agree that user roles exist that fit this model. But as PCM2 says below, it is the push by some in the industry to shove this 'App' model down the throat of everything, fitting their square peg into the round hole of already-good solutions, that gets frustrating. The way I look at it is simply this: the people who write these new-fangled 'apps' are, in all likelihood, writing them in complex desktop environments--not on tablets, not on smartphones. Apps are being written on full-blown desktop operating systems. Perhaps 100 years from now people will use natural language and speech recognition to convey concepts to some natural human symbolic interpreter that writes everything and pushes it to a compiler, but in the meantime, massive IDEs and libraries and filesystems and all the rest will be needed, through some sort of multi-application, multi-tasking desktop that cannot simply be boiled down into a single self-contained app.

Any time you find yourself explaining Why People Should Like Your Stuff if they Only Used It Right, it means you have failed Marketing 101 and need to turn in your diploma, because you obviously weren't paying attention in class.

An article that everyone (including plenty of Slashdotters) sees when they open Microsoft Visual Studio today:

Create your first Metro style app using C++
[This documentation is preliminary and is subject to change.]

A Metro style app is tailored for the user experience that's introduced in Windows 8 Consumer Preview. Every great Metro style app follows certain design principles that make it look more beautiful, feel more responsive, and behave more intuitively than a traditional desktop app. Before you start creating a Metro style app, we recommend that you read about the design philosophy of the new model. You can find more info at Designing Metro style apps.

Here, we introduce essential code and concepts to help you use C++ to develop a Metro style app that has a UI that's defined in Extensible Application Markup Language (XAML).

If you'd rather use another programming language, see:

Create your first Metro style app using JavaScript

Create your first Metro style app using C# or Visual Basic

Objectives
Before we start coding, let's look at some of the features and design principles that you can use to build a Metro style app with C++. It will also be helpful to look at how Microsoft Visual Studio 11 Express Beta for Windows 8 supports the design and development work. And it's important to understand how and when to use the Visual C++ component extensions (C++/CX) to simplify the work of coding against the Windows Runtime. Our example app is a blog reader that downloads and displays data from an RSS 2.0 or Atom 1.0 feed.

This article is designed so that you can follow the steps to create the app yourself. By the time you complete this tutorial, you'll be prepared to build your own Metro style app by using XAML and C++.

Comparing C++ desktop apps to Metro style apps
If you're coming from a background in Windows desktop programming with C++, you'll probably find some aspects of Metro style app programming to be very familiar, and other aspects that require some learning.

What's the same?
You're still coding in C++, and you can access the STL, the CRT, and any other C++ libraries, except that you can't invoke certain functions directly, such as those related to file I/O.

If you're used to visual designers, you can still use them. If you're used to coding UI by hand, you can hand-code your XAML.

You're still creating apps that use Windows operating system types and your own custom types.

You're still using the Visual Studio debugger, profiler, and other development tools.

You're still creating apps that are compiled to native machine code by the Visual C++ compiler. Metro style apps in C++ don't execute in a managed runtime environment.

What's new?
The design principles for Metro style apps are very different from those for desktop apps. Window borders, labels, dialog boxes, and so on, are de-emphasized. Content is foremost. Great Metro style apps incorporate these principles from the very beginning of the planning stage. For more info, see Planning Your App.

You're using XAML to define the entire UI. The separation between UI and core program logic is much clearer in a Metro style app than in an MFC or Win32 app. Other people can work on the appearance of the UI in the XAML file while you're working on the behavior in the code file.
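As a rough illustration of that separation (a hypothetical fragment, not taken from the tutorial), the page's appearance lives in XAML markup while its behavior lives in the C++ code-behind file:

```xml
<!-- MainPage.xaml (hypothetical): a designer can restyle this markup
     without touching the C++ code-behind that handles RefreshClick. -->
<Page x:Class="BlogReader.MainPage"
      xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
      xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
    <StackPanel>
        <TextBlock x:Name="TitleText" Text="My Blog Reader"/>
        <Button Content="Refresh" Click="RefreshClick"/>
    </StackPanel>
</Page>
```

The `BlogReader.MainPage` class name and `RefreshClick` handler are invented for illustration; the point is only that the visual tree is declared in markup, separate from the event-handling code.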

You're primarily programming against a new, easy-to-navigate, object-oriented API, the Windows Runtime, although Win32 is still available for some functionality.

When you use Windows Runtime objects, you're (typically) using C++/CX, which provides special syntax to create and access Windows Runtime objects in a way that enables C++ exception handling, delegates, events, and automatic reference counting of dynamically created objects. When you use C++/CX, the details of the underlying COM and Windows architecture are almost completely hidden from your app code. But if you prefer, you can program directly against the COM interfaces by using the Windows Runtime C++ Template Library (WRL).

Why are all my computer interfaces being transformed into children's toys?

Why are my menu bars, tables, and text boxes being replaced by coloured icons dancing around the screen? Am I expected to just intuitively "feel" where all the programs and options are now?

This isn't just an OS problem. It's happening across the program spectrum, and I blame the influence of smartphones and similar touch-oriented devices. Speaking as someone who has never owned a smartphone, I have always found them restrictive and confusing. Using one is like navigating a theme park without a map: eventually you'll want to just find a place to sit down, but you'll only get more lost among the theme rides and hot dog stands.

If this nonsense gets rolled out onto computers that people are supposed to be working on, it will precipitate either a recession or an injunction by employer groups. Either way, I'm sticking to menu bars.

All I want is an option in the Control Panel that says "Completely disable Metro UI. I understand this will prevent me from installing, launching or utilizing Metro Apps. This will enable the classic Start Menu and will make the Classic Desktop your only operating environment."
Problem solved. Just fucking humor us.

OK, I resist change just like everyone else. But that's not what is going on here.

Monitors are getting bigger. I'm doing more things at once. I want better ways of managing that. But Metro just gives me one thing at a time. Sorry, that's not a solution to the problem. That's going back to the original Macintosh.

Apple isn't perfect, but at least they've been trying some new ideas. I don't think the new ideas on screen management have been all that successful, but at least they're attacking the right problem.

At the moment, nobody has a better idea for a smartphone or a tablet than showing one app at a time. The only way W8 makes sense is if they built a piece for portable devices and said, "while we're at it, let's let the desktop guys use it too." Fine. But only if they realize that desktop systems still need new ideas as well.
And if I were doing a ground-up redesign, I'd consider whether we might be ready for a better approach with tablets as well. The new iPad has more pixels than many monitors. I'm not sure one app at a time should be the only way to use it.

At my job I'm already working with Win8 a little, and I don't think it's so damned great. It holds your hand like you're a silly child and hides even more from you than any version of Windows I've ever seen. I suppose if you're looking for the OS for the most dumbed-down generation ever then it's great, but for those of us who want something functional and powerful, I think it's a huge flop.

I'm really trying to work with this. Other than Metro, Windows 8 isn't bad; it's actually a marked improvement over Windows 7. The biggest change is the number of windows I can manage and keep open with 4 GB of RAM. Memory seems to be reclaimed by itself, with no third-party software, registry hacks, or manual optimization. Silverlight is better on my 2 GB GeForce card. Netflix is clean and looks great; on Windows 7 the picture was muddier. So in terms of the things I care about (lots of open windows and Netflix), Windows 8 is a boon.

What I'm not impressed with is the way Metro is locked down. I downloaded the Visual Studio 11 beta so I could start writing Metro apps, and was immediately reminded that Microsoft will be approving any and all Metro apps, but they're letting me run my own stuff out of the kindness of their ever-loving little hearts. That annoyed me, and it made me question my motivation for wanting to write Metro apps in the first place.

I mean, I can write an Android app today, compile it into an APK, and it'll run on any Android device within the API level I compile for. Google doesn't and shouldn't care about the apps I write, and I like it that way. I don't really see the point of building something in the first place when someone who has nothing to do with it can control my ability to publish it. If there's any chance of rejection at all, why should I bother to begin with?

I'm not learning new platforms because I like new platforms (well, I am, kinda); I'm doing it because I want to have viable programs that I can do things with.
Screwing with my ability to publish my work is not a way to launch a new product.

It's the way MS treats its users. The main difference between MS (who couldn't get rid of the "Start" menu for close to 15 years, even though their final user testing prior to launching Windows 95 revealed that it was a horrible, broken idea) and Apple (who can seemingly come up with a new paradigm for the iPhone/iPad and have it accepted) is in how they think.

MS thinks like developers. So when they have an idea they like, they force it on the users. And if the users don't accept it, they force it some more.

Apple thinks like designers. If they have an idea, they test it out and refine it until the users love it.

And that's why this would have worked if the one Steve had come up with the idea, but it'll be an epic fail in the hands of the other Steve.

Wow, that's one more adjective than I've ever seen that apostrophe substituting for. But get on with your bad self, anonymous grammar cop. And everyone remember, punctuation is critical. "Let's eat, Grandma" is not the same as "Let's eat Grandma."