Paul Thurrott has posted yet another
look at Windows Vista. Ever since the first alpha and beta releases
of Longhorn/Vista hit the web, Paul has been giving us regular
updates on the progress of the operating system. His articles are
usually positive, with a hint of negativity thrown in where
appropriate.

Paul's latest
article, though, lays everything on the line when it comes to
Vista. Now that Vista is supposedly feature complete and much of it
will stay as-is when the final product ships, the promises Microsoft made about the
operating system's features, its usability issues, and its application blunders are now
fair game. Here, Paul rants about missing features that Microsoft promised:

There are so many more
examples. But these two, WinFS and virtual folders, are the most
dramatic and obvious. Someday, it might be interesting--or depressing,
at least--to create a list of features Microsoft promised for Windows
Vista, but reneged on. Here are a few tantalizing examples: A real
Sidebar that would house system-wide notifications, negating the need
for the horribly-abused tray notification area. 10-foot UIs for
Sidebar, Windows Calendar, Windows Mail, and other components, that
would let users access these features with a remote control like Media
Center. True support for RAW image files, including image editing. The
list just goes on and on.

I must say, I've tried and tried to
give Vista more than a second glance. I've tried every beta release
Microsoft has issued, but every time I find myself less
productive and utterly frustrated using the operating system compared
to Windows XP. Fortunately, it looks like Microsoft still has a few more months to get some of these issues under control.

Comments


Yeah, you claim NT is BSD-derived just because a piece of the BSD IP stack is there. Alright, Linux has a small piece of NTFS code in its kernel, so by that logic I say Linux is NT-derived :P

Nice try :) but I just meant that at least some code (the IP stack) had been taken from the BSDs - I don't know how much or what the impact is, but the fact remains. Of course, under the terms of the BSD license one can just take a piece of code and put it in one's own project, as long as one admits it somewhere. I read once that they (MS) hired some BSD folks, but I cannot find any proof of it as of now. Not to mention Dave Cutler :)

Wrong, if you consider design as well. One thing is a hurried patch on top of a mess; the other is a neat design from scratch. The functionality may be the same, agreed, but the code might be so different you won't believe your eyes. A patched mess and a clean, unpatched design are very different, even if the functionality is 100% the same.

Well - you may have a point here. There is nothing inherent in patches that makes them inferior from a technical point of view; in reality it's just more difficult to drive a whole project toward a different architectural design with them. But still, it does not imply that 'more patches = more clutter'. It largely depends on the initial design, and a non-modular design definitely hampers producing neat and efficient code. Linux is modular, although it has a monolithic kernel (as opposed to a microkernel).

It's because patching sometimes doesn't work, and even Linus & Co. have to break stuff, tear it down, and start from scratch once in a while. Well, there go your patches and fixes.

But they don't start _everything_ from the beginning - maybe some stuff has to be removed once in a while, but nothing as radical as you claim. The overall kernel design stays untouched.

About 10Gbit networks - how about a "click a GUI button and turn your two PCs linked with a 10Gbit link into an SMP box" feature?

One word: latency.

And how about "add more boxes and get SMP with shared memory and as many CPUs as you have in all your PCs together"?

And add more latency? :) I mean, really - if you are talking about distributed computing, i.e. you have a specific task that is easily parallelized, then solutions are already there. They may not be perfect or easy to deploy from a casual user's perspective, but neither should they be. Distributed computing is a specific area, not so appealing for everyday uses (say, editing a Word document, converting to PDF, Photoshop editing). Of course there may be some benefits (like applying filters in Photoshop/GIMP to a large collection of files, DVD rips, etc.), but probably nothing _kernel-specific_. It belongs in the application layer of abstraction, not the kernel itself, although an OS like MS Windows could probably ship this functionality. A problem remains with authentication on those other computers (you would have to have an account on them). If, for example, the second computer were your sister's, she wouldn't be very pleased to see her tasks crawling because you started ripping your porn :) on her computer, now would she? But multiple concurrent logins on one machine would be very welcome (not sure it isn't already available). And if you have many computers, then you probably have a specific task to perform (render farms), and solutions are already available.
For gaming this stuff will probably never be applicable (the response time is too slow).
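
A minimal sketch of that application-layer parallelism, using Python's standard multiprocessing module - the file names and the "filter" here are hypothetical stand-ins for a real per-file job:

```python
from multiprocessing import Pool

def apply_filter(name):
    # Stand-in for a CPU-heavy per-file job (a GIMP filter, a rip
    # chunk, ...); here it just tags the name so the sketch runs.
    return name + ":filtered"

if __name__ == "__main__":
    files = ["img_%03d.png" % i for i in range(8)]
    # The kernel only schedules the worker processes; carving the
    # batch into independent jobs happens entirely in the application.
    with Pool(processes=4) as pool:
        results = pool.map(apply_filter, files)
    print(results)
```

Spreading the same jobs across machines (a render farm) adds exactly the authentication and latency problems described above, but the decomposition itself stays in the application, not the kernel.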

Oooh cooome ooon man, why are you lying now about "manufacturers not opening their specs" when THEIR SPECS HAVE BEEN MS STANDARDS SINCE DX10??? So this part of your answer is officially BS :)

The DirectX API is not meant to perform general computations. It is good at transforming triangles, but an inverse discrete Fourier transform is not computed that easily with it, right? What is crucial are the 'specs', i.e. the low-level instructions (not some DX operations the manufacturers "kindly" permit you to use).
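
To illustrate the gap: a computation only fits that hardware once it is recast as the dense, streaming arithmetic the chip exposes. A DFT, for instance, can be rewritten as one big matrix multiply - exactly the kind of operation a GPU is built for. A NumPy sketch of the equivalence (NumPy stands in for the GPU here):

```python
import numpy as np

n = 8
x = np.random.rand(n)

# DFT as a dense matrix multiply: F[j, k] = exp(-2*pi*i*j*k / n),
# so (F @ x)[j] is the j-th DFT coefficient of x.
j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
F = np.exp(-2j * np.pi * j * k / n)

# One big matrix product is the streaming arithmetic shader hardware
# is built for; np.fft.fft gives the reference answer.
assert np.allclose(F @ x, np.fft.fft(x))
```

The inverse transform mentioned above is the same story with the conjugate matrix and a 1/n factor.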

DOS doesn't have to run fast, since everything written for it was written with slow computers in mind, and now we have much faster rigs. I don't know the details of MacOS 9 emulation, so I won't comment on that one.

It has nothing to do with gaming; don't pretend to be stupid again :) It's just a STANDARD GPU INTERFACE, nothing more, nothing less. If games use it, why can't other things, like that cluster stuff, use it too?

Maybe because there is no need for it? Exactly what task would you perform on GPUs over a network? Gaming won't do, because of latency. Rendering is OK - but that is ALREADY being done by apps, so...? And why this goddamn DX?! If you want a real "standard", then pick OpenGL - it's platform independent and more suitable for "serious" tasks (CAD, rendering, etc.). DX (which incorporates not only a graphics API but also sound-specific stuff) was introduced as a solution for the gaming industry ONLY. It has to be fast, but not necessarily accurate. I would gladly see it ditched in favour of OpenGL instead of everyone being locked into their proprietary formats.

What would be of great value is a powerful command-line tool (a POSIX-compliant shell?), a new file system without the constant need for defragmentation (ext3? - nah, that would be too good to be true), easy support for other file systems (JFS, XFS, ReiserFS), more flexibility during installation (e.g. more MBR options, more awareness of other systems), replacing the registry with something of more elegant design, enforcement and better handling of multiple accounts (right now almost everyone on the desktop is using a root account), and a more modular design (separating text mode from the graphical interface?, those modules you've mentioned...).

quote: I read once that they (MS) hired some BSD folks, but I cannot find any proof of it as of now. Not to mention Dave Cutler :)

I never heard of BSD folks there, but I wouldn't be surprised if they use some even these days - say, for Vista, since it has uniform IPv6 for everything with IPv4 as a fallback. BSD experience should surely be of use there, so there is a place for them.

However, the design of the original hybrid NT kernel itself was done under the lead of Dave Cutler, who was one of the designers of OpenVMS and took many of his ideas from there - some people even called NT 3.1 a "microVMS" or a personal miniVMS, something like that. It's all on Wikipedia and all over the net, kind of classic MS history stuff: who did what and why.

quote: in reality it's more difficult to drive a whole project with them to a different architectural design

This is exactly what I meant. A reality check shows that it's often the case that an original design can be patched to death, but the patching has to end sooner or later. So, basically, I watched MS dump DOS and Win9x instead of patching them to death, because their design was too ugly, and I also saw Jobs killing the even uglier classic Mac OS after 20 years of patching it. This, however, is never going to happen with Linux or FreeBSD or any other open source OS. It's only a feature of large commercial OSes, and only the successful ones (say, OS/2 could benefit from rewriting its Workplace Shell and Presentation Manager, because these were quite crappy; its kernel is also a mess of 386-specific assembly and would benefit from being redone properly instead of patched, but that won't happen because OS/2 does not satisfy the "successful" part of the definition).

So, here you see the roots of my logic. I watched MS promising heaven on earth and then delivering a refreshed XP instead. OK, I may be wrong and it's not just a refresh, but you know, after hearing all this WinFS stuff, which makes me remember Soltis and his OS/400 alien beauty and such... I feel that Vista is not that alien beauty. It's going to be a great, large upgrade, but maybe it's not the revolution yet, in the same sense that NT and Mac OS X were.

Hence my thoughts about "stop lagging behind this stupid toy Mac OS X and do something major, like blow Macs away with a totally new OS in 10 years". I think so because I saw how good the radical transition was for both MS and Apple, how far they both jumped by ditching overpatched DOS and classic Mac OS. And I wanted to see a repetition of the same story. I wanted to see MS stepping up and telling competitors, "you'll get your medicine, just wait until we finish the design of our new kernel". Hell, they could even take this L4 thing, which is quite interesting, and organize a uniform architecture on top of it: cut out auxiliary servers and fit the OS into a smartphone, or add a bunch of those servers and fit THE SAME OS into an enterprise mainframe. I call that cool - but not Vista. Vista is a nice update, but it is essentially a huge patch for XP, and this new scalable OS would probably require going from a hybrid kernel to an L4-like kernel. And that is not a patchy patch; it's a design from scratch.

quote: But they don't start _everything_ from the beginning - maybe some stuff has to be removed once in a while, but nothing as radical as you claim. The overall kernel design stays untouched.

Like I said, this is not because patching is better than dumping an old design and switching to a clean new one; it's simply because in the open source world patching and evolution are the only way to move forward. Linus can't afford to dump the monolithic design and switch internally to L4 - that's too big a deal for him. Open source can NOT shed its skin and come out like a butterfly from a cocoon. The only way for them to grow is to drift where the wind blows. Is there an Itanium coming along? Here's your Itanium patch. Is there AMD64? Here's the patch. Mainframe? Patch. Anything else? Patch. Well, how about replacing the monolithic kernel with L4 and a set of independent servers? Nope, it's not a patch, so it won't work, sorry, bye-bye.

It's all OK while the old design can keep up, so you are right there. Now tell me: what would they do if they were writing an open source PC DOS in 1980? Would they evolve into an enterprise Unix or NT 3.1 ten years later ONLY BY PATCHING DOS? Whoa! Sounds funny, doesn't it?

quote: If, for example, the second computer were your sister's, she wouldn't be very pleased to see her tasks crawling because you started ripping your porn :)

LOL :)) Well, distributed ripping of porn is just one of the things that might be useful in the future. Porn does not require a cluster, but how about a cluster being used to monitor things around a house? Imagine this scenario: I have a nice house, too nice to be ignored by the local thieves, so I buy a set of hidden wireless cams, place them all around the perimeter, and plug the video feeds into my cluster. The cluster has those 10Gbit or 100Gbit links, and all five or so of my home PCs are busy watching for burglars all through the night. Neat! But only if MS cares about making such a plug'n'play cluster a part of their OS, or of this new kernel. I don't wanna spend my life setting up Beowulf - too complex, and I'm a bit lazy sometimes.

You are right that distributed computing is not very useful for today's tasks, but think: where would they have used all these wireless toys 20 years ago? There was absolutely no market for this stuff, but things evolved and voilà - radio waves everywhere! What makes you think that advances in computer vision over the next 20 years won't make my scenario possible? How about throwing all my house's computing resources at ripping a nice 150GByte Violet Ray DVD? It takes 6 hours on one 8-core PC, or only 1 hour if I click a button on my desktop and tell my Windows, "please get all the cores in the house involved, except for my sister's".
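
The 6-hours-to-1-hour figure assumes the job scales perfectly from 8 cores to every core in the house. Amdahl's law puts a ceiling on that; a sketch assuming a hypothetical 5% serial fraction and 48 cores in total:

```python
def speedup(serial_fraction, workers):
    # Amdahl's law: best-case speedup when serial_fraction of the
    # work cannot be parallelized at all.
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / workers)

one_pc = speedup(0.05, 8)        # the single 8-core PC
house  = speedup(0.05, 48)       # every core in the house
# A perfectly parallel job would go 48/8 = 6x faster; with 5% serial
# work (sync, coordination) the real gain is far smaller.
print(round(house / one_pc, 2))  # prints 2.42, not 6.0
```

So the earlier "one word: latency" objection shows up even before network overhead is modeled at all.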

Hence my crazy fantasies about clusters and embedding that in a new kernel (or in a server above a microkernel, which may be a better idea). I just project the situation of 1980 onto the situation nowadays and extrapolate 20 years further.

It's hard to say whether the NT kernel in its present form is suitable for this kind of future task. Maybe it is, but because of the feature creep, maybe it is not. If it takes the vendor 6 years to patch its OS (the XP->Vista transition), it may be a sign that too much bloat and too many patches are in there. Maybe NT got so fat and messy because Cutler's clean design has turned into who-knows-what in Vista?

Look, this is what happened with DOS! They had a clean design for the Intel 8088, then they went up to the 80386 by building a whole freakin' WinMe mansion on top of the original DOS mud hut. Now look at NT. They had a clean design for the 1991 era of PCs, essentially for the same 80386. Where are we 15 years later? We are looking at OS X, which is so different from NT because it was designed from scratch about 10 years later, and it has different everything - especially the 3D-accelerated, PDF-based windowing system that Vista copied as well.

So why is Vista so late? Is it because the nice clean kernel design was downplayed by moronic managers who couldn't figure out what to add and what to change to ship the product on time? Or was it because the clean design of NT has been patched into some amorphous blob where adding things is really hard? They have very talented, not at all moronic managers, and they tried to quickly patch XP into Vista. They got this 6-year delay as a result. This is why I'm asking now: "is it time to avoid another 8-year delay after this one, and then a 15-year delay after that?" Because I'm afraid Vista got delayed for architectural reasons, not because BG was in charge, as some say. I'm not sure patching WinMe into a "WinMe 2003" would have been a good idea - it would have taken a long time and produced a nightmare, so they dumped it. The same story goes for Vista: I'm not sure carrying this old design forward will somehow prevent significant delays in the future. Patching an old mess like WinMe is the same as patching the old mess called XP and hoping you can ship a nice lean Vista in 2 years. Know what?
It doesn't work. Six years, PLUS many of the cool features they promised got cut. No WinFS, nothing. Compared with Windows itself it's progress; compared with OS X it's a failure, BECAUSE... right, you got it! Because OS X was designed much later than NT, hence it has a better design, in the sense that it's better suited to modern hardware (come OOON, who in their sane mind would have fit PDF into a windowing system at Microsoft in 1989?? Cutler would have shot anyone proposing that in the head with a big nasty railgun, I tell ya! See how Jobs beat them now?), and it will age more slowly than Windows just because of that. Very simple logic, easy to grasp, right?

quote: The DirectX API is not meant to perform general computations. It is good at transforming triangles, but an inverse discrete Fourier transform is not computed that easily with it, right?

Right, but gpgpu.org is still out there, which means what? I think it means that as GPUs evolve and become more and more flexible, people will find more and more ways to tap their computational power. So why wouldn't MS add GPGPU specs to their DX 11 or 12? Why not, IF there is demand? You are right, there's not a lot of demand now - there's only gpgpu.org, and there's Havok, which uses the GPU as a helper engine; not much, yeah. Do you know what the future holds? I don't, and this is why I propose crazy ideas - so MS doesn't get beaten by competitors in the future :-)

quote: DOS doesn't have to run fast, since everything written for it was written with slow computers in mind, and now we have much faster rigs. I don't know the details of MacOS 9 emulation, so I won't comment on that one.

Virtual machines are everywhere; all the old OSes and their apps are routinely emulated on newer hardware using virtual machines, and this is why PCs are slowly starting to pick up traits of IBM's VM OS. This is a global trend, and I just extrapolated it into the future. Hence my words about this new kernel being IBM VM-like, which is sooo far from the current NT kernel... you can NOT patch the NT kernel into an IBM VM competitor. Try to prove the opposite and watch your patchy sandcastle crumble :)

quote: Maybe because there is no need for it? Exactly what task would you perform on GPUs over a network? Gaming won't do, because of latency. Rendering is OK - but that is ALREADY being done by apps, so...? And why this goddamn DX?! If you want a real "standard", then pick OpenGL - it's platform independent and more suitable for "serious" tasks

As for the OpenGL issue - MS just doesn't want to lose time waiting for manufacturer A to add necessary feature B to hardware C. They want things done fast and uniform, a click-and-install experience, and most importantly, they can afford it. Trust me, Apple would have dumped OpenGL long ago if they were as big.

What tasks is a uniform DX good for? In its current form, probably not much - only the stuff from gpgpu.org and Havok, which is not for home users, for sure. Still, why not add extensions in DX 11 or 12 that would run some background computations on idle shader units? You have 48 uniform shaders in your GPU; say right now you're not running Quake 6 and 24 units are idling, but you want a DVD decode, a video recode, a nonlinear video edit/transition, whatever - bingo, there's DX waiting for orders.
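
What that extension would amount to is priority scheduling: foreground (game) work always preempts the background jobs fed to idle units. A toy CPU-side sketch of the policy - all names here are hypothetical, and real GPU scheduling lives in the driver:

```python
from queue import Queue

foreground = Queue()  # game frames: always served first
background = Queue()  # recode/decode chunks for the idle units

for i in range(4):
    background.put("recode-chunk-%d" % i)
foreground.put("frame-0")

served = []
while not (foreground.empty() and background.empty()):
    if not foreground.empty():
        served.append(foreground.get())   # rendering always wins
    else:
        served.append(background.get())   # idle units pick up chunks

print(served)  # frame-0 first, then the four recode chunks
```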

Your problem is that you see DX as a kids' gaming thing, which is a big mistake. Look at Apple's Aperture - they use the GPU in a Photoshop-like environment. Look at nVidia - they work with video in their GPU hardware. It is not about gaming only. It's about using your monster GPU minicomputer for everything that requires number crunching, EVERYTHING! A uniform DX10, the constant growth of programmability in shaders, and the growing amount of local video RAM - all of this points NOT only in the direction of Quake 6, as you're trying to tell me. The GENERAL trend is to make the GPU an excellent, flexible renderer WHILE using it for whatever else is possible when you're not playing. The number of software titles that use GPUs for things other than gaming is slowly INCREASING, and it's only a matter of time until MS decides that DX is good not only for video extensions but could also benefit users with, for example, computer vision and speech recognition extensions. WHICH USE THE IDLE GPU FOR COMPUTATION! See what I mean?

quote: is a powerful command-line tool (a POSIX-compliant shell?), a new file system without the constant need for defragmentation (ext3? - nah, that would be too good to be true), easy support for other file systems (JFS, XFS, ReiserFS), more flexibility during installation (e.g. more MBR options, more awareness of other systems), replacing the registry with something of more elegant design, enforcement and better handling of multiple accounts (right now almost everyone on the desktop is using a root account), a more modular design (separating text mode from the graphical interface?, those modules you've mentioned...)

For a cool text shell, check out Monad; it's promised to be a wonder. A non-fragmenting file system would be great, but where did you get numbers supporting the claim that NTFS fragments more than ext3? Is there a link to some research on it, or do you just believe in ext3? ;-) The registry is good enough already; you have to propose justified changes - what do you want changed there, and why? The only thing that could be useful is some automatic background backup of the registry, but external utilities exist for that, I think. Account separation is enforced in Vista, and I'd say enforced too much :) It should decrease the number of root users on Windows, I think. As for text-GUI separation - cool idea, but MS won't ever care about it; they have always made the GUI the default, and Apple did the same even earlier. People who want a small, fast text kernel with a cool shell have a crowd of open source OSes to choose from. MS won't go there, just as Linux won't dump the text shell and switch to an X11-only GUI.