
Mortimer.CA writes "In a weblog entry, Paul Murphy mentions a Microsoft report (40 page PDF) that in many instances FreeBSD 5.3 and Linux perform better than Windows XP SP2. The report is about MS' Singularity kernel (which does perform better than the OSS kernels by many of the metrics they use), and some future directions in OS design (as well as examination of the way things have been done in the past)." From the post: "What's noteworthy about it is that Microsoft compared Singularity to FreeBSD and Linux as well as Windows/XP - and almost every result shows Windows losing to the two Unix variants. For example, they show the number of CPU cycles needed to "create and start a process" as 1,032,000 for FreeBSD, 719,000 for Linux, and 5,376,000 for Windows/XP."

Here's an interesting snippet I found while perusing the PDF...thought I'd share.

On the other hand, this paper does not validate our goal of increased dependence. Measuring that aspect of a system is significantly more challenging than performance. We do not yet have results for Singularity.

Interesting... Singularity is ostensibly supposed to be about stability, but the 44-page paper has no data on this. Kinda like saying, "Our new bulletproof vest is 40% lighter than our leading competitors', and twice as flexible. How well does it stop bullets, you ask? Sorry... we do not yet have results for that benchmark."

Wake me when a paper comes out about Microsoft's new stability-oriented OS that actually addresses that particular aspect of the product.

Well, that's sort of to be expected. Stability is not as easy to measure as other things, since you need benchmarks over a long period of time. Further, since it's still a research OS, it's likely in constant flux and doesn't have the same kind of stability hardening of a retail OS.

Insightful? How about "apologetic bull"? My job is in QA. Your statement says that my job is impossible. Here are a few ways you can test stability:

1) See if the OS comes back online after a power cycle
2) Insert and remove device drivers
3) Send mangled data across the various data busses
4) Run programs that try to allocate all the memory
5) Run programs that try to hog all the CPU
6) Run a program that fills the hard disk/erases the hard disk/refills the hard disk
7) Do all of the above at the same time
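For illustration, test (4) above can be sketched in a few lines. The rlimit cap and the loop guard are my additions so the sketch stays safe to run (Unix-only); `allocate_until_failure` is a hypothetical name, not anything from a real QA suite.

```python
# Hedged sketch of test (4) above: a program that tries to allocate all the
# memory. The rlimit cap and the loop guard keep the sketch safe to run;
# `allocate_until_failure` is a hypothetical name.
import resource

def allocate_until_failure(cap_bytes=1 << 28, chunk=1 << 20):
    # Cap this process's address space (Unix-only) so we fail quickly
    # instead of actually exhausting the machine.
    resource.setrlimit(resource.RLIMIT_AS, (cap_bytes, cap_bytes))
    chunks, total = [], 0
    try:
        while total < cap_bytes:          # guard in case RLIMIT_AS is a no-op
            chunks.append(bytearray(chunk))
            total += chunk
    except MemoryError:
        pass                              # the OS finally said no
    return total                          # bytes obtained before giving up
```

A real stability test would do this without the cap, watch how the OS behaves under the pressure, and verify it recovers afterwards.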

I didn't say your job was impossible, but your job is only testing a small subset of reliability. Reliability is also whether or not the OS stays up for a year at a time, or whether it has long term memory leaks. Reliability also has to do with weird race conditions that only show up after several programs interact for a significant amount of time, etc...

What you're testing is simple stuff, stuff that's easy to identify. There's a whole other class of reliability testing that's far more long term.

While technically, reboots are not required for anything other than kernel patches, there are lots of situations where it's easier to reboot than to restart every application (which might as well be a reboot anyway). For example, glibc updates will require almost every application to be restarted, or you risk exposing vulnerabilities.

Typical distros that support pervasive no-reboot updating (like Debian) don't exactly replace a "running" libc (or any other library), they simply update the on-disk copy. So any programs run after that will get the new libc, but any programs that were started before the update will of course be using the old libc.

Usually this works very well; I suppose for a mega serious security update you might want to restart all your daemons too or something.
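Incidentally, the "old copy still in memory" situation is visible from /proc: the kernel marks mappings whose on-disk file was replaced as deleted. A minimal sketch (Linux-only; `stale_library_processes` is a made-up name, not a standard tool):

```python
# Sketch: find processes still mapping a shared library that was replaced
# on disk (i.e. running against the pre-update copy). Linux-only, since it
# reads /proc; `stale_library_processes` is a made-up name, not a real tool.
import os

def stale_library_processes():
    stale = {}
    if not os.path.isdir('/proc'):
        return stale                      # not a Linux-style /proc system
    for pid in filter(str.isdigit, os.listdir('/proc')):
        libs = set()
        try:
            with open(f'/proc/{pid}/maps') as maps:
                for line in maps:
                    # the kernel appends " (deleted)" to unlinked mappings
                    if '.so' in line and line.rstrip().endswith('(deleted)'):
                        path = line.split(None, 5)[-1]
                        libs.add(path.rsplit(' (deleted)', 1)[0].strip())
        except OSError:
            continue                      # process exited, or not ours to read
        if libs:
            stale[int(pid)] = libs
    return stale
```

Anything this reports after a libc upgrade is a daemon you might want to restart.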

For example, apache and sshd, and various FTPds, can be restarted without anyone possibly noticing, because they simply leave any running children open. You connected before a certain time, you got the old copy, you connected after it, you got the new one.

And, of course, many protocols work fine if you go away for five seconds, like SMB. The client program will just say 'oops, connection hiccup' and reconnect silently, and the end user never notices. Same with IMAP clients. They go 'Hey, the server closed my connection, I better open it again'.

Restarting services on a Linux box is 99% transparent to end users, even ones that are currently directly doing something with the server.

Rebooting is not transparent, even if all the connections are reacquired automatically, simply because work stopped for the two-minute reboot.

Same difference... you've still shut down all your network services, which to the users means you've had downtime. It's a reboot in all but name.

Mmm... I can see that in a few specific cases, like if you have a lot of users who log on over ssh. Less so for webservers and remote filesystems, where if you bounce the runlevels fast enough, the interruption will probably never be noticed.

Of course, the context where the Curse Of A Thousand Reboots really bites is for the home computer. I mean, I only have one user on this machine. Rarely I'll have two, never any more than that. So if I cycle runlevels, no-one is going to be put out bar me - and I'm the one doing it.

In general, I find that the people inconvenienced by a compulsory reboot are not networked users.

Of course, even if you have remote users, your downtime is going to be a lot less if you don't have to go through POST, BIOS initialisation, device scanning and all the rest of it. And of course you only have to do it once, because you're controlling the process, so you don't get fifteen reboots in a row because Windows brute-forces everything.

So, I think "all but name" is overstating the case. By rather a lot, actually.

Some OSes don't require a complete reboot when you do an update. Also, some don't require a reboot but recommend one (Solaris comes to mind). In most of the *NIX world, you can just restart a few services instead of rebooting. Personally, I've never been in a situation where I can't take the server down at non-peak times for routine maintenance so it really doesn't matter to me. In fact, a lot of times rebooting is easier than finding which services were patched and restarting them.

Seriously though, very few Linux updates, for example, require a reboot. Most updates occur in user space and can be adequately applied by restarting the applicable services (if any). You just have to be aware of exactly what is being updated and what it affects.

You clearly don't know much about what makes an operating system stable... Stability depends partly on how much error checking the compiler is capable of doing, partly on how people write software (design), and partly on how well the operating system is designed to separate processes and different parts from each other. Singularity addresses all of these issues: it's mainly written in a "safe" language which is strongly typed and does lots of compile-time checks, and it is a microkernel operating system which (at least in theory) prevents your cheesy USB webcam driver from crashing the kernel. Most other Unix-wannabe systems are written in the ancient language C :), and run monolithic kernels.

But singularity isn't all new, it just implements old ideas: Occam and QNX!

But in my opinion, Singularity just might be the most interesting OS to emerge in recent years. It will be interesting to see how long it will take the free software world to come up with something similar :) (btw, I am a long-term happy GNU/Linux user, and have no plan of switching...)

And in reality it just makes things like disk I/O extremely slow, à la OS X. Personally, I am pretty disappointed by OS X as a server, both in stability and speed. If that is a good example of what a microkernel can do in the real world...

It's a Microsoft OS, and you're saying that they made a mistake when mentioning that one of their goals is increased dependence? Hell yes that's their goal. Vendor lock-in, forced upgrade cycles, dependence - all the same thing, and all the goal of any winning software company.:)

Interesting... Singularity is ostensibly supposed to be about stability, but the 44-page paper has no data on this. Kinda like saying, "Our new bulletproof vest is 40% lighter than our leading competitors', and twice as flexible. How well does it stop bullets, you ask? Sorry... we do not yet have results for that benchmark."

You didn't really read it, did you? From TFA(bstract).

...Singularity demonstrates the practicality of new technologies and architectural decisions, which should lead to the construction of more robust and dependable systems.

The point of the paper is NOT to demonstrate a fully working uber-dependable system, but to validate the practicality of the architecture that is under development, and the new technologies being included. That's why they have the section on performance, with the preface (right above your quote, btw):

If Singularity's goal is more dependable systems, why does this report include performance measurements? The answer is simple: these numbers demonstrate that [the] architecture that we proposed not only does not incur a performance penalty, but is often as fast as or faster than more conventional architecture[s]. In other words, it is a practical basis on which to build a system.

That's the point of the paper. I understand, however, that you might have been in too much of a rush to get first post to understand the point of the paper...

This is the real big threat to the open source community. Once Microsoft becomes honest with themselves, they might start making real progress on the engineering side of their product. Marketing will only get you so far when you have no more competition, but good engineering can make it stick.

A great product is a great product regardless of who makes it. I thought OSS was a big deal because it emphasised great engineering with openness. So, if you can't handle the heat, then stay out of the kitchen.

I think it's more telling that the paper shows Linux or FreeBSD as performing better in a few tests, which is the reason it was able to appear on the front page.

I'm happy though that MS may be taking Singularity seriously. Maybe we will see their OS in 2011-2015 based on it? Unless some sort of major shift in its purpose occurs, I would definitely jump ship from whatever I am on then, and I will definitely port/develop my software for the OS.

I don't know what those 5M vs 1M cycles are doing. But what I do know is that fundamentally Windows was designed around high-performance threading/wait operations and high-performance asynchronous operations, whereas Unix and its derivatives rely on high-performance process creation and blocking I/O for server applications.

I.e. Apache 1.3x series performs poorly on windows because it was a straight copy of the Unix edition - using processes rather than threads.

I don't have something I can point you at right now; however, the information is true. Linux used to have horrible overhead imposed by thread creation. As a result of both the NGPT and NPTL projects, the time needed to create a thread on Linux is tiny... tiny... tiny... Some of the well-known results from the projects were published. Here's a quote:

"One test mentioned in Ulrich's email - running 100,000 concurrent threads on an IA-32 - generated some interesting discussion. Ingo Molnar explained that with the current stock 2.5 kernel such a test requires roughly 1GB RAM, and the act of starting and stopping all 100,000 threads in parallel takes only 2 seconds. In comparison, with the 2.5.31 kernel (prior to Ingo's recent threading work), such a test would have taken around 15 minutes."

http://kerneltrap.org/node/422 [kerneltrap.org] As you can see, the increase in thread performance has been stellar, almost unbelievable. Keep in mind that prior to this effort, Linux's thread creation was nowhere near this fast. Ergo, one can easily deduce that Linux now far exceeds (takes less time than) Windows' thread-creation latencies.

UNIX creates a process with fork, which takes no arguments. UNIX runs a new executable with execve, which takes 3 arguments. So in just two system calls with 3 arguments, you launch an app.

Windows has a CreateProcess() [microsoft.com] function with 10 arguments, many of which are pointers to structs. I call your attention to the absurd "LPSTARTUPINFO lpStartupInfo" argument, which supplies info about the windows style and current desktop.
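The two-call UNIX sequence above can be sketched directly. This is a minimal illustration only: `spawn` is a made-up helper name, and `os.waitstatus_to_exitcode` needs Python 3.9+.

```python
# A minimal sketch of the two-call UNIX launch described above: fork() takes
# no arguments, execve() takes three (program, argv, env). `spawn` is a
# made-up helper name; os.waitstatus_to_exitcode needs Python 3.9+.
import os

def spawn(program, argv, env):
    pid = os.fork()
    if pid == 0:                          # child: replace this process image
        try:
            os.execve(program, argv, env)
        finally:
            os._exit(127)                 # only reached if execve itself failed
    _, status = os.waitpid(pid, 0)        # parent: reap the child
    return os.waitstatus_to_exitcode(status)
```

For example, `spawn('/bin/sh', ['sh', '-c', 'exit 0'], {})` launches a shell and returns its exit code. Compare that to filling in CreateProcess()'s ten arguments.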

Nothing that people didn't already know there - take a look at the numbers for thread operations and note how they're much, much faster than the Linux/BSD numbers. Creating a process on Windows has always been very expensive, and threads have always been fast. It's why Windows applications use threads where equivalent Unix ones fork.

First of all, the Windows processes created in this example are Win32 processes, which have a lot more baggage than the posixy processes that FreeBSD, Linux and Windows's Posix subsystem use. I'd like to see added to that list the number of cycles to create a native Windows process and a SFU process (they're going to be a lot shorter), and also a WINE process under Linux.

Some of the things that Win32 processes do that SFU and native processes don't:

The Application Compatibility Database: a user-mode service has to be contacted to see if the new program needs to have any compatibility shims added. Half of the compatibility that XP has comes from modifying programs as they start, or giving them special treatment. This stage alone causes so much overhead that Windows Server 2003 has a special group policy that lets you turn it off to make starting processes faster.

The Software Restriction Policies database: a set of registry entries that have allow/deny rules for starting processes based on hash, filename or certificate. To make any actual comparisons, the entire binary has to be hashed and checked for certificates before the program even starts.

Registering with the Win32 subsystem server (csrss). This involves several out-of-process function calls.

Loading the current locale, including NLS files.

If enabled, contacting the Themes service.

Except for talking to the Themes service, all those steps are done for every new Win32 process, even if it doesn't have a GUI.

Singularity is a very interesting system. But that's not surprising, when you consider some of the brains behind it: Galen Hunt, Wolfram Schulte, Ulfar Erlingsson, Rebecca Isaacs, and many others who are well-known for their research.

In twenty or so years we may look back at Microsoft Research with the same admiration we have for Bell Labs.

Now that you're done being sarcastic, go look into some of the research [microsoft.com] that is being done at Microsoft Research. Like it or not, it is top of the line work. They're at the cutting edge, and they're well financed.

Like it or not, it is top of the line work. They're at the cutting edge, and they're well financed.

Okay, but how many of their innovations (Christ Microsoft loves that word!) actually make it to the outside world?

I think your comparison to Bell Labs is good, however, in that much of what Bell Labs created required others to make into real products. AT&T/Ma Bell sat on every innovation until it nearly suffocated due to lack of capital investment.

We will just have to wait and see. Then again, even if a specific project doesn't go commercial, there is always the knowledge that was gained from it. In many cases that is more valuable than the tangibles the project may deliver.

And where did the last great scientific lab come from? A raging monopoly (AT&T), that's where. When did they do their best work? When they were funded by a raging monopoly. I think the comparison to Microsoft Research is quite apt.

The interesting thing is that MS Research does do a lot of truly interesting pieces of research - but the funny thing is, despite this, MS itself uses few to none of the fruits of this research, preferring instead to just buy up other companies and copy other technologies.

I've seen talks and papers that have come out of Microsoft research, and while it may look good as a website summary, the quality of the actual projects and results varies wildly. They may talk big, but in the end, I've only ever seen a couple of projects out of MSR that were even worth talking about, and the research labs of places like IBM, HP, and even Sun do many far more interesting things.

One other big problem from MSR - on the occasional project that's actually good, they somehow manage to kill it, or at least never tech transfer it into products. I cry when I think of some of the awesome dev technologies MSR was working on a few years ago that never made it out.

So, if you work for Microsoft Research, there's no way you can be doing cutting edge research?

No, but if you work for Microsoft Research it is likely that the results of your research may never see the light of day as products. Unless there is a way for Microsoft to make hoards of cash from your idea, it will be stillborn.

You definitely have a good point there. Everyone around here bashes Microsoft obviously, and for good reason. Their business practices can get a bit on the shady side sometimes, though they probably aren't deserving of quite the amount of hate they get around these parts. But their programming and research, particularly research, isn't that shabby, and certainly isn't "evil." Remember, M$ doesn't just sell operating systems, it makes them too, and to do that you have to have brains. I think some people around here need to give at least the engineers and researchers in Microsoft a little more respect.

Sad that your comment is modded as funny. In fact what you say is insightful and probably will turn out to be true. It amazes me how most people in this forum refuse to give Microsoft credit for anything they do or have done, but they are more than willing to heap blame upon them. I believe that *overall* Microsoft has in fact been a positive force in the industry. This doesn't mean everything they have done worked out for the "common good", but I think the scale tips in that direction. And don't forget that they continue to spend lots of R&D dollars both on product development and pure research. You would think a technical audience like /. would appreciate that. To me it smacks mostly of envy and jealousy. Can't we all just get along?

I guess this explains why Linux boxes do so much better than Windows boxes at high load; it takes the Windows computer almost 8x as long to start a new process! That's something where a little bit of optimising really helps =)

And even once started, it is slow. I find Windows desktops to be HORRIBLE multitasking environments. Try unzipping a large file in the background and the machine comes to a crawl. On Linux you hardly even notice it unless you try to do something that requires lots of CPU. I mean, obviously you're not going to get 100% CPU for each process, but the desktop shouldn't crawl. I always found it amusing that Windows and Mac users used to think that multitasking meant just having Word and IE open at the same time.

Yawn, same old stuff - read the rest, Windows is better at thread switching. That makes up for the slow process creation. Windows programmers know that process creation is slow, and thread creation is quick. Using threads over processes is not the same model you generally use in the Linux world, which prefers processes to threads. Even the Blog author makes the same comment: "So why is this interesting? Because their test methods reflect Windows internals, not Unix kernel design." yet he still draws out

Yawn, same old stuff - read the rest, Windows is better at thread switching. That makes up for the slow process creation. Windows programmers know that process creation is slow, and thread creation is quick.

As a result, you get tons of unstable Windows applications because to get any reasonable concurrency you have to throw out the years of hard work that OS designers put into having protected memory.

Threads vs. processes isn't "two different ways of doing the same thing". Barring a massive implementation boondoggle, you make that choice based on whether you want memory protection or not. These numbers highlight a massive boondoggle, which takes the correct choice away from the application author in many cases.

Threads certainly have a place, I never said otherwise. The problem is that the Windows system forces you to accept shared memory to get concurrency, and those two are unrelated. The number of problems that want concurrency and memory protection is large, and eliminating that option is a MAJOR problem. Having done a fair amount of GUI programming myself, I find a multiprocess solution is often correct (e.g. in something like Photoshop image filters, where you want shared access to one memory segment but do

I've been seeing this damn report hailed all over the Internet for the last few days as Microsoft saying Unix is better than Windows, but apparently nobody has actually read the report.

For one thing, Windows is not slower than Unix in most of the tests. It's slower than Unix in some of the tests and faster in others. For another, these benchmark results are for low-level things like spawning processes and threads. Any programmer who knows anything about Unix and Windows will tell you that threads are cheaper in Windows and processes are cheaper in Unix, because that's how they were designed. So of course Windows is going to be slower than Unix at creating processes, and of course Unix is going to be slower than Windows at creating threads.
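The asymmetry described above is easy to get a feel for yourself. A rough micro-benchmark sketch (Unix-only, since it uses fork(); the timings include interpreter overhead, and this only illustrates the comparison, not the report's methodology):

```python
# Rough micro-benchmark sketch of the thread-vs-process asymmetry. Unix-only
# (it uses fork()); timings include interpreter overhead, so treat them as
# an illustration, not a rigorous benchmark.
import os
import threading
import time

def time_forks(n=50):
    """Seconds to create and reap n child processes via fork()."""
    start = time.perf_counter()
    for _ in range(n):
        pid = os.fork()
        if pid == 0:
            os._exit(0)                   # child does nothing and exits
        os.waitpid(pid, 0)
    return time.perf_counter() - start

def time_threads(n=50):
    """Seconds to create and join n do-nothing threads."""
    start = time.perf_counter()
    for _ in range(n):
        worker = threading.Thread(target=lambda: None)
        worker.start()
        worker.join()
    return time.perf_counter() - start
```

On a Unix box the two numbers are usually the same order of magnitude, which is exactly the "processes are cheap" design the comment describes.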

The only thing worth reporting about this thing is the performance of Singularity, which looks like it's shaping up to be an excellent modern kernel.

Yes. It's the "almost every" that I have an issue with, because it's a blatant exaggeration. I've also seen that phrasing used in several news articles about the report. But when I looked at the actual report, I saw plenty of tests where Windows actually beat Unix. I didn't bother counting, but I'd estimate that the two came out pretty evenly matched, with Unix maybe slightly ahead. In any case, no matter which one beat the other more times, these are very low-level tests. Nobody's going to notice these differences unless they're running a high-traffic server or doing some really heavy-duty computing.

There's been a lot of work on improving the threading under Unix variants. M:N threading models, zero-copy where data structures are identical, etc. It is entirely possible, if not probable, that some cases of threading will actually be faster under some u*ix-like OS' than Windows. Because there has also been a lot of work on security models under u*ix-like OS' (role-based memory encapsulation, etc) which are inherently slooooow, there will certainly be u*ix-like OS' which are slower at starting new process

Actually, both thread creation and process creation are much faster on Linux than Windows. However the margin is smaller for threads.

I agree that the report is meaningless for the purposes suggested in this slashdot write-up. If anything, it tells us that something coming out from MS Research has the potential to kick the asses of both Windows and Linux.

"So why is this interesting? Because their test methods reflect Windows internals, not Unix kernel design. There are better, faster, ways of doing these things in Unix, but these guys - among the best and brightest programmers working at Microsoft- either didn't know or didn't care."

So, Windows still loses at times when using what seems to be a biased (or simply uninformed) testing method? Loelz.

NT (and its later incarnations like XP and so forth) was designed to use threads rather than processes for multiprocessing/concurrency, I understood. Is Win XP less efficient in multi-threading than BSD/Linux? The history of threads in UNIX seems more like a later bolt-on - UNIX was designed with a multi-process model, I think.

Exactly. NT got its process model from VMS, and process creation was a very heavyweight operation. Unix, by contrast, had a very lightweight process creation operation. Hence NT needed threads to provide a faster alternative to processes, while Unix (whose processes were almost as cheap to create as NT threads) didn't really need threads for anything other than a marketing checklist (about the only thing Unix threads get you that processes don't is a fully-shared address space, and I'd argue that's often more a problem than an advantage).

You're misreading. It's not 90% of the problems out there, it's 90% of the code in a given program that's synchronous.

I really doubt that.

Take, for example, the process of reading data from a single input source and processing it. With no other input sources to look at, and no processing that doesn't require the data you're trying to read, exactly what can the code do while the read's completing?

No, the equivalent of 'R&D' on the Microsoft campus is R&P, or 'Research and Patent'. Like most of their would-be innovations, they are born into formaldehyde, destined to serve as courtroom exhibits; the last thing Microsoft can afford is a lively, competitive software industry spurred by brilliant implementations and ideas.

This article takes a very interesting report on a reference implementation of some innovative ideas in OS design and reduces it to a couple of entirely peripheral, seat-of-the-pants benchmarks that support the "OSS rulez!" thesis.

Even people like me, who have only a basic knowledge of OS architecture, can tell you that processes are lightweight in Unix and heavyweight in Windows. The lightweight objects in Windows are threads, which is why Windows makes so much use of threads, while Unix spawns processes left and right.

The scenario stated in the slashdot post does show a situation where Linux performs better than Windows... but after looking through the "performance" section of the whitepaper, it's pretty much the only case where Linux is better. Windows appears to beat Linux on quite a few other tests (such as memory use of a 'hello world' program, the executable size, and even some of the 'cost of basic operations' tests).

This is Microsoft Research. They have the same independence as university researchers--that is how Microsoft lures them away from academia. These guys are making honest comparisons to Linux and FreeBSD, because that is what they do as good researchers. Microsoft is enlightened enough not to interfere.

It's very nice working for an outfit that lets you do full-time research, doing pretty much what you want to do. Microsoft generally has fairly bad press, but I think that this is something that Microsoft should really brag about, because they pay lots of people to do essentially very freely directed research. They don't correct our papers, they let us go to whatever conferences we want to. I'm publishing at a higher rate than I did at the university.

Instead of paying rapt attention to what Microsoft is doing, what I would like to see the OSS community do is consciously form more organizations whose express purpose is to chip away at Microsoft's software base. What I mean by this is: make sure your program runs on Windows for now. Get people using OSS and used to the idea, so that the next time average-joe needs some software he'll search for an OSS program first. Then, once that mindshare has been established, begin to work towards the more core functions like the OS itself.

Who knows, Microsoft might at some point simply open up the source of Windows to counter a loss of control to OSS if they see that their customers are truly ready to abandon ship. And to build that feeling in customers, give them options - if all their useful software is OSS, then they can swap out the lower levels (like Linux for Windows) without feeling any transition pain at all, because their applications didn't change, only the plumbing did.

Ballmer's right, it is all about developers. OSS developers can introduce OSS values into the Windows "ecosystem", for lack of a better word, and see what happens.

Why is it that some people seem to think that all OS names, when they have a qualifier of some kind attached to the generic term, need a slash to separate them? Just because GNU/Linux is written that way does not mean it's some kind of law, people...

The only stat in this table that Windows trails on is process creation. And anybody who has ever ported Unix code to Win32 knows exactly why: Windows is thread-oriented, and Windows systems don't tend to use helper programs or demand-forking to get work done. Which might be why Windows beats Unix in the thread benchmarks, but not in the IPC benchmarks. On the more general benchmarks, like cycles to issue a system call, Windows falls smack in the middle --- and, again, Windows has a slightly different take on what is and isn't a system call.

Drawing comparisons between Singularity and normal operating systems here is silly. Singularity doesn't have processes in the conventional sense; since there are no hardware dependencies on "process" creation in Singularity, IPC and forking are much faster.

Which is why this benchmark is reasonable inside the Singularity tech report (they're trying to demonstrate that there's a major performance benefit in rethinking boundaries between programs), but totally unreasonable outside that context: these are micro-benchmarks, like the ones CISC and RISC people throw at each other, and don't describe the amount of time it takes to complete a high-level task. Time to execute a system call is meaningful only in the context of how many system calls it takes to complete the task you're measuring.

- in file operations, XP beats Linux and Singularity at sequential reads, with the exception of FreeBSD being fastest if the block size is high (and very bad for small block sizes)

- Linux executable sizes are larger than those of the other OSes (whatever that means: more good coding, or less bad code, SCNR)

Please bear in mind that a benchmark does not tell whether the "slower" OS actually invested more time in doing some smart stuff that pays off in some other way. In particular, I would not be surprised if an experimental OS like Singularity did less.

I am a recent convert to Ubuntu, but I do Unreal Tournament mapping, which can't be done in Ubuntu, so I had a dilemma. I was about to learn how to dual boot when I found out about VMware Player and setting up a virtual machine to run XP. I set it up, and honestly, XP runs faster this way than it ever did on a regular install. No, I can't install stuff like the video drivers I need, but the drivers that install with XP work well enough to run the Unreal editor. I wonder if someone could test XP in VMware in Ubuntu against XP on a hard drive and see what kind of difference there is. It sure seems like XP is way faster than it ever was.

Shame to have to set up like this just to run unreal editor, though. Oh, for you gamers out there, UT runs so much smoother and faster in Ubuntu, it's not funny. UT2k4 (has linux installer on the 1st cd) runs way better in Ubuntu also. You might want to check it out if you have a spare hard drive you can play around with.

It's not so much about its ability to start thousands of processes. What is important is that it takes Windows XP five times as long as FreeBSD to create a single process, and seven times as long as Linux. That's a significant difference.

Processes in Unix are lightweight objects, and the OS spawns them left and right. Processes in Windows are heavyweight objects, and the OS creates only a handful of them. The lightweight objects in Windows are threads, and you'll notice that Windows thread creation is faster than Unix thread creation. These are just different OS design philosophies.

I didn't mean to say that there aren't some negative consequences to the choice of making threads performant and processes less so. There are, and your post correctly identifies one of them. But I think it's wrong to say that that design decision is therefore across-the-board wrong.

There are 2 separate issues here:
1. Are threads faster than processes? Yes, on both Unix and Windows.
2. Are processes so slow as to be essentially unusable for concurrency? On Windows, yes, for a relatively large problem domain.

Creating thousands of threads seems like a horribly inaccurate way to gauge performance. Creating a process is something that only goes on every once in a while -- threads or state machines are what are used when high levels of concurrency are required. The problem is that any code being executed repeatedly will be cached and often optimized by modern processors. P4s are especially aggressive about this thanks to their microcode cache. And different code will be optimized and perform from cache differen

My always-on XP SP2 machine has not had any spyware in 3 months. It's the dumb users who get infected. There are fewer dumb people using Linux (due to the learning curve), therefore fewer problems with unwanted computer activity. An XP machine properly set up with firewall, spyware and virus scanners/blockers, and used responsibly (no Kazaa) will get a serious virus about as often as a *nix user will get rooted.

The fact that the virtual machine is a bit slower isn't the point. The point is that because the virtual machine ensures memory protection, Singularity doesn't need to use hardware memory protection for the kernel. Doing a single system call costs hundreds of clock cycles on a modern CPU, because of the userspace/kernelspace switch. It also necessitates all sorts of complex (and slow) IPC mechanisms that go through the kernel (and invoke the aforementioned switch), all because we're still programming in an
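The per-syscall cost being discussed is easy to get a feel for by timing a cheap syscall against a plain function call. A rough sketch only: the absolute numbers include Python's call overhead, and some libc versions have cached getpid(), so treat the difference purely as an illustration of the user/kernel switch cost Singularity's software isolation avoids.

```python
# Rough sketch of the per-call cost being discussed: time a cheap syscall
# (os.getpid) against a plain Python function call. Absolute numbers include
# Python's call overhead; only the difference hints at the user/kernel
# boundary cost.
import os
import timeit

def python_noop():
    return 0

N = 100_000
syscall_time = timeit.timeit(os.getpid, number=N)   # crosses into the kernel
plain_time = timeit.timeit(python_noop, number=N)   # stays in user space
```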