coondoggie writes "While conventional wisdom says virtualized environments and public clouds create massive security headaches, the godfather of Xen, Simon Crosb, says virtualization actually holds a key to better security. Isolation — the ability to restrict what computing goes on in a given context — is a fundamental characteristic of virtualization that can be exploited to improve trustworthiness of processes on a physical system even if other processes have been compromised, he says."

If OSs hadn't failed so bad on isolation, we wouldn't need so much virtualization. "Virtual machine monitors" are just operating systems with a rather simple application API. Microkernels, if you will.

Plus, the minute you start sharing things within a virtual machine (i.e. Apache, CGI-type middleware, and the database all on the same machine), you've just lost all the "extra" security from virtualization. You may keep the top-level OS "protected", but who cares? You've lost private data from your database through a hole in Apache (or whatever). Ooops....

The problem of security is slightly improved if you run each thing in a separate virtual machine on the same hardware. You should in theory get relatively fast interconnects -- if your VM is any good, that is. But you're still losing efficiency, unless you're doing "zones" or something like that. And it's 3x the headache to manage 3 separate OS instances, for what is in effect just one top-level system anyway.


Well, nobody (or at least, nobody sane) does it like that. There is no non-trivial datacenter that

Maybe if you are used to setting up development systems, but in the enterprise they are all different machines. You might have both an IIS server and an Apache server feeding off of a common database; you don't want a fault in one client to take down more than one system, and the list goes on. If you only plan to have 3 clients you might not want to hire a specialist for such a small VM, but as long as you are taking nightly snapshots there is not too much that can go wrong that you can't fix w

The reason is that money isn't a concern there; reliability is. So you can throw tons of technology at making something work well. There's plenty of stuff that mainframes do that we'd love to see on normal computers. The problem is being able to implement it at an acceptable level of performance and at an acceptable cost.

The difference is, mainframes did it properly. The first system to support virtualisation was IBM's CP/CMS on the System/360 (the ancestor of VM/370). It didn't just support virtualisation, it supported recursive virtualisation. This meant that any VM could contain other VMs, so you could use the same abstraction for isolation at any level. Operating systems provide a very limited form of virtualisation: processes. A userspace process is basically a VM for a paravirtualised architecture. Any time it wants to talk to the hardware, it has to go via the kernel. The problem is, it stops there. A process can't contain other processes which can only contact the kernel via the parent process, so programs end up adding their own ad-hoc isolation mechanisms. Things like the JVM, web browsers, even office apps all need to run untrusted code but have to isolate it without any help from the hardware. Fortunately, modern systems are providing things like Capsicum, Sandbox, and systrace, so it is now possible to create a child process with very restricted privileges.
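The "restricted child process" idea can be sketched without any of the Capsicum-specific APIs. Here's a hedged toy using plain POSIX resource limits from Python (the function name is my own invention, not from any of the systems mentioned): the parent forks a child, tightens the child's limits, and runs the untrusted work there, so the parent keeps full privileges while the child gets a deliberately narrowed environment.

```python
import os
import resource

def run_restricted(fn):
    """Fork a child, narrow it via rlimits, and run fn inside it.

    A loose sketch of the 'restricted child' pattern: the parent keeps
    full privileges; the child gets a deliberately tightened environment.
    Returns the child's exit code (0 = fn succeeded, 1 = fn raised).
    """
    pid = os.fork()
    if pid == 0:                      # child
        # Forbid creating file content (max file size 0) and cap CPU seconds.
        resource.setrlimit(resource.RLIMIT_FSIZE, (0, 0))
        resource.setrlimit(resource.RLIMIT_CPU, (2, 2))
        try:
            fn()
            os._exit(0)
        except Exception:
            os._exit(1)
    _, status = os.waitpid(pid, 0)    # parent
    return os.WEXITSTATUS(status) if os.WIFEXITED(status) else -1
```

Real mechanisms like Capsicum's `cap_enter()` go much further (cutting off all global namespaces), but the shape is the same: restrict, then run.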

Among other things, I'm responsible for a cluster of windows terminal servers, which users never fail to find creative ways of breaking. Yes, Windows sucks, but it's necessary to run the software my customers use, so there is no alternative. Virtualization may be overkill in theory, but in reality it may be the only way to keep users from hosing our systems. Would be different if MS knew how to properly design an OS, but if wishes were ponies......

You're insightful. You see the problem clearly. Including the possible "team identification" urges.

And you're lashing out at them.

Take your insight to the next level? Look at how you're presenting your information. Are you just venting, or are you trying to effect change? If you were intentional about effecting change, would you still heartlessly condemn those you were trying to persuade?

Show me a man who stands strong on all his principles and I'll show you a man who is hated and doesn't have anything.

Your principles need to be prioritized, and the ones lower on the list may need to be bent more and more. For most mentally healthy people, using a computer OS that you don't like is low on the list of principles.

And I have seen one other important application on Windows: assume you have some MS server software that can only handle 200 users or so (there are a few). If you have, say, 20,000 users but only ever 100 active concurrently, with traditional Unix server software you would just use it directly. With Windows, you can put 100 virtualized installations on the same hardware.

I am halfway convinced that this example is the real reason virtualization is so successful, namely lack

Actually, privilege separation done right is far superior, as you can rather easily do application-integrated intrusion detection at the internal interfaces along the separation lines. Virtualization gives you nothing like that, nor the fine-grained access control model.
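That "detection at the separation line" point can be sketched roughly. A hedged toy (the protocol and names are invented for illustration): the privileged half of a privilege-separated pair accepts only a tiny, well-defined set of requests from its unprivileged worker, so anything else arriving at the boundary is itself an intrusion signal.

```python
import json

# The only operations the unprivileged worker may request.
# (Hypothetical one-op protocol, for illustration only.)
ALLOWED_OPS = {"read_config"}

def handle_request(raw: bytes):
    """Privileged side of a privsep boundary.

    Parses one request from the unprivileged worker, enforces the narrow
    interface, and returns (response, alert). A non-None alert is a cheap
    form of application-integrated intrusion detection: a compromised
    worker reveals itself by asking for things the protocol never allows.
    """
    try:
        op = json.loads(raw.decode()).get("op")
    except (ValueError, UnicodeDecodeError, AttributeError):
        return b'{"error": "malformed"}', "malformed request at boundary"
    if op not in ALLOWED_OPS:
        return b'{"error": "denied"}', "unexpected op: %r" % (op,)
    return b'{"config": "debug=off"}', None
```

This is essentially what OpenSSH-style privsep does: the monitor process knows exactly what a healthy worker would ask for, so anomalies stand out at the seam. A hypervisor sees none of this structure.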

You're correct. A security kernel that is provably (and proven) correct is hard to design, but has been doable for a long time. Any "Trusted" (as opposed to "Trustable" - which means "you can't actually trust it at all") OS is built around a verifiable level of isolation. (For example, if prior to the Common Criteria, you'd wanted Linux to be an A1-class OS, you could have done it even though Linux wasn't specified out from the start. A1 was perfectly achievable if the security kernel alone was specified from the start and the rest of the OS was merely audited to prove everything went through it.)

Even that is unnecessary, though. GRSecurity went belly-up because there were not enough developers interested in it and no funding for it at all. Problems any of the commercial distros could have fixed in a heartbeat and any of the major vendors (IBM, you listening?) SHOULD have fixed in a heartbeat. That wasn't perfect isolation but it was vastly superior to what we currently have which is too limited in scope and too limited in usage.

Remember, though, this last bit only applies to Linux. Some of the BSDs have MAC of some sort, but not all, though all of them could have it tomorrow if they wanted.

Windows - the only relationship it has with MAC is the British image of a dirty old man in a raincoat. But even there, where was the necessity? It has a built-in hardware abstraction layer and a few other key areas that could, quite easily, have all linked up with a proper security kernel. Instead, we've got BS and I don't mean it earned a degree.

Even that is unnecessary, though. GRSecurity went belly-up because there were not enough developers interested in it and no funding for it at all.

Do you refer there to a company that was also called GRSecurity? Because I'm running a Gentoo Hardened system right now with both PaX and GrSecurity integrated into the kernel (coupled with a hardened toolchain and various userspace features). That is one reason it was worthwhile to me to build from source -- well that and USE flags but this would be another discussion.

If the company going under was what caused the work of the same name to become GPL software, this may have actually increased its avail

No, it was the GPL patch folk. If you look through the old news, you'll see the announcement that they lost their sponsor. They later announced - I think on LWN - that they were indeed stopping all work. Well, obviously they got the money they needed so fortunately I'm wrong in thinking that this had continued into the present day. Nonetheless, for a while they were zombified.

The higher security certifications start to have WEIRD consequences for a general purpose system, we went over these a bit in computer science.

For instance, under the (apparently now obsolete) Orange Book ratings, C2 is pretty normal; NT4 (not on a network) was certified to this level, and certified versions of HP-UX, Irix, VMS, etc. were sold back in the day at level C1.

In theory, there are exceptions. In practice, you're so close to 100% right that I'd need extended floats to find the exceptions.

By the time you get into the Bs and As, which is where MAC gets involved, MAC is considered to encompass ALL communications, ALL memory management as well as ALL program access, except where otherwise noted. (Orange Book doesn't cover all the uses of MAC, so the Orange Book definitions alone aren't enough.)

Thus, it is possible to have SYSV shared memory in B1, but all processes sh

Bar stewards! (Actually, just been to their site. They're distributing kernel binaries, if you dig around some, which means they're legally required to provide the source code for the kernel on request, though they're not required to provide the source code for the security kernel if it is a module. They have to if it's solidly built into the code, though.)

Because the security kernel is the only part that actually has to be formally proven for A1 or EAL7, provided the security kernel totally isolates the pr

Oh, and their license agreement states that in order to download the software you have to renounce all Open Source rights and agree not to pursue them. Now, I personally didn't see any clause in there saying I couldn't forward the info to the FSF guys for them to pursue, I just had to agree that I wouldn't.

(If they're not going to make any meaningful use of the project, I'm sorely tempted to take the risk and let the Gnu folk know what's going on. The risk being that the developers take the website down and

I agree. Most people don't realize that a proper OS shouldn't need virtualization for security. They're basically saying that it's impossible to make an OS secure, and then they create a solution that is really an OS that can run other OSes. Except this OS is "different".

I can understand virtualization being used to consolidate multiple servers onto larger servers, you can use less network adapters and even aggregate them, decrease the network cabling/switch infrastructure. You can have multiple megaservers

I can understand virtualization being used to consolidate multiple servers onto larger servers

Except that, in theory, you should never need to do this: if you have a bunch of servers running various processes, and want to consolidate them onto a single, larger server, you should be able to run all those processes at once on the big server. You shouldn't need to run separate OS instances for each one. The whole reason the timesharing multiuser system was invented was so that one computer could be used by l

It's not the OS that failed, it's the applications. Different applications want the system settings changed to what they think is best, and you can't make them all happy. Granted, it should be possible, but today's application developers can be total idiots with an egocentric view of the OS. I have Oracle support telling us we should increase maxuproc to 16384, when it's obvious the system will die long before that many Oracle processes are running, which defeats the purpose of maxuproc.

The problem with Solaris zones is that a kernel panic takes everything down. They have a lot less overhead than a hypervisor solution, but the cost is stability. I first hated Solaris zones when a ZFS filesystem caused a kernel panic and brought all of the zones down. Then, through the miracle of Solaris Cluster, it was brought up on another machine, and when the ZFS filesystem was imported, another kernel panic and all of those zones went down as well!

Right, but again this is an OS failure. It shouldn't (in theory) matter what version of an OS you have, as long as it's not too old; there should be no such thing as a "legacy app" that only runs on a legacy OS, it should be possible to run the old app on a new OS version without any issues whatsoever. The fact that this isn't the case shows that there's a giant failure in OSes.


If someone writes their application to use deprecated (or, worse, undocumented) APIs and features, then its failure to run in more

Well, first, if there are undocumented APIs, that's absolutely an OS failure. There shouldn't be any undocumented APIs, period. There's no good technical reason for such a thing.

Anyway, APIs shouldn't be deprecated. Programs written for the standard C library on a Unix system back in the 80s will probably still compile and run fine on a modern Linux system now. And if there's "quirks" in the APIs, that again is an OS failure; the behavior of every API should be documented and well-defined.

Of course there is. Functionality and features only meant to be used within the OS by other OS components and not by third party applications.

If there's a genuine cause to have functionality only available to the OS and not applications (third-party or not is irrelevant), then there should be some mechanism to prevent applications from accessing that functionality, rather than simply trusting them not to.
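The "enforce rather than trust" principle looks something like this in miniature. A hedged illustration (names invented; in-process Python can't truly enforce anything, unlike hardware privilege levels or kernel-mediated checks): internal functionality demands proof of authority, typically a capability, instead of relying on callers behaving.

```python
class _OSCapability:
    """Token proving the caller is the OS layer. In a real system this
    role is played by hardware privilege rings or kernel checks, not a
    Python object -- this is only a sketch of the principle."""

_os_cap = _OSCapability()   # held only by the 'OS' side

def internal_op(cap):
    """'OS-only' functionality: demand the capability up front instead
    of trusting third-party callers not to invoke this."""
    if not isinstance(cap, _OSCapability):
        raise PermissionError("internal API requires an OS capability")
    return "reconfigured"

def os_layer_task():
    # The OS layer holds the capability, so it may call internal_op.
    return internal_op(_os_cap)
```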

The problem is that you're talking about Microsoft having undocumented APIs that are meant to allow their app

But to use for security? That's as lame as installing anti-virus software because you know your OS can't handle security.

I've said for some time that anti-virus is not security. It is damage control, at best. The way it is currently marketed and commonly used, it really is a terrible substitute for the inability of an OS to maintain security. As damage control it isn't even very useful because the only correct response to a successful intrusion is to reformat and reinstall from (read-only) media that is reasonably known to be good. It is only in the Windows world of ignorant users and routine infections that anyone desire

I've said for some time that anti-virus is not security. It is damage control, at best. The way it is currently marketed and commonly used, it really is a terrible substitute for the inability of an OS to maintain security.

They are two completely different aspects of security.

OS security is the fences, the gates and the locks. It's there to stop the bad guys getting in at all.

AV security is the motion detectors, the dogs and the security guards. It's there to try and minimise the damage once the bad guys

If OSs hadn't failed so bad on isolation, we wouldn't need so much virtualization. "Virtual machine monitors" are just operating systems with a rather simple application API. Microkernels, if you will.

Sounds like the solution might be enforcing some sort of (hmm, what would you call it? Dirt box? Dust box? Ahh, that's it!) Sandbox on applications in order to achieve the isolation you desire. I bet if I'm quick, I might be able to patent the iAmSparticus sandbox technique.


Does Windows provide no functional equivalent to a *nix chroot? That would be a good place to start, especially if you can harden it against known methods of circumvention like you can with Linux and Grsecurity. Or would a chroot be as important when you're using an OS in which not everything is a file?

If Windows has no such function out-of-the-box, are there generic third-party sandboxes that can be used with any application? For example, I understand that the Chrome browser runs in a sandbox but I d
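For the *nix side of that question, the chroot idiom itself is tiny. A hedged sketch (the wrapper is my own, not from any sandboxing product): confine the jailed work to a forked child so the parent's filesystem view is untouched; `chroot` needs root, and the `chdir` must come first or `.` can still point outside the new root.

```python
import os

def run_in_jail(new_root, fn):
    """Run fn with its filesystem root confined to new_root.

    Done in a forked child so the parent is unaffected.
    Exit codes: 0 = fn ran inside the jail, 1 = fn raised,
    2 = chroot refused (typically: not running as root).
    """
    pid = os.fork()
    if pid == 0:
        try:
            os.chdir(new_root)    # chdir first, so '.' is inside the jail
            os.chroot(new_root)   # requires root / CAP_SYS_CHROOT
            os.chdir("/")
            fn()
            os._exit(0)
        except PermissionError:
            os._exit(2)
        except Exception:
            os._exit(1)
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status) if os.WIFEXITED(status) else -1
```

Note that a plain chroot is escapable by a root process inside the jail, which is exactly why hardened variants (e.g. grsecurity's chroot restrictions) exist.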

Try Sandboxie [sandboxie.com].
I've had good success with running apps and games in a sandbox with it. The only thing it lacks (although it's better security wise) is being able to pipe files between the boxes so you'll have to install programs multiple times if it's needed in more than one box (think PDF reader, zip stuff, etc.).


Thanks for the link. You can probably tell I don't use Windows myself and haven't for some time now (back in the day I used to dual-boot with Win98 until months went by without ever using the Windows system, so I reformatted it ext2 because ext3 didn't exist at the time). So, I'm not terribly informed about specific software available for that platform.

Still, am I the only one who thinks it's terrible, borderline irresponsible that Windows doesn't come with something like this out of the box? Configur

I completely agree. Not only an OS failure, but an application-development failure on top of that. Even today, most academic programs producing the people who will architect, design, and write software do not include mandatory software-security lectures. There are also whole important areas of operational security where virtualization does exactly nothing. One is preventing applications from being hacked and used as SPAM relays or to hack other systems. For this you do not need a root compromise, just hacking an applicati

OSs don't fail that badly at all. They are simply aimed at another task, namely making processes cooperate. A system designed for that task will never be the best solution for another task that aims to achieve the opposite, namely making processes completely invisible to each other. Virtualization has nothing to do with OSs failing badly; they're just not designed to make a single piece of hardware look like 20 pieces of hardware you can rent out to 20 different customers.

"While conventional wisdom says virtualized environments and public clouds create massive security headaches, the godfather of Xen, Simon Crosb, says virtualization actually holds a key to better security. Isolation — the ability to restrict what computing goes on in a given context — is a fundamental characteristic of virtualization that can be exploited to improve trustworthiness of processes on a physical system even if other processes have been compromised, he says"

Given the track record of companies in IT, I really doubt his words. It will probably become mass breaches of security, made easy.

I rolled my own RHEL5 desktop cloud. If an engineer does something stupid, the VM he has reserved dies and he reserves a new one. He doesn't impact the other virtual desktops and the VM that he crashed gets rebuilt from a single master image. This is the benefit of isolation, and it can be extended to security if you plan it right. It all boils down to the competency of the admins.

His words are fine. You CAN use virtualization as a way to strengthen security, just as you can use concrete to make really strong structures. The problem is that concrete, on its own or poorly-utilized, is worthless for making much of anything.

To me the biggest security win with VMs is the ability to properly size a system for what it is actually doing. No more adding "just one more" service to a box because it's got more horsepower than it needs. By doing more logical partitioning of the software you limit the commingling of data, administration, and crash risk between different services.

No more adding "just one more" service to a box because it's got more horsepower than it needs

Yet, with virtualization, that is EXACTLY what you're doing. The only difference is that you're not just adding an Apache instance to the machine as that 'one more service' you're also adding an entire OS as well.

doing more logical partitioning of the software you limit the commingling of data, administration, and crash risk between different services.

Isn't that what your OS is supposed to be doing? Why do you think another layer can do something that the one you're already using is incapable of?

Except it's not in a different universe. In fact, it's still on the same planet, just in the parking lot next door. And instead of being in a normal parking lot, it's in a multi-decked parking lot with a whole bunch of other cars they can break into at the same time, because when you drive into this parking lot, everyone has to hand the keys over to the parking attendant and trust that they will hold the keys securely.

The rest of your comment was fine, but just because you hold a skeptical view of hypervisors doesn't mean you need to imply that I don't know how a computer works. And yes, I've paid my dues, including memorizing 8088 machine language (not assembly).

Given that some people actually think we are living in a computer simulation [simulation-argument.com], I believe the universe analogy was particularly apt.

Well, HE thinks it means everything because without it meaning everything he is irrelevant.

He also seems to think his OS is different than every other OS that came before it.

Virtualization is just another layer of software to exploit. The real problem is that it lets idiots who had separated services onto physically separate devices, due to incompatibilities between various bits of installed software, put them once again back on the same hardware with shared memory...

Virtual machines are useful for putting underutilized hardware to work on trivial, unimportant things you wouldn't want to waste dedicated hardware on. ISPs are a great place for virtualization, as it lets the ISP 'sell a machine' with a lot less effort than would traditionally be required. Using current 'virtualization' tech for security purposes just shows you're ignorant.

Adding more software and bugs does not add security, especially since you're just doing the exact same thing the original OS was supposed to do. So your argument becomes 'I'm better at it than you', and whenever that happens I run the other direction as fast as possible. If you have to tell me you're important, you aren't.

"Virtualization is just another layer of software to exploit. The real problem is that it lets idiots who had separated services onto physically separate devices, due to incompatibilities between various bits of installed software, put them once again back on the same hardware with shared memory..."

There are many real-world scenarios that are currently only supported by virtualization. If all these people think virtualization is such a crutch, then they can solve the problem. Currently

I completely agree. And from what I have seen so far, the available virtualization systems are all actually less reliable than the same OS run on bare hardware (at least if the OS is Linux;-). That would also imply they are less secure. For that reason, I don't think you can regard virtualization that runs as root as much of a security/isolation gain. It may even represent a net loss, except that the attackers have to invest a bit more into research. But they may gain portable attacks as a benefit.

I completely agree. And from what I have seen so far, the available virtualization systems are all actually less reliable than the same OS run on bare hardware (at least if the OS is Linux;-). That would also imply they are less secure.

Citation needed. Please show everyone grave security exploits in Xen (as far as I know, there are none that are very serious if you don't use PCI passthrough, just a few hard-to-exploit DoS issues).

For that reason, I don't think you can regard virtualization that runs as root as much of a security/isolation gain. It may even represent a net loss, except that the attackers have to invest a bit more into research. But they may gain portable attacks as a benefit.

One thing you seem to fail to understand, is that to get this type of exploit, you'd need to 1/ get root (hard already) then 2/ get to the hypervisor level. So please explain to me how this is LESS safe, when you have 2 layers of exploits to find instead of just one.

"Less reliable" is a personal observation. I have so far managed to crash the network stack of every hypervisor I tried within a week in a specific set-up. (Sorry, cannot tell you about it.) The observations do not include XEN (have not tried it) but most definitely include KVM and qemu and may therefore well be applicable to XEN. I have not managed to crash a natively running network stack in any Linux kernel version I tried.

Unreliability transfers to exploitable at a rate of about 1:1000. Or rather 1 expl

In 7 years of using Xen (since 2.07) and selling VPS, I've seen many things crash, but never the network stack of Xen, which I believe was the part that was the most easy to virtualize, and that has been reliable for years. Now, maybe with Qemu, but I don't use it (I use PV).

That is nice for you, but the article was not only about XEN, but about virtualization as security isolation method. From my perspective, there is indication that every virtualizer that needs parts to run as root is less secure than an application that does not run as root in the first place.

I'm sorry, no one is going to teach you the basics of software design just to show you up on slashdot.

More code means more bugs, which means a larger attack surface.

You seem to confuse 'not having some major public exploit to date' with 'secure' and that means you really don't need to be in this discussion as you're disconnected from the way reality actually works.

One thing you seem to fail to understand, is that to get this type of exploit, you'd need to 1/ get root (hard already) then 2/ get to the hypervisor level. So please explain to me how this is LESS safe, when you have 2 layers of exploits to find instead of just one.

Because you only need to exploit EITHER of them, not both. You didn't add another defense mechanism, you added another place for a hole. If I c

If I can find a bug in the network code in your hypervisor that allows the virtual machines to communicate with the real networks

Then you need to be root to start a new network driver module...

You simply don't understand security. Sorry, it's unlikely I could teach you enough to make it clear anytime in the near future.

Thanks, but I don't need a teacher. Your condescending tone is quite impolite by the way.

I can safely say this because you seem to think that because you are unaware of current root exploits, they don't exist.

NO! I've been asking for past security exploits that have (or have not) been found for Xen, in order to compare the kernel exploits with the ones in Xen. OF COURSE, you just reply that you can't show me... Easy enough...

But it's WAY smaller than the kernels you may run. On my laptop, Xen is a bit over 650KB, but the initrd image for my kernel is about 11MB. That is, 16 times smaller. I believe that Xen is more than 16 times safer than the kernel, since absolutely zero "root" exploits have been found (if you don't use PCI passthrough, which historically has been quite worrisome).

Adding more software and bugs does not add security, especially since you're just doing the exact same thing the original OS was supposed to do.

The point is, Xen doesn't. It does only virtualization, not drivers, where most of the security exploits have been found.

On my laptop, Xen is a bit over 650KB, but the initrd image for my kernel is about 11MB.

Two things here. First, the initrd image is a RAM disk containing a recovery filesystem. If you want to compare it to Xen, you need to compare it to the Xen admin tools as well as the kernel - and they are written in Python so come with 5MB of Python dependencies before you even get them to start. Secondly, 90% of the size of any modern kernel is device drivers. Xen does not contain any device drivers - it delegates all of that to the domain 0 guest (or to multiple driver domains).

Since it seems you didn't get it, let me explain again. Let's say you need to run service A, B and C. If you don't have virtualization, you'd run them all on the same server. Then if A, B and C are each running as their own user, you need to get an attack vector on let's say A, get root, then you can affect B and C. If you have virtualization isolating A, B and C, each on their VM, then you need to find an attack vector in A, get root, then find an exploit in the hypervisor if you want to reach B or C. That

In other conferences, Microsoft says that Windows Advanced Server is the best tool for the job, drug dealers show the benefits of increased cocaine use, and Hitler says that the final solution to the Jewish question improves the German ecosystem.

Virtualization also leads to resource overbooking. If I run on two physical X5355 Xeons, I know that I have two physical X5355s at my disposal. If I run on two virtual X5355, I can't tell if provider does not use same X5355s for other clients.

Simon Crosby is not the creator of Xen. It was created by Keir Fraser while he was doing his PhD, under supervision by Ian Pratt (it was actually created as the result of a drunken bet between Keir and Ian). They then went on to found XenSource, which was bought by Citrix. Simon Crosby (yes, his name does have a y on the end - well edited Slashdot) was brought in to do marketing for XenSource. He had very little to do with the technical side.

Yeah, we all knew that a decade ago. My simple SOHO office server is in the process of migrating from two Linux boxes to one VM server with 8 VMs for role isolation. I'm no visionary or security genius - I d

You want to virtualize a computer, run the program, and then check that:
* the computations have not been tampered with
* nobody has been snooping on your computations
This goal is currently out of reach. It is an open problem in computer science whether it's even possible!

The exact term is "encrypted computation". Imagine if you could not only encrypt a file, but run it after it's been encrypted! You could send the file to some cloud and run it there, without revealing _what_ is being computed or what data you use.
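A tiny taste of that shape already exists in partially homomorphic schemes. A hedged toy (textbook RSA with toy primes, utterly insecure, purely to show what "computing on ciphertexts" looks like): multiplying two RSA ciphertexts yields a ciphertext of the product, so one narrow kind of computation can be done without ever decrypting.

```python
# Toy textbook RSA (tiny primes -- NOT secure, illustration only).
p, q = 61, 53
n = p * q                             # public modulus, 3233
e = 17                                # public exponent
d = pow(e, -1, (p - 1) * (q - 1))     # private exponent (Python 3.8+)

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

# Multiplicative homomorphism: multiply ciphertexts only...
a, b = 7, 6
c = (enc(a) * enc(b)) % n
# ...and the decryption is the product of the plaintexts.
assert dec(c) == (a * b) % n          # 42, computed "in the encrypted domain"
```

Fully homomorphic encryption generalizes this to arbitrary computation (both addition and multiplication on ciphertexts), which is what "running an encrypted program in the cloud" would require; practical schemes for that remain far more expensive than this toy suggests.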

The OS+hypervisor has a larger attack surface than the OS alone, period. Unless you can prove your hypervisor is un-hackable (don't make me laugh), a virtualized system is less secure.

Even Windows, at the kernel level, is quite secure, and should be more secure than using it with a hypervisor; even a hypervisor made by Microsoft for Windows (or should I say "especially a hypervisor made by Microsoft") will be less secure than the OS alone.

> The OS+hypervisor has a larger attack surface than the OS alone, period. Unless you can prove your hypervisor is un-hackable (don't make me laugh), a virtualized system is less secure.

This is a fair point. On the other side of it, though, you have emerging features such as the ability to install your anti-malware tools up at the hypervisor level, which can, in theory, treat the VM as a sort of honeypot. You can also put the hypervisor's management system on a private network.