Edward Capriolo

In my last blog I established that it would cost 6 million dollars a year to store our 300TB of data in the cloud. Well, the bad news just keeps coming for Amazon in this head-to-head battle. My ace in the hole is my long-time rant over Xen vs container-style virtualization. Let's take a quick look at some Xen claims around the interwebs:

"preliminary tests run with Xen[25] wehave indications that using virtual machines introduces a verysmall overhead (less than 5%). "

Well, we have some off-the-cuff estimates that Xen does not have much overhead. Unfortunately 5% of $100,000 is $5,000 and 5% of $5,000 is $250. At minimum you have to accept the notion that you're paying 5% for nothing. You can call it overhead, you can call it FooBar, but if you had spent that money on a real server you would have 5% more foobar to deploy your application on.

When you look at anything even semi-scientific you can see that these 1-3% claims are all puffery. Last time I checked 5% was not "very small". If 5% was missing from your paycheck, would you notice?
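To put that missing 5% in concrete terms, here's a quick back-of-the-envelope calculation (the dollar figures are just the hypothetical budgets from above):

```python
def overhead_cost(budget, overhead=0.05):
    """Dollars paid for virtualization overhead at the given rate."""
    return budget * overhead

for budget in (100_000, 5_000):
    print(f"5% of ${budget:,} is ${overhead_cost(budget):,.0f}")
```

Scale the budget up or down; the 5% tax scales right along with it.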

Size of Image: Do you remember the old days when you could readily identify every single file in your *nix operating system? Now, since most distros have to support USB, wireless LAN, and flash, the distros have gotten larger. But when you are dealing with servers you do not need about 95% of that. !!!!!!!!!!!!GET READY FOR THIS!!!!!!!! Our minimal vserver template is a whopping 100MB! When I need to make a vserver for Tomcat, sure, I need Java and Tomcat, but this usually never gets larger than 200MB. This is a total game changer. I have a laptop at home with about 60 different images. It seems like the average AMI I find is 1GB-8GB. A great example of how the cloud is filled with bloat and excess. Who cares about a little more transfer here or there? You're not paying for this, right?
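Just to illustrate how that adds up (the per-GB rate here is a made-up stand-in for whatever your provider charges for transfer, not a real AWS price), compare moving a 100MB vserver template around versus an 8GB AMI:

```python
def transfer_cost(size_gb, launches, rate_per_gb=0.10):
    """Total transfer cost; rate_per_gb is a hypothetical price."""
    return size_gb * launches * rate_per_gb

print(transfer_cost(0.1, 1000))  # 100MB template x 1000 launches
print(transfer_cost(8.0, 1000))  # 8GB AMI x 1000 launches
```

Same workload, eighty times the bytes pushed over the wire.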

Performance - Containers are lean and mean. A container is little more than a jail. Think about full virtualization or paravirtualization: emulating kernels and interrupts, emulating network cards, emulating physical devices. *Nix is multi-user. How does emulating everything, including that bloated file system mentioned in point #1, get you anything? Containers make it easy to leverage mechanisms to control CPU, memory, and number of files. Most of this work has been mainlined into Linux Containers, aka cgroups, now. The cost of a container is little more than the weight of a process. I regularly have 10 running on my laptop at any given time. The idle cost is the same as any idle process. I do not need a 2GB dedicated heap to idle efficiently.
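That "control CPU, memory" part really does boil down to writing numbers into files in the cgroup hierarchy. A minimal sketch (file names follow the cgroup v1 cpu/memory controllers; the root path is a parameter so you can point it at /sys/fs/cgroup on a real box, where you'd need root and the kernel pre-creates the files):

```python
import os

def make_limited_group(cgroup_root, name, cpu_shares, mem_bytes):
    """Create a cgroup directory and write CPU/memory limits into it.

    cgroup_root is assumed to be a cgroup-v1-style hierarchy; against
    an arbitrary directory this just demonstrates the file layout.
    """
    path = os.path.join(cgroup_root, name)
    os.makedirs(path, exist_ok=True)
    with open(os.path.join(path, "cpu.shares"), "w") as f:
        f.write(str(cpu_shares))
    with open(os.path.join(path, "memory.limit_in_bytes"), "w") as f:
        f.write(str(mem_bytes))
    return path
```

On a live system you would then write a PID into the group's tasks file to place a process under those limits. No hypervisor, no emulated hardware, just the kernel's own accounting.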

Mainline. Not only is Xen not that efficient, it is essentially abandoned. I forget exactly how the story goes, but Citrix bought XenSource, the company behind Xen, and then Red Hat lost its Xen boner and switched to KVM.

So nphase, "Why doesn't everyone arbitrarily use OpenVZ?".

Well I guess you have the case where you need to run Windows and FreeBSD on the same server........

Is Xen performance negligible? Busted! 5% or more is not negligible.

So if you're struggling with the AWS calculator trying to finagle numbers to make my cost equal to yours, do not forget you need at least 5% more servers to make up for the lost overhead. 5% of 20 servers is 1 entire server, BTW.
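And since you can only buy whole machines, the rounding works against you. A quick sanity check (assuming each virtualized server delivers 95% of a physical one, per the overhead figure above):

```python
import math

def servers_needed(physical_equivalent, overhead=0.05):
    """Virtualized servers required to match N physical servers' capacity."""
    return math.ceil(physical_equivalent / (1 - overhead))

extra = servers_needed(20) - 20
print(f"extra servers to buy: {extra}")
```

At 20 servers the ceiling actually pushes you to two extra boxes, because the replacement capacity itself carries the same 5% tax.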

Are you comparing VServer and Xen?! They have really little in common.
> emulating kernels and interrupts, emulating network card, emulating physical devices.
Emulating kernels? lol. Then you should google "VT-d" and "SR-IOV". This is not emulation.
Xen is not dead. It's just the opposite : http://blog.xen.org/index.php/2011/06/02/xen-celebrates-full-dom0-and-domu-support-in-linux-3-0/
> Last time I checked 5% was not "very small".
It's very small compared to the gain related to virtualization.
Keep blogging about BigData and Cassandra, you do it better.