Posted
by
samzenpus on Sunday May 26, 2013 @05:41PM
from the best-of-virtual-class dept.

Gonzalez_S writes "Let's say you need to give 100+ users access to create their own virtual machines and devices (e.g. switches, MS Windows or Linux family, etc.) in a manageable and secure way. Which virtualization solution would you choose? There are VMware, Xen, KVM, etc. based solutions, but which one would you prefer and why? The solution should be stable, manageable, scriptable, and preferably have LDAP integration. In this case I also need to set up a playground for IT students, next to hosting production servers on the same system."

You have a very good point in that Amazon is about 80% of the virtualization market and growing, and is far more competent than anyone except Google. There's almost no other API worth dealing with directly, except for ones which can target EC2, Eucalyptus, and OpenStack alike. Amazon's infrastructure is also pretty cheap as long as you are not too demanding. Certainly much cheaper than their competitors.

There are some serious problems, though. Amazon will ban you if you start to run serious security, stability, or load tests on their systems. This means that whilst it may be suitable for production use (if you overload in production they will normally work with you to solve "real" problems), it is not suitable for testing or learning. Amazon's infrastructure is also pretty opaque, and when you start digging into the details they may get upset. Finally, Amazon has some "interesting" performance limits which they will never care about fixing.

This means that the correct answer to the question posed is to use Eucalyptus [wikipedia.org], which provides an Amazon-compatible interface, as your private cloud, and to use Amazon for whatever suits the public cloud. Your research students, and any production use which benefits from being private (typically anything needing access to large amounts of data currently locked inside your network for whatever reason), can run on Eucalyptus.
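The EC2-style API that Eucalyptus emulates is just a signed HTTP query interface, which is what makes the same tooling work against both clouds. A minimal sketch (unsigned, and the hostnames and endpoint path are purely illustrative):

```python
from urllib.parse import urlencode

def ec2_query_url(endpoint, action, **params):
    """Build an (unsigned) EC2-style query API URL.

    Real requests also need AWSAccessKeyId, Timestamp and a
    Signature -- omitted here to keep the sketch short.
    """
    query = {"Action": action, "Version": "2013-02-01", **params}
    return endpoint + "?" + urlencode(sorted(query.items()))

# The same client code targets Amazon or a private Eucalyptus cloud
# just by switching the endpoint:
aws = ec2_query_url("https://ec2.amazonaws.com/", "RunInstances",
                    ImageId="ami-12345678", MinCount="1", MaxCount="1")
private = ec2_query_url("https://cloud.example.edu:8773/services/Eucalyptus",
                        "RunInstances", ImageId="emi-87654321",
                        MinCount="1", MaxCount="1")
```

In practice you would use a library like boto rather than rolling the signing yourself; the point is only that one client can drive both clouds.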

Eucalyptus had some stability problems, which are going away. It was also delicate to configure, and the configuration files are still nasty. However, it's definitely the only currently functional solution to the problem set above.

VMware is cheaper and easier to set up. Citrix is a lot more expensive and a PITA to set up, but a lot faster, since Windows 7 and later have native Citrix code in them for virtualization, and it allows a lot more customization.

Not to sound like an ass but I need something tested and well supported. Not freeware.

100+ users have specific needs, as do the I.T. staff who need to manage it for those 100 users. A bare hypervisor is not what is needed. What is needed is something really managed, supported, configurable, and scalable. That means clustering, no special client software if possible, authentication to the VM, scalability on the servers, IE or Firefox addons (or none at all, with a Java server frontend to the VMs), etc.

A lot depends on what you want to host. The Windows Type 1 hypervisor platforms are well-known. If you want to host Linux/BSD/etc., there's really a different family for that.

If you want to add in VDI, it's a different mix of products, but the commercial vendors are the same. VMware is expensive, Citrix less so, Oracle is reasonable if and only if you like Oracle; Microsoft supports Microsoft and a hand-picked set of Linux options.

But you can teach a lot by using Xen, Vyatta, and a bunch of FOSS components.

When my company had to come up with a solution that let all of our developers develop in an environment that absolutely mimicked the production server, we used VMware to run a version of Ubuntu. Puppet made creating all of this really easy. It gave us the ability to completely blow away a machine and reconstitute it in very little time.

We did the exact same thing for developing proprietary trading software, using KVM on Gentoo with SaltStack. There are numerous free options for achieving this.

This is a dumb question, but is there a recommended way to share operating system virtual disks between VMs, so you don't need 100 copies of the same Ubuntu? I realize you could set up one server VM and advertise /usr/share over NFS or Samba across a virtual switch, but are there better approaches?
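The usual approach with qcow2 is copy-on-write overlays: each VM gets a thin disk backed by one shared, read-only base image, so only modified blocks are duplicated. A hedged sketch that only assembles the `qemu-img` commands (paths are hypothetical; nothing is executed):

```python
# Generate qemu-img commands giving each of 100 students a thin
# copy-on-write overlay backed by one shared read-only Ubuntu base
# image, instead of 100 full copies.

BASE = "/srv/images/ubuntu-base.qcow2"

def overlay_cmd(student):
    overlay = f"/srv/images/{student}.qcow2"
    # -b: backing file; writes go to the overlay, reads fall through
    # to the shared base until a block has been modified.
    return f"qemu-img create -f qcow2 -b {BASE} -F qcow2 {overlay}"

commands = [overlay_cmd(f"student{n:03d}") for n in range(1, 101)]
print(commands[0])
```

The base image must never be written to once overlays exist, or every overlay is corrupted, so mark it read-only.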

Considering that you are likely at an educational institution, Microsoft likely provides you with free licenses for their products. As such, Hyper-V and System Center would provide you with a fairly good experience that is easy to manage and automatically deploy based on Active Directory. It is a solution that will likely meet all of your stated requirements, and your other likely needs and wants, in a package that is "good enough".

If you have a budget, consider VMware's vSphere offering. It can get pretty expensive (license costs greater than that of your physical hardware) however it is currently best-in-class and provides some truly amazing administration tools.

There is basically no lock-in to any virtualisation platform these days. They all use essentially open virtual hard disk formats and it's trivial to convert from one to the other. But you end up locked in anyway, as all your scripting & management is targeted at whatever platform you choose - be it KVM/vSphere/Hyper-V. So choose the one that makes managing it easiest for you. If you like bash, choose KVM. If you like PowerShell, choose Hyper-V or vSphere.

Honestly, I've not found that to be the case. In most cases, you can disable the integration drivers in the guest, then move the VM to the new virtualization platform and start it back up. You may need to do a startup repair or in-place upgrade on an older version of Windows; Windows 7 (2008 R2) and 8 (2012), however, are fairly resilient.

The smoothest way to do it, though, if you've got the time, is to use the new platform's P2V tool to create a new virtualized VM based on the old one. This is how I've mov

As such, Hyper-V and System Center would provide you with a fairly good experience that is easy to manage and automatically deploy based on Active Directory. It is a solution that will likely meet all of your stated requirements, and your other likely needs and wants, in a package that is "good enough".

As long as your definition of "good enough" includes endless problems with Linux guests.

A couple of years ago, you would have been right. Anything with a 3.0 or above kernel has all of the Hyper-V modules in the kernel. For CentOS or RHEL, you can use the integration tools. I run about a dozen Linux machines on our Hyper-V cluster without any issues.

I second this. I've migrated several business services (e.g. SVN, Flyspray, etc.) from physical boxes running various OSes (W2K8, Ubuntu) to CentOS virtual hosts on Hyper-V. Apart from one issue*, which is down to a stupidity in CentOS Minimal and unrelated to Hyper-V, I have yet to see a single problem running CentOS on Hyper-V.

* CentOS Minimal requires manual network setup, which is fine, but there is no plug-and-play support. So whenever the VM is moved to a new Hyper-V server, the CentOS networking breaks (the so

Our central infrastructure is on Hyper-V at work now on account of VMWare wanting way too much money. We use a lot of RHEL systems and they all work well. Our web server, MySQL server, puppet server, that sort of thing all run on Hyper-V. The Linux admin didn't have much trouble with it. The main limitation I'm aware of is that you can't do dynamic memory.

While it isn't as Linux friendly as VMware, it seems to work just fine. As to which of them you should use, that depends on features and price. In our case Hyper-V was "free" since we have Software Assurance with MS campus-wide, and VMware wanted like $20,000 per system for vSphere with the feature set we wanted, so it was stacked heavily toward Hyper-V. Your case may be different, so make sure to check out both.

However don't write off Hyper-V because it is MS. With Server 2012 it is a real, no-shit, enterprise virtualization solution that works well and has loads of good features. They fixed their rubbish networking from 2008R2 also, their virtual switches are exceedingly fast, and it supports full SR-IOV if your NICs do.

I was very pleased when I tried it out, our Linux admin liked it, so we migrated (we had an old VMWare 3 setup). Migrating VMs was easy too. Uninstall VMWare tools, use the Starwind converter to go from vmdk to vhd, use Hyper-V to go from vhd to vhdx (and make it fixed size), set up a VM, start it, and install the integration services.
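That per-VM pipeline is easy to script once you have more than a handful of guests. A hedged sketch that emits the steps as a checklist, using `qemu-img` as a scriptable stand-in for the StarWind converter (an assumption; the original workflow used StarWind's GUI tool), with `Convert-VHD` being the Hyper-V PowerShell cmdlet for the vhd-to-vhdx step:

```python
# Emit the conversion steps for one VM as a checklist of commands.
# VM name is hypothetical; nothing is executed here.

def migration_steps(vm):
    src = f"{vm}.vmdk"
    vhd = f"{vm}.vhd"
    vhdx = f"{vm}.vhdx"
    return [
        f"# 1. Uninstall VMware Tools inside {vm}, then shut it down",
        f"qemu-img convert -f vmdk -O vpc {src} {vhd}",
        f"Convert-VHD -Path {vhd} -DestinationPath {vhdx} -VHDType Fixed",
        f"# 4. Create the Hyper-V VM, attach {vhdx}, boot, install integration services",
    ]

steps = migration_steps("webserver01")
```

(`-O vpc` is qemu-img's name for the legacy VHD format; whether you go via an intermediate VHD or convert straight to VHDX depends on your tool versions.)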

I came here to say this. Proxmox is very cool. I haven't had the opportunity to use it in a production environment, but the testing I did with it left me impressed with its simplicity and capability. It has node management built in and is laid out very logically. Definitely worth a look!

Yes, +1 to Proxmox. Runs on commodity hardware, performance is good, cluster and backups haven't given me a headache yet. I'm running 100+ VMs across 5 machines, with about a dozen users, and it feels nowhere near its limit.

End of story, everything else here is overkill. KVM sounds just about right for your needs and is very stable and FREE.

You can provide people with a variety of images and a single command to deploy them (without root). It's not even that hard to set up. The hard part, really, is setting up an LDAP server to meet your needs.
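A hedged sketch of what that single deploy command might look like underneath: clone a template disk, then register the guest. All names and paths are hypothetical, and the commands are only assembled, not run; a real setup would execute them via sudo rules or a small privileged daemon.

```python
# Assemble the commands a "deploy me a VM" wrapper would run for a
# chosen template and user.

import getpass

TEMPLATES = {"ubuntu": "/srv/templates/ubuntu.qcow2",
             "centos": "/srv/templates/centos.qcow2"}

def deploy_commands(template, user=None):
    user = user or getpass.getuser()
    if template not in TEMPLATES:
        raise ValueError(f"unknown template {template!r}")
    name = f"{user}-{template}"
    disk = f"/srv/vms/{name}.qcow2"
    return [
        # Thin copy-on-write clone of the template disk.
        f"qemu-img create -f qcow2 -b {TEMPLATES[template]} -F qcow2 {disk}",
        # Register and boot the guest under libvirt.
        f"virt-install --import --name {name} --memory 1024 "
        f"--disk {disk} --os-variant generic --noautoconsole",
    ]

cmds = deploy_commands("ubuntu", user="alice")
```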

KVM is great for an environment where everyone is being cooperative and sorta knows what they are doing. It lacks the resource management and isolation features you'd want in an academic lab. You need to be able to control how much storage I/O a single VM can use. You might have someone learning about networking, even purposefully doing things that are going to slam CPU resources, like creating loops in Ethernet topologies.

Yes, you might be able to get some Linux hosts with KVM to do what you need with cgroups, limits, etc., but it's going to be anything but simple and manageable across multiple physical hosts without tons of scripting and testing on your part. Libvirt is still a moving target, so keeping everything working is going to be an adventure as well. All the precursors to provide the experience vSphere and Xen offer are there, but let's not kid anyone about the work that is still needed to get there. It would be wonderful if the original poster could offer the resources to do that, and even better if it could get contributed back to the community, but it's a tall order.
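For concreteness, the per-VM knobs do exist in libvirt; it is the orchestration across hosts that you have to build yourself. A hedged sketch of the throttling commands (domain and device names are hypothetical, and only assembled here):

```python
# Assemble virsh commands that cap one guest's storage I/O and CPU,
# so a single student VM cannot starve the rest of the lab.

def throttle_commands(domain, disk="vda", bytes_sec=50 * 1024 * 1024,
                      vcpu_quota=50000):
    return [
        # Cap storage throughput on one virtual disk.
        f"virsh blkdeviotune {domain} {disk} --total-bytes-sec {bytes_sec} --live",
        # Cap CPU: quota is in microseconds per scheduling period,
        # so 50000 out of the default 100000us period is ~50% of a core.
        f"virsh schedinfo {domain} --set vcpu_quota={vcpu_quota} --live",
    ]

cmds = throttle_commands("student42-vm")
```

Multiply this by every VM on every host, plus re-applying after migrations, and the "tons of scripting" point above becomes clear.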

KVM itself doesn't really give you anything in terms of control or management features. That all comes from libvirt or Ganeti or whatever you've got. We've been using Ganeti for a while and it does a reasonable job for our purposes, but it is still a long way off from being something I'd feel comfortable deploying for customer use.

If you want KVM with the manageability of VMware, then oVirt is what you're looking for. Free as well: it's open source, and Red Hat is investing heavily in it, as they base their Red Hat Enterprise Virtualization Manager product on it.

Xen with paravirtualized guests would be stable and scale well, as I understand it. There is XenCenter to do this, or you could get the new Debian 7, which is supposed to have good support for that out of the box as well, with good manageability.

But yeah, I'd be of the inclination to say do your own research rather than have us make the choice for you. We can only offer suggestions, but you need a good idea of what you want to do too. For example, IT students often don't have a good understand

Virtualization will not isolate them against each other. For example, it is quite easy to saturate I/O from the playground. Then your production performance goes down the drain as well. Also, basically no plain virtualization is really secure; these things are far too complex. Another reason not to mix different classification levels like production and playground. Maybe if you really, really carefully isolate them with SELinux, but then you still have things like VM-to-VM crypto-key leakage.

The timing and cache attacks are very much non-academic, unfortunately. As are the problems of generating good key-material in virtualized environments in the first place.

Your SAN proposal should solve the I/O issues, but it makes everything that much more complicated, as this has to be configured right, and that is _not_ easy and requires quite a bit of experience and skill. If it can be done at all without having the thing fail regularly for a while. It would be far easier to just have one production cluster and

No, sorry, I do not need to re-read anything. VM key leakage is a practical problem at this time. It is just that in environments where it would be really interesting, they already know to use clouds segregated into classification levels. The problem of generating good keys in a VM is also very real and basically unsolved in practice.

That the script-kiddies do not understand it does not mean people with real skills cannot do it. But these people do not brag.

I think the closest thing you'll get to "out of the box" for what you're looking for is Apache Cloudstack running on Citrix XenServer for a hypervisor.
With basic networking, you can keep things pretty simple. With advanced networking, you can allow your users to build virtual data centres.
It can be 100% free open-source software as well, although if you get Citrix CloudPlatform, you get a couple of extra features, and support, but you pay for the support.
You could do something similar with other products, but CloudStack actually has a pretty amazing amount of stuff that is just there already, and doesn't need configuring.

There are a lot of options, and the OP is just asking for a general structure. Classic /. community fail, to assume we are even dealing with someone who will be doing the implementation. This could be a director trying to get a ballpark before sinking their teeth in, or an under-paid teacher with little time who wants to make their students' learning environment better. I was the only one with a VPS in my classes, and thus the only one, in the end, who actually knew how to get anything done outside of theory.

My rant to /. is over. Now to answer the OP:

The easiest way to get started would be Xen Cloud Platform + Citrix XenCenter. That alone will get you a free, robust virtual hosting environment, but it will require you to set up a few VM templates and manually deploy to students. You can take this one step further by using OpenStack + XCP, which will give you an API you can use to build a web front-end for student deployment. Some might already exist, but all the ones I am aware of are built around payment models.

As for users managing switches, I have no clue and good luck there. IMHO, I would VLAN and let OpenStack manage it. You can use the US Navy's network simulator [navy.mil] to teach concepts if you like. It even allows using tools like wireshark for real-world analysis experience.

Good luck, I hope you use this to make students more ready for the real world.

As for users managing switches, I have no clue and good luck there. IMHO, I would VLAN and let OpenStack manage it.

VLAN used to be the common solution for networking with OpenStack. There are major drawbacks with that, though (limits on the number of VLANs, the hardware needs to support it, etc.), so these days almost everyone (me included) prefers the GRE tunnel solution.
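For reference, the switch is just a few settings in the OVS plugin. A hedged fragment of Grizzly-era Quantum config (the file location, `local_ip`, and ranges are illustrative, and these option names moved around between releases, so check the docs for your exact version):

```ini
[OVS]
tenant_network_type = gre
enable_tunneling = True
tunnel_id_ranges = 1:1000
local_ip = 10.0.0.11
```

With GRE, tenant segments are tunnel IDs rather than 802.1Q tags, so you are no longer capped at 4096 networks and the physical switches don't need to know anything about it.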

I ran Red Hat 6.0 with VirtualBox for 60-plus students doing computer science projects. The base was a quad core with 16 GB and local TB storage. This worked great with SSH access. Admin was via NoMachine and SSH.

If you want to do it efficiently you might also want to consider using it as a service. Other people are already selling OpenStack on a massive scale with levels of efficiency that you'll never touch. Rent what you need, see what works, and then start building your own in-house when (or if) you find things you need to improve.

What about OpenStack? For production, don't oversubscribe RAM. For a playground, isolate them to one physical machine and let that machine oversubscribe. I'm guessing, but you can host about 20-25 virtual servers per compute node; you'll need a physical management machine, and if you do a lot of different images/want backups, you'll need a machine with a bunch of disk space or an iSCSI appliance. The OpenStack docs will tell you which iSCSI systems will work.
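Putting rough numbers on that sizing guess (assuming roughly one VM per user, and taking the 20-25 VMs-per-node figure above at face value):

```python
# Back-of-envelope node count for ~100 user VMs, at the parent's
# estimate of 20-25 VMs per compute node, plus one management node.

import math

def compute_nodes(vms, vms_per_node):
    return math.ceil(vms / vms_per_node)

low = compute_nodes(100, 25)   # optimistic packing
high = compute_nodes(100, 20)  # conservative packing
```

So on the order of 4-5 compute nodes plus the management machine, before accounting for headroom, HA, or the separate playground host.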

VMware - best in class, but can be hideously expensive if you start using vSphere; support is great.
Hyper-V - probably the most sensible way to go if you're just virtualizing Windows.
OracleVM - immature for prime-time on commodity hardware, but free to implement.
SmartOS - an OpenIndiana-based solution where the whole stack runs in memory.
RedHat has implementations of their own virtualisation stack, and they also do OpenStack as well.

VMware - best in class, but can be hideously expensive if you start using vSphere; support is great.

I get the idea you have some issue with VMware's pricing?

Of course their up-front software license costs for vSphere Enterprise Plus, at $6,990 per 2 CPUs, and probably closer to $8k per host after SnS, are higher than the cost of paying $2,500 for a basic XenEnterprise license, or nothing for Hyper-V.

The Hyper-V solution is more appropriate for running a very large number of cheap servers with local s

That's OK if the organization has deep pockets, deadlines, and defined SLAs, and you happen to be an outside contractor who is called in to make a solution where he/she has to be able to walk away from whatever solution is in place at the end of the day, and have it supportable by other people.

However, at some places where they pay in-house admins, they might have carte-blanche to hack together whatever solution they like in w

Hyper-V is not free because you have to pay for all the management bits and pieces that go along with it.

One of the supposed selling points of Hyper-V is you can perform live migrations directly between a pair of hosts without having to have a central management server, and you can write custom scripts to accomplish what vCenter would do for VMware.

There's a lot more to vSphere than vMotion. You can write custom scripts for ESXi to "accomplish what vCenter would do for VMware" as well, but by the time you did, you would have spent more on person time than you would have on just buying vSphere.

You can write custom scripts for ESXi to "accomplish what vCenter would do for VMware" as well, but by the time you did, you would have spent more on person time than you would have on just buying vSphere.

Very true, but there are people in organizations that fail to acknowledge this, and they feel that "writing the custom scripts" instead of buying the overpriced management tool is a better decision, because maintaining their own scripts lets them avoid showing a tangible cost for the management capab

Will you just run Windows and Linux? If not, what? What is your budget? How complex will your virtual network be? What are your security requirements? What are your performance requirements? Are the VMs more for desktop users, or will they be network servers? Do you need high availability and live VM migration? Does your virtualization setup need to work with an existing storage solution? If you simply don't know, and want to get something quick, the easy, but expensive, way to go is VMware.

I'd suggest taking a look at Eucalyptus [eucalyptus.com], an open-source cloud management system that's compatible with the Amazon EC2 APIs and thus pretty easy to script and automate for production resources and any of the students who want to play with features like on-demand load balancing.

Asking this is much like asking 'which is the best Linux distro?'. You won't get one answer.
What type of system are you most comfortable operating? If it is a Microsoft system (for example), you have already got your answer. Are you looking for a bare-metal hypervisor? Do you need GUI-heavy management tools? What sort of hardware are you going to use (old/new)? Probably looking at a comparison chart would be your best option.
I could tell you what I use and why, but that won't do you a bit of good.

That's easy: choose the one your distro of choice recommends - I'm presuming you're using Linux here. Otherwise I'd recommend you switch to it before virtualising things - my fairly safe blind guess is that the custom-virtualisation-setup community is by far the largest for x86 Linux.

If you run into troubles you can't get a grip on, start switching through the ones the most helpful people in the forums/irc channels you're using recommend.

SmartOS is pretty amazing. You can create virtual environments that share a kernel space, meaning that YOUR OS is running directly on the hardware, making it _extremely_ fast with almost no overhead. The file system (ZFS) is also 'shared' using zones and pools, so there's almost no cost there either. Migrating a VM between SmartOS hosts is also a pretty amazing thing. And finally, DTrace allows you to figure out exactly why something is slow... There's a huge library of DTrace scripts available on the internet.

Opinions are a great thing to gather when building any type of system, no matter how experienced you are. People stand stuff up all the time that they aren't 100% familiar with, and in this day and age products can change drastically. Do you really expect the OP to know everything about every possible virtualization product? I don't see anywhere in his post that he is asking for anything more than an opinion. He doesn't even state that he needs one; he's simply asking for peer feedback. Instead he gets asshat responses from the internets...

Ah fuck off. It's actually a good and interesting question to see what the various specialists come up with.

Nah, it's called getting a set of basic user requirements and then looking through a set of products to see which match the list. This just reeks of laziness and namedropping on slashdot so someone will post the solution for you.

By the way, I'm looking for a toaster on Linux. It needs to have 6 settings, usable by many people (including students). I need to be able to develop toast on it, but it also needs to run an operational toasting environment, preferably on the same hardware. I would like it to be fully scriptable, and I need to be able to hook it up to an LDAP. It would be nice if it came with a coffee machine included, which should also be fully scriptable. I've found the Coffee HOWTO [tldp.org], but haven't bothered reading it. Could you guys give me an opinion on how to adapt this to my toaster project? I've looked at relays, resistors and capacitors... They all seem very nice.

Please spend a little more time reading the manuals and typing in a few requests in Google before posting this to Ask Slashdot: be a bit more professional.

Fuck it, karma to burn anyway.

You could try doing a little basic research before posting your question.

Here's a toaster that meets more of your requirements, though it runs NetBSD rather than Linux:

If you have to be so arrogant and pretend to know what is best without research or asking other I.T. professionals, then I have to say you are not doing your job, and neither are the moderators who modded this +4??

Stating that you are not qualified is also highly insulting, and ruins the quality of candid discussion on Slashdot, whose comments I do like and enjoy reading.

In fact, regardless of the field, I do not know of anyone competent who does not look to others with more expertise in a specific area for opinions. No matter how badass you think you are at your job, there is always someone who knows more than you. Especially in a particular area such as, in this case, virtualization.

My first response halfway through Gonzalez's post was "Oh, yeah, he's an instructor, maybe at a community college, and he's in charge of getting this thing up and running." Next thought: "He's done no homework other than learning the names of some virtualization methods/engines and wants the smart folks on /. to do it for him." Clinched by the last two sentences.

Then, before delving into all the helpful posts thus far, I figured it was also possible he'd done a bit of swotting up and reached the point where he's brain-burnt, confused and maybe over his head. As another here has said, simply trying to use Google to get to sources for decent advice or real infos can be... disheartening.

Finally, since we all plopped out of the womb knowing little more than how to suck, poop, and cry, it's not unreasonable to ask those who might know more, or who've been in the same boat, for any useful info, pointers, or advice, which led him to right here and now.

Now to continue reading, see if anything interesting and useful shows up.

I never did any of this for a living, only a few classes, and very little of it for a hobby as time allows, only use VirtualBox for my own stuff, having tried several of the other end-user solutions over the past few years. Already got hipped to some neat things I'd not heard of - proxmox, chef, vagrant, ovirt, jenkins, etc. Don't know what OP gets from it, but I have some reading to do.

What a load of elitist bullshit. Maybe he has already done a lot of research and has a good idea. Do you really think he is panicking and turning to /. because he has no clue? I think that, this being a technical community that still has a lot of expertise and insight in it, he decided to hear other people's/professionals' perspectives.

you need to steer a million miles clear of.
They are guaranteed to implement the project quickly, skillfully, and in a way which misses the entire point.
Q: A wise man says "I know that I know": a) Everything b) Nothing

I don't agree. There is nothing really unique to virtualization; it's just really interdisciplinary: storage, network engineering, Wintel admin, Linux admin, physical datacenter management, etc., at these scales. Nothing anyone who has been in IT for a while, and worn a few hats in that time, can't be expected to do: some reading, and then get started.

It is a useful question to ask, though. At least several of the products mentioned can likely meet his needs; there are qualitative and technical differences, and soliciting some info on the experience of others, to help direct his research effort, is not unreasonable.

I don't agree. There is nothing really unique to virtualization; it's just really interdisciplinary: storage, network engineering, Wintel admin, Linux admin, physical datacenter management, etc., at these scales. Nothing anyone who has been in IT for a while, and worn a few hats in that time, can't be expected to do: some reading, and then get started.

If he had those disciplines and skills, then I doubt he would be asking Slashdot. Seriously, if you need to ask Slashdot the question he asked, then he is unlikely to have the skillset to implement ANY of the solutions in a well-managed way.

I highly agree with you. The answers to technical/geeky questions on Slashdot always have a lot of experience and insight. That is something Google searches would never yield, unless they happen to be results of Slashdot questions regarding the topic you're searching for.

I have set up VMware before, but I sure as hell would ask others before I put it into live production, recommend an expensive solution, and put my job on the line for 100 users. Google will just show search-engine-optimized crap from people trying to sell stuff anyway, and it is hard to tell which is a real website and which is a fake one pulling data from another, designed to pimp up the ratings of a second website.

Windows 7 forums are copied by bots all the time and put on fake, ad/malware-ridden sites with links to someone trying to sell something to get a higher Google SEO rating, whenever I try to search for something technical. It is annoying.

I like VMware for larger installations as well. We also have special requirements; specifically, we need GPUs. Until recently, that meant offloading that work to real hardware, but NVIDIA GRID is a godsend, because we can install that part on the VMware server (this is still in beta at my company, so I don't yet personally have access to it, but I've seen demos; for now I have to do the multi-server setup by hand, and that is no fun).

I would have said that 8 months ago. Now, with the latest release (code name Grizzly, version 2013.1.x), we are up to a very good level, with Quantum finally working correctly. For storage, I would suggest Ceph rather than Swift + Cinder.
Thomas

Yes, they're nice for the software written for them, but most would prefer an x86-64 based solution to Gene Amdahl's architecture, now emulated by mutant PowerPC. Yes, I'm aware of the x86 blades that can go into a z expansion cabinet, but that's silly if the primary need is x86.

There is nothing nonsensical about specifying an instruction set; we're talking about people wanting virtualization solutions for software already written, and most of it is not runnable, or is too cost-prohibitive, on an IBM mainframe (try licensing Oracle and see how many megabucks that costs).

Hello,
You'll want to look at Mininet and OpenDaylight:
http://mininet.org/ [mininet.org]
http://www.opendaylight.org/ [opendaylight.org]
for network device learning.
I highly recommend Proxmox for managing the virtual machines. Container-based is the way to go if all you need is a lightweight guest on an isolated VLAN.
If you want an all-in-one solution to manage networking and everything, I highly recommend OpenNebula.