On Mon, Nov 6, 2017 at 10:08 AM, Greg Helledy <gregsonh@gra-inc.com> wrote:
> Does the overhead of virtualization make sense for small organizations?
Since there were already a bunch of replies, instead of reiterating all
of that or posting 8 different replies, I'll just add some thoughts
that I don't think were covered.
First: I would strongly consider containers over
virtualization where it makes sense. Containers involve a lot less
overhead. Docker tends to be what everybody uses for this, and on any
serious scale I'd strongly consider it. I don't personally use it, as
there are a few things about it I don't like, but I wouldn't benefit
as much from its upsides.
I will point out one downside to containers relative to VMs that
didn't get a mention: security. In general linux containers are not
considered entirely escape-proof if somebody manages to obtain root
inside of one. Containers whose processes run as an unprivileged user
on the host are a lot more secure in this particular regard. This
isn't really anything
inherent to containers so much as the fact that they're still
relatively new. A VM would provide more isolation if a malicious
intruder is part of your threat model. However, containers are great
for general isolation - a process isn't going to escape from a
container merely because it has a bug - a human would almost certainly
have to be behind it.
Lee is correct that VM hypervisors themselves do not add much
overhead, but he neglected the overhead that comes from the overall
approach. With containers RAM is a completely shared commodity across
guests (subject to the resource limits that already exist for
processes in linux). With VMs it usually is not. If 47 VMs all
access the same files on the same network filesystems, each of the 47
VMs will end up keeping their own private cache of those files in RAM.
If they were containers they would all share the same cache, both for
reading and writing. When you launch a new container the only cost is
the RAM used by the process itself and any shared libraries that
aren't also shared with other containers (to be fair sharing shared
libs across containers isn't the typical approach). When you launch a
new VM the cost is whatever RAM you would need to run the entire
OS+application. On linux launching a container is essentially the
same as launching any other process as far as the kernel itself is
concerned - all processes already run "in a default container" on the
host.
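To make the cache-duplication point concrete, here's a rough
back-of-envelope sketch. All of the numbers (47 guests, a 2 GiB
working set of cached file data, 512 MiB of per-VM OS overhead,
256 MiB per application) are made-up assumptions for illustration,
not measurements:

```python
# Back-of-envelope RAM comparison: 47 VMs vs 47 containers
# serving the same files off a shared network filesystem.
# All numbers are illustrative assumptions.

GUESTS = 47
CACHE_MIB = 2048      # hot file data each guest wants in page cache
GUEST_OS_MIB = 512    # per-VM overhead: kernel, init, system daemons
APP_MIB = 256         # the application process itself

# VMs: every guest runs its own kernel and keeps its own
# private copy of the page cache for the same files.
vm_total = GUESTS * (GUEST_OS_MIB + APP_MIB + CACHE_MIB)

# Containers: one kernel, one shared page cache; each extra
# guest only adds its own process memory.
container_total = CACHE_MIB + GUESTS * APP_MIB

print(f"VMs:        {vm_total} MiB")
print(f"Containers: {container_total} MiB")
```

With these particular numbers the VM approach needs roughly 9x the
RAM, and the gap grows with the size of the shared working set, since
the container side pays for the cache once no matter how many guests
there are.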
Maybe VMware has some solutions that make guests nicer about RAM
allocation, or capabilities like dedup. Lee could probably speak to
this better than I, but since this is one of the biggest limitations
with RAM I'm sure VMware has focused on it. However, I'd be shocked
if you could really get a VM down to the same footprint as a
container. I guess the flip side is that if a kernel panics in a VM
you only lose that one VM, not the host. Also, since the whole thing
is virtualized down to the hardware, VMware can do tricks like
live-migrating guests between hosts - KVM/libvirt can do that too,
but I don't think Linux containers have a mature equivalent yet.
So, there are a bunch of pros and cons here. For linux guests you
would not be out of the mainstream to adopt containers.
--
Rich
___________________________________________________________________________
Philadelphia Linux Users Group -- http://www.phillylinux.org
Announcements - http://lists.phillylinux.org/mailman/listinfo/plug-announce
General Discussion -- http://lists.phillylinux.org/mailman/listinfo/plug