> then one might wonder, could VMWare running on the head make use of nodes
> for some workstation applications; e.g. run an application on a node while
> the head CPU does mostly GUI? I dunno, I've never used VMware (or kvm etc)
you don't need vmware for that: run the X server on the head node and the
X clients on the compute nodes. or do it like ParaView, where the gui is
an X client that talks over a socket to compute/render backend process(es).
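to make that ParaView-style split concrete, here's a minimal sketch (all names
and numbers are hypothetical, not ParaView's actual protocol) of a frontend
talking over a socket to a compute backend; in a real cluster the backend would
run on a compute node, but here both ends run locally so it's self-contained:

```python
import socket
import threading

# the backend would normally live on a compute node; bind locally here.
srv = socket.socket()
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

def backend():
    """compute/render server: receives a number, sends back its square."""
    conn, _ = srv.accept()
    with conn:
        n = int(conn.recv(64).decode())
        conn.sendall(str(n * n).encode())   # stand-in for the heavy work

t = threading.Thread(target=backend)
t.start()

# the gui-side client: ship the request over the socket, show the reply.
with socket.socket() as c:
    c.connect(("127.0.0.1", port))
    c.sendall(b"12")
    result = int(c.recv(64).decode())
t.join()
srv.close()
print(result)  # -> 144
```

the point is only that the gui process stays light: everything expensive
happens on the other end of the socket.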
vmware just virtualizes: it lets you multiplex multiple virtual machines
on one physical machine. in general, the virtual machines will be slower
than if they were running on bare metal (to the degree that they involve
the hypervisor and its host OS.) if the app in the VM is compute-bound
with a fairly static memory working set, the loss in performance will be
minimal.
in a clustering environment, the appeal of VMs is that a VM is a complete
container for a job, so the job can be moved around. I haven't heard of
people running an MPI job inside VMs spread across multiple physical
nodes, but there's no reason it couldn't happen. if your userbase demands
specific OS images, VMs might be the ticket (my experience is that users
mostly don't care about the OS/distro as long as it works, and thankfully
MS Windows is a bit of a cognitive mismatch to HPC.)
I'm skeptical how much sense VMs make in HPC, though. yes, it would be
nice to have a container for MPI jobs: checkpoints for free, and the ability
to do migration. but the value of both depends on the scale of your jobs: if
all your jobs are 4k cpus and up, even a modest node failure rate is going to
make aggressive checkpointing necessary (versus jobs averaging 64p, which are
almost never taken down by a node failure.) similarly, if your workload is
all serial jobs, there's probably no need for migration at all (versus a
workload with high variance in job size, length, priority, etc.)
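the scale argument is easy to put in numbers. a back-of-envelope sketch
(the 3-year per-node MTBF is an assumed figure, purely for illustration):
if each node survives an hour with probability p, a job spanning N nodes
survives an hour with probability p**N, so failure odds compound with scale:

```python
def job_survival(p_node_hour, n_nodes, hours):
    """probability an n_nodes job runs `hours` hours with no node failure."""
    return (p_node_hour ** n_nodes) ** hours

# assumed MTBF of ~3 years per node => per-node-hour survival probability
p = 1 - 1.0 / (3 * 365 * 24)

small = job_survival(p, 8, 24)    # say a 64p job on 8 nodes, for a day
big = job_survival(p, 512, 24)    # a 4k-cpu job on 512 nodes, for a day
print(f"8-node job survives a day:   {small:.3f}")   # ~0.99
print(f"512-node job survives a day: {big:.3f}")     # ~0.63
```

with these (made-up) numbers the small job almost never sees a failure,
while the 512-node job loses uncheckpointed work roughly every third day,
which is the whole case for aggressive checkpointing at scale.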
regards, mark hahn.