>> Is it just me, or does HPC clustering and virtualization fall on
> opposite ends of the spectrum?
>
Gavin, not necessarily. You could have a cluster of HPC compute nodes
running a minimal base OS,
then install specific virtual machines with different OS/software stacks
each time you run a job.
OK, this is probably more relevant for grid or cloud computing - I first
thought this would be a good idea when I saw
that (at the time) the CERN LHC Grid software would only run on Red Hat
7.2.
So you could imagine 'packaging up' a virtual machine which has your
particular OS flavour/libraries/compilers and shipping
it out with the job.
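As a rough sketch of that idea (all names here are hypothetical - there is no real scheduler API behind `package_job`): the job description carries a reference to a VM image holding the exact OS/library/compiler stack, and the compute node only needs a minimal host OS plus a hypervisor.

```python
from dataclasses import dataclass

# Hypothetical job description: the VM image provides the full software
# environment, so the job is self-contained and portable across nodes.
@dataclass
class VmJob:
    name: str
    vm_image: str   # e.g. a disk image with Red Hat 7.2 + the grid software
    command: str    # command to run inside the guest
    cpus: int = 1
    mem_mb: int = 1024

def package_job(job: VmJob) -> dict:
    """Build the payload that would be shipped out to a compute node."""
    return {
        "name": job.name,
        "image": job.vm_image,
        "command": job.command,
        "resources": {"cpus": job.cpus, "mem_mb": job.mem_mb},
    }

job = VmJob("lhc-analysis", "images/rh72-grid.qcow2", "/opt/grid/run_analysis")
payload = package_job(job)
print(payload["image"])
```

The point is just that the environment travels with the job, instead of every node having to match one blessed OS install.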
Another reason could be fault tolerance - you run VMs on the compute
nodes. When you detect that a hardware fault is imminent
(e.g. from ECC errors or disk errors) you perform a live migration from
one node to another - and your job keeps on trucking.
(In theory; checkpointing needed, etc. etc.)
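In outline (a toy sketch, not a real monitoring stack - the thresholds and the `migrate` call are placeholders for real ECC/SMART counters and a hypervisor's live-migration API):

```python
ECC_ERROR_THRESHOLD = 10  # arbitrary illustrative limit on correctable errors

def should_evacuate(ecc_errors: int, reallocated_sectors: int) -> bool:
    """Decide whether rising correctable-error counts suggest imminent failure."""
    return ecc_errors >= ECC_ERROR_THRESHOLD or reallocated_sectors > 0

def migrate(vm: str, src: str, dst: str) -> str:
    """Placeholder for a live migration; the job inside the VM keeps running."""
    return f"migrated {vm} from {src} to {dst}"

# Example: node02 shows a rising ECC error count, so its VM is evacuated
# to a healthy node before the hardware actually fails.
if should_evacuate(ecc_errors=12, reallocated_sectors=0):
    result = migrate("job-vm-7", "node02", "node05")
    print(result)
```

The nice property is that the application never notices; from inside the VM the job just continues.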