Joe Landman wrote:
> My apologies if this is bad form, I know Toon from his past
> participation on this list, and he asked me to forward.
>> -------- Original Message --------
Hi Toon, long time no type.
> Dear all,
> I've been working on hpux-itanium for the last 2 years (and even
> unsubscribed to beowulf-ml during most of that time, my bad) but soon
> will turn back to a beowulf cluster (HP DL380G6's with Xeon X5570,
> amcc/3ware 9690SA-8i with 4 x 600GB Cheetah 15krpm). Now I have a few
> questions on the config.
> 1) our company is standardised on RHEL 5.1. Would sticking with rhel 5.1
> instead of going to the latest make a difference.
That's kind of strange. So you never patch? A patched RHEL 5.1 box auto-upgrades
to 5.4, doesn't it? Or is that something specific to CentOS? 5.1 is rather old;
I'd worry about poor support for hyperthreading, and for things like GigE
drivers, when using a release from 2007 with hardware released in 2009.
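A quick way to see which point release and kernel a box is actually running
(the version strings below are from memory and may be slightly off):

```shell
# Check the installed point release and kernel; from memory, RHEL 5.1
# shipped 2.6.18-53.el5 while 5.4 ships 2.6.18-164.el5.
[ -f /etc/redhat-release ] && cat /etc/redhat-release
uname -r
```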
I believe the older kernels handle the extra cores rather poorly, and don't
even recognize the Intel CPUs as NUMA-enabled. You didn't mention whether the
RAID is hardware or software. I'd recommend RAID scrubbing; with software RAID
that requires (I think) a kernel >= 2.6.21, although Red Hat (I think) back
ported it into their newer kernels in 5.4, or maybe 5.3. Definitely not in
5.1, though.
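A hedged sketch of both checks; the md device name (md0) is an assumption,
adjust for your setup:

```shell
# Verify the kernel actually sees the NUMA topology (numactl package);
# a dual-socket X5570 box should report two nodes.
if command -v numactl >/dev/null 2>&1; then
    numactl --hardware
fi

# Kick off a software-RAID scrub; this sysfs knob needs roughly
# kernel >= 2.6.21, or a RHEL kernel with the backport.
if [ -w /sys/block/md0/md/sync_action ]; then
    echo check > /sys/block/md0/md/sync_action
    cat /proc/mdstat        # watch the check progress
fi
```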
> 2) What are the advantages of the hpc version of rhel. I browsed the doc
> but unless having to compile mpi myself I do not see a difference or did
> I miss something.
I've never seen the HPC version of RHEL, but I have built a cluster
distribution based on RHEL a few times. It's pretty common to need to tweak
the various cluster-related pieces, such as tight integration, which often
requires tweaks to the MPI layer and the batch queue system. I suspect the
biggest advantage of the HPC version of RHEL is a cheaper per-seat license. If
you end up layering things on top of RHEL yourself, I recommend cobbler,
puppet, ganglia, and openmpi.
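For what "tight integration" looks like in practice, here's a hedged sketch
assuming Grid Engine as the batch queue (the same idea applies to Torque/PBS;
the PE name "mpi_pe" and slot count are illustrative):

```shell
# Open MPI can be built with SGE support so mpirun launches ranks via
# qrsh, letting SGE account for and clean up the MPI child processes:
#
#   ./configure --with-sge && make && make install
#
# The parallel environment then needs control_slaves enabled, e.g.:
#
#   qconf -sp mpi_pe
#     slots             64
#     control_slaves    TRUE
#     job_is_first_task FALSE
```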
> 3) which filesystem is advisable knowing that we're calculating on large
> berkeley db databases
I've not seen a particularly big difference between filesystems on the random
workloads typical of databases. Are the databases bigger than RAM? Does your
3ware have a battery backup unit? Allowing the RAID controller to acknowledge
writes before they hit the disk might be a big win if your DB does lots of
writes. Can you afford an SSD to hold the Berkeley DB?
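A hedged sketch for checking the 3ware BBU and write-cache state with tw_cli
(the controller/unit IDs c0/u0 are assumptions; adjust for your box):

```shell
# Check BBU health and whether the unit's write cache is enabled.
if command -v tw_cli >/dev/null 2>&1; then
    tw_cli /c0/bbu show status      # BBU present and healthy?
    tw_cli /c0/u0 show              # look at the cache state for the unit
    # tw_cli /c0/u0 set cache=on    # only safe with a working BBU
else
    echo "tw_cli not installed"
fi
```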