Benchmarks for R510 Greenplum Nodes

gpcheckperf results from hammering on a couple of our R510s. Each server is set up with twelve 3.5" 600GB 15k 6Gb/s SAS disks split into four virtual disks. The first six disks form one group: 50GB is split off for an OS partition and the rest goes into a data partition. The second set of six disks is set up the same way, with 50GB going to a swap partition and the rest going to another large data partition. The controller is configured with No Read Ahead, Force Write Back, and a stripe element size of 128KB. Partitions are formatted with XFS, running on RHEL 5.6.

skahler

Comments (3)

Thanks for posting these details! Can I ask what type of RAID configuration this is? We have a pretty similar configuration -- six 15k RPM 600GB SAS drives in a single RAID10 volume on Dell R510s with 64GB of RAM.
The system is under a lot of load now so I can't get a clean gpcheckperf, but the highest numbers we've seen in the past have been around 500MB/s read/write. We're using a PERC H700.
I'm kind of surprised that you can get to >1GB/s write performance with 6 drives. I would expect something like 180MB/s per drive * 3 drives' worth of throughput if you're in RAID10 = ~540MB/s. Unless you're using RAID0 and trusting GP mirroring to save you when a disk fails? (Maybe a smart idea...)
Maybe I need to try reconfiguring one of our servers...
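The commenter's back-of-envelope math can be sketched as below. The 180MB/s per-drive figure is their assumption, not a measured number; real sequential throughput also depends on stripe size, cache policy, and where on the platter the data lives.

```python
PER_DRIVE_MB_S = 180  # assumed sequential rate for a 15k SAS drive
DRIVES = 6

def est_write_throughput(drives, per_drive, level):
    """Rough sequential-write ceiling for a simple RAID level."""
    if level == "raid10":
        # Each write goes to a mirror pair, so only half the spindles
        # contribute unique write bandwidth.
        return (drives // 2) * per_drive
    if level == "raid0":
        # All spindles stripe writes, no redundancy overhead.
        return drives * per_drive
    raise ValueError(level)

print(est_write_throughput(DRIVES, PER_DRIVE_MB_S, "raid10"))  # 540
print(est_write_throughput(DRIVES, PER_DRIVE_MB_S, "raid0"))   # 1080
```

Which is why >1GB/s from six drives looks like RAID0 territory under these assumptions.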

Those were on R510s with 12 of the same disks you mention, set up as two RAID5 sets of six disks each, and I'm hitting both of those RAID sets in that benchmark. So if you are getting ~500MB/s with half the number of disks, you are close to what I was seeing. We went RAID5 because space was a large concern; the four extra disks' worth of capacity outweighed the performance we'd gain from RAID10.
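The capacity tradeoff in the reply works out as follows. This is just the standard RAID arithmetic applied to the twelve 600GB disks described above:

```python
DISK_GB = 600
DISKS = 12

# Two RAID5 sets of six disks each: one disk's worth of parity per set.
raid5_usable = 2 * (6 - 1) * DISK_GB   # 6000 GB

# RAID10 across all twelve disks: half the raw capacity goes to mirrors.
raid10_usable = (DISKS // 2) * DISK_GB  # 3600 GB

# Difference, expressed in whole disks of capacity.
extra_disks = (raid5_usable - raid10_usable) // DISK_GB  # 4
```

That difference is the "four extra disks of space" mentioned in the reply, at the cost of RAID5's slower rebuilds and write-parity overhead.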