My interest is in what the SPC-1 benchmark has been used for lately - as in the last 3 years or so. Apparently BarryW has a significant amount of academic interest in a LOT of older results, from an era when cache was at a cost premium, modular 2-controller systems were the new rage, and the benchmark results highlighted that. Many of those systems have been EOL'd or superseded.

If anyone wants to harp on how the SPC-1 IOPS results can help an end user decide whether to buy a Fujitsu Eternus 3000 Model 300 or a Dell PERC/3 QC today - have a blast! I just don't see the relevance.

And in with the new...

Welcome to 2007! My customers are likely to be using much more modern stuff, and that's MY interest. So I restricted my analysis (and said so in my post, which excludes older results) to submission dates after roughly October 2004.

I missed a few SUN/STK systems in the cutoff (my bad! Add them in... it won't change a thing), and I did include the SVC 1.1.1 from June 2004 (I was curious about that!). The others are from another era - not relevant to the points I am making.

My observation is simple: all the results from the last 3 years reflect direct proportionality between SPC-1 IOPS and spindle count. This is a significantly stronger statement than saying that SPC-1 IOPS increases monotonically with spindle count (which is intuitive). The data speaks for itself.

However, thanks, BarryW, for reminding me to discuss the DS8300 family SPC-1 IOPS. During my data collection phase for this series of posts, I too noticed this very enigmatic puzzle.

Here were two systems, identical in HW configuration (same number of disks, same testing configuration, same amount of cache, same processors, 32 channels), driven by exactly the same server: a P5 p595 Model 9119 32-way with 32 channels running AIX 5.3.

Yes, the addressable storage was a bit different (6.6 TB for the DS8300 and 8.9 TB for the Turbo), but in true form, like other vendors, the data occupied only about 32-36% of the total storage in the system, ostensibly to increase spindle count and thereby inflate the SPC-1 IOPS result (just curious: what is the end customer supposed to do with the rest of the storage?). So that couldn't be the reason for the mystery below...

So it was intriguing to see the Turbo post 123,033 SPC-1 IOPS against 101,102 SPC-1 IOPS for the DS8300 - 22% better!

Could it be... that the microcode on the DS8300 Turbo was better, and the SPC-1 actually caught that? Wow! I mean, Gee Whiz! Holy Cow! Or...

Could it be... Satan?

The devil in the details

Let me ask this question:

Why is the driving host configuration different for the Turbo benchmark compared to the vanilla 8300?

Specifically, why were the queue_depth and max_transfer parameters changed from their defaults (20 and 256 KB) to higher values (64 and 1024 KB) for the DS8300 Turbo benchmark? This is buried in the Full Disclosure report, in the link above.

The queue_depth parameter increase lets more IOs queue up at the host - a good thing for large arrays, where many disks appear aggregated as one logical volume. This makes sure the disks are not twiddling their thumbs while the operating system works under the wrong assumption that the volume will be overwhelmed if more IOs are pushed. Set it too low, and the disk array seems to underperform, because not enough work is going its way. I have seen many instances where increasing the operating system queue_depth gives significant gains in IO throughput, especially with internal striping in the array (like the RAID-10 the DS8300 uses).
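To see why queue_depth can become the bottleneck, Little's Law gives a quick back-of-the-envelope ceiling: the host cannot sustain more IOPS per LUN than (outstanding IOs allowed) / (average IO service time). Here is a minimal sketch - the service time and LUN count are purely hypothetical numbers of my own choosing, not figures from either Full Disclosure report:

```python
def host_iops_ceiling(queue_depth, avg_service_time_s):
    """Little's Law: throughput <= outstanding IOs / time each IO is in flight."""
    return queue_depth / avg_service_time_s

# Hypothetical: 5 ms average service time, 32 LUNs presented to the host.
AVG_SERVICE_TIME = 0.005
NUM_LUNS = 32

for qd in (20, 64):  # default queue_depth vs. the value used in the Turbo run
    total = host_iops_ceiling(qd, AVG_SERVICE_TIME) * NUM_LUNS
    print(f"queue_depth={qd}: host-side ceiling ~{total:,.0f} IOPS")
```

Whether the default of 20 actually binds depends on the real LUN counts and service times - which is exactly why only a re-run with matched host parameters would settle the question.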

The max_transfer parameter caps the size of a single IO: anything larger gets broken up into bite-sized chunks. Set it too small, and the operating system does a lot of work for nothing.

Could it be... that the DS8300 test was actually throttled by the inability of the host to drive enough IOs? The DS8300 and the Turbo might actually have posted the same result - except that the driving p595 was not queuing enough IOs for the older array.

Or was it really the SPC-1 suddenly becoming sensitive to something other than spindle count, just for the DS8300 family? Somehow, I don't think so...

I could be wrong... but on the other hand....

I know I am speculating - but right now, the SPC-1 Full Disclosure reports for both are no help in clarifying what reality is. Short of a new measurement of the vanilla DS8300 with the new parameters, I don't see how one can argue that storage performance under SPC-1 actually improved from the DS8300 to the DS8300 Turbo. I would submit that the difference could equally be an artifact of a restricted IO driver. If two things changed, which one caused the difference in the measurement?

But, oh, didn't I hear somewhere that the DS8300 just got EOL'd?

So it is an interesting and anomalous discrepancy, but not one that can be resolved conclusively with the data at hand. I'm not buying it, BarryW.

Is the SVC really high-end?

Hmmm... the fact that the USP V and the SVC seem to perform the same means one of two things:

a) The SVC is truly on par with a USP V from an array capability point of view or

b) the SPC-1 benchmark has done great disservice to the USP V, and forced it down to the least common denominator - spindle count.

The USP V SPC-1 result did show one thing - that a full configuration exhibits no other component saturation effects. It is purely spindle bound, with no discernible choke points beyond that. My personal belief (not EMC's!) is that HDS sold their technology short by participating in this benchmark. Like the DMX, their microcode engineers have spent many hundreds if not thousands of man-years optimizing their systems for MF, multiple-host workloads, etc. The benchmark lets none of that shine through. I think they can do a lot better than the SVC with a real-life workload.

But, as I said, that's one man's opinion... the same man who is also positive that the DMX will do better than both the SVC and the USP V with real customer multi-host, multi-function workloads - with a boatload more functionality to boot.

Thanks for the welcome to blogosphere, BarryW! I am sure we will agree and disagree on many other topics over the coming years - and I hope to continue to learn from that as we go on.

October 29, 2007

When the Storage Performance Council SPC-1 benchmark was introduced several years ago, and the first few results hit the streets, I noticed something astounding: the SPC-1 IOPS scaled linearly with the number of drives in the tested storage array, independent of vendor, array, drive size or drive speed!

It was not astounding as a technical result - I would expect a cache hostile benchmark like the SPC-1 to be loosely dependent on spindle count. What was astounding was that no one ever called any attention to it!

Vendor Olympics:

I saw vendor after vendor claiming technical superiority based on the magnitude of the SPC-1 IOPS measurement they had made. I saw challenges to EMC and NetApp to participate. So I decided to look at the most recent results to see whether the benchmark or the vendor arrays had changed significantly in the past few years.

I took "high-end" systems - with a lot of scalability, and plotted their SPC-1 IOPS against the number of drives they had. The results:

The equation is a simple straight-line fit done with Excel. The R^2 value measures the quality of the fit: 1 is a perfect fit, and 0.996 is pretty darned good.
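For anyone who wants to check the arithmetic themselves, the same straight-line fit and R^2 that Excel produces can be reproduced with ordinary least squares. A sketch below uses a few approximate (spindles, SPC-1 IOPS) pairs quoted in these posts - treat the numbers as illustrative, not as a substitute for the published results:

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit y = slope*x + intercept, plus R^2."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return slope, intercept, 1 - ss_res / ss_tot

# Approximate points quoted in these posts: DS8300 (512 drives),
# DS8300 Turbo (512 drives), USP V (1024 drives, "200,000+" rounded down).
spindles = [512, 512, 1024]
iops = [101_102, 123_033, 200_000]
slope, intercept, r2 = linear_fit(spindles, iops)
print(f"~{slope:.0f} SPC-1 IOPS per spindle, R^2 = {r2:.3f}")
```

Even with the anomalous vanilla DS8300 point included, the spindle count alone explains almost all of the variation.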

What did this mean?

Well... I could have probably saved HDS a lot of money and testing! Just knowing the previous results with the DS8300 Turbo, SVC 3.1 and SVC 4.2, I could have predicted the SPC-1 IOPS number for the 1024 drive HDS USP V almost exactly! Pretty good for arithmetic!

The HDS USP V has a tested configuration (ASU storage) of 26 TB, but had 150+ TB of raw storage in it. In fact, even discounting for the RAID-1 protection, the benchmark used only 34% of the storage in the array - but spread out over all 1024 spindles. The USP V had 146 GB 15K drives. The press release says none of this! You have to read the Full Disclosure report to find it out. What gets the headlines is just the fact that they have 200,000+ SPC-1 IOPS for 26 TB!
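The short-stroking arithmetic is easy to verify from the figures above (roughly 150 TB raw, RAID-1 mirroring, 26 TB of ASU). A quick check, taking 150 TB as the raw number:

```python
raw_tb = 150             # approximate raw capacity in the array
usable_tb = raw_tb / 2   # RAID-1 mirroring halves usable capacity
asu_tb = 26              # tested (ASU) capacity

utilization = asu_tb / usable_tb
print(f"{utilization:.0%} of usable capacity actually tested")
```

About a third of the usable capacity carried the benchmark data - the rest of the spindle surface was along for the seek-shortening ride.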

Even more surprising, the SVC matches the performance almost exactly as well - a completely different architecture! And it had sub-arrays that used 73 GB 15K drives.

So all HDS has to do to beat the standing SVC record is to load up the USP V with 1536 drives.

Oops - can't do that.

Or, for the DS8300 to match the USP V record - go from 512 to 1024 drives.

Regardless, the SPC-1 IOPS has no discriminatory power for any of these 4 systems. The benchmark results are completely determined by one thing: the number of physical spindles in the configuration.

Let's look at it all!

What if I included not just highly-scalable systems, but mid-sized systems too? Well... you be the judge.

Here we see two classes of storage - the scalable arrays and the mid-range ones. The lines have the same slope, and the correlation is quite linear in both the blue and red bands. For almost every system tested, the result can be predicted to within a few percent with knowledge of nothing but the number of spindles. [Note: I have excluded some older results at the low end (red band) in the interest of time (mine!). This is an exercise I encourage everyone to do at home - the data is public at the SPC Home Page.]

The really useful metric from the SPC-1 is $/SPC-1 IOPS - one that has unfortunately faded from prominence. The over-configuration of the USP V makes the SVC the best bang for the buck.
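As a worked illustration of the price-performance metric (with entirely made-up prices and IOPS figures - the real numbers live in each Full Disclosure report):

```python
# Hypothetical systems; prices and results are invented for illustration only.
systems = {
    "big-array": {"price_usd": 3_000_000, "spc1_iops": 200_000},
    "mid-array": {"price_usd":   900_000, "spc1_iops": 100_000},
}

for name, s in systems.items():
    dollars_per_iops = s["price_usd"] / s["spc1_iops"]
    print(f"{name}: ${dollars_per_iops:.2f} per SPC-1 IOPS")
```

On this metric, a smaller system that posts fewer absolute IOPS can still be the better buy - which is precisely the point the headline IOPS number obscures.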

So I'd love to see what these platforms can do without the spindle-count advantage. How about a test with 26 TB of ASU where there is 52 TB in a RAID-1 configuration in the array? At 146 GB per drive, 52 TB is roughly 356 spindles - about a third of 1024 - so I can predict the result: about 1/3 of the current SPC-1 IOPS for the USP V.

Why I am upset:

This is no magic, folks - I cannot imagine that it is not a well-known fact for the vendors and the SPC that the benchmark dumbs ALL vendors down to a level close to JBOD. If SPC-1 performance is predetermined by drive count, customer decisions on storage investments are purely an exercise in pricing - all vendors are the same from a performance standpoint.

If this was known - not drawing customer attention to it is, mildly put, disingenuous.

If this wasn't known - this is not rocket science, people. That's hard to believe.

Lets work instead to see if there is a real way to benchmark these systems - which is actually useful to our customers. This is exactly why EMC pulled out of the SPC years ago.

My first venture into blogosphere... guess I should introduce myself. I have been described, through my years, as a particle physicist, CTO, architect and most recently an EMC Distinguished Engineer, and consider myself an honorary southerner (Yes - I did live in and love Louisiana in my past, and still miss the food). I like to think of myself as a curious soul who is ever in awe of what I don't know and what no one knows.

I grew up with real-time computing and loosely-coupled compute grids in data-acquisition environments at high-energy accelerator experiments at Fermilab, DESY and CERN during the birth of HTTP - my baptism by fire in UNIX, which I grew to love. Now I am comfortable with most operating systems, a smattering of languages, databases, ERPs and networks, and of course storage technologies.

Fair warning: I am an employee of EMC Corporation and proud of it; however, any and all opinions expressed by me in this blog are mine and mine alone, and may not reflect EMC's official stance on anything. EMC is not responsible for any of this content, and no one controls my posts.

I find technology and data of any kind fascinating, and people even more so, and will be blogging quite a bit on both.

'Nuff for now, I do have some interesting observations to share on certain benchmarks in the storage industry, but more on that in my next few posts!