EMC has regained the SPEC filer benchmark crown, and it took 3,220 disk drives to do it.
Isilon tested 28-, 56- and 140-node S200 systems, with the last gaining the SPECsfs2008 crown for both NFS and CIFS file access. Isilon scored 1,112,705 NFS operations per second - 75 per cent higher than the previous best-performing system, a Huawei …

SFS97 is a retired benchmark.

The SPEC SFS_97 benchmark was retired by SPEC three years ago and replaced by the redesigned SFS2008 benchmark, which features improved benchmarking workloads.

As SPEC itself states, results from the two benchmarks are not comparable, though it's interesting to note that NetApp never submitted a CIFS result alongside its NFS submission.

To show the linear scalability and system resource aggregation of OneFS 6.5, Isilon tested 7, 14, 28, 56 and 140 nodes for both SPECsfs2008_CIFS and SPECsfs2008_NFS; the results can be seen on SPEC.org.

Isilon inverse performance

Why is Isilon's performance so poor up to 50 nodes? The majority of customers need fewer than 500,000 IOPS, and that's where Isilon's response time is around 4+ ms. Also, a simple clustered Linux farm with 3,000+ disks and no SSDs can clock a million IOPS. It looks like EMC is desperate for news clout and worried about SONAS/NetApp/BlueArc. And by the way, what enterprise features does Isilon actually have? No quotas, a primitive GUI, non-synchronous replication, no dedupe or compression. Poor Isilon: the move from tech to marketing was so fast. Also, it looks like an NFS benchmark of a million was already achieved back in 2006. So is this still a big deal? Good luck.

(untitled)

Saying that this result isn't a big deal because someone else topped a million IOPS is a bit like saying that the four-minute mile wasn't a big deal because someone ran the 100m in 12 seconds and that's much faster. They're very different tests and bear no relation to each other.

With regard to latency: the suitability of any platform depends on what you're looking for in a system and what it's being used for. Latency isn't a concern for everyone. A user grabbing an MP3 from a different continent - over a link with about 250ms of latency, via a web server, an application server, an IP network and then the storage system - isn't going to give a stuff whether fetching the file takes 1ms or 4ms.

Biggest = Yes / Efficient = No

Huh?

Large but out of the box

A lot of interesting comments have been made about this solution and its design, but the thing to remember, in my opinion, is that this is how Isilon equipment is designed to work. Yes, 140 nodes is a lot of nodes, and that's a lot of HDDs and SSDs, but all they did was take 140 of their performance-oriented nodes with SSDs installed and cluster them together, which is their big selling point. You can start with three nodes and grow to 140.

Anyone can cobble together two, three or more systems to hit specific performance levels, but in this case all Isilon did was take its normal system, grow it to the logical (if rather expensive) conclusion, and see what it could do.

And they did this with a file system that presents all of that storage as a single namespace.

Not practical for everyone but then neither is a Ferrari or Lambo even though we might all wish we could afford one.

Devil is in the details

The only impressive thing here is that EMC/Isilon can spend the money to put a system like this together. It sure would be nice if the SPEC tests categorised these systems by performance value in terms of list price or average system cost.

So... if you do the math, they got an average of 346 IOPS per disk/SSD... wow!
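To make that arithmetic explicit, here's a quick back-of-the-envelope sketch; the ops and drive-count figures are the ones quoted in the article above (1,112,705 NFS ops/sec across 3,220 drives), and the rounding is my own:

```python
# Rough check of the IOPS-per-drive figure quoted above.
total_ops = 1_112_705    # SPECsfs2008 NFS ops/sec for the 140-node run (from the article)
total_drives = 3_220     # disk drives (including SSDs) in the tested cluster (from the article)

ops_per_drive = total_ops / total_drives
print(round(ops_per_drive))  # → 346
```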

In the same test, the NetApp FAS620 gets 662 IOPS/disk, the BlueArc M100 and the M100 Cluster both get 500 IOPS/disk, Panasas gets 406 IOPS/disk, and the other monster system, from Huawei Symantec, gets 363 IOPS/disk. They all get more IOPS out of spinning disk with NO SSDs!