Iometer IOPS Performance

EDITOR'S NOTE 06/01/2009: Benchmark Reviews added the Iometer results to this article after it was originally published, as a result of reader requests and suggestions.

Iometer is an I/O subsystem measurement and characterization tool for single and clustered systems. Iometer does for a computer's I/O subsystem what a dynamometer does for an engine: it measures performance under a controlled load. Iometer was originally developed by Intel Corporation and was formerly known as "Galileo". Intel has since discontinued work on Iometer and gifted it to the Open Source Development Lab (OSDL).

Iometer is both a workload generator (that is, it performs I/O operations in order to stress the system) and a measurement tool (that is, it examines and records the performance of its I/O operations and their impact on the system). It can be configured to emulate the disk or network I/O load of any program or benchmark, or can be used to generate entirely synthetic I/O loads. It can generate and measure loads on single or multiple (networked) systems.

Benchmark Reviews has resisted publishing Iometer results because there are hundreds of configuration variables available, making it impossible to reproduce our tests without our exact Iometer configuration file. To measure random I/O response time as well as total I/Os per second, Iometer is set to use 4KB transfer requests over a 100% random distribution, with a 50% read and 50% write mix. Our charts show the Read and Write IOPS performance as well as I/O response time (measured in ms).
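For readers who want a rough feel for what this access pattern looks like, the sketch below is not our Iometer configuration (Iometer is a dedicated benchmarking application); it is a minimal Python illustration of the same workload shape: 4KB transfers at random aligned offsets with a 50/50 read/write split against a scratch file. Because it goes through the OS page cache, its numbers are not comparable to Iometer's.

```python
import os
import random
import tempfile
import time

def random_io_test(file_size=4 * 1024 * 1024, block=4096, ops=2000, seed=42):
    """Issue `ops` 4KB transfers at random aligned offsets, with a
    50% read / 50% write distribution, mimicking the Iometer access
    specification described above. Returns (reads, writes, iops).
    Illustrative only: buffered file I/O, not raw-device access."""
    rng = random.Random(seed)
    blocks = file_size // block
    payload = os.urandom(block)

    # Pre-allocate a scratch file to serve as the test target.
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(b"\0" * file_size)
        path = f.name

    reads = writes = 0
    start = time.perf_counter()
    fd = os.open(path, os.O_RDWR)
    try:
        for _ in range(ops):
            # 100% random distribution: pick any aligned 4KB offset.
            os.lseek(fd, rng.randrange(blocks) * block, os.SEEK_SET)
            if rng.random() < 0.5:      # 50% read distribution
                os.read(fd, block)
                reads += 1
            else:                       # 50% write distribution
                os.write(fd, payload)
                writes += 1
    finally:
        os.close(fd)
        os.unlink(path)

    elapsed = time.perf_counter() - start
    return reads, writes, round((reads + writes) / elapsed)
```

Iometer additionally controls variables this sketch ignores, such as the number of outstanding I/Os per worker and the test ramp-up time, which is exactly why results are hard to reproduce without the configuration file.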

Iometer was configured to test for 120 seconds, and the average of five test runs is displayed in our benchmark results. The first tests measured random read and write IOPS performance, where a higher I/O count is preferred. In this test the single-level cell (SLC) OCZ Vertex EX rendered 3106/3091 I/Os and outperformed all other products. A RAID-0 set of Vertex (v1.10 firmware) 120GB MLC SSDs performed at 1517/1515, just slightly ahead of a single Vertex SSD, which rendered 1197 for both read and write IOPS. The OCZ Summit MLC SSD completed 730/733 I/Os. All other products performed far beneath this group, and are not suggested for high input/output applications.

The Mtron MOBI 3000 performed 107 read and write IOPS, while the Western Digital WD5001AALS rendered 86 and the Seagate 7200.11 completed 77. The newer Mtron MOBI 3500 rendered 58 IOPS, which was worse than the older 3000 model. The OCZ Apex struggled to complete 9 IOPS, and the identically-designed G.Skill Titan managed only 8 IOPS. Clearly, the twin RAID-0 JMicron controllers are built for speed and not input/output operations. Next came the average I/O response time tests...

The Iometer random IOPS average response time results were nearly the inverse of the IOPS performance results. It's no surprise that SLC drives handle I/O processes far better than their MLC counterparts, but that gap is slowly closing as controller technology improves and cache buffer space grows. In our read/write IOPS testing the SLC OCZ Vertex EX achieved a dramatic lead over the other SSDs tested.
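The inverse ordering is not a coincidence: when roughly one I/O is outstanding at a time, average response time and IOPS are simple reciprocals of each other (response time in ms ≈ 1000 / IOPS). A quick sanity check against the IOPS figures above, assuming that single-outstanding-I/O model holds for our configuration:

```python
def response_time_ms(iops):
    """Average per-operation response time in milliseconds, assuming one
    outstanding I/O at a time (throughput and latency are then inverses)."""
    return 1000.0 / iops

# Approximate read IOPS figures reported in the charts above.
for name, iops in [("Vertex EX", 3106), ("Vertex RAID-0", 1517),
                   ("Summit", 730), ("MOBI 3000", 107)]:
    print(f"{name}: ~{response_time_ms(iops):.2f} ms per I/O")
```

The predicted ~0.32 ms for the Vertex EX lands in the same neighborhood as its measured 0.26 ms read response, while deeper queue depths or asymmetric read/write behavior would pull measured numbers away from this simple estimate.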

OCZ's Vertex EX offered the fastest read and write response times, measuring 0.26/0.06ms, with particular strength in write requests. The RAID-0 set of Vertex MLC SSDs scored 0.58/0.07ms, dramatically improving the write response time over a single Vertex SSD, which offered 0.42/0.77ms. The OCZ Summit responded to read requests in 0.78ms, while write requests were a bit quicker at 0.59ms. These times were collectively the best available, as each product measured hereafter performed much slower.

The Mtron MOBI 3000 offered a fast 0.42ms read response time, but suffered a slower 8.97ms write response. Both the WD5001AALS and Seagate 7200.11 hard drives performed around 11ms read and 1.2ms write. Mtron's newer MOBI 3500 offered great read response times at 0.19ms, but suffered poor write responses at 17.19ms. The worst was yet to come: the G.Skill Titan and OCZ Apex offered decent 0.42ms read response times but absolutely unacceptable 127ms write times.