Iometer was developed by Intel in 1998 as an I/O subsystem measurement and characterization tool for single and clustered systems. It was open-sourced in 2001 and is now considered a standard benchmark. It is widely used for benchmarking SSDs as well, though whether it reflects actual, real-life performance is debatable.

We moved on to key tests with varying queue depths, simulating Workstation and Database workloads. Queue depth, in simple terms, is the number of commands outstanding on the SSD at a given time. It determines how much I/O the OS (or application) will 'allow' the underlying controller to optimize: by re-ordering and combining commands, the controller can streamline or reduce physical I/O. The more commands the controller can process this way before it has to report something back to the OS, the better the 'hit rate' of such optimizations, and hence the higher the ultimate throughput. The trade-off is latency, which can have a devastating impact on the user experience.
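To make the throughput-versus-latency trade-off concrete, here is a toy Python model, not a real benchmark: it assumes a hypothetical controller that coalesces a batch of `queue_depth` commands, shaving some per-command overhead per batch. All timing constants are invented for illustration.

```python
def simulate(queue_depth, n_commands=3200, base_service_us=100.0):
    """Toy model of a controller coalescing `queue_depth` commands per batch.

    Assumption (hypothetical): batching lets the controller optimize away
    some per-command work, so a batch of k commands takes less than
    k * base_service_us, but every command waits for the whole batch.
    """
    overhead_us = 50.0  # fixed per-batch overhead (made-up number)
    total_time_us = 0.0
    total_latency_us = 0.0
    done = 0
    while done < n_commands:
        k = min(queue_depth, n_commands - done)
        # Bigger batches -> better 'hit rate' of re-ordering/combining,
        # modeled here as a per-command speedup that grows with k.
        batch_time = overhead_us + k * base_service_us / (1.0 + 0.1 * (k - 1))
        total_time_us += batch_time
        # Each of the k commands only completes when the batch does.
        total_latency_us += k * batch_time
        done += k
    iops = n_commands / (total_time_us / 1e6)
    avg_latency_us = total_latency_us / n_commands
    return iops, avg_latency_us

for qd in (1, 4, 32):
    iops, lat = simulate(qd)
    print(f"QD={qd:>2}: {iops:>8.0f} IOPS, avg latency {lat:>7.1f} us")
```

Running the sketch shows the pattern the paragraph describes: deeper queues yield higher IOPS, but each individual command waits longer before it completes.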