Disk Response Time

IOMeter sends a stream of requests to read and write 512-byte data blocks at a request queue depth of 1 for 10 minutes. The disk subsystem processes over 60 thousand requests during this time, so the resulting response time does not depend on the amount of cache memory.
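A queue depth of 1 simply means each request is issued only after the previous one completes, so the average of the per-request latencies is the response time. A minimal sketch of that measurement loop (a hypothetical helper, not IOMeter itself):

```python
import os
import statistics
import time

def measure_read_latency(path, block_size=512, requests=1000):
    """Issue synchronous reads one at a time (queue depth 1)
    and time each request; return the mean latency in ms."""
    latencies = []
    size = os.path.getsize(path)
    with open(path, "rb", buffering=0) as f:
        for i in range(requests):
            # Step through the file one block at a time, wrapping around.
            offset = (i * block_size) % max(size - block_size, 1)
            f.seek(offset)
            start = time.perf_counter()
            f.read(block_size)
            latencies.append((time.perf_counter() - start) * 1000)
    return statistics.mean(latencies)
```

Real benchmarks use raw-device access to bypass the OS page cache; this sketch only illustrates the QD=1 timing logic.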

The read response time of each array is somewhat worse than that of the single disk, although the difference is negligible with the RAID10 arrays. There, the controller's lag is made up for by reading from whichever disk in the mirror can deliver the data quicker.
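One simple way a controller can pick the faster mirror copy is to route the read to the drive whose head is closer to the requested sector. This is only a toy model under assumed names (real firmware also weighs queue depth and rotational position):

```python
def pick_mirror(head_positions, target_lba):
    """Choose the mirror copy with the shorter expected seek:
    the drive whose head is currently closest to the target LBA.
    (Hypothetical sketch; real controllers use richer heuristics.)"""
    return min(range(len(head_positions)),
               key=lambda d: abs(head_positions[d] - target_lba))
```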

The other arrays are not so good here. Some of them are worse by over 1 millisecond, which is a lot considering that the read response time of the single disk is only 6 milliseconds. There is no clear pattern in the behavior of the arrays: the RAID6 proves to be the best among the eight-disk arrays, while the four-disk arrays are considerably slower. The degraded arrays behave oddly, too: it is unclear why the RAID6 without two disks shows the best response time among them.

The write response time is determined by the combined cache of the array and controller, and each mirror couple in a RAID10 array can be viewed as a single disk here. This rule does not apply to the checksum-based arrays because they have to perform additional operations besides just dumping data into the buffer memory. Still, their write response time is good: it is lower than that of the single disk, meaning that the checksum calculation is performed without problems and with minimum time loss.
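The "additional operation" for a RAID5-style parity array is the read-modify-write parity update: the controller XORs the old data out of the parity block and XORs the new data in, rather than re-reading the whole stripe. A minimal sketch of both approaches:

```python
def update_parity(old_parity: bytes, old_data: bytes, new_data: bytes) -> bytes:
    """Read-modify-write parity update for a single-block write:
    new_parity = old_parity XOR old_data XOR new_data."""
    return bytes(p ^ od ^ nd
                 for p, od, nd in zip(old_parity, old_data, new_data))

def stripe_parity(blocks) -> bytes:
    """Full-stripe parity: XOR of every data block in the stripe."""
    parity = bytearray(len(blocks[0]))
    for blk in blocks:
        for i, b in enumerate(blk):
            parity[i] ^= b
    return bytes(parity)
```

RAID6 additionally maintains a second, Reed-Solomon-based parity block, which is more expensive to compute; the XOR sketch above covers only the first parity.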

The RAID6 without two disks has a very high response time, but that could be expected due to the increased load on this degraded array: the controller has to “emulate” the two failed disks by reading data from the live disks and computing what should be stored on them. The result is not terrible, though. For comparison, the Promise EX 8650 had write response times higher than 20 milliseconds when working without a BBU!
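The reconstruction the controller performs for a lost block can be sketched with XOR parity: the missing member of a stripe is the XOR of all the surviving blocks, parity included. (This covers one failed disk; recovering a second failed disk in RAID6 also requires its Reed-Solomon Q parity, which is omitted here.)

```python
def reconstruct_block(surviving_blocks) -> bytes:
    """Recover the missing member of an XOR-parity stripe:
    XOR the surviving data blocks and the parity block together."""
    missing = bytearray(len(surviving_blocks[0]))
    for blk in surviving_blocks:
        for i, b in enumerate(blk):
            missing[i] ^= b
    return bytes(missing)
```

Every read that touches the failed disk triggers this extra work, which is why degraded-mode response times climb.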