Performance in Intel IOMeter

Database Pattern

In the Database pattern the disk subsystem processes a stream of requests to read and write 8KB data blocks at random addresses. The ratio of reads to writes varies from 0% to 100% in 10% steps throughout the test, while the request queue depth goes from 1 to 256.
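The load matrix described above can be sketched as a simple parameter sweep. This is an illustration only, not IOMeter's actual configuration format, and the power-of-two queue-depth steps are an assumption (they are the typical setup for this pattern):

```python
# Sketch of the Database-pattern sweep: 8KB random-address blocks,
# write ratio 0..100% in 10% steps, queue depth 1..256.
BLOCK_SIZE = 8 * 1024  # 8KB random-address blocks

write_ratios = list(range(0, 101, 10))        # 0%, 10%, ..., 100% writes
queue_depths = [2 ** n for n in range(9)]     # 1, 2, 4, ..., 256 (assumed steps)

test_points = [(w, q) for q in queue_depths for w in write_ratios]
print(len(test_points))  # 11 ratios x 9 depths = 99 test points
```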

You can find the numeric results in our previous reviews dedicated to each particular controller. Here, we will only work with graphs and diagrams.

We will check out the minimum load first. The request queue depth is 1.

You may wonder why the controllers differ in RAID0 at such a short queue. The difference comes from deferred writing. While the controllers all read alike, at high percentages of writes a controller must quickly accept a lot of requests into its cache and then flush the data out across the disks. The Adaptec is the winner here; the LSI and Promise are slower than the others.
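Deferred (write-back) caching of this kind can be modeled in a few lines. This is a toy sketch of the general technique, not any controller's actual firmware; the class and field names are invented for illustration:

```python
from collections import deque

class WriteBackCache:
    """Toy model of deferred writing: acknowledge host writes immediately,
    flush the cached data to the member disks later."""
    def __init__(self, num_disks, capacity):
        self.num_disks = num_disks
        self.capacity = capacity
        self.pending = deque()  # (lba, data) pairs not yet on disk

    def write(self, lba, data):
        if len(self.pending) >= self.capacity:
            self.flush()                  # cache full: the host must now wait
        self.pending.append((lba, data))  # otherwise the write is acked instantly

    def flush(self):
        while self.pending:
            lba, data = self.pending.popleft()
            disk = lba % self.num_disks   # RAID0-style striping by address
            # the actual disk write would happen here
```

A controller with a deeper or smarter cache keeps the host in the fast `append` path longer, which is exactly where the Adaptec pulls ahead.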

It is similar with RAID10: we’ve got the same leaders and losers at high percentages of writes. The LSI is especially poor, failing to cope with pure writing.

We can now see some difference at reading, as the controllers can choose the “luckier” disk in each mirror pair to read data from. The HighPoint is better here. The LSI is excellent at pure reading, but worse at mixed reads and writes.
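Picking the “luckier” disk in a mirror pair usually means estimating which copy can service the request sooner, for example by queue length and head position. The policy below is an assumed, simplified sketch, not any specific controller's algorithm:

```python
def pick_mirror_disk(disks, target_lba):
    """Choose the mirror copy with the lowest estimated service cost:
    pending queue length (weighted heavily) plus seek distance."""
    def cost(d):
        return d["queue_len"] * 1000 + abs(d["head_lba"] - target_lba)
    return min(disks, key=cost)

disks = [
    {"name": "A", "queue_len": 2, "head_lba": 100_000},
    {"name": "B", "queue_len": 0, "head_lba": 900_000},
]
print(pick_mirror_disk(disks, 905_000)["name"])  # B: idle and near the target
```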

Now we move on to rotated-parity arrays. Writing a single block to a RAID0 or RAID10 array is simple, but in RAID5 each small write actually translates into two reads (the old data block and the old parity block), two XOR operations, and two writes. The Adaptec passes this test better than the other controllers. The 3ware is good at pure writing, but slower than its opponents at mixed reads/writes. The HighPoint has problems caching requests, and the Promise does not do deferred writing at all; the latter’s performance hit is catastrophic.
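The read-modify-write sequence behind that penalty can be shown explicitly: the new parity is derived from the old parity, the old data, and the new data by XOR. A minimal sketch with tiny 4-byte “blocks” instead of 8KB ones:

```python
def raid5_small_write(old_data: bytes, old_parity: bytes, new_data: bytes):
    """Small-write update: after 2 reads (old data, old parity), two XOR
    passes compute the new parity, then 2 writes (new data, new parity)."""
    step = bytes(p ^ d for p, d in zip(old_parity, old_data))  # remove old data
    new_parity = bytes(s ^ n for s, n in zip(step, new_data))  # fold in new data
    return new_data, new_parity

# Two data blocks and their parity, purely for illustration:
d0, d1 = b"\x01\x02\x03\x04", b"\x10\x20\x30\x40"
parity = bytes(a ^ b for a, b in zip(d0, d1))

new_d0 = b"\xff\x00\xff\x00"
_, new_parity = raid5_small_write(d0, parity, new_d0)
assert new_parity == bytes(a ^ b for a, b in zip(new_d0, d1))  # parity stays consistent
```

Four disk operations per logical write is why deferred writing matters so much here, and why the Promise, which lacks it, collapses.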

The same goes for RAID6. The controllers behave exactly as with RAID5, only slower: the algorithm now includes calculating and writing a second checksum, but the controllers’ processors cope with that and the standings remain the same.

By the way, the Areca with 2 gigabytes of onboard memory performs almost exactly like with 512 megabytes. Won’t we see any performance records then?

Let’s increase the queue depth to 16 requests.

The controllers are surprisingly similar with RAID0: four graphs almost coincide. The Adaptec stands out with its more effective deferred writing, though. This cannot be explained by a larger cache, because the 3ware and the Areca have just as much memory (and the Areca’s extra 1.5GB doesn’t show up at all, again); the Areca even has an identical processor.

The LSI and Promise are lagging behind again, but the gap isn’t large.

The different combinations of deferred writing, request reordering, and disk selection techniques produce widely varying results. The Adaptec is ahead at writing, again. This controller seems determined to prove that its deferred writing is the best! The LSI is downright poor at writing, with obvious caching problems that can hardly be explained by its having the smallest amount of onboard memory among the tested controllers.

However, the LSI competes successfully with the HighPoint and 3ware for the title of best reader from mirror-based arrays. Note how much faster these three controllers are than their opponents.

So, some controllers are better for writing, others for reading. You should start planning your server disk subsystem by determining what kind of load it is going to cope with. There are universal controllers, too. The 3ware is stable at any percentage of writes.

When there is a queue of requests, the controllers can more or less effectively cache them or perform multiple operations simultaneously. Just how effective are they, though? The Adaptec is good, while the HighPoint is only half as fast at writing (despite having exactly the same processor). The gap is not as catastrophic as at the shortest queue depth, though. The Promise is depressingly slow due to its lack of deferred writing.

The overall picture doesn’t change much with RAID6. One thing can be noted: for all its excellent writing, the Adaptec is slower than any other controller at the highest percentages of reads. We’ll see at heavier loads whether this is a coincidence or not.

Unfortunately, we cannot say anything special about the 2GB Areca: the increased amount of onboard memory does not show up at all. This is very odd.

The controllers all cope well, each in its own way, with the hardest load. The Adaptec still shows the best writing capability, the 3ware is ahead at reading while the Areca is right in between at mixed reads and writes, showing the most stable behavior.

Surprisingly enough, we’ve got similar standings with RAID10. The leaders are the same while the LSI turns in a poor performance. Its problems with writing show up under this heavy load as a serious performance hit at any percentage of writes.

The huge queue saves the day for the Promise. At such a queue depth the driver sorts some of the requests before they reach the controller. As a result, the Promise is as fast as the HighPoint here.
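The driver-side sorting mentioned here is essentially an elevator pass: queued requests are reordered by block address so the disk heads sweep in one direction instead of seeking back and forth. A minimal sketch of that general idea (an assumed behavior, not the Promise driver's actual code):

```python
def elevator_sort(requests, head_pos):
    """One-directional elevator pass: serve requests at or beyond the
    current head position in ascending LBA order, then wrap to the rest."""
    ahead = sorted(r for r in requests if r >= head_pos)
    behind = sorted(r for r in requests if r < head_pos)
    return ahead + behind

queue = [70, 10, 95, 30, 55]
print(elevator_sort(queue, head_pos=50))  # [55, 70, 95, 10, 30]
```

With only one request in flight there is nothing to sort, which is why this help only appears at long queue depths.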

The Adaptec is still superior at writing and, like with RAID6 at a queue depth of 16 requests, is not so good at high percentages of reads. This must be a characteristic trait of this controller. Its forte is in writing.

We’ve got the same leaders with RAID6, while the HighPoint is obviously slow. There must be flaws in its firmware; its resources are being wasted somewhere.