The SSDs were tested with the generic OS drivers and formatted in NTFS (wherever formatting was required) as a single partition with the default cluster size. For FC-Test we created 32-gigabyte NTFS partitions with the default cluster size (a drive smaller than 64 gigabytes is split into two halves). The SSDs were connected to a mainboard port with AHCI enabled. The sequence of tests is identical for every SSD, so all of them run under the same conditions.
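The setup described above can be sketched as a DiskPart script. This is an illustrative fragment, not our actual script: the disk number is an assumption, and the 32768 MB size applies only to the FC-Test partitions (the other tests use a single full-size partition).

```shell
rem Hypothetical DiskPart script: one 32 GB NTFS partition with the
rem default cluster size, as used for FC-Test. "disk 1" is assumed
rem to be the SSD under test -- verify with "list disk" first.
select disk 1
clean
create partition primary size=32768
format fs=ntfs quick
assign
```

Omitting the `unit=` parameter from the `format` command leaves the cluster size at the NTFS default (4 KB for volumes of this size).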

The most dramatic change in our new test method is the transition from the outdated Windows XP to Windows 7, which is especially well suited to SSD testing because it supports the TRIM command. As for the hardware, our testbed now includes a mainboard with Intel's ICH7 South Bridge. This controller is widespread and, unlike standalone disk controllers, does not depend on the bandwidth of a peripheral bus.

There are some changes in the list of our tests, too, although it is based on the old one. First, we have finally dropped PCMark 2004 and 2005, keeping only the Vantage version: these tests largely duplicate each other or other benchmarks and produce similar results. Besides, we suspect the next version of this benchmark is about to come out, 3DMark 2010 having been announced already. Next, we have abandoned the Workstation pattern because PCMark Vantage gives a better picture of a disk subsystem's workstation performance. We now use WinRAR version 3.91 and have replaced Perfect Disk with the Disk Defragmenter built into Windows 7.

Finally, we have adjusted some of the IOMeter tests, though this matters only for hard disk drives. For example, we now test disks in more detail under random-address loads, using a step of 2 rather than 4. The maximum data block size is now 2 megabytes, the largest that modern Windows OSes employ; with even larger requests, performance becomes dominated by the drive's sequential speed. However, for our SSD tests we will keep the older method for a while, although we no longer compare SSDs' performance on large data blocks.
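To make the block-size ceiling concrete, here is a minimal sketch of the request-size series such a test might sweep through. The starting size of 512 bytes and the doubling progression are assumptions for illustration; only the 2 MB maximum comes from the text.

```python
KIB = 1024

def request_sizes(start=512, ceiling=2 * KIB * KIB):
    """Hypothetical doubling series of I/O request sizes, in bytes,
    from 512 bytes up to the 2 MB maximum used by modern Windows."""
    sizes = []
    size = start
    while size <= ceiling:
        sizes.append(size)
        size *= 2
    return sizes

if __name__ == "__main__":
    for s in request_sizes():
        print(f"{s} B" if s < KIB else f"{s // KIB} KB")
```

The series ends exactly at 2 MB (2,097,152 bytes); anything beyond that point would, as noted above, be shaped by the drive's sequential speed rather than its random-access behavior.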

We will also keep the old method of testing multithreaded performance with a distance of 8 gigabytes between the threads (we have increased this distance to 100 gigabytes in our HDD tests). The reason is simple: SSDs have not yet reached capacities that would accommodate four separate threads spaced 100 GB apart. In fact, the distance between the threads is unimportant for SSDs: while it affects an HDD's speed (the seek distance of the read/write heads depends on it), an SSD merely has to handle the load intelligently, reading from all the controller channels in parallel.
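The capacity argument above can be checked with simple arithmetic. The sketch below assumes each thread starts at a multiple of the chosen distance from the beginning of the drive; the function names are invented for illustration.

```python
GB = 1024 ** 3  # bytes per gigabyte

def thread_offsets(threads=4, distance_gb=8):
    """Starting byte offsets of equally spaced test threads."""
    return [i * distance_gb * GB for i in range(threads)]

def fits(capacity_gb, threads=4, distance_gb=8):
    """Whether the last thread's start offset lies within the drive."""
    return thread_offsets(threads, distance_gb)[-1] < capacity_gb * GB

if __name__ == "__main__":
    # A 64 GB SSD: 8 GB spacing puts the fourth thread at 24 GB (fine),
    # but 100 GB spacing would put it at 300 GB (impossible).
    print(fits(64, distance_gb=8))    # True
    print(fits(64, distance_gb=100))  # False
```

With 8 GB spacing the fourth thread starts only 24 GB in, which even small SSDs of the day can accommodate, whereas the HDD method's 100 GB spacing would require a drive well over 300 GB.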