iSCSI arrays are not created equal

Posted on November 01, 2006

By Jacob Farmer

How do you differentiate one iSCSI storage array from another?

The marketplace for iSCSI storage devices is getting very complicated very quickly. Whereas a year ago there were only a handful of iSCSI vendors, today there are more than 50 by my count. Many of the iSCSI products represent the state of the art in storage networking technology and are suitable for the most demanding enterprise applications. Many others, by contrast, are only suitable for simple workgroup applications, and there are plenty of products in between. The tricky part is that the high end and the low end often look identical from a cosmetic standpoint and remain indistinguishable even when you’re comparing spec sheets and user interfaces.

The challenge to end users is that the magic of iSCSI disk arrays is almost entirely in the software. Just as you can run MS Access or Oracle on the same server and see markedly different database results, so too can two iSCSI devices with similar underlying hardware display extraordinary differences in reliability and performance.

The only way to tell iSCSI products apart is to look under the hood and ask a lot of good questions. Here are some things to look for:

I/O processing capabilities: Most users get fixated on bandwidth and fail to scrutinize the storage device’s actual ability to process I/O requests at high rates. Many iSCSI storage products run on x86 hardware and rely on software-based networking stacks. These products are inherently limited in the amount of I/O they can process. You can add more and more Gigabit Ethernet ports, but these boxes will eventually hit a ceiling on I/O processing, no matter how much wire bandwidth you attach.
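A back-of-envelope calculation makes the point. The per-I/O CPU cost, core count, and block size below are illustrative assumptions, not measurements of any particular product; the shape of the result, not the exact numbers, is what matters.

```python
# Back-of-envelope: when does a software iSCSI stack become CPU-bound?
# All figures below are illustrative assumptions, not vendor specs.

CPU_US_PER_IO = 30          # assumed CPU microseconds to process one I/O in software
CORES = 2                   # assumed cores available to the storage stack
IO_SIZE_KB = 8              # typical small-block database I/O

# CPU-limited ceiling: each core can process 1,000,000 / CPU_US_PER_IO I/Os per second.
cpu_iops = CORES * 1_000_000 // CPU_US_PER_IO

def wire_iops(gige_ports):
    """IOPS that N Gigabit Ethernet ports can carry at this block size."""
    bytes_per_sec = gige_ports * 125_000_000     # 1 Gbit/s is roughly 125 MB/s
    return bytes_per_sec // (IO_SIZE_KB * 1024)

# The box delivers the *minimum* of the two limits. Adding ports raises the
# wire ceiling, but past a point the CPU becomes the bottleneck.
for ports in (1, 2, 4, 8):
    print(ports, "ports:", min(cpu_iops, wire_iops(ports)), "IOPS")
```

Under these assumptions, going from four ports to eight barely moves the needle: the stack tops out at the CPU limit, which is exactly the cap the paragraph above warns about.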

Controller fault tolerance: Many iSCSI products are based on commodity white-box servers with off-the-shelf RAID cards. While there is nothing inherently wrong with white-box servers (and there’s a lot right, I might add), you need to look at the software architecture and see how it is going to achieve a suitable level of fault tolerance. Ideally, there will be no single point of failure, and all critical components will be intrinsically fault-tolerant.

Spindle management: As individual hard drives get bigger and bigger, users tend to fixate on their capacity needs without paying adequate attention to their disk spindle performance needs. Remember: Hard drives are mechanical devices with innate latency. If you don’t have enough spindles, or if your spindles are not managed properly, your performance will suffer. Imagine taking 10 servers, each with five direct-attached SCSI hard drives and a dedicated RAID controller, and consolidating that storage into a single shelf of 14 Serial ATA (SATA) disks. You would have gone from 50 10,000rpm disks and 10 RAID controllers down to 14 7,200rpm disks and one or two RAID controllers. You don’t need a storage expert to tell you that this could be problematic!
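The arithmetic behind that warning is worth spelling out. The per-spindle IOPS figures below are common rules of thumb for drives of that era, not vendor specifications:

```python
# Rough random-I/O capacity of the two configurations described above.
# Per-spindle IOPS figures are generic rules of thumb, not measured values.

IOPS_10K_SCSI = 130    # assumed random IOPS per 10,000rpm SCSI spindle
IOPS_72K_SATA = 75     # assumed random IOPS per 7,200rpm SATA spindle

before = 50 * IOPS_10K_SCSI    # 10 servers x 5 direct-attached drives
after = 14 * IOPS_72K_SATA     # one consolidated 14-disk SATA shelf

print("before:", before, "IOPS")
print("after: ", after, "IOPS")
print("ratio: ", round(before / after, 1))
```

Under these assumptions the consolidated shelf delivers roughly a sixth of the random-I/O capacity of the storage it replaced, even though its raw capacity may be far larger.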

One answer to this problem is intelligent spindle management. There are a handful of vendors that have developed intelligent storage controllers that optimize disk spindles. There are different techniques, but the gist is that as many spindles as possible are put to work simultaneously. Most importantly, the complex logic of these systems is masked from the storage administrator. Administrators only have to specify how much storage capacity they need for a given volume. Note that many iSCSI storage systems are based on conventional, off-the-shelf RAID controllers, so do not expect any spindle magic from these systems. These arrays will be fine for small workgroups, but will not be suitable for enterprise applications.

Another solution is to take some of the money you save on connectivity and parlay it into enterprise-class drives. Consider using iSCSI for the host connection and either Fibre Channel or Serial Attached SCSI (SAS) drives in the disk array. Of course, the best solution is to combine enterprise-class drives with intelligent spindle management.

Caching: Caching is the secret sauce in most storage arrays, but don’t fixate on the size of the cache. For iSCSI arrays based on commodity servers, it’s very easy to cram the motherboard with RAM and call that cache. However, if the cache is not “intelligent,” the additional size will not result in better performance.

One feature to look for is a segmented cache, where caching logic is applied on a volume-by-volume basis. Segmenting the cache gives more-consistent results and ensures a single host going hog wild will not kill performance for the rest of the servers on your iSCSI SAN.
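A minimal sketch of the idea: each volume gets its own fixed-quota LRU segment, so a flood of I/O against one volume can only evict that volume’s own blocks. The class, quota size, and eviction policy here are invented for illustration; real array firmware is far more sophisticated.

```python
from collections import OrderedDict

class SegmentedCache:
    """Sketch of a per-volume (segmented) read cache.

    Each volume gets its own fixed-quota LRU segment, so a burst of reads
    against one volume can only evict that volume's own cached blocks.
    """
    def __init__(self, blocks_per_volume=4):
        self.quota = blocks_per_volume
        self.segments = {}                      # volume id -> OrderedDict of blocks

    def put(self, volume, block, data):
        seg = self.segments.setdefault(volume, OrderedDict())
        if block in seg:
            seg.move_to_end(block)              # refresh LRU position
        elif len(seg) >= self.quota:
            seg.popitem(last=False)             # evict LRU within this volume only
        seg[block] = data

    def get(self, volume, block):
        seg = self.segments.get(volume)
        if seg is not None and block in seg:
            seg.move_to_end(block)
            return seg[block]
        return None                             # cache miss

cache = SegmentedCache(blocks_per_volume=2)
for b in range(100):
    cache.put("hog-volume", b, b)               # one host goes hog wild
cache.put("db-volume", 7, "hot block")
print(cache.get("db-volume", 7))                # the hog never touched this segment
```

With a single shared cache, the hog’s 100 blocks would have flushed the database volume’s working set; with segments, the quiet volume’s hit rate is unaffected.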

Finally, pay particular attention to the fault-tolerance model around the caching. At the very least, you want a battery backup for the cache. For systems that have redundant controllers, make sure there is support for cache coherency between the two controllers. That means that the contents of controller A’s cache are mirrored into controller B before the write from the host is acknowledged. This is a basic fault-tolerance technique used by enterprise-class storage controllers, but it is often absent in storage systems based on commodity hardware.
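The ordering of events in that write path is the whole point, and it can be sketched in a few lines. The class and function names below are invented for illustration; the sequence they model is the cache-coherency behavior described above.

```python
# Sketch of the mirrored write path described above: the host's write is
# acknowledged only after it sits in BOTH controllers' caches.

class Controller:
    def __init__(self, name):
        self.name = name
        self.cache = {}         # stands in for a battery-backed write cache

    def stage(self, block, data):
        self.cache[block] = data

def mirrored_write(primary, partner, block, data):
    primary.stage(block, data)      # 1. land the write in controller A's cache
    partner.stage(block, data)      # 2. mirror it into controller B's cache
    return "ACK"                    # 3. only now acknowledge the host
    # If A fails after the ACK, B still holds the data and can destage it.

a, b = Controller("A"), Controller("B")
status = mirrored_write(a, b, block=42, data=b"payload")
print(status, a.cache[42] == b.cache[42])       # ACK True
```

An array that acknowledges after step 1 alone is faster on paper but loses in-flight writes if controller A dies, which is precisely the gap between enterprise-class controllers and commodity builds.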

Conclusion

All iSCSI devices are not created equal. iSCSI can be a great solution and some day may replace Fibre Channel as the dominant interface for storage networking. In the meantime, buyer beware.


Jacob Farmer is chief technology officer at Cambridge Computer (www.cambridgecomputer.com) in Waltham, MA. He can be contacted at jacobf@cambridgecomputer.com.