Flash storage has matured, not into a single product that sits in a single location, but as a solution to fast I/O needs that can sit in many locations in the storage stack.

Flash storage is like magic dust that speeds up storage I/O. Sprinkle it on and, hey presto, reads and writes occur at warp speed. But where exactly should you sprinkle it to get the most magic?

You can add some flash storage magic to a server, put it in a storage controller, place it in the storage array where it pretends to be a super-fast disk, add just a dab to individual disk drives, pour some into a storage brick that presents itself as a single super-disk, or use it on its own as a flash storage array.

That's six choices, but there is no single best one; each has its own pros and cons.

Adding flash to a server means it acts as a subsidiary layer of memory between DRAM and disk. It can function as a cache, holding the most recently accessed data from disk so it can be re-read at cache speed, or it can hold an entire database working set so that applications get extremely fast access to data where time is money, such as in stock arbitrage operations.
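The caching side of this can be sketched in a few lines. Below is a minimal, illustrative model of a server-side flash read cache sitting in front of a slower disk backend; the class names, the block-level `read()` interface and the LRU eviction policy are assumptions for the sake of the sketch, not any vendor's actual design.

```python
from collections import OrderedDict

class FlashReadCache:
    """Toy model of a server-side flash read cache in front of a disk.

    `capacity` is the number of blocks the (hypothetical) flash device
    can hold; `backend` is any object with a read(block_id) method
    standing in for the disk array.
    """
    def __init__(self, backend, capacity):
        self.backend = backend
        self.capacity = capacity
        self.cache = OrderedDict()  # block_id -> data, kept in LRU order
        self.hits = 0
        self.misses = 0

    def read(self, block_id):
        if block_id in self.cache:
            # Cache hit: serve from flash, mark block most recently used.
            self.hits += 1
            self.cache.move_to_end(block_id)
            return self.cache[block_id]
        # Cache miss: fetch from disk, then populate the flash cache.
        self.misses += 1
        data = self.backend.read(block_id)
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        return data

class Disk:
    """Stand-in for the slow backing store."""
    def read(self, block_id):
        return f"data-{block_id}"

cache = FlashReadCache(Disk(), capacity=2)
cache.read(1); cache.read(2); cache.read(1)  # blocks 1, 2 miss; 1 hits
cache.read(3)                                # full, so block 2 is evicted
print(cache.hits, cache.misses)              # -> 1 3
```

Real products layer wear-levelling, write handling and persistence on top, but the hit/miss/evict loop above is the core idea that makes re-accessed data come back at flash speed.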

A variation on this is to put the flash in a storage array controller and have it cache array I/Os for all accessing servers, making it a shared resource. NetApp's Flash Cache does this for reads. The cache is not logically part of the storage array's capacity and is invisible from a storage provisioning point of view.

The next way of using flash storage is to make it part of the array's storage capacity and organise it so that specified data can be stored in it. The array's storage media is arranged into a hierarchy of tiers, from relatively slow, cheap and capacious drives (nowadays, 2 TB 7,200 rpm SATA) through, say, 10,000 rpm drives and on to 15,000 rpm Fibre Channel drives, which are the fastest possible disk drive storage when short-stroked. Faster still, as a so-called Tier 0, is flash storage, which has prompted storage analysts and product engineers to envisage just two tiers in future: SSDs for speed and bulk SATA disk for capacity.

One consequence of the use of SSDs as a storage tier is that array software is needed to detect the most active data and move it into the SSD storage. EMC has FAST, IBM has Easy Tier, and most other suppliers have equivalent technology now.
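The "detect the most active data and move it" step can be illustrated with a simple sketch. The following is in the spirit of tiering software such as EMC's FAST or IBM's Easy Tier, but the function name, the access-count heat metric and the fixed monitoring window are illustrative assumptions, not either vendor's actual algorithm.

```python
from collections import Counter

def plan_tier_moves(io_trace, ssd_capacity, current_ssd_blocks):
    """Illustrative hot-data detection for automated storage tiering.

    io_trace: iterable of block ids accessed during the last
    monitoring window (the heat metric here is a plain access count).
    Returns (promote, demote): blocks to move up to the SSD tier and
    blocks to move back down to disk.
    """
    heat = Counter(io_trace)
    # Rank every block by access count; the hottest ones belong on SSD.
    hottest = {block for block, _ in heat.most_common(ssd_capacity)}
    promote = hottest - current_ssd_blocks   # hot but still on disk
    demote = current_ssd_blocks - hottest    # on SSD but gone cold
    return promote, demote

# One monitoring window: block 4 is hottest, block 1 next,
# while blocks 2 and 3 (currently on SSD) have cooled off.
trace = [1, 1, 1, 2, 2, 3, 4, 4, 4, 4]
promote, demote = plan_tier_moves(trace, ssd_capacity=2,
                                  current_ssd_blocks={2, 3})
print(promote, demote)  # -> {1, 4} {2, 3}
```

Production tiering software works at coarser granularity (extents or sub-LUN slices rather than blocks), weights recency as well as frequency, and rate-limits the actual data movement, but the promote/demote decision follows this same shape.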

The next location on our SSD placement spectrum involves flash disappearing from general view inside a "super disk." There is only one supplier using this particular approach as far as we know, and that is Xiotech with its Hybrid ISE (Intelligent Storage Element).

The ISE is a sealed enclosure of 20 hard drives that presents itself to an array controller as a single super disk. It can be carved up into virtual allocation areas (LUNs). The ISE itself is self-managing and self-healing to the extent it can work around failed hard drives.

Xiotech's Hybrid ISE uses a combination of SSD and disk drives with data blocks placed either on disk or on flash according to internal ISE controller algorithms that monitor the I/O status of individual blocks of data. The net result, Xiotech claims, is that Hybrid ISE is faster at responding to I/O requests from servers while being cheaper than a separate flash array and doesn’t need array controller-level automated tiering software such as EMC's FAST or IBM's Easy Tier.

So far no other supplier is taking this approach. However, the Hybrid ISE is very new, and if it works as promised, other suppliers might take the same tack.

There is one other potential flash storage location, and that is inside a hard drive's enclosure. It functions as a drive-level cache, accelerating read I/Os from the drive. As with the "flash inside a super disk" approach, only one supplier is doing this: Seagate, with its Momentus XT 2.5-inch notebook drive. Seagate is bullish about its prospects, claiming it delivers something like 80 percent of the performance of an SSD but at a fraction of the cost. So far no significant OEM contracts for the supply of this drive have been announced, and no other disk drive manufacturers have announced plans for making a comparable product.

Which approaches are best? They all have their own advantages and are being proposed by flash and SSD suppliers to both server and storage suppliers. IBM, Dell and HP now sell flash-enhanced servers, and all mainstream storage array manufacturers have flash as a storage tier in their arrays. NetApp, with its Flash Cache, has taken an individual approach, while the remaining storage array suppliers prefer the flash tiering approach, even though it needs automated data movement software. As for Seagate’s method of putting the flash storage inside the hard drive enclosure, there is a question mark over the prospects for that product. An OEM win or two would move perception of the product into a more positive zone.

Flash isn't taking over from disk drives in general; far from it. But it is steadily replacing the fastest disk drives where stored data I/O speed is needed, and it's also going into servers as a subsidiary memory tier where disk drives simply can’t be used because they are too slow.

Flash memory is storage pixie dust, and the stuff is revolutionising data storage in servers, storage array controllers, storage tiers and beyond. Incidentally, whoever came up with the name "flash memory" was a genius. NAND doesn't have the same onomatopoeic ring to it at all, flash sounding exactly like what it does.
