Server way of locked-in storage

It is interesting that every vendor out there claims to be as open as they can be, but the reality is quite different: the competitive nature of the game is forcing storage vendors to talk open, while their actions are certainly not.

Confused? I am beginning to see a trend … a trend that forces customers to be locked in with a certain storage vendor. I am beginning to feel that customers are being given fewer choices, especially when the brand of server they select for their applications has implications on the brand of storage they will be locked into.

In the announcement, it was reported that HP is coming out with its 8th generation ProLiant servers. As quoted:

“The eighth generation ProLiant is turbo-charging its storage with a Smart Array containing solid state drives and Smart Caching.”

“It also includes two Smart storage items: the Smart Array controllers and Smart Caching, which both feature solid state storage to solve the disk I/O bottleneck problem, as well as Smart Data Services software to use this hardware”

From the outside, analysts are claiming this is a reaction to the recent EMC VFCache product (I blogged about it here), and HP was quick to position the EMC VFCache solution as a first-generation product, lacking the smarts (pun intended) of what the HP products have to offer. You can read about its performance prowess in the HP Connect blog.

The idea is very obvious. Put a PCIe-based flash caching card in the server, and use a proprietary caching/tiering technology that ties the server to a certain brand of storage. Only with this card, which (incidentally) works only with this brand of servers, will you, Mr. Customer, be able to take advantage of the performance power of this brand of storage. Does that sound open to you?

HP is doing it with its ProLiant servers; Dell is doing it with its ExpressFlash; EMC’s VFCache, while not advocating any brand of servers, is doing it because VFCache works only with EMC storage. We have seen Oracle doing it with Oracle Exadata: the Oracle Enterprise database works best with Oracle’s own storage, and the intelligence is in its SmartScan layer, a proprietary technology that works exclusively with the storage layer in the Exadata. Hitachi Japan, with its Hitachi servers (yes, Hitachi servers, which we rarely see in Malaysia), has had such a technology for the last 2 years. I wouldn’t be surprised if IBM and Fujitsu already have something in store (or perhaps I missed the announcement).

NetApp has been slow in the game, but we hope to see them coming out with their own server-based caching products soon. More pure-play storage vendors are already singing the tune of SSDs (though not necessarily server-based).

The trend is obvious, too, because the messaging is almost always about storage performance.

Yes, I totally agree that storage (any storage) has performance bottlenecks, especially when it comes to IOPS, response time and throughput. And every storage vendor is claiming that SSDs, in one form or another, are the knight in shining armour, ready to rid the world of lousy storage performance. Well, SSDs are not the panacea for storage performance headaches, because while they solve some performance issues, they introduce new ones elsewhere.

But it is becoming an excuse to introduce storage vendor lock-in, and how have customers responded to this new “concept”? Things are fairly new right now, but I would always advise customers to find out and ask questions.

Cloud storage for no vendor lock-in? Going to the cloud brings cloud service provider lock-in as well, but that’s another story.


About cfheoh

I am a technology blogger with 20+ years of IT experience. I write heavily on technologies related to storage networking and data management because that is my area of interest and expertise. I introduce technologies with the objective of getting readers to *know the facts*, and use that knowledge to cut through the marketing hype, FUD (fear, uncertainty and doubt) and other fancy stuff. Only then will there be progress.
I am involved in SNIA (Storage Networking Industry Association) and, as of October 2013, I have been appointed the SNIA South Asia & SNIA Malaysia non-voting representative to the SNIA Technical Council. I was previously the Chairman of SNIA Malaysia until Dec 2012.
I have recently joined Hitachi Data Systems as an Industry Manager for Oil & Gas in Asia Pacific. The position does not require me to be super-technical (which is what I love), but it helps develop another facet of my career, which is building communities and partnerships. I think this is crucial and more wholesome than just being technical alone.
Given my present position, I am not obligated to write about HDS and its technology, but I am indeed subject to the company’s Social Media Guidelines. Therefore, I would like to make a disclaimer that what I write is my personal opinion, and mine alone. I am responsible for what I say and write, and this statement indemnifies my employer from any damages.

HP is leveraging the tech from LSI (HP Smart Array cards are really LSI 92xx OEM’d) and its on-card caching software to do so. The cache works with SAS or SATA SSDs rather than a straight-through PCIe-based card.

As for VFCache, as I understand it, it’s vendor/array agnostic for the time being.

Essentially, the HP or LSI card is basically a RAID controller card with SAS/SATA ports on a PCIe card and, as such, it uses SAS or SATA SSD drives to act as a cache over and above the DRAM cache.
In turn, this HP/LSI RAID card is limited to the port speed of the controller card and drive (6 Gb/s, or about 600 MB/s at best) and is subject to the overheads of SATA and SAS, as well as cable length and any SAS switching in the drive array.

Whereas VFCache is based on the Micron P320h PCIe SSD card, which is straight through from the PCIe bus to the controller chip and onto the flash chips, with no protocol conversion or other bottlenecks in the data path.
Thus, the VFCache/Micron P320h can deliver data at 3 GB/s for reads and 2 GB/s for writes, versus a best rate of 600 MB/s (1200 MB/s with 2 drives) with the HP/LSI Smart Array PCIe card, and it does not suffer performance losses due to the protocol conversion and SCSI overhead of SAS. This also means that it can do so with lower latency and deliver higher IOPS than a SATA/SAS solution.
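To make the comparison above concrete, here is a back-of-envelope sketch of where the 600 MB/s figure comes from. It assumes 8b/10b line encoding on a 6 Gb/s SAS/SATA lane (the standard encoding at that generation) and uses the Micron P320h read rate quoted in the comment; the function name and numbers are illustrative, not from any vendor tool.

```python
def sas_effective_mb_per_s(line_rate_gbps: float,
                           encoding_efficiency: float = 8 / 10) -> float:
    """Effective payload bandwidth of one SAS/SATA lane in MB/s.

    line_rate_gbps: raw line rate in Gb/s (e.g. 6.0 for 6 Gb/s SAS/SATA).
    encoding_efficiency: fraction of raw bits carrying payload
    (8b/10b encoding uses 10 line bits per 8 data bits).
    """
    # Gb/s -> Mb/s of payload -> MB/s (8 bits per byte)
    return line_rate_gbps * 1000 * encoding_efficiency / 8

single_lane = sas_effective_mb_per_s(6.0)   # one 6 Gb/s SSD behind the RAID card
two_lanes = 2 * single_lane                 # two drives striped behind the card
p320h_read = 3000.0                         # MB/s, the PCIe card's quoted read rate

print(f"6 Gb/s SAS/SATA lane : {single_lane:.0f} MB/s")
print(f"two lanes striped    : {two_lanes:.0f} MB/s")
print(f"PCIe SSD (read)      : {p320h_read:.0f} MB/s "
      f"(~{p320h_read / single_lane:.0f}x a single lane)")
```

This reproduces the roughly 600 MB/s per-drive ceiling and shows the PCIe path's quoted read rate is about five times a single SAS/SATA lane, before even counting the protocol-conversion overhead mentioned above.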

Having said that, I personally own the LSI 9265-8i card and use SATA SSD drives on it as part of the cache, and I can tell you, it’s a fantastic card.

I’d also like to confirm it’s vendor agnostic; you can just do a Google search to find out if you don’t have a Powerlink account.


Very, very sorry for the late reply. There are a lot of things going on.

Thanks for sharing more info about the HP/LSI card. The comparison between HP’s and EMC’s VFCache offerings is important, because that is the whole point of what I am doing here: to share correct information as best I can. My objective is to educate.

I believe you are very much like that too, wanting to educate the people out there so they can make the right decisions based on the right information.