HDS has merged its high-end array code with its low-end HUS hardware. The unified file, block and object storage HUS 100 array is barely two months old and now has a larger brother.
HUS VM in HDS array range
The HUS VM is the enterprise version of HUS, and combines microcode from HDS's enterprise VSP array with the HUS …

COMMENTS

totally unique...

For IBM, this would be like taking the DS8000 code (on an SVC) and bundling it with the DS3524. Great idea, now we just need a name... V for virtual (obviously), and a number a tad below the 8000... 7000 maybe?

Yes, I know it is not that easy and IBM's storage still sucks, yet...

Joking aside, I want to have a look at the HUS VM; we are really happy with our AMS 23s.

Re: totally unique...

Why does IBM storage suck? The DS8800 is solid high-end, the V7000 is solid mid-range, and XIV is awesome in my opinion, although others disagree. ProtecTIER's inline, bit-level dedupe is better than anything from EMC or NetApp. SONAS is more than respectable as a high-end, distributed NFS platform. IBM's mid-range offerings did suck when they just had that rebranded LSI DS5000/3000 stuff, but with XIV and V7000 they arguably have the best mid-range storage portfolio in the industry.

AMS was pretty good many years ago, but it is ancient these days. Copy-on-write snaps and RAID 5/6... can't believe anyone still does that. Hitachi realizes it; HUS is the replacement for AMS.

The small ones (DS3) are rebranded LSI: too stupid to order their own spares, arbitrarily limited capabilities, worthless firmware (but a different one every other week, so there is still hope).

The medium ones (DS5) are hideously expensive relative to their performance.

The enterprise ones (DS8) are not as quick as other vendors' arrays; they broke the GUI two years ago, and it is starting to show its heritage from the 2105-700 it descends from. Actually, I have Hitachi AMS arrays which are consistently faster than my (granted, older) DS81xx and easier to administer. For an 8800 this may be different, though.

It is not all tears, though: sddpcm is nice and blends (rather unsurprisingly) really well into AIX, I do like the SVC, and the XIV shows potential.

It is still less than I would expect from IBM, and this is frustrating.

Yeah, it will. XIV hasn't used SATA or had a 79TB maximum per frame since the previous generation. They use nearline SAS now, with InfiniBand interconnects... the maximum per frame is 350TB or in that range. They still use the same 1MB-block mirroring superstructure, but that is all part of their RAID 10 derivative. It makes it really resilient without any RAID planning whatsoever.

Also, 79TB out of 180TB (or 350 out of 600) isn't that bad in real-world settings, as you can use all of the usable disk. XIV is inherently thin-provisioned and there are no RAID groups to plan. I have yet to see a DMX/VMAX or AMS that is more than 40% utilized. Theoretically they could have a higher utilization rate with weak RAID 5, but it never actually happens unless people can lay out their storage once and never change it.... XIV, compared to VSP, 3PAR, VMAX, etc., is really inexpensive disk as well. If you do a usable-TB to usable-TB price comparison against any of the previously mentioned, XIV is half the price.
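To make that utilization argument concrete, here is a minimal back-of-envelope sketch (the prices and utilization percentages are entirely hypothetical placeholders, not real quotes) of how cost per *actually used* TB shifts when a traditional RAID-group array sits at ~40% utilization while a thin-provisioned array can consume nearly all of its usable capacity:

```python
# Effective cost per used TB: list price divided by the capacity you
# actually get to consume (usable capacity x realistic utilization).

def cost_per_used_tb(price: float, usable_tb: float, utilization: float) -> float:
    """Return price divided by the TB actually consumable in practice."""
    return price / (usable_tb * utilization)

# Hypothetical numbers: a pricier traditional array stuck at 40% utilization
# vs. a cheaper thin-provisioned one running at 90% utilization.
traditional = cost_per_used_tb(price=500_000, usable_tb=100, utilization=0.40)
thin = cost_per_used_tb(price=250_000, usable_tb=100, utilization=0.90)

print(f"traditional: ${traditional:,.0f} per used TB")       # $12,500 per used TB
print(f"thin-provisioned: ${thin:,.0f} per used TB")         # $2,778 per used TB
```

Even at half the sticker price, the bigger lever is utilization: the traditional box in this sketch costs more than four times as much per TB that ever holds data.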

OK, so XIV is really easy to configure: it has an on/off button and that's it. Which is fine if you have an environment specifically suited to what XIV can offer, but most customers need flexibility, and in that sense XIV is a midrange, one-trick-pony disk solution. The EVA's bigger, but not more capable, brother.

XIV is a mistake and a dead end system which is why EMC/HDS don't even bother to mention it when they come over to trash talk their competition.

The only reason XIV hasn't fallen completely off the map is that people like you (or is it just you?) bring it up every time there's a disk array discussion.

Hmmm, it doesn't use SATA anymore, but instead now uses midline SAS drives, 7,200 RPM I believe, the same as the previous generation's SATA drives. So in reality it's not any faster, just a new I/O interface and a few tweaks to command queuing etc. It now uses InfiniBand at the back end; hurrah, that should be a bit cheaper to produce and more scalable than those pesky Cisco Ethernet switches you had before. But oh no, you still only support 180 disks? No, it doesn't have a 79TB capacity limit anymore, but your raw-to-usable is still sub-50% as XIV basically mirrors everything, and XIV's upgrade path has been Gen1 180 x 1TB drives, Gen2 180 x 2TB drives and Gen3 180 x 3TB drives.
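The sub-50% raw-to-usable claim can be sketched with the figures quoted in this thread: Gen1 shipped 180 x 1TB drives with roughly 79TB usable, so beyond the 2x mirroring penalty there is about 12% further overhead (spare capacity, metadata). That 12% is inferred from those two numbers, not an official specification, so treat this as a rough model only:

```python
# Rough raw-to-usable model for a fully mirrored (RAID 10-style) array
# like XIV: halve the raw capacity for mirroring, then subtract an
# assumed ~12% for spares and metadata (inferred from Gen1's
# 79TB-usable-of-180TB-raw figure, not a published spec).

def usable_tb(drives: int, drive_tb: float, overhead: float = 0.12) -> float:
    """Usable capacity after mirroring and fixed fractional overhead."""
    raw = drives * drive_tb
    return raw / 2 * (1 - overhead)

for gen, drive_tb in [(1, 1.0), (2, 2.0), (3, 3.0)]:
    raw = 180 * drive_tb
    print(f"Gen{gen}: {raw:.0f}TB raw -> ~{usable_tb(180, drive_tb):.0f}TB usable")
```

Under these assumptions every generation lands at about 44% raw-to-usable, which matches the "sub-50%" point: the drives triple in size across generations, but the mirroring superstructure takes its cut every time.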

I don't think that calling XIV a more capable, much more capable, version of the EVA is completely unfair. The EVA sold thousands and thousands of systems until the technology went out of date. Comparing the EVA with XIV today is unfair, as XIV is a completely different architecture with completely different functional capabilities, but in the sense of an on/off-switch subsystem, fair enough.

Will XIV be the answer for people who need super-high IOPS and response times? No, it will not. OK, so let's put those workloads in memory or on SSD in the server, where they belong for that performance. Now let's talk about the other 95% of workloads. For the 95%, XIV will meet their performance needs and is far and away the easiest SAN system to manage at the lowest cost. For most people, XIV will meet all the requirements for all of their workloads without any problems.

"XIV is a mistake and a dead end system which is why EMC/HDS don't even bother to mention it when they come over to trash talk their competition."

EMC and, to a lesser extent, HDS trash-talk XIV constantly. See Chuck's (EMC) blog. They hate the idea of offering a lower-cost system that will meet 90-plus percent of customers' needs at a fraction of the cost, instead of competing on benchmarks. EMC Symmetrix and HDS VSP sell to people for their 1% workloads, i.e. let's create a storage environment for the exception workloads. It is like putting every workload on a mainframe because you have one workload that requires a mainframe. XIV creates a storage environment for the 90% and treats the exceptions which won't work on XIV, if there are any, as exceptions.

"What HDS has done – adding high-end array code to a low-end array architecture – appears to be unique. If EMC were to do the same it would be akin to running VMAX code on the VNX hardware platform."

Err, well, if you check out the spec sheet, http://www.hds.com/assets/pdf/hitachi-line-card-storage-family-matrix.pdf, what they've really done is scale back the existing VSP architecture. This is nothing like what Chris is suggesting above, as they aren't even using the HUS/AMS architecture. Downsize the VSP, then weld in some BlueArc NAS, and hey presto, you have another unified solution.

BTW, I quite like the look of it, but will it sell? There seems to be a lot of functional and capacity overlap with the existing AMS/HUS range, and HDS high-end management has never been for the faint of heart.

For your HDS needs

1. It's not AMS, nor is it VSP, for hardware; it's based on a new custom ASIC with new controllers. Thus HDS now has three platforms to manage, and Nigel has serious concerns about scalability.

2. "HDS are marketing this with a capacity sweet spot of between 20-180TB", which leads to why you would ever compare it to a VMAX. This box belongs up against 3PAR, not a true high-end enterprise array (Nigel calls it Midrange 1.5). Once again, HDS is trying to invite comparison to the higher-end product rather than letting their sales guys sell the product where it should be sold.

3. Nigel points out that NFS duties are handled by a pair of BlueArc NAS file servers, so this is "unified" just like an EMC VNX, or in other words "unified marketing". You are still going to be upgrading two different code sets.

Re: For your HDS needs

I have to imagine, with the VSP code running on it, that they're going to price and position it as a VSP Lite, for those who want the virtualization features but don't want or need to purchase a full VSP (think VMAXe or VNXe from EMC), or for the folks who were going to integrate a HUS with a BlueArc.

The worry with midrange and virtualization is being able to push data through the engine to the virtualized storage fast enough, so when you're doing it you're going to keep your local storage capacity down. The USP-V had limitations, especially with cache, that led to HDS telling people not to put databases on virtualized storage (not sure if that carried over to the VSP).

Appears to be unique?

I was going to point out that IBM has loaded high end code onto lower end hardware with the Storwize V7000 (running SVC code), but I see I've been beaten to it :)

And regarding IBM's storage sucking... they were contracted to build a 120PB, 200,000-disk storage array for an unnamed customer (probably the military or some such), and they're also getting the business for the Square Kilometre Array storage, which will have to ingest AN EXABYTE A DAY (and sift through that to figure out WHAT to store and what to get rid of, and THEN possibly deduplicate and/or compress what it WILL store). No tenders or bids were put out; they went straight to IBM... so how much do they really suck then?