Hyperconverged Flash – Scaling

Scale Computing Briefing Note

There are generally three ways one can use flash in a storage system. From a data management perspective, the all-flash array is the easiest approach, as one does not need to decide what will go on flash and what will go on disk. The second, and more common, hybrid approach uses flash as a cache when reading and writing to disk. The final method is to use flash as a tier, moving data that needs faster IOPS to flash while keeping data demanding fewer IOPS on disk. With these last two approaches you have to give some thought as to which data should and should not be cached or tiered. My colleague, George Crump, details the nuances between these three approaches in his article “What’s The Difference Between Tiering And Caching?”

Scale Computing felt tiering was the best approach. It is releasing two new nodes: the HC2150 and the HC4150. The HC2150 uses a combination of 400GB to 800GB of SSD and 3TB to 18TB of 10K RPM SAS drives, and starts at $20,500. The HC4150 uses a combination of 800GB to 1,600GB of SSD and 6TB to 12TB of 10K RPM SAS drives, and starts at $35,875. The HC4150 also comes with 16 to 20 cores, versus 8 cores in the HC2150.

Workloads can be “pinned” to disk or to flash by specifying a value on a slider. Setting the slider to 1 pins the workload to rotational disk. Setting it to 11 pins it to flash. Yes, their slider goes to 11. Setting the slider to any number between 1 and 11 tells Scale Computing a rough percentage of the workload that you’d like to reside on flash.
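To make the slider behavior concrete, here is a minimal sketch of how such a setting could translate into a target flash percentage. This assumes a simple linear mapping between the endpoints; Scale Computing has not published its actual heuristic, so the function and its name are purely illustrative.

```python
def flash_fraction(slider: int) -> float:
    """Map a 1-11 slider value to an approximate fraction of the
    workload to place on flash.

    Hypothetical linear mapping: 1 pins the workload to rotational
    disk (0% flash), 11 pins it to flash (100%), and values in
    between request a rough percentage on flash.
    """
    if not 1 <= slider <= 11:
        raise ValueError("slider must be between 1 and 11")
    return (slider - 1) / 10


# Example: a slider value of 6 would request roughly half on flash.
print(flash_fraction(1))   # pinned to disk  -> 0.0
print(flash_fraction(6))   # mixed placement -> 0.5
print(flash_fraction(11))  # pinned to flash -> 1.0
```

The actual system presumably treats mid-range values as hints for its tiering engine rather than hard quotas, which is why the article describes the result as a “rough percentage.”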

Scale Computing is also announcing a new Disaster Recovery as a Service (DRaaS) offering. It allows each VM to replicate into the Scale Computing cloud, at intervals defined by the customer, over secure VPN and SSH connections.

Scale states that its customers can fail over to the cloud, or back to on-site, in minutes. Scale also supports snapshots for point-in-time recoveries. This is important since sometimes the events that take out a data center also end up corrupting the most recent replicated copy. A snapshot should allow for rollback to the last known good state. The replication is also controlled through the Scale Computing HC3 interface.

One concern that came up during the briefing is that the pricing seemed not quite fully baked. Starting at $100 per VM seemed a little vague, with no clear articulation of the long-term total cost of hosting a customer’s VMs in Scale’s cloud. The phrase “starting at” with no stated ceiling is particularly troubling. We expect that Scale will work out the exact details of the pricing structure soon. When it does, we believe the capability will be impressive and of value to many of its customers.

StorageSwiss Take

The approach that Scale takes with flash will allow customers to begin using flash in their HC3 environments by simply adding a new node to the architecture, with no replacement of existing nodes required. Time will tell whether the current offerings, which consist mainly of rotational disk with a relatively small amount of flash, will meet customers’ demands. But given that the architecture work is already done, adding more flash to the mix should be a simple matter of packaging.

The DRaaS offering is interesting, and is important since products based on KVM have historically been criticized for not having a VM-friendly DR methodology. This will allow customers to easily add a DR component to their HC3 systems. Hopefully Scale is able to offer this service at a price that customers will find appealing.


W. Curtis Preston (aka Mr. Backup) is an expert in backup & recovery systems; a space he has been working in since 1993. He has written three books on the subject, Backup & Recovery, Using SANs and NAS, and Unix Backup & Recovery. Mr. Preston is a writer and has spoken at hundreds of seminars and conferences around the world. Preston’s mission is to arm today’s IT managers with truly unbiased information about today’s storage industry and its products.