Tag Archives: Flash


(Excerpt from original post on the Taneja Group News Blog)

It’s time to start thinking about massive amounts of flash in the enterprise data center. I mean PBs of flash for the biggest, baddest, fastest data-driven applications out there. This amount of flash requires an HPC-capable storage solution brought down and packaged for enterprise IT management, which is where DataDirect Networks (aka DDN) is stepping up. Perhaps too quietly, they have been hard at work pivoting their high-end HPC portfolio into the enterprise space. Today they are rolling out a massively scalable new flash-centric Flashscale 14KXi storage array that will help them offer complete, comprehensive single-vendor big data workflow solutions – from the fastest scratch space, through the biggest-throughput parallel file systems, into the largest distributed object storage archives.

(Excerpt from original post on the Taneja Group News Blog)

In 2015 we finally saw VVOLs start to roll out, yet VVOL support varies widely and so far hasn’t been as impressive as we’d expected. Perhaps VMware’s own Virtual SAN stole some of its own show, but more likely spotty VVOL implementations just haven’t leveled the playing field with enterprise-grade VM-aware storage like that from Tintri. In fact, Tintri is still running away with the ball, having rolled out fast all-flash solutions earlier this year (at 72 TB and 36 TB effective capacity).

An IT industry analyst article published by SearchStorage.

Everyone is now onboard with flash. All the key storage vendors have at least announced entry into the all-flash storage array market, with most having offered hybrids — solid-state drive-pumped traditional arrays — for years. As silicon storage gets cheaper and denser, it seems inevitable that data centers will migrate from spinning disks to “faster, better and cheaper” options, with non-volatile memory poised to be the long-term winner.

But the storage skirmish today seems to be heading toward the total cost of ownership end of things, where two key questions must be answered:

How much performance is needed, and how many workloads in the data center have data with varying quality of service (QoS) requirements or data that ages out?

Are hybrid arrays a better choice to handle mixed workloads through advanced QoS and auto-tiering features?

All-flash proponents argue that cost and capacity will continue to drop for flash compared to hard disk drives (HDDs), and that no workload is left wanting with the ability of all-flash to service all I/Os at top performance. Yet we see a new category of hybrids on the market that are designed for flash-level performance and then fold in multiple tiers of colder storage. The argument there is that data isn’t all the same and its value changes over its lifetime. Why store older, un-accessed data on a top tier when there are cheaper, capacity-oriented tiers available?
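The capacity-tier argument above is ultimately simple arithmetic. As a minimal sketch (with illustrative, assumed per-GB prices that are not vendor quotes), comparing raw media cost shows how quickly a cheap cold tier pays off once most data ages out of the hot set:

```python
# Hedged sketch: hypothetical per-GB media prices, for illustration only.
FLASH_COST_PER_GB = 0.50   # assumed all-flash $/GB
HDD_COST_PER_GB = 0.05     # assumed capacity-tier $/GB

def media_cost(total_gb: float, cold_fraction: float) -> dict:
    """Compare raw media cost of all-flash vs. a two-tier hybrid
    that keeps only the hot fraction of data on flash."""
    all_flash = total_gb * FLASH_COST_PER_GB
    hybrid = (total_gb * (1 - cold_fraction) * FLASH_COST_PER_GB
              + total_gb * cold_fraction * HDD_COST_PER_GB)
    return {"all_flash": all_flash, "hybrid": hybrid}

# e.g., 100 TB where 80% of the data has aged out of the hot set
print(media_cost(100_000, 0.80))
```

With those assumed prices, the hybrid carries well under a third of the all-flash media cost – which is the whole TCO argument for tiering, provided the QoS machinery can keep the hot 20% actually on flash.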

It’s misleading to lump together hybrids that are traditional arrays with solid-state drives (SSDs) added and the new hybrids that might be one step evolved past all-flash arrays. And it can get even more confusing when the old arrays get stuffed with nothing but flash and are positioned as all-flash products. To differentiate, some industry wags like to use the term “flash-first” to describe newer-generation products purpose-built for flash speeds. That still could cause some confusion when considering both hybrids and all-flash designs. It may be more accurate to call the flash-first hybrids “flash-converged.” By being flash-converged, you can expect to buy one of these new hybrids with nothing but flash inside and get all-flash performance.

We aren’t totally convinced that the future data center will have just a two-tier system with flash on top backed by tape (or a remote cold cloud), but a “hot-cold storage” future is entirely possible as intermediate tiers of storage get, well, dis-intermediated. We’ve all predicted the demise of 15K HDDs for a while; can all the other HDDs be far behind as QoS controls get more sophisticated in handling the automatic mixing of hot and cold to create any temperature storage you might need? …
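That “any temperature” idea can be made concrete with a back-of-the-envelope model. Treating average latency as a simple weighted mix of a hot tier and a cold tier (an assumption for illustration; real QoS engines are far more sophisticated), you can solve for the fraction of I/Os that must land on flash to hit a latency target:

```python
def flash_fraction_for_target(lat_flash_ms: float,
                              lat_hdd_ms: float,
                              target_ms: float) -> float:
    """Fraction of I/Os that must be served from flash so the
    average latency meets the target, assuming a simple linear mix:
        target = f * lat_flash + (1 - f) * lat_hdd
    Clamped to [0, 1] since no mix can beat pure flash or pure HDD."""
    f = (lat_hdd_ms - target_ms) / (lat_hdd_ms - lat_flash_ms)
    return min(1.0, max(0.0, f))

# e.g., 0.2 ms flash, 8 ms HDD, 1 ms average target
print(flash_fraction_for_target(0.2, 8.0, 1.0))  # ≈ 0.90 of I/Os on flash
```

Even this toy model makes the point: a demanding average-latency target forces the overwhelming majority of hot I/O onto flash, which is exactly the placement problem that QoS-driven auto-tiering has to get right.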

(Excerpt from original post on the Taneja Group News Blog)

As machines become available with ever more memory in them, we’ve been seeing that memory put to a lot of good uses lately. Today PernixData released FVP 2.5, which updates their big 2.0 release that brought memory into their server-side storage acceleration solution alongside flash. Imagine if you could pool server memory across the virtual cluster and use it as a very fast, protected, persistent storage tier. That’s pretty much what PernixData FVP does with its Distributed Fault Tolerant Memory (DFTM) design.

(Excerpt from original post on the Taneja Group News Blog)

NexGen was one of the first real flash/hybrid storage solutions with QoS, and it leveraged PCIe flash (i.e., Fusion-io cards) to great effect – which we suppose had something to do with why Fusion-io bought them up a couple of years ago. But whatever plans were in the works were likely disrupted when SanDisk in turn bought Fusion-io, because we haven’t heard from the NexGen folks in a while – not a good sign for a storage solution. Well, SanDisk has now spun NexGen back out on its own. While it may be sink-or-swim time for the NexGen team, we think it’s a good opportunity for all involved.

Today at #snyc18, learning about the latest in #serverless. The opportunity is huge (iRobot is 100% serverless and loving it, per @ben11kehoe), but it’s not a panacea; there’s still lots of work to do to build up full production applications, according to Kelsey Hightower (Google) @kelseyhightower.