The model that Pure Storage outlined as its path forward is not net-new, as it builds on technology like last year's FlashArray//X and Gartner's Shared Accelerated Storage category. It is, however, a differentiated strategy within the industry, one that Pure believes will drive it to great heights.

That's not bad resolution in Pure's opening keynote – it's a sea of orange confetti, which accompanied the announcement of the elimination of the price differential on the FlashArray//X.

SAN FRANCISCO – Storage innovation is the key to the future of computing, and Pure Storage's technology holds the key to that innovation. Pure's data-centric architecture, which unites networked SAN and direct-attached DAS storage in what Gartner calls Shared Accelerated Storage, is that key. And it will all be in the hands of customers within the next quarter or two. That, in a nutshell, was the core vision and strategy that Pure's top executives laid out in the opening keynote at their Pure Accelerate 2018 event here on Wednesday.

“Our mission is to power innovation,” Pure’s CEO Charlie Giancarlo said, in kicking off the event. “At Pure, we share a conviction that storage is a really important part of making that innovation happen.”

Giancarlo compared digital technology to a three-legged stool – compute, networking and storage – with all three needing to advance for computing to advance as a technology. He said that while the pace of compute growth has slowed from the Moore's Law axiom, to roughly 10x over the last ten years, the industry has made up for that slower rate of growth through massive scaling in the data centre. Networking has continued to grow at a slightly faster pace – 10x in eight years. Storage, however, has been the outlier.

“Data has exploded and puts pressure on the other two systems,” Giancarlo said. “If storage doesn’t keep up, it will hold the whole industry back. Pure allows storage to advance as rapidly as networking and compute. That’s why I came to Pure, because I firmly believe that only Pure can restore this balance.”

Giancarlo said that Pure's growth has been impressive, with the company reaching a billion dollars in revenue five years after the introduction of its first product – an extremely high growth rate for a B2B company. Still, he acknowledged that customers care more about the future than the past, and want to know where Pure is going next.

“Data centre architectures have had to adjust to deal with different growth rates of compute, networking and storage,” he indicated. “In the 1980s and 1990s, application environments were large and immobile, and had small amounts of data compared to the application stack. Today we have a whole new scale-out model that lets apps be started and stopped at the touch of a button, under software control, and operating at petabyte scale. Applications are now small, and data is big. It’s a complete inversion of the model, and that calls for a new architecture. That architecture needs to make it easier for multiple applications to access data and get it in real time.”

To do that, Giancarlo stressed, the data centre has to become more data-centric in its architecture.

“The architecture that we have developed is a data-centric one, which marries on-demand stateless compute with data as-a-service,” he said. “Dedicated storage and stateless compute is the next logical step in data centre evolution. We just did a study with the MIT Technology Review. It found that in the C-suite, over 80 per cent believed that going forward, the speed of analyzing data would be one of their biggest competitive issues. There is a big gap with the perceived ability of IT to catch up. We believe that data-centric architecture is one of the new super powers you can use to make that happen.

“This will all be available now, or in the next quarter or two,” Giancarlo stated. “That’s why ‘new meets now’ is the theme of our show.”

“These new technologies will let us rewire your data centre,” said Matt Kixmoeller, Pure’s VP of Strategy. “They bring SAN, NAS and DAS architectures together into a data-centric architecture to provide the right amount of storage for each application when it needs it. Gartner calls this Shared Accelerated Storage.”

That’s the future, Kixmoeller stressed, and it’s something that NVMe-over-Fabrics will facilitate. However, he said that competitors are missing the boat there by introducing new Tier 0 storage arrays.

“This new Shared Accelerated Storage is the alternative,” he said. “It’s a big pool of diskless stateless elastic compute, with big pools of shared accelerated block, file and object storage – with open full stack orchestration that lets you run it all. It’s DAS and SAN living together in harmony, and scaling storage and compute independently.”

Kixmoeller said that Pure started down this path years ago with FlashBlade, and are now doubling down on it with the new FlashArray//X, an enhanced version of the NVMe-based platform they introduced a year ago.

“Today, we are going much, much deeper,” he said. “We are all-in on Shared Accelerated Storage, with a new family of Shared Accelerated Storage that can bring this to every workload in your data centre. This year, we are all-in on FlashArray//X – not the SATA-based FlashArray//M.”

That includes a major change in pricing, which eliminates the price differential between the //X and the //M entirely.

“With no premium for //X over //M, there is no reason now not to go all NVMe,” Kixmoeller stated. “Only Pure can deliver that for mainstream deployments.”

Pure’s other major announcements at Accelerate all explicitly fit within this same strategy: the AIRI Mini collaboration with NVIDIA, which is designed for AI workloads; a new reference architecture for an all-flash Platform-as-a-Service that combines Pure with the Red Hat OpenShift Container Platform; and the reboot of the Evergreen program as the Pure Evergreen Storage Service (ES2).

“The clear road map for Pure is to add greater simplicity to the storage model and the compute model that is added to it,” Giancarlo said. “I spent over 30 years in the communications business, and we came up with new architectures every few years. So does the compute industry. I think it says a lot about the storage industry that it has not come up with new architectures.”