For years, advances in enterprise storage were largely limited to higher-capacity drives and arrays and occasional boosts in throughput. Established players including EMC Corp., Hewlett-Packard Co., Hitachi Data Systems, IBM Corp. and NetApp set the agenda for high-end storage, with numerous others serving the midrange and low end of the market, and they remain formidable forces. But look who's now putting the squeeze on these players. They're facing competition from companies such as Fusion-io Inc., Gridstore Inc., Nasuni Corp., Nimble Storage Inc., Nutanix, Pure Storage Inc., SimpliVity Corp., SolidFire, Unitrends Inc. and Violin Memory Inc., plus many others that only a few years ago had yet to release their first products.

Even Microsoft has become a key player. Having acquired storage array supplier StorSimple two years ago and InMage last month, the company has taken a major step toward disrupting the status quo of how enterprises store and protect their data. In addition, Microsoft continues to advance its Storage Spaces technology in Windows Server, which virtualizes commodity disks. VMware Inc. has competing software for its virtualization suite called Virtual SAN (vSAN).

These are the storage disruptors: players reshaping how IT decision makers plan their storage architectures and roadmaps. Moreover, they're changing the economics of storage as data growth reaches an all-time high. Many of the newer storage providers have respectable revenues and customer bases and have attracted huge interest from venture capital investors. Some have gone public or become M&A targets.

SanDisk Corp. in June agreed to acquire flash storage vendor Fusion-io, led by CEO and former HP CTO Shane Robison, for $1.1 billion. Over the past year, Nimble and Violin have gone public, though both of their shares declined sharply before starting to bounce back more recently. Pure Storage in April raised $225 million, bringing its total funding to $470 million at a stated market value of $3 billion. Perhaps seeing how Nimble and Violin have struggled out of the gate, CEO Scott Dietzen recently was quoted as saying Pure Storage isn't ready to go public.

There are three main catalysts driving these changes. First is the growing presence of flash-based solid-state drive (SSD) arrays in datacenters, providing application performance unthinkable a few years ago, as noted in last fall's cover story ("Flash Invasion," November 2013). Most of the players offer both pure flash arrays and hybrid systems that accommodate both hard disk drives (HDDs) and SSDs. Established players including EMC and NetApp also now offer flash in their portfolios. EMC last month bolstered its XtremIO arrays with the ability to instantly create in-memory snapshots for petabyte-scale applications that require real-time performance. The company also optimized the flash option on its high-end VMAX platform. And in June NetApp released its largest-scale flash array to date, the FAS8080 EX, which it says scales to approximately 4 million IOPS.

The second driver is cloud storage, which is emerging as a tier that in many cases is replacing tape or secondary locations with drive arrays. Almost every storage hardware and software vendor is enabling the cloud as a storage tier or a target for backup and recovery and long-term archiving. Third is the vision every hardware and software infrastructure provider is evolving toward -- software-defined datacenters (SDDCs). Components of the SDDC are software-defined networking (SDN) and software-defined storage (SDS). SDS also powers the growing crop of converged systems, combining compute, storage and networking into a single system. Cisco, Dell Inc. and HP offer converged systems along with newer players Nutanix and SimpliVity.

Software-Defined Storage
It's early days for SDS, which uses software to decouple storage functions from specific physical hardware, much as server virtualization decoupled compute from physical servers to enable virtual infrastructure and orchestration. Experts say there are many ways to look at SDS. One is disaggregating the traditional SAN and storage arrays in favor of system-level storage such as Microsoft Storage Spaces in Windows Server or VMware vSAN.

Others describe SDS as software bundled with hardware that can orchestrate and intelligently manage storage tiers through APIs, application integrations and plug-ins to OSes and virtual machines (VMs). Because SDS is still emerging, experts warn, there's plenty of hype around it, as with any new technology. There are also a lot of nuanced interpretations.

Underscoring that point, analyst Anil Vasudeva, president and founder of IMEX Research, compared SDS to server virtualization during a recent webinar panel discussion presented by Gigaom Research. "Software-defined storage is a hypervisor for storage," Vasudeva says. "What a hypervisor is to virtualization for servers, SDS is going to do it for storage. [Of] all the benefits of virtualization, the reason why it took off was basically to create the volume-driven economics of the different parts of storage, servers and networks under the control of the hypervisor."

Prominent storage expert and fellow panelist Marc Staimer, president and chief dragon slayer of Dragon Slayer Consulting, had a somewhat different view. "In general, server virtualization was a way to get higher utilization out of x86 hardware," counters Staimer. "The concept of a hypervisor, which originally came about with storage virtualization, didn't take off because of what happened with storage virtualization [and] the wonderful storage systems that were being commoditized underneath a storage virtualization layer. What you're seeing today is commoditizing the hardware with software-defined storage."

Siddhartha Roy, principal group program manager for Microsoft (which sponsored the Gigaom Research webinar), says it's early days for SDS, especially among enterprises. "Enterprises will be a lot more cautious for the right reasons, for geopolitical or compliance reasons. It's a journey," Roy says. "For service providers who are looking at cutting costs, they will be more assertive and aggressive in adopting SDS. You'll see patterns vary in terms of percentages but the rough pattern kind of sticks."

SDS deployments may be in their early stages today, but analyst Vasudeva says it's going to define how organizations evolve their storage infrastructures. "Software-defined storage is a key turning point," he says. "It may not appear today, but it's going to become a very massive change in our IT and datacenters and in embracing the cloud."

Both analysts agree that the earliest adopters of SDS in cloud environments, besides service providers, will be small and midsize businesses. For Microsoft, its Storage Spaces technology in Windows Server is a core component of its SDS architecture. Storage Spaces lets administrators virtualize storage by grouping commodity drives into pools, carving those pools into virtual disks, and exposing the disks remotely to an application cluster over Server Message Block (SMB) 3.0.

"That end to end gives you a complete software-defined stack, which really gives you the benefit of a SAN array," Roy says. "We were very intentional about the software-defined storage stack when we started designing this from the ground up."
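The decoupling Roy describes can be pictured with a toy model. The sketch below is purely illustrative Python, not Microsoft's implementation, and the class and method names are invented: physical disks are grouped into a pool, and virtual disks are carved from the pool's aggregate capacity rather than from any single device, with resiliency handled in software.

```python
# Illustrative SDS-style pooling: disks become a pool, virtual disks come
# from the pool -- a toy analogy to Storage Spaces, not Microsoft's code.

class PhysicalDisk:
    def __init__(self, name, capacity_gb):
        self.name = name
        self.capacity_gb = capacity_gb

class StoragePool:
    def __init__(self, disks):
        self.disks = list(disks)
        self.allocated_gb = 0

    @property
    def capacity_gb(self):
        # The pool's capacity is the aggregate of its member disks.
        return sum(d.capacity_gb for d in self.disks)

    def create_virtual_disk(self, size_gb, resiliency="mirror"):
        # A two-way mirror consumes twice the logical size from the pool.
        cost = size_gb * (2 if resiliency == "mirror" else 1)
        if self.allocated_gb + cost > self.capacity_gb:
            raise ValueError("pool exhausted")
        self.allocated_gb += cost
        return {"size_gb": size_gb, "resiliency": resiliency}

pool = StoragePool([PhysicalDisk(f"disk{i}", 4000) for i in range(4)])  # 4 x 4TB
vdisk = pool.create_virtual_disk(3000, resiliency="mirror")
print(pool.capacity_gb, pool.allocated_gb)  # 16000 6000
```

The point of the model is that the application sees only the virtual disk; which physical spindles back it, and how resiliency is provided, is a software decision made at the pool layer.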

Microsoft Makes Aggressive Push into Storage

Microsoft is aiming to commoditize hardware with software-defined storage as a gateway to the cloud. Not that Microsoft is new to storage. Redmond has long offered storage replication and data protection, and support for storage interfaces such as iSCSI. Microsoft has also partnered with leading storage vendors including EMC Corp., Hewlett-Packard Co. and NetApp. But now, between the enhancements to Storage Spaces in Windows Server 2012 R2 and its push into hardware, Microsoft is becoming a storage company itself.

Microsoft's push into hardware promises to draw more users to its Microsoft Azure Storage cloud service. It remains to be seen how much emphasis the company will place on advancing its storage ambitions. "Microsoft has to determine if it really wants to push this," Enterprise Strategy Group analyst Mark Peters says. "It's such a big company with so many marketing mixes."

In recent months, Microsoft has indicated it has big plans for StorSimple, a provider of storage appliances it acquired two years ago. And at press time, Microsoft announced it has acquired InMage, which provides turnkey replication, backup and business continuity appliances designed for hybrid cloud environments. Microsoft will integrate InMage into its Azure Recovery Manager product line, giving it a more extensive line of storage hardware.

Azure Recovery Manager debuted earlier this year as Hyper-V Recovery Manager, which initially provided disaster recovery to a secondary datacenter. Microsoft renamed it at TechEd when it added the ability to connect to the Azure cloud as an alternative to requiring another datacenter. The service, released to preview in late June and set for general availability later this year, remotely and continuously monitors clouds with System Center Virtual Machine Manager. All links with Azure are encrypted in transit, with the option to encrypt replicated data at rest. Microsoft says administrators can recover VMs in an orchestrated manner to enable quick recoveries, even in the case of multi-tier workloads.

With the InMage Scout offering now added to the mix, Microsoft says Azure Recovery Manager will get a major boost. Scout continuously captures changes to data in real time from production servers in memory before it's written to disk and either backs it up or replicates it. It's designed to provide recovery of data for operations with near-real-time recovery time objectives. It supports Windows Server, Linux, and Unix physical machines and Hyper-V, VMware ESX, and Xen virtual machines.
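The capture-before-write pattern described here is the heart of continuous data protection (CDP). Below is a minimal, hypothetical sketch of the idea in Python, a toy model rather than InMage's code: every write is journaled before it's applied, so a replica can be rebuilt as of any earlier point.

```python
# Toy continuous-data-protection model: journal each write first, then
# apply it, so any checkpoint can be replayed. Illustrative only.

class CdpVolume:
    def __init__(self):
        self.blocks = {}    # current state: block number -> data
        self.journal = []   # ordered change log: (seq, block, data)
        self.seq = 0

    def write(self, block, data):
        self.seq += 1
        self.journal.append((self.seq, block, data))  # capture the change...
        self.blocks[block] = data                     # ...then apply it

    def replica_at(self, seq):
        # Replay journaled writes up to a checkpoint to build a recovery image.
        image = {}
        for s, block, data in self.journal:
            if s <= seq:
                image[block] = data
        return image

vol = CdpVolume()
vol.write(0, b"hello")
checkpoint = vol.seq          # bookmark taken before the bad write
vol.write(0, b"corrupted!")
print(vol.replica_at(checkpoint)[0])  # b'hello'
```

Replaying the journal to a checkpoint is what yields the near-real-time recovery point objectives the article mentions: the replica can be materialized as of any captured write, not just the last full backup.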

Following the closing of the acquisition last month, Microsoft pulled the InMage-4000, a converged compute-storage system, from the lineup. That appliance was available with up to 48 physical CPU cores, 96 threads, 1.1TB of memory and 240TB of raw storage capacity. It supported 10GbE storage networking and built-in Gigabit Ethernet connectivity. Microsoft said it will reintroduce a similar device at an unspecified date. Meanwhile, when combined with Azure Recovery Manager, Scout will allow customers to use the Microsoft public cloud or a secondary site for disaster recovery.

Giving its StorSimple line a boost, Microsoft in July announced it is launching the Azure StorSimple 8000 Series, which consists of two different arrays that offer tighter integration with the Azure public cloud. While the Microsoft StorSimple appliances have always offered links to public clouds including Amazon Web Services S3, the new Azure StorSimple boxes, equipped with both HDDs and SSDs, use Azure Storage exclusively. Azure Storage becomes an added tier of the storage architecture, enabling administrators to create virtual SANs in the cloud just as they do on-premises. Using the cloud architecture, customers can allocate more capacity as needed.
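The tiering idea, a fixed local tier holding hot data while cold data spills to cloud storage so effective capacity grows with the cloud, can be sketched in a few lines. This is a hypothetical model, not StorSimple's actual algorithm; `TieredStore` and its least-recently-used eviction policy are invented for illustration.

```python
# Toy two-tier store: a bounded local tier plus an unbounded "cloud" tier.
# Cold data is evicted to the cloud and recalled on access. Illustrative only.
from collections import OrderedDict

class TieredStore:
    def __init__(self, local_capacity):
        self.local = OrderedDict()   # most recently used entries last
        self.cloud = {}              # stand-in for cloud object storage
        self.local_capacity = local_capacity

    def put(self, key, value):
        self.local[key] = value
        self.local.move_to_end(key)
        while len(self.local) > self.local_capacity:
            cold_key, cold_val = self.local.popitem(last=False)
            self.cloud[cold_key] = cold_val   # tier cold data to the cloud

    def get(self, key):
        if key in self.local:
            self.local.move_to_end(key)
            return self.local[key]
        value = self.cloud.pop(key)  # recall from the cloud tier
        self.put(key, value)         # it's hot again: keep it local
        return value

store = TieredStore(local_capacity=2)
for k in ("a", "b", "c"):
    store.put(k, k.upper())
print(sorted(store.local), sorted(store.cloud))  # ['b', 'c'] ['a']
```

The application keeps addressing one namespace; whether a given block lives on the local array or in the cloud tier is the storage software's decision, which is the "added tier" behavior described above.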

"The thing that's very unique about Microsoft Azure StorSimple is the integration of cloud services with on-premises storage," said Marc Farley, senior product marketing manager for StorSimple at Microsoft, during a press briefing to outline the new offering. "The union of the two delivers a great deal of economic and agility benefits to customers."

Making the new offering unique, Farley explained, are two new integrated services: the Microsoft Azure StorSimple Manager in the Azure portal and the Azure StorSimple Virtual Appliance. "It's the implementation of StorSimple technology as a service in the cloud that allows applications in the cloud to access the data that has been uploaded from the enterprise datacenters by StorSimple arrays," Farley said.

The StorSimple 8000 Series lets customers run applications in Azure that access snapshot virtual volumes, which match the VMs on the arrays on-premises. It supports Windows Server and Hyper-V, as well as Linux and VMware-based VMs.

The new StorSimple Manager consolidates management and views of the entire storage infrastructure, consisting of the new arrays and the Azure Virtual Appliances. Administrators can also generate reports from the console's dashboard, letting them reallocate storage infrastructure as conditions require.

Farley emphasized that the new offering is suited for disaster recovery, noting it offers "thin recoveries." Data stored on the arrays in the datacenter can be recovered from copies of the data stored in the Azure Virtual Appliances. With the acquisition of InMage, it appears Microsoft will emphasize those appliances for pure disaster recovery implementations.

The StorSimple 8000 arrays support iSCSI connectivity as well as 10Gb/s Ethernet and inline deduplication. When using the Virtual Appliance, administrators can see file servers and create a virtual SAN in the Azure cloud. "If you can administer a SAN on-premises, you can administer the virtual SAN in Azure," Farley said.

Microsoft is releasing two new arrays: the StorSimple 8100, which has 15TB to 40TB of capacity (depending on the level of compression and deduplication implemented), and the StorSimple 8600, which ranges from 40TB to 100TB and reaches a total capacity of 500TB when using Azure Virtual Appliances.

The StorSimple appliances are scheduled for release this month. Microsoft hasn't disclosed pricing, but the per-gigabyte pricing will be higher than that of the Microsoft Azure Blob storage offering once bandwidth and transaction costs are taken into account.

Microsoft is extending its push into other areas of storage, too, such as disaster recovery and file management. Microsoft recently released the Azure Site Recovery preview, which lets organizations use the public cloud as an alternative to a secondary datacenter or hot site, and it has introduced Azure Files for testing. Azure Files exposes file shares using SMB 2.1, making it possible for apps running in Azure to share files between VMs using standard APIs, such as ReadFile and WriteFile. The shares can also be accessed via the REST interface to enable hybrid implementations (see "Windows File Shares in the Cloud").
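Because Azure Files surfaces a standard SMB share, applications can use ordinary file APIs rather than a cloud SDK. In the sketch below a local temp directory stands in for the mounted share so the example runs anywhere; on an Azure VM the path would instead be the mounted share endpoint (that mount path is an assumption, not shown here).

```python
# Ordinary file I/O against a share path -- the point of SMB-based Azure
# Files. A temp directory stands in for the mounted share in this sketch.
import os
import tempfile

share = tempfile.mkdtemp()               # stand-in for the mounted SMB share
path = os.path.join(share, "config.txt")

with open(path, "w") as f:               # one VM writes to the share...
    f.write("shared settings")

with open(path) as f:                    # ...another VM reads the same file
    print(f.read())                      # shared settings
```

No storage-specific library appears anywhere in the code, which is exactly the appeal: existing applications that already speak the file system can share data across VMs unchanged.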

Flash Expansion
Less than a year after our cover story on flash, its presence in the datacenter appears to be expanding rapidly. While flash arrays are far from pervasive, they're not a novelty in the datacenter, either, and a growing number of midsize organizations are deploying them. Take Lifescript, a Web publisher focused on women's health. The Mission Viejo, Calif.-based content producer generates terabytes of data, and because of the volume of e-mail newsletters it produces and distributes, performance is critical.

Lifescript is an early adopter of flash in its datacenter. A longtime EMC shop, Lifescript started using 3PAR storage about six years ago, when 3PAR was an upstart able to provide Web-scale performance with a hybrid array combining a small amount of flash with Fibre Channel drives. Lifescript Chief Technology Officer Jack Hogan found the data growth from all the content its properties produce was starting to require more capacity and greater performance. Knowing that some all-flash SSD arrays were emerging, Hogan decided to consider his options.

At the time, 3PAR storage wasn't available in a pure flash array configuration suited for Lifescript's requirements, though Hogan says it has since come out with one. Lifescript ended up deploying Pure Storage's FA-420 all-flash array, equipped with 22TB of raw storage and with compression and deduplication it can store close to 100TB, according to Hogan.

"We are now running 100 percent of our production storage on Pure Storage all-flash SAN arrays," Hogan says. "Ultimately, we were able to pay for the implementation of that by consolidating our datacenter [and] removing the 3PAR storage on the floor. It took up a lot of power and space and to add more throughput, we needed more spindles, which meant we would need more space and power."

The move has markedly improved the throughput of Lifescript's Exchange Server and business intelligence (BI) applications based on SQL Server Enterprise Edition, according to Hogan. For example, BI jobs that took six hours to process in the past now complete in 20 minutes. "We have really been able to throw some heavy, heavy workloads at these all-flash arrays, and by every measure it has far exceeded our expectations," he says. Of the SSD capacity Lifescript acquired, the company is using about 30 percent, Hogan notes.

"What we figured out when we started the company five years ago was that by coupling low-cost flash with data-reduction technologies, you can actually make it a viable mainstream storage technology," says Matt Kixmoeller, a vice president at Pure Storage.

Many storage vendors are extending their APIs to enable SDS or more automation of storage, such as provisioning specific tiers based on specific policies, conditions and applications. Pure Storage has APIs for OpenStack-based cloud infrastructure and supports VMware APIs. While it doesn't yet support Microsoft Storage Spaces, Kixmoeller says it supports the Microsoft Volume Shadow Copy Service (VSS) for integration with applications such as Exchange, SharePoint and SQL Server. It also supports Microsoft Multipath I/O (MPIO), which it can automate via the CLI. "But there's no official plug-in to the Microsoft management stack yet," Kixmoeller says.

One flash array vendor that does boast tight integration with Windows Server and Hyper-V environments is Violin. Microsoft and Violin forged a close technical partnership two years ago, in which the team in Redmond helped co-develop the Windows Flash Array. Microsoft wrote custom code in Windows Server 2012 R2 and Windows Storage Server 2012 R2 that interfaces with the Violin Windows Flash Array, launched in April.

The Windows Flash Array, designed to ensure latencies of less than 500 microseconds, comes with an OEM version of Windows Storage Server. "Customers do not need to buy Windows Storage Server, they do not need to buy blade servers, nor do they need to buy the RDMA 10GB embedded NICs," says Eric Herzog, Violin's CMO and senior vice president of business development. "Those all come prepackaged in the array ready to go, and we do Level 1 and Level 2 support on Windows Server 2012 R2."

At its April launch, the Violin Windows Flash Array carried a 64TB configuration with a street price of $395,000. Violin last month added a new entry-level 16TB system, which starts at $140,000. "We've added all of these capacities, which lets us go after smaller companies and departmental-level deployments," Herzog says. "It can easily fit two SQL Server databases, maybe three, depending on the size of the database. You can pay as you grow. You don't have to add new hardware, just purchase a license key." The entry-level system can scale in capacity to 64TB.

While Pure Storage and Violin emphasize pure flash arrays, most others -- including those that offer traditional HDD-based systems -- are offering hybrid solutions that have a mix of spinning disks and flash-based SSDs. Companies are largely competing on their software IP. Yet, as the economics of flash continue to become more appealing, expect suppliers to tip the balance of their hybrid arrays toward SSDs.

Seeing increased demand for flash, Nimble Storage, a fast-growing storage provider that offers hybrid arrays, in June launched its CS700 array, available with an all-flash shelf. The company says this new system, which incorporates the Nimble Cache Accelerated Sequential Layout architecture and a cloud management platform it calls InfoSight, is suited for VDI and high-transaction databases. Each node supports 16TB, and the system can scale to four nodes, or 64TB. With the full array, it can scale to a petabyte of capacity.

Nimble claims the new array supports 500,000 IOPS. "It's not just about the specs, it's how we get there," says Ajay Singh, vice president of product management at Nimble. "Candidly, not that many workloads need 500,000 IOPS and tens of terabytes of capacity. In a typical environment you might have a small percentage of your data that needs that, another big chunk that needs a balance of IOPS and capacity, and then another big chunk that just needs capacity. So one way we're different is with the same architecture -- we let you get all of those workloads in one system."

Web Scale and Converged Infrastructure
Also driving SDS and changing the enterprise storage scene is converged infrastructure, which combines network, server and storage in a single system. Cisco made a big push into converged infrastructure with its UCS platform, adding server blades to its network gear. Cisco's push into the server market didn't sit well with Dell and especially onetime partner HP, both of which followed suit with their own converged infrastructure offerings.

Dell has made its share of acquisitions of storage and networking companies over the years, but recently turned to Nutanix to develop the Dell XC Series of Web-scale Converged Appliances, set for release in the fourth quarter. Nutanix had a large presence at the Microsoft TechEd conference in Houston back in May, where it showcased its Virtual Computing Platform. Powered by the Nutanix OS (NOS), the Web-scale appliances have a control fabric with multi-cluster management that forms the basis of the converged compute, storage and networking system.

"We have eliminated the storage tier altogether," says Laura Padilla, Nutanix director of strategic alliances. "There's no SAN and there's no external NAS. As customers go to more virtualized workloads, the whole concept of networked storage is somewhat of a mismatch with virtualization. The reason is that in a networked storage model, where there's an external network tier, the hardware can become a bottleneck to performance and security."

Earlier this year, Nutanix added support for Windows Server 2012 R2 and Hyper-V in its offering. It already had supported VMware and KVM-based hypervisors. Gridstore is another up-and-coming provider of Web-scale storage infrastructure that supports Hyper-V. In fact, the company earlier this year decided to focus exclusively on the Microsoft datacenter platform. "We fit into Windows as a device driver, it's very clean and easy to fit in that level," says Kelly Murphy, founder and CTO of Gridstore. "With VMware and ESX, it is not easy and clean, so it was that technical aspect, as well."

Like Nutanix, Gridstore describes its infrastructure as Web scale, though it hasn't adopted a converged systems approach. Gridstore hardware consists purely of storage arrays that combine both HDDs and SSDs. At the Microsoft Worldwide Partner Conference (WPC) last month in Washington, D.C., Gridstore launched a larger 48TB node that can scale up to 250 systems, or 12PB. The company also says its hardware will integrate with Microsoft System Center and its Cloud OS platform.

Microsoft's Expanding Storage Portfolio

Key components of Microsoft's storage offerings and technology
Azure StorSimple 8000 Series: These latest SAN arrays, due out this month, are equipped with both HDDs and SSDs. Enterprise customers can deploy the arrays, which are tightly integrated with the Microsoft Azure public cloud service, for primary storage. In Azure, customers can use the StorSimple Virtual Appliance for tiered storage. The arrays support inline deduplication, compression and automatic tiering, with a choice of 10Gbps Ethernet and iSCSI connectivity. The arrays sit between physical and virtual application servers and the StorSimple Virtual Appliance in the Azure cloud. The new rollout contains the following components:

StorSimple 8100: 15TB to 40TB local, 200TB with cloud

StorSimple 8600: 40TB to 100TB local, 500TB with cloud

InMage Scout: Microsoft last month announced it has acquired InMage, supplier of Scout, a cloud-based business continuity offering. Microsoft said it will integrate Scout with its new Azure Recovery Manager. Scout continuously captures changes to data in real time from production servers in memory before it's written to disk and either backs it up or replicates it. It's designed to provide recovery of data for operations with near-real-time recovery time objectives. It supports Windows Server, Linux and Unix physical machines, and Hyper-V, VMware ESX and Xen virtual machines.

Cloud As a Target
As the price of cloud storage declines almost monthly, more and more companies are using the cloud as a second or third tier of their storage infrastructures. All of the major storage software and backup and recovery providers support cloud storage in some way, though most have preferred providers or methods of connectivity.

Unitrends Inc., perhaps the most established of the disruptors, is among a number of vendors that offer storage appliances linked to the public cloud for disaster recovery. Its current offering uses the company's own cloud service. Chief Technology Officer Mark Campbell hints that Unitrends will offer connectivity to other clouds down the road. While Campbell says Unitrends can provide 24x7 "white glove" service with its own cloud, many customers will want lower-cost alternatives, especially for less-critical systems and data.

"The cost advantage we perceive will continue to get better with the public cloud with services such as Azure," Campbell says. "So we want to make sure we're offering different levels of SLAs and different levels of providing for our customers going forward."

Whether it's SDS, flash, the growth of Web-scale and converged systems, or the cloud, few would dispute that enterprise storage is going through a transition. Who will win, lose and set the agenda for next-generation storage is very much in play, but it's clear that unconventional players like Microsoft are moving aggressively into storage (see "Microsoft Makes Aggressive Push into Storage"), hoping to reshuffle the deck. As some say traditional SAN and NAS are becoming legacy systems, others are giving them a new lease on life.

"A few years ago, storage was as boring as heck," says Enterprise Strategy Group analyst Mark Peters. "Hearing about new developments was like the release of new tires for cars. Now we have all kinds of fun things going on."