I have been working for technology companies for 33 years now, so I don't know why I'm always surprised at the technology myths that proliferate. For example, there is a popular notion running around that Solid State Drives (SSDs) will replace Hard Disk Drives (HDDs) as the dominant storage media. So, which will win? Let me give you my answer upfront before I take you through my arguments: both win, and the market size for both continues to grow.

As I offer up this data and this perspective, I'll repeat some advice I received early in my career. A mentor once told me that when searching for the truth in business and technology, "look to the economics" (maybe my version of "go to the mattresses" for you "Godfather" fans).

Here is what I read and hear on a regular basis regarding the battle of HDDs and SSDs:

SSDs run circles around HDDs for performance

SSDs will soon replace HDDs

The improving density of SSDs will collapse the current cost premium over HDDs

Sound familiar? Well, as with most claims of this ilk, there are some elements of truth to them but each of these statements goes too far when viewed in light of some data.

Performance

Yes, it's true that SSDs are fundamentally faster than HDDs. This is particularly true of random I/O data reading (as with databases). With large-block sequential data (e.g., rich media like video), however, the difference tends to be very small.

Replacing HDDs?

Here's a claim where the data just doesn't add up. While the attractiveness of SSD technology would lead one to believe this, the economics really don't hold up. I should probably point more specifically to NAND flash here (the "solid state" in SSDs), because another solid state technology might actually achieve this, but that appears to be years and tens of billions of dollars away.

Let's talk for a minute about the world's demand for storage. Last year, approximately 2,500 exabytes of data were created and/or replicated, and that figure is doubling about every two years. This demand needs to be serviced largely by HDDs and SSDs. Last year, the NAND flash industry produced somewhere between 30 and 40 exabytes of storage, with somewhere around 3.5 exabytes finding its way into SSDs (the balance went into phones, tablets, cameras, etc.). What is that? All of flash covers roughly 1.4% of our total storage need, and SSDs alone a small fraction of that. But production is ramping up... at a capital cost of about $1.5 billion per exabyte (semiconductor fabs are expensive)! So, from an available-supply perspective, SSDs replacing HDDs seems implausible.
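Those back-of-envelope numbers are easy to sanity-check. A quick sketch, using the rough figures above rather than audited industry data:

```python
# Rough supply-vs-demand arithmetic for the figures cited above.
# All numbers are the article's approximations, not audited data.
data_created_eb = 2500       # exabytes created and/or replicated last year
flash_produced_eb = 35       # midpoint of the 30-40 EB NAND flash output
flash_in_ssds_eb = 3.5       # the portion of flash that shipped in SSDs
fab_cost_per_eb = 1.5e9      # ~$1.5B of fab capacity per exabyte

flash_share = flash_produced_eb / data_created_eb   # all flash vs. demand
ssd_share = flash_in_ssds_eb / data_created_eb      # SSDs alone vs. demand
capex_to_cover_rest = (data_created_eb - flash_produced_eb) * fab_cost_per_eb

print(f"All NAND flash covers {flash_share:.1%} of demand")        # 1.4%
print(f"SSDs alone cover {ssd_share:.2%} of demand")               # 0.14%
print(f"Fab capex to flash the rest: ${capex_to_cover_rest / 1e12:.1f} trillion")
```

Even ignoring the doubling of demand every two years, building enough fabs to serve last year's demand alone runs into the trillions of dollars, which is the economic wall the supply argument rests on.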

Improving Technology Makes SSDs Cheaper

Agree. NAND flash technology is at the 21 nm line-width point, with plans to move to 19 nm and then 16 nm. Storage density is further improving with the use of multi-level cell (MLC) capabilities versus single-level cell (SLC). This is bringing down the cost of solid state storage in much the same way the areal density increases seen in HDDs brought down the cost of hard drive storage.


Notice how I said "brought" down (i.e., past tense)? Because areal density growth in the hard drive industry has slowed to a crawl, the rapid erosion of cost/GB has also slowed to a crawl. Now the HDD industry needs to move to its next technology (HAMR?) to continue to take cost out, and HAMR is a number of years away. HAMR will require significant capital investment by the HDD companies. Significant capital investment will be required anyway to keep up with storage demand (even more so with slowing areal density growth). This all spells a flattening of HDD costs for the foreseeable future. Some would speculate that the renewed interest in improving the utilization of HDD capacity is an artifact of these economics.

Here's what's preventing a complete collapse of the price difference between the two technologies. The NAND flash suppliers have a problem similar to the HDD manufacturers': the implications of shrinking technology geometries. As the line widths shrink, the ability of flash to sustain multiple write-erase cycles declines. To make up for this deficiency, sophisticated error-correction algorithms and "brute force" overprovisioning (to allow some cells to wear out) are being used. This and the above-mentioned semiconductor fab costs tend to blunt the progress SSDs are making in closing the cost gap with HDDs.
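As a hypothetical illustration of that endurance trade-off (the parameters below are made-up round numbers, not any vendor's specifications): a drive's usable write endurance scales with the cells' program/erase cycles, and overprovisioning claws endurance back, at the cost of sellable capacity, by lowering write amplification and spreading wear.

```python
# Hypothetical SSD endurance model. pe_cycles, overprovision and
# write_amp are illustrative inputs; real endurance depends on the
# controller, ECC strength and the workload.
def drive_writes_per_day(pe_cycles, overprovision, write_amp, years=5):
    """Full drive writes per day the flash can absorb over its life.

    Host data writable per unit of raw flash is pe_cycles / write_amp;
    the user only sees (1 - overprovision) of the raw capacity.
    """
    days = years * 365
    return pe_cycles / (write_amp * (1 - overprovision) * days)

# Shrinking geometries cut P/E cycles; heavier overprovisioning (which
# lowers write amplification) buys some of that endurance back.
old_node = drive_writes_per_day(pe_cycles=5000, overprovision=0.07, write_amp=3.0)
new_node = drive_writes_per_day(pe_cycles=3000, overprovision=0.28, write_amp=1.5)
print(f"older node, 7% spare area:  {old_node:.2f} drive writes/day")
print(f"newer node, 28% spare area: {new_node:.2f} drive writes/day")
```

The catch, of course, is that every point of spare area is raw flash the customer pays for but never sees, which is exactly why the cost gap refuses to close on schedule.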

Having made my arguments that SSDs won't take over the world, I have to say that IT solutions need this technology. SSDs aren't the answer to all our storage needs, but they allow us to address a pressing storage requirement. At a high level, storage is called on to produce two key deliverables:

Make data available to an application or user in an appropriate timeframe (i.e., performance), and in today's environment this need is growing.

Store data reliably (i.e., capacity), and in today's environment this need is growing.

By and large, to date, systems with hard drives have been architected to deliver both of these capabilities. It's hard to argue that hard drives haven't done a good job of delivering affordable capacity. But to deliver against the performance requirements of IT solutions, hard drives:

Have been developed with higher speeds, but arguably little progress has been made in the last ten years

Have been "short stroked" (a technique limiting the stroke of the actuator to improve performance)

Have been grouped together to allow striping of data across a large number of drives to aggregate their performance.

With each of these approaches, the user suffers from higher power requirements, and with the last two the system has been overprovisioned (leaving stranded capacity) to deliver performance. This all adds up to significant system cost that can be avoided with new storage architectures.
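To make stranded capacity concrete, here's a hypothetical sizing sketch. The drive specs and workload are illustrative round numbers (a ~200-IOPS, 2 TB hard drive serving a 20 TB, 20,000-IOPS workload), not measurements:

```python
import math

# How many hard drives does a workload need when spindles must be
# added for IOPS rather than for capacity? (Illustrative figures.)
def drives_needed(capacity_tb, iops, drive_tb, drive_iops):
    for_capacity = math.ceil(capacity_tb / drive_tb)
    for_performance = math.ceil(iops / drive_iops)
    return max(for_capacity, for_performance)

n = drives_needed(capacity_tb=20, iops=20_000, drive_tb=2, drive_iops=200)
stranded_tb = n * 2 - 20   # capacity bought but never needed
print(f"{n} drives to meet the IOPS target, {stranded_tb} TB stranded")
```

Ten drives would hold the data; it takes a hundred to deliver the I/O, and the other 180 TB of capacity sits stranded, drawing power.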

Here's where SSDs come in. SSDs are proving to be a technology answer to the new generation of storage needs, both the growing performance and the growing capacity requirements. Here's the simple, logical way to think about SSDs and HDDs and their role in storage solutions. Use the right tool for the job. That is, take advantage of SSDs for performance (particularly small block, random I/O) and HDDs for capacity.

You might challenge me now and ask, "Does adding SSDs (flash) to IT solutions make economic sense?" SSDs are expensive, but, used appropriately, they can also minimize the number of HDDs required in a given solution. The secret is in optimizing the use of both, that is, avoiding overprovisioning of either. Thin provisioning of capacity has become popular as a cost saver; thin provisioning of performance saves cost in much the same way. (This is a benefit of the virtualized caching or tiering capabilities of newer storage solutions.)
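Here's a hypothetical cost comparison under the same kind of illustrative assumptions (made-up drive prices and specs): an all-HDD array sized for IOPS versus a hybrid in which a small SSD tier absorbs the random I/O and the HDDs are sized only for capacity.

```python
import math

# Workload: 20 TB of data, 20,000 random IOPS. All drive specs and
# prices are made-up round numbers for illustration only.
hdd_tb, hdd_iops, hdd_price = 2, 200, 150
ssd_tb, ssd_iops, ssd_price = 0.4, 40_000, 400

# All-HDD: the spindle count is set by the IOPS target, not capacity.
hdds_only = max(math.ceil(20 / hdd_tb), math.ceil(20_000 / hdd_iops))

# Hybrid: SSDs absorb the random I/O; HDDs are sized for capacity.
ssds = math.ceil(20_000 / ssd_iops)
hdds = math.ceil(20 / hdd_tb)

print(f"all-HDD: {hdds_only} drives, ${hdds_only * hdd_price:,}")
print(f"hybrid:  {ssds} SSD + {hdds} HDD, ${ssds * ssd_price + hdds * hdd_price:,}")
```

Under these toy numbers the hybrid comes in at a small fraction of the all-HDD cost, which is the "right tool for the job" argument in miniature.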

I'll close with a few proof points. Look to the latest economically sensible storage solutions that are answering today's performance and capacity calls. The vast majority of affordable "Ultrabooks" incorporate both flash and hard drive technology, as does Apple's latest Fusion Drive.

Tom Major is President of Starboard Storage. He joined Starboard from Seagate Technology, where he was senior vice president of Product Management and Business Operations. Before Seagate, Major worked at LeftHand Networks as chief strategy officer. Previous positions included vice president and general manager, Disk Business Unit, at StorageTek and vice president of Network Storage Marketing at HP, where he spent 21 years.
