
Inside System Storage -- by Tony Pearson

Tony Pearson is a Master Inventor and Senior Software Engineer for the IBM Storage product line at the
IBM Executive Briefing Center in Tucson, Arizona, and a featured contributor
to IBM's developerWorks. In 2016, Tony celebrates his 30th anniversary with IBM Storage. He is
author of the Inside System Storage series of books. This blog is for the open exchange of ideas relating to storage and storage networking hardware, software and services. You can also follow him on Twitter @az990tony.
(Short URL for this blog: ibm.co/Pearson)

Two years ago, the folks at the University of Toronto asked me to help their graduate students build a "Watson" running entirely on IBM SoftLayer, to see if this would be a worthwhile class project. Needless to say, it was more difficult than they expected, but we managed to pull it off that summer, with the system able to answer a handful of simple questions from a single-page corpus.

Last month, [Industry Leaders Establish Partnership on AI], combining talents from Amazon, DeepMind/Google, Facebook, IBM and Microsoft to form a non-profit that will explore best practices and ethical questions related to Watson and other Artificial Intelligence applications.

Since data is at the core of any Artificial Intelligence, IBM is pleased to announce today that IBM Cloud Object Storage System is now available on IBM SoftLayer. This is based on the Cleversafe technology IBM acquired last year.

While other cloud service providers have offered data storage in the cloud, this new offering also allows hybrid configurations with geographically dispersed erasure coding. Unlike RAID, which protects against the loss of one or two drives, erasure coding can protect against a larger number of concurrent failures. For example, using an Information Dispersal Algorithm of "7+5", where data is encoded into twelve slices on independent disks and any seven slices are enough to read it back, the system can lose up to five disk drives without losing any data.

Combining this with a Geographically Dispersed Configuration across three or more sites means that you can lose an entire data center, holding four of the twelve slices, and still have instant full access to all of your data from the eight slices at the other locations. In the graphic, you see two on-premises data centers combined with a third location in IBM SoftLayer.
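
To make the arithmetic concrete, here is a minimal Python sketch (my own illustration, not Cleversafe code; the slice layout is just the example above) showing why a "7+5" dispersal spread evenly across three sites survives both five drive failures and the loss of an entire site:

```python
# Information Dispersal Algorithm "7+5": data is cut and encoded into 12 slices,
# and any 7 slices are sufficient to reconstruct the original data.
WIDTH = 12       # total slices written
THRESHOLD = 7    # slices needed to read the data back

def tolerated_failures(width, threshold):
    """Maximum number of slices (drives) that can be lost without losing data."""
    return width - threshold

def survives_site_loss(width, threshold, sites):
    """If slices are spread evenly across sites, does losing one whole site still
    leave enough slices to reconstruct the data?"""
    slices_per_site = width // sites
    remaining = width - slices_per_site
    return remaining >= threshold

print(tolerated_failures(WIDTH, THRESHOLD))           # 5 concurrent drive failures
print(survives_site_loss(WIDTH, THRESHOLD, sites=3))  # True: 8 slices remain, 7 needed
```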

Today, I met with Teresa Ferraro and Mike Buttrum from FirstRain in their Manhattan office in downtown New York City. IBM recently contracted FirstRain to provide IBMers like myself with analytics on publicly-available news to keep us informed for business meetings. Here's how IBMers can get the most out of this service.

Basically, FirstRain takes a list of companies and topics you care about and generates summaries of the most relevant publicly-available news. You can organize the results into different channels. Here I have seven channels.

Companies to watch refers to existing or prospective clients that I plan to talk with soon. Some of my colleagues are assigned to specific clients, so they can set this up once and enjoy the news for the rest of the year. I, on the other hand, meet with different clients every week, so I will be updating this list frequently.

I have divided the Competitors between major ones, and smaller startups. Since I am often working with business partners and distributors, I made that a separate channel as well.

For conferences where I don't know which companies will attend, such as the IBM Technical University, I can set up information by territory. Here is one for Brazil.

I also attend industry-oriented events, so I can pick those vertical markets that might be helpful with dinner conversations. In this example, I chose Energy, Electric Utilities and Gas Utilities.

Once you have your channels configured, you get your results in various sections:

Management Changes lists any changes in top C-level positions: who left the company and who was recently hired.

Key Developments covers news such as mergers and acquisitions, and government regulations.

First Reads prioritizes the top six articles for your channel. You can access more, but these six will get you started as you have your morning coffee.

First Tweets gives you the six most relevant tweets, in case the articles above were just "TL;DR".

A section on Business Influencers and Market Drivers shows who the big players are and what topics are driving the most conversation. Here's an example from my Energy/Electric/Gas channel:

The Most Talked About section covers quotes and commentary about the most talked about companies in your channel.

With most news sources focused on politics, weather and celebrity gossip, it is nice to have a quicker, more focused approach to get the news I need to prepare for my client briefings. Special thanks to my hosts Teresa and Mike for their hospitality!

This week, I am in Las Vegas for [Edge 2016], IBM's premier IT Infrastructure conference of the year.

Day 4, the last day of the conference, is only a partial day, and many people opted to leave on Wednesday evening or Thursday morning instead. The breakfast and lunch meals had fewer people than on previous days. Here is my recap of the Day 4 Thursday breakout sessions.

Supermicro is more than happy to customize these, upgrading the CPU, RAM, disk or networking connectivity as needed. This solution is roughly half the price of Nutanix, and offers a better Next-Business-Day/9am-to-5pm support package.

The last time I was in Las Vegas, I presented this topic at the [IBM Interconnect conference]. Back then, I was given only 20 minutes and was placed on the Solutions Expo showroom floor, competing with the noise and traffic of attendees going to lunch.

This time it was much better: a large room and a bigger-than-expected audience, given that it was scheduled for Thursday morning.

Cloud storage comes in four flavors: persistent, ephemeral, hosted, and reference. The first two I refer to as "Storage for the Computer Cloud" and the latter two I refer to as "Storage as the Storage Cloud".

I also explained the differences between block, file and object access, and why different Cloud storage types use different access methods. I wrapped up the session covering the various storage solutions that IBM offers for all four Cloud Storage types.

However, flash media have their own set of problems, so IBM is developing software that can be included in IBM Spectrum Accelerate, Spectrum Scale, and Spectrum Virtualize to get the most out of them.

The concept of the Log-Structured Array has been around since 1988. The IBM RAMAC Virtual Array used it back in the 1990s. NetApp's Write Anywhere File Layout (WAFL) is an implementation of the general [Log-Structured File System] concept.

SALSA combines the Log-Structured Array with enhancements borrowed from the IBM FlashSystem design, which I covered in my Monday and Wednesday presentations, to improve write endurance by as much as 4.6 times!
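
As a rough illustration of the underlying Log-Structured Array idea (a toy sketch of the general concept only, not SALSA itself), every write is appended to the end of a log and a mapping table points each logical block at its newest copy, so nothing is ever overwritten in place:

```python
class LogStructuredArray:
    """Toy log-structured array: writes always append; a map tracks the newest copy."""

    def __init__(self):
        self.log = []        # append-only list of (logical_block, data) records
        self.block_map = {}  # logical block number -> index of newest record in the log

    def write(self, block, data):
        self.log.append((block, data))            # sequential append, never overwrite
        self.block_map[block] = len(self.log) - 1

    def read(self, block):
        return self.log[self.block_map[block]][1]

    def garbage(self):
        """Log positions no longer referenced -- candidates for garbage collection."""
        live = set(self.block_map.values())
        return [i for i in range(len(self.log)) if i not in live]

lsa = LogStructuredArray()
lsa.write(5, "AAAA")
lsa.write(5, "BBBB")   # a rewrite lands at a new log position
print(lsa.read(5))     # BBBB
print(lsa.garbage())   # [0] -- the stale copy awaits garbage collection
```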

Tomer Carmeli, IBM Offering Manager for the A9000 and A9000R, presented this session. He gave an overview of these models on Monday, so this session focused on the data footprint reduction technologies.

Basically, it is a three-step process. First, all "standard patterns" are removed. IBM has identified some 260 standard patterns that are 8KB in length, such as all zeros, all ones, or all spaces, and immediately replaces these blocks with a pattern token.

Second, [SHA-1] 20-byte hash codes are computed on 8KB pieces on a rolling 4KB alignment boundary. In other words, if a 64KB block of data is written, bytes 0-to-8KB are hashed and compared to existing hash codes. If there is no match, then bytes 4KB-to-12KB are hashed, and so on. This approach nearly doubles the likelihood of finding duplicates. When a matching block is found, the algorithm replaces it with a pointer and a reference count.

Third, any unique data that still remains is compressed using a Lempel-Ziv algorithm. This is offloaded to the [Intel® QuickAssist] co-processor, which can compress data 20 times faster than software algorithms running on general-purpose x86 processors.
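
Here is a minimal Python sketch of steps two and three as described above (my own simplification for illustration only: the index handling and chunk bookkeeping are stand-ins, and zlib stands in for the hardware-assisted Lempel-Ziv compression):

```python
import hashlib
import zlib

CHUNK = 8 * 1024   # dedupe granularity: 8KB pieces
STEP = 4 * 1024    # rolling alignment: slide the search window by 4KB

def find_duplicates(data, dedupe_index):
    """Hash 8KB windows on 4KB boundaries and look them up in the dedupe index.
    Returns (offset, pointer) pairs that could be replaced by a pointer + ref count."""
    matches = []
    offset = 0
    while offset + CHUNK <= len(data):
        digest = hashlib.sha1(data[offset:offset + CHUNK]).digest()  # 20-byte SHA-1
        if digest in dedupe_index:
            matches.append((offset, dedupe_index[digest]))
            offset += CHUNK       # this window is covered, jump past it
        else:
            dedupe_index[digest] = offset   # simplified bookkeeping for the sketch
            offset += STEP        # no match: slide 4KB and hash the next 8KB window
    return matches

def compress_unique(piece):
    """Step 3: whatever remains unique is compressed (zlib as a stand-in for LZ)."""
    return zlib.compress(piece)
```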

Do you want an estimate of how much "reduction ratio" you may achieve? IBM has developed two estimator tools to help. The first tool is a complete scan for data expected to be dedupe-friendly. It is a slow process, taking 8 hours per TB. This would be ideal for Virtual Desktop Infrastructure or backup copies.

The second tool is the infamous [Comprestimator] that IBM has had for a while to help estimate compression savings for IBM Spectrum Virtualize storage solutions like SVC, Storwize and FlashSystem V9000. This tool is very fast, looking at only a statistically-valid subset of the data.

The results of both tools are merged, and the combined estimate is accurate to within five percent. This allows IBM to offer guidance on which data to place on these new A9000 and A9000R models, as well as offer a "reduction ratio" guarantee.

A client asked me why I bother to attend other sessions, when I probably know most of the material they present. I explained that I can always learn from others. I can honestly say that I learned something new and useful at every session I attended.

This week, I am in Las Vegas for [Edge 2016], IBM's premier IT Infrastructure conference of the year. Here is my recap of Day 3 Wednesday.

Become your own Storage Consultant

Gary Graham, IBM Field Technical Specialist for Storage, and Brian Pioreck, IBM Client Technical Specialist for Storage, co-presented this session. This session explained how to use IBM's 30-day free trial of IBM Spectrum Control Storage Insights, a cloud-based services offering.

(Note: 15 years ago, I was the chief architect of version 1 of what we now call IBM Spectrum Control. I am pleased to see how well this product has evolved over the years.)

Storage Insights provides a reporting-only subset of the popular IBM Spectrum Control Standard and Advanced editions. It reports on IBM storage devices, as well as any non-IBM devices that are virtualized behind IBM Spectrum Virtualize products like SAN Volume Controller (SVC), Storwize, and FlashSystem V9000.

If you are a storage administrator, consider trying this out for 30 days to get some immediate results. Since it is cloud-based, you only need a Windows, Linux or AIX system on site to install a "collector". This collector sends data up to the Cloud at one of IBM's SoftLayer facilities. The installation process takes only 30 minutes, and you can download the code from the Internet.

If you find Storage Insights valuable, helping you reclaim some unused space or providing other insights that save your company money, consider buying the service for only 250 US Dollars per 50 TB monitored. If you want more than just monitoring and reporting, consider one of the on-premises solutions like IBM Spectrum Control Standard or IBM Spectrum Control Advanced edition, which provide provisioning and configuration capabilities as well.

In November 2015, [IBM acquired Cleversafe] for $1.3 Billion US dollars, because Cleversafe had the brand-name recognition of being the #1 Object Storage vendor two years in a row (2014 and 2015). On July 1 of this year, the transition was complete, and their flagship product was officially renamed the IBM Cloud Object Storage System, which some abbreviate informally as IBM COS.

Since then, IBM has been busy integrating IBM COS into the rest of the storage portfolio. I explained how IBM COS can be used for all kinds of static-and-stable data, but is not suited for frequently changing data, such as virtual machines or databases.

Object storage can be accessed via NFS or SMB NAS protocols using a gateway product, like IBM Spectrum Scale, or those from third-party partners like Ctera, Avere, Nasuni or Panzura. It can also be used as an alternative to tape for backup copies, and is already supported by major backup software like IBM Spectrum Protect, Commvault Simpana, and Veritas NetBackup.
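
For applications that talk to object storage natively rather than through a NAS gateway, access is over an HTTP object API. Purely as a hedged illustration (the endpoint, bucket and credentials below are hypothetical, and this assumes a generic S3-compatible interface rather than documenting the IBM COS API itself), an upload might look like:

```python
import boto3  # generic S3-compatible client, used here only for illustration

# Hypothetical endpoint and credentials -- substitute your object store's own values.
s3 = boto3.client(
    "s3",
    endpoint_url="https://object-store.example.com",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Store a backup file as an object, then list the bucket to confirm it arrived.
s3.upload_file("backup-2016-09.tar.gz", "my-bucket", "backups/backup-2016-09.tar.gz")
for obj in s3.list_objects_v2(Bucket="my-bucket").get("Contents", []):
    print(obj["Key"], obj["Size"])
```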

A few years ago, I explained to a client that Converged and Hyperconverged were like a pendulum swinging back. Over the past few decades, we have gone from internal disk, to externally attached disk, to SAN and LAN networks.

Each time, we gained more flexibility, greater connectivity and longer distances. Then I explained that Converged and Hyperconverged is like going backwards, the pendulum swinging back to the days of internal and direct-attached storage. The analogy was a hit, and thus this session was born!

IBM also offers Hyperconverged solutions. IBM Spectrum Accelerate allows the compute, storage and network functions to run on 3 to 15 VMware ESXi hosts to form a cluster. The cluster can then make iSCSI-based volumes available to other virtual machines running on these same hosts. The volumes can also be made available to servers outside the cluster, such as bare metal servers or other hypervisors. This is available as software-only, or you can get a pre-built system called the Supermicro Hyperconvergence Appliance.

IBM Spectrum Scale provides a clustered file system that allows the compute, storage and network functions to run on 3 to 16,000 machines. Formerly called General Parallel File System (GPFS), IBM Spectrum Scale has been around for over 18 years. Over 200 of the world's largest "Top 500" supercomputers run IBM Spectrum Scale today.

IBM Spectrum Virtualize and IBM Storwize Birds-of-a-Feather

Barry Whyte, fellow blogger and IBM Master Inventor, presented an overview of the latest features and where IBM is headed in 2017 for the IBM Spectrum Virtualize family of products. Barry now works in Advanced Technical Skills for Storage Virtualization in the Asia/Pacific region.

The group then moved to another room offering delicious food and drink, as Eric Stouffer, IBM Director, Storwize Offering Manager and Business Line Executive, presented the future areas that IBM is considering for this product family.

All of this was done under Non-Disclosure Agreements (NDA), preventing me from blogging any details. Back in 2003, Las Vegas started a marketing campaign ["What Happens in Vegas, Stays in Vegas"]. Coincidentally, this is the same year IBM introduced the IBM SAN Volume Controller, the first product in the IBM Spectrum Virtualize family.

This was a long day, but I was pleased with the large audiences I had at my sessions.

The A9000 is an 8U-high pod that can fit into existing racks. It comes in 60TB, 150TB and 300TB effective capacities.

The A9000R includes its own 42U rack. The rack is organized as two to six "grid elements" combined with two InfiniBand switches. Grid elements come in 150TB and 300TB effective capacities, giving you up to a whopping 1.8 PB in a single rack!

Similar to the IBM XIV and IBM Spectrum Accelerate offerings, the A9000 and A9000R support Hyper-Scale features. Hyper-Scale Manager lets you manage up to 144 devices on a single pane of glass. Hyper-Scale Mobility lets you move volumes (LUNs) non-disruptively from one device to another.

Different data compresses or dedupes at different ratios, so your mileage may vary. Unless you are evaluating a JBOF (just a bunch of flash) device, there is a great difference between raw, usable, and effective capacity. Raw capacity is the size of each chip times the number of chips. Usable capacity factors out RAID and any spare capacity set aside for RAID rebuild and garbage collection. Effective capacity indicates the amount of information that can be stored by taking advantage of data footprint reduction technologies, such as compression or data deduplication.
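
As a worked example with made-up numbers (the chip count, overhead fraction and reduction ratio below are purely illustrative assumptions, not A9000 specifications):

```python
# Raw capacity: size of each flash chip times the number of chips (hypothetical values)
chip_size_tb = 0.5
chip_count = 120
raw_tb = chip_size_tb * chip_count                # 60 TB raw

# Usable capacity: subtract RAID parity, spares and garbage-collection headroom
overhead_fraction = 0.25                          # illustrative figure only
usable_tb = raw_tb * (1 - overhead_fraction)      # 45 TB usable

# Effective capacity: usable capacity times the expected data reduction ratio
reduction_ratio = 5.26                            # ratio quoted later in this post
effective_tb = usable_tb * reduction_ratio        # roughly 237 TB effective

print(f"raw={raw_tb:.0f} TB  usable={usable_tb:.0f} TB  effective={effective_tb:.0f} TB")
```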

Competitive Match -- If a competitor had run their own set of estimator tools, IBM might be able to match the reduction ratio without repeating the analysis, by simply reviewing the competitor's results.

"Sight unseen" -- without analyzing your actual data, reduction ratio is determine by the type of data (DB2, Oracle, SQL server, etc.), based on experience with similar data at other data centers.

Both A9000 and A9000R models are published at 250 microsecond latency, about 30 times faster than traditional spinning disk, although some workloads actually can run even faster than that. Assuming 5.26:1 reduction, these sell for about $1.50 per effective GB.

In 2012, Microsoft Research and the University of California San Diego published ["The Bleak Future of NAND Flash Memory"], an 8-page paper by Laura M. Grupp, John D. Davis, and Steven Swanson. Here is an excerpt:

"The technology trends we have described put SSDs in an unusual position for a cutting-edge technology: SSDs will continue to improve by some metrics (notably density and cost per bit), but everything else about them is poised to get worse. This makes the future of SSDs cloudy: While the growing capacity of SSDs and high IOP rates will make them attractive in many applications, the reduction in performance that is necessary to increase capacity while keeping costs in check may make it difficult for SSDs to scale as a viable technology for some applications"

IBM disagreed with this bleak assessment, announced it was investing $1 billion US Dollars into this technology, acquired Texas Memory Systems, and has deployed flash throughout its product line. For the past three years, IBM has been the #1 vendor for Flash storage systems.

Patricia offered the following example. What would it take to run 20 million IOPS? Here's a comparison:

Media                     Racks    Energy
FlashSystem V9000             1     19 kW
Disk systems, 15K rpm       315   2200 kW
Disk systems, 7200 rpm      630   4500 kW

How to migrate from SONAS to IBM Spectrum Scale/ESS using Active File Manager

Paul Schena, IBM Senior IT Specialist, presented his experiences migrating existing SONAS data to new IBM Spectrum Scale or Elastic Storage Server (ESS) deployments. SONAS is going End-of-Service (EOS) on April 30, 2018, so it is never too soon to start this migration.

Paul gave two different methodologies. The first used Active File Management (AFM):

Use Robocopy and/or Rsync as appropriate to move the data to the new system

Decommission the SONAS

Having it all: Hybrid Cloud Storage Services for Block, Power and Backup

Clint Parish, Director of Enterprise Solutions and Services for VSS, and Marc Théberge, Business Development for Supermicro, co-presented this session.

VSS offers POWER8-based Cloud services. They consider themselves a "boutique" provider of POWER8 servers, able to run AIX, IBM i and Linux on POWER applications, but not at the scale and size of larger x86-based clouds like Amazon Web Services or Microsoft Azure.

For IBM i, they attach to IBM Storwize V7000. For AIX and Linux on POWER, they use IBM Storwize V7000 and/or Supermicro Hyperconverged Appliance, a pre-built system based on IBM Spectrum Accelerate.

Supermicro offers three "tee-shirt sizes": small systems with six nodes, medium with nine nodes, and large with fifteen nodes. Unlike other Hyperconverged systems, the ones from Supermicro include a rack, and are pre-cabled with all the Ethernet switches necessary to make a complete solution.

In the evening, we were treated to a concert by Train, known for songs like "Meet Virginia", "Hey Soul Sister", "Calling All Angels" and "Drops of Jupiter". They played all of these, plus covered some songs by Led Zeppelin, Journey, Queen and Aerosmith.