This blog is for the open exchange of ideas relating to IBM Systems, storage and storage networking hardware, software and services.
(Short URL for this blog: ibm.co/Pearson )

Tony Pearson is a Master Inventor, Senior IT Architect and Event Content Manager for [IBM Systems Technical University] events. With over 30 years with IBM Systems, Tony is a frequent traveler, speaking to clients at events throughout the world.

Lloyd Dean is an IBM Senior Certified Executive IT Architect in Infrastructure Architecture. Lloyd has held numerous senior technical roles during his 19-plus years at IBM. Most recently, he has been leading efforts across the Communication/CSI market as a senior Storage Solution Architect/CTS covering the Kansas City territory. In prior years, Lloyd supported industry accounts as a Storage Solution Architect, and before that as a Storage Software Solutions specialist in the ATS organization.

Lloyd currently supports North America storage sales teams in his Storage Software Solution Architecture SME role on the Washington Systems Center team. His current focus is IBM Cloud Private, and he will be delivering and supporting sessions at Think 2019 and Storage Technical University on the value of IBM storage in this high-value solution, part of the IBM Cloud strategy. Lloyd maintains Subject Matter Expert status across the IBM Spectrum Storage software solutions. You can follow Lloyd on Twitter @ldean0558 and on LinkedIn as Lloyd Dean.


Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.

Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.

Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.

Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.

Well, it's Tuesday again, and you know what that means? IBM Announcements! There were lots of announcements today, so I have split this up into two posts. One for the Tape and Cloud announcements, and the other for the Spectrum Storage family.

IBM TS7700 Virtual Tape System

IBM TS7700 release 4.1.1 now supports seven- and eight-way grids with approved RPQs. Before this, grids could only have up to six TS7700 systems connected together.

IBM also plans to extend the capacity of the TS7760 base frame to over 600 TB, and to extend the capacity of a fully configured TS7760 system to over 2.45 PB, before compression, by supporting 8 TB disk drives. This is a huge increase over the 4 TB and 6 TB drives used today.

IBM offers the IBM Cloud Object Storage System in three ways: as software, as pre-built systems, and as a cloud service on IBM Bluemix (formerly known as SoftLayer).

For those not familiar with IBM Cloud Object Storage (IBM COS), consider it "Valet Parking" for your storage. In a valet parking environment, you have valet parking attendants that drive the cars, parking garages that hold the cars, and a manager that oversees the operation. With IBM COS, you have Accesser® nodes that receive and retrieve your data like valet parking attendants, you have Slicestor® nodes that store your objects like cars in a parking garage, and you have IBM COS Manager to oversee the operation.

Today, IBM announced new HDD options for the S01, S02 and S03 models of Slicestor nodes. These are all 7200 rpm, 3.5-inch Nearline drives, at capacities of 4 TB, 6 TB, 8 TB and 10 TB.

In addition, a short-range 40 GbE SFP+ transceiver is available for ordering on IBM Cloud Object Storage Accesser models A00, A01, and A02, and IBM Cloud Object Storage Slicestor models S01 and S02. This improves the performance of data transfer between the Accesser nodes and the Slicestor nodes. Think of it like shortening the distance valet parking attendants have to drive your car to the garage and run back.

I have been presenting Cloud Storage for nearly 10 years now. People are often shocked to learn that most of the major cloud providers -- including Amazon, Google, Microsoft -- do not offer "Data at Rest" encryption on their storage offerings.

Why not? Because it would mean investing in self-encrypting drives, key management software, and other related technology to make it happen. Instead, Cloud Service Providers (CSPs) expect you to encrypt the data in software. Most users encrypt data before it lands on the cloud, but what if you create the data in the cloud?
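To make that "encrypt it yourself" expectation concrete, here is a minimal sketch of client-side encryption in Python, using the open-source cryptography package. It is illustrative only, and not tied to any particular cloud provider's SDK or IBM offering.

```python
# Minimal sketch of client-side ("encrypt before it lands") protection,
# using the third-party 'cryptography' package -- not any specific CSP SDK.
from cryptography.fernet import Fernet

key = Fernet.generate_key()       # in practice, held by your key management software
cipher = Fernet(key)

plaintext = b"customer record created on-premises"
ciphertext = cipher.encrypt(plaintext)   # only the ciphertext ever reaches the cloud

# ... later, after downloading the object back from the cloud ...
assert cipher.decrypt(ciphertext) == plaintext
```

The catch, of course, is that this pattern assumes the data exists on your side first, which is exactly the gap when applications create data in the cloud itself.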

For data created in the cloud, IBM solved this by offering IBM Cloud Object Storage in its IBM Cloud (formerly known as SoftLayer), with integrated encryption software that takes care of this for you.

This new product, IBM Multi-Cloud Data Encryption V1.0, enables you to encrypt files, folders, and volumes in any cloud while maintaining local control of encryption keys. It integrates with IBM Security Key Lifecycle Manager (SKLM). This is designed to allow you to move cipher data between clouds that are running Multi-Cloud Data Encryption without decrypting and re-encrypting the data.

For example, you can use IBM Multi-Cloud Data Encryption to protect your data on Amazon, Google or Microsoft, then later realize that you can save a ton of money moving to IBM Cloud instead, and you are now able to move the data over seamlessly!

IBM is in a transition from being a "Systems, Software and Services" company to becoming the leading "Cognitive Solutions and Cloud Platform" company. IBM has been in this transformation for the past three years or so, and [over 40 percent of its revenue] now comes from these strategic initiatives.

The purpose of AI and cognitive systems developed and applied by the IBM company is to augment human intelligence. Our technology, products, services and policies will be designed to enhance and extend human capability, expertise and potential. Our position is based not only on principle but also on science.

Cognitive systems will not realistically attain consciousness or independent agency. Rather, they will increasingly be embedded in the processes, systems, products and services by which business and society function -- all of which will and should remain within human control.

Transparency:

For cognitive systems to fulfill their world-changing potential, it is vital that people have confidence in their recommendations, judgments and uses. Therefore, the IBM company will make clear:

When and for what purposes AI is being applied in the cognitive solutions we develop and deploy.

The major sources of data and expertise that inform the insights of cognitive solutions, as well as the methods used to train those systems and solutions.

The principle that clients own their own business models and intellectual property and that they can use AI and cognitive systems to enhance the advantages they have built, often through years of experience. We will work with our clients to protect their data and insights, and will encourage our clients, partners and industry colleagues to adopt similar practices.

Skills:

The economic and societal benefits of this new era will not be realized if the human side of the equation is not supported. This is uniquely important with cognitive technology, which augments human intelligence and expertise and works collaboratively with humans.

Therefore, the IBM company will work to help students, workers and citizens acquire the skills and knowledge to engage safely, securely and effectively in a relationship with cognitive systems, and to perform the new kinds of work and jobs that will emerge in a cognitive economy.

(As IBM is focused on its transformation from a "Systems, Software and Services" company to a "Cognitive Solutions and Cloud Platform" company, it seems appropriate to highlight my 1,000th blog post on the concept of cognitive solutions.)

A lot of people ask me to explain what exactly IBM means by "cognitive", which is a fair question. Let's start with the [Dictionary definition]:

cognitive

of or relating to cognition; concerned with the act or process of knowing, perceiving, etc.

of or relating to the mental processes of perception, memory, judgment, and reasoning, as contrasted with emotional and volitional processes.

What exactly does IBM mean by Cognitive? IBM has taken this definition, and focused on four key strategic areas:

Understanding

I spent the summer of 1981 debugging a Pascal compiler at the University of Texas at Austin. I wasn't told that was what I was doing. Rather, I was tasked with writing sample Pascal programs that would demonstrate the features and capabilities of the language.

Every day, I would come up with a concept of a program, punch up the cards, run it through the CDC hopper, and verify that it would work properly. If I didn't have it working by lunch, I would take it to the "help desk", they would look it over, and tell me how to fix it after I got back.

Most of the time, it was a mistake in my software. A few times, however, it was a flaw in the compiler itself. My programs were basically test cases, and the Pascal Compiler development team was fixing or enhancing the compiler code every time I had a problem.

Compilers basically work by parsing the program text, looking for fixed keywords that are entered in a specifically prescribed order to make sense. Other keywords may represent data types, variables, constants or pre-defined macros.
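To see how rigid that is, here is a toy sketch in Python (purely illustrative, not a real compiler) that only accepts statements whose keywords appear in one prescribed order; anything else is simply rejected rather than "understood".

```python
# Toy sketch: a rigid, fixed-keyword parser (illustrative only).
# It accepts statements of the exact form: IF <cond> THEN <stmt> ELSE <stmt>
def parse_if(tokens):
    expected = ["IF", None, "THEN", None, "ELSE", None]   # None = any single token
    if len(tokens) != len(expected):
        raise SyntaxError("wrong number of tokens")
    for tok, want in zip(tokens, expected):
        if want is not None and tok != want:
            raise SyntaxError(f"expected {want!r}, found {tok!r}")
    return {"cond": tokens[1], "then": tokens[3], "else": tokens[5]}

print(parse_if("IF x>0 THEN y:=1 ELSE y:=0".split()))    # accepted
# parse_if("WHEN x>0 DO y:=1 OTHERWISE y:=0".split())    # raises SyntaxError
```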

But compilers are not cognitive. Cognitive solutions can understand natural language, and have to handle all the ambiguity of words not being in the correct order, or different words having different meanings.

Reason

As an Electrical Engineer, I had to take many classes on classical analog signal processing. In fact, all computers have some amount of analog components, where threshold processing is used to differentiate a zero (0) from a one (1).

For example, if a "zero" value was represented by 1 volt, and a "one" value by 5 volts, then you can set a threshold at 3 volts. Any voltage less than 3 would be considered a "zero" value, and anything 3 volts or greater a "one" value.

But threshold processing is not cognitive. Cognitive solutions also use thresholds, but their thresholds are dynamically determined, through advanced analytics and statistical mathematical models, and may adjust up and down as needed, based on machine learning over time.
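Here is a rough sketch of that difference, using made-up voltage samples: the classic threshold is hard-coded, while the "learned" one is recomputed from whatever data has been observed, so it can drift as conditions change.

```python
# Illustrative sketch: fixed threshold vs. a data-driven threshold (hypothetical samples).
samples_zero = [0.9, 1.1, 1.0, 1.3]   # voltages observed when the bit was a "0"
samples_one  = [4.8, 5.1, 4.9, 5.2]   # voltages observed when the bit was a "1"

FIXED_THRESHOLD = 3.0                  # classical signal processing: hard-coded cut-off

def learned_threshold(zeros, ones):
    # Midpoint between the two class means; recomputed as new samples arrive,
    # so the cut-off adjusts with the data instead of staying fixed.
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(zeros) + mean(ones)) / 2

def classify(volts, threshold):
    return 1 if volts >= threshold else 0

print(classify(2.7, FIXED_THRESHOLD))                                # 0
print(classify(2.7, learned_threshold(samples_zero, samples_one)))   # also 0, but the threshold adapts
```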

Learning

IBM Research is proud to have developed the world's most advanced caching algorithms for its storage systems. Cache memory is very fast, but also very expensive, so it is offered in limited quantities. Caching algorithms decide which blocks of data should remain in cache, and which should be kicked out.

Ideally, a block in read cache would be kicked out precisely after the last time it was read, with little or no expectation for being read again anytime soon. Likewise, a block in write cache would be destaged to persistent storage precisely after the last time it was updated, with little or no expectation for being updated again anytime soon.

The traditional approach is "Least Recently Used" or [LRU]. Cache entries that were read or updated recently would be placed at the top of the list, and the least recently referenced would sit at the bottom. When space is needed in cache, the entries at the bottom of the list would be kicked out.
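For the curious, here is a minimal LRU sketch in Python using an OrderedDict; this is a teaching simplification, not IBM's actual caching code.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU sketch: recently touched entries move to the end of the list,
    and the least recently used entry is evicted when space is needed."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()

    def get(self, key):
        if key not in self.entries:
            return None
        self.entries.move_to_end(key)          # touch: now the most recently used
        return self.entries[key]

    def put(self, key, value):
        if key in self.entries:
            self.entries.move_to_end(key)
        self.entries[key] = value
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)   # kick out the least recently used

cache = LRUCache(2)
cache.put("blockA", "data"); cache.put("blockB", "data")
cache.get("blockA")                            # blockA becomes most recently used
cache.put("blockC", "data")                    # evicts blockB, not blockA
print(list(cache.entries))                     # ['blockA', 'blockC']
```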

But caching algorithms, including IBM's Adaptive Cache, are not cognitive. These algorithms respond programmatically based on the current state of the cache. Cognitive solutions learn, and improve with usage. This is often referred to as "Machine Learning".

Interaction

The human-computer interface (HCI) has much room for improvement in a variety of areas.

Take for example a snack vending machine. In college, we had assignments to simulate the computing logic of these. We had to interact with the buyer, receive coins entered into the slot--nickels, dimes and quarters representing 5, 10 and 25 cents--determine a total monetary balance, and then dispense snacks of various prices and return an appropriate amount of change, if any. There is even a [greedy algorithm] designed to optimize how the change is returned.
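The greedy change-making logic looks roughly like the sketch below; with nickels, dimes and quarters, taking the largest coin first happens to give the fewest coins.

```python
# Sketch of the greedy change-making algorithm for U.S. coins (amounts in cents).
def make_change(amount):
    change = {}
    for coin in (25, 10, 5):                   # largest denomination first
        count, amount = divmod(amount, coin)
        if count:
            change[coin] = count
    return change, amount                      # leftover is 0 for any multiple of 5 cents

# Buyer inserts 100 cents, snack costs 65 cents -> return 35 cents in change.
print(make_change(100 - 65))                   # ({25: 1, 10: 1}, 0)
```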

But vending machines are not cognitive. Like the caching algorithms, vending machines interact based on fixed programmatic logic, treating all buyers in the same manner. Cognitive solutions can interact with different users in different ways, customized to their needs, and these interactions can improve over time, based on machine learning.

IBM is exploring the use of Cognitive Solutions in a variety of different industries, from Healthcare to Retail, Financial Services to Manufacturing, and more.

Well, it's Tuesday again, and you know what that means? IBM Announcements!

(Yes, OK, it's actually Thursday. I wrote this post weeks ago, but it was embargoed until Jan 10, and then I was asked to wait until Jan 12 so that the IBM Marketing team could translate my text into 15 different languages.)

This week, the IBM DS8000 team announces a new High Performance Flash Enclosure (HPFE-Gen2) and a series of All-Flash Array DS8880F models that exploit this new technology.

New High Performance Flash Enclosure (HPFE-Gen2)

The original HPFE was 1U high with 16 or 30 flash cards, and could support RAID-5 or RAID-10. Most used RAID-5, resulting in four array sites of 6+P each, leaving two cards for spares. These 1.8-inch cards were only 400 or 800 GB in size, so the maximum raw capacity was only 24 TB per 1U enclosure.

The new HPFE-Gen2 enclosure is a complete re-design, consisting of two Microbays and two TeraPacks. The I/O Bays attach to the Microbays via PCIe Gen3. The Microbays in turn attach to both TeraPacks via redundant 6 Gb or 12 Gb SAS.

Each TeraPack holds 24 flash cards. Since the TeraPacks come in pairs, you can install 16, 32 or 48 flash cards per enclosure. Each 16-card set represents two array sites, for a maximum of six array sites per HPFE-Gen2. With all 48 cards installed, the supported RAID layouts are listed below; a quick arithmetic check follows the list.

RAID-5, for 400/800 GB cards: two 6+P arrays, four 7+P arrays, and two spares.

RAID-6, for 400/800/1600/3200 GB cards: two 5+P+Q arrays, four 6+P+Q arrays, and two spares.

RAID-10, for 400/800/1600/3200 GB cards: two 3+3 arrays, four 4+4 arrays, and four spares.
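Here is the quick arithmetic check promised above: every layout accounts for the full 48-card complement, arrays plus spares.

```python
# Quick check (illustrative): drives per array x number of arrays, plus spares, per layout.
layouts = {
    "RAID-5":  ([(7, 2), (8, 4)], 2),   # two 6+P and four 7+P arrays, two spares
    "RAID-6":  ([(7, 2), (8, 4)], 2),   # two 5+P+Q and four 6+P+Q arrays, two spares
    "RAID-10": ([(6, 2), (8, 4)], 4),   # two 3+3 and four 4+4 arrays, four spares
}
for name, (arrays, spares) in layouts.items():
    total = sum(drives * count for drives, count in arrays) + spares
    print(name, total)                   # each layout totals 48 flash cards
```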

(Technically, these new "Flash cards" are 2.5-inch Solid State Drives (SSD) placed into the HPFE-Gen2 and connected over the PCIe Gen3 interface, with 50 percent additional capacity to tolerate up to 10 drive-writes-per-day (DWPD). IBM will continue to call them "Flash Cards" for naming consistency between the two generations of HPFE.)

The new HPFE-Gen2 enclosures are substantially faster, offering up to 90 percent more IOPS, and up to 268 percent more throughput (GB/sec). The Microbays use a new flash-optimized ASIC to perform the RAID calculations.

New All-Flash Array DS8880F models

IBM introduces the DS8884F, DS8886F and DS8888F that are based entirely on the HPFE-Gen2 enclosures described above.

Business Class: DS8884 (Hybrid - HDD/SSD/HPFE mix), now joined by the DS8884F (AFA - HPFE-Gen2 only)

Enterprise Class: DS8886 (Hybrid - HDD/SSD/HPFE mix), now joined by the DS8886F (AFA - HPFE-Gen2 only)

Analytic Class: DS8888 (AFA - HPFE only), now joined by the DS8888F (AFA - HPFE-Gen2 only)

New zHyperLink connection

Also, as a "Statement of Direction", IBM intends to deliver field upgradable support for zHyperLink on existing IBM System Storage DS8880 machines for connection to z System servers. zHyperLink is a short-distance, mainframe-attach link designed for lower latency than High Performance FICON.

Typical latency with FICON/zHPF is around 140-170 microseconds, and this new zHyperLink is estimated to reduce this down to 20-30 microseconds, but is limited to 150 meter fiber optic cable distance. zHyperLink is intended to speed up DB2® for z/OS® transaction processing and improve active log throughput.

Fellow blogger Chris Mellor from The Register has an interesting post titled [It's a ratchet: Old storage guard face incoming tech squeeze]. Chris opines that the big traditional storage vendors -- which he refers to as the "old guard": Dell EMC, HDS, HPE, IBM and NetApp -- are being squeezed out by startups with new technologies.

Last week, I saw the play [Fiddler on the Roof], a musical production by Arizona Theater Company (ATC), and thought of various parallels with Chris's post.

For those not familiar, the story centers around a father named Tevye and his wife trying to stick to tradition, with five daughters who are open to breaking with tradition to get married. The family lives in a small rural town, back in a time long ago when people were persecuted for their religious and ethnic background. Aren't you glad we live in [more enlightened times]!

Back to Chris Mellor, he writes in his post:

"This old guard has so far failed to squash newcomers in the all-flash array, hyperscale, object and software-defined storage areas. This is despite the established firms adopting these technologies and acquiring some startups."

Should the old guard try to squash newcomers? Often, these startups provide much needed innovations that move the IT industry forward.

In the play, Tevye wants to stick to tradition, whereby the town's matchmaker would find a husband for each daughter, and he, as father of each bride, would then provide his permission and blessing to the match.

Obviously, these startups are neither asking the old guard for their permission nor their blessing. While I can't speak for the rest of the "old guard", IBM is leading in these various spaces. Let's look at each of these new trends.

All-Flash Arrays (AFA)

The category of "All-Flash Arrays" includes both purpose-built hardware and traditional devices based on solid-state drives (SSD). While the R&D investment needed for purpose-built hardware can limit this to some of the largest vendors, nearly any startup can slap commodity SSD into traditional HDD controllers and call it AFA.

IBM offers the world's fastest AFA, and has been a leader in the AFA category for the past three years, investing over $1 Billion USD on its FlashSystem, DS8000, Elastic Storage Server (ESS), SVC and Storwize product families.

Software-Defined Storage (SDS)

While the definition for SDS is still in a bit of flux, IDC has tried to identify three characteristics, including:

Federation between the underlying persistent data placement resources to enable data mobility of its tenants between these resources

IBM has been ranked [Number 1 in Software Defined Storage] for several years now, investing over $1 Billion USD in its IBM Spectrum Storage family. This collection of software is implemented in a variety of offerings, including pre-built systems, software that you can deploy on commodity off-the-shelf servers, and in the Cloud.

Object storage

Object storage breaks tradition with block and file-based storage solutions. Rather than reading and writing files using POSIX, NFS or SMB protocols, objects are accessed via HTTP GET and PUT requests. The two most common protocols are Amazon S3 and OpenStack Swift.
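As a sketch of how simple that access model is, here is an S3-style PUT and GET using the boto3 library; the endpoint, bucket and credentials are placeholders, not a real configuration.

```python
import boto3

# Placeholders: point these at any S3-compatible endpoint and real credentials.
s3 = boto3.client(
    "s3",
    endpoint_url="https://objectstore.example.com",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# PUT: write an object -- no file system, no POSIX semantics, just a key and bytes.
s3.put_object(Bucket="archive-bucket", Key="2017/report.pdf", Body=b"...contents...")

# GET: read it back over HTTP.
obj = s3.get_object(Bucket="archive-bucket", Key="2017/report.pdf")
data = obj["Body"].read()
```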

Object storage is ideal for static and stable data that either never changes, or changes infrequently. A lot of new workloads are based on unstructured data that falls in this category, such as Big Data Analytics, High-performance Computing (HPC), and active archives.

"Hyperscale leverages commodity servers and a software-defined approach, scaling the resources needed for applications and storage separately. As storage needs grow, companies can add servers running software-defined storage (SDS) to the storage tier to expand capacity... Data is automatically distributed across the entire cluster of storage servers as new nodes are added to the system... With hyperscale, .. cluster nodes network together to form a storage resource pool."

This breaks from the tradition of dual-controller high-end arrays, which scale-up, rather than scale-out. IBM offers its IBM Spectrum Accelerate, IBM Spectrum Scale, and IBM Cloud Object Storage System to fill this hyperscale requirement.

In the play, Tevye realizes the world is changing all around him; he can either fight these changes and stick to tradition, or accept that he must change too, and move on. After 105 years, IBM continues to lead the IT industry, primarily by adopting new trends and technologies, moving to new business opportunities as they present themselves.