“Transformation” is a commonplace concept in the tech industry, and for good reason. Since the very beginning of computing, servers, personal computers, storage and networking systems and other technology products and services have been employed to fundamentally alter the ways that people live and work. Transformational IT tools and solutions have helped organizations achieve goals and successes that would have been unthinkable a generation or two ago.

But while these business transformations are compelling and even inspiring, how exactly does the process work? That subject was in the spotlight at last week’s Lenovo Accelerate 2019 partner conference in Orlando, Florida, and central to the Transform 3.0 event and industry analyst council hosted by its Data Center Group (DCG). Let’s consider what transformation means to the company and its partners, and how it is helping Lenovo DCG shift the competitive balance in numerous markets.

Transformation’s three “I’s”

The transformation central to Lenovo’s events is the intelligent rather than organic evolution that companies proactively pursue for specific economic, competitive, organizational and business benefits. The process typically involves three steps:

Intention – involves a recognition of the need for and decision to pursue specific changes

Implementation – includes exploring various options, then organizing and executing the strategic plan

Inspiration – requires continually motivating and reinforcing the change so that it takes root across the organization

It’s hardly a surprise that business transformations often fail. People resist change even when they know it’s necessary. Executives talk a better game than they play. Selfishness and gamesmanship erode the best intentions. Habits are hard to break. Cultural resiliency can quickly morph into pigheaded resistance.

Then again, numerous organizations have succeeded in fundamentally transforming the ways they approach, pursue and compete in commercial markets. That includes Lenovo which, through both internal development efforts and strategic acquisitions, has grown from being a China-focused PC maker to becoming a truly international vendor of end-to-end solutions and services. Along the way the company has become the #1 PC vendor and enjoyed five consecutive quarters of DCG revenue growth.

In Lenovo’s most recent quarter (announced on February 20), the company posted its highest group revenues in four years: $14 billion, up 8.5% year over year (YOY). Lenovo DCG saw a 31% YOY increase in revenues to $1.6 billion and recorded YOY revenue growth in all geographies, including triple-digit growth in North America and double-digit growth in Asia-Pacific, EMEA and Latin America. In particular, sales of high-margin hyperscale and software-defined infrastructure (SDI) solutions are far outpacing the market. In addition, Lenovo retained its #1 position on the Top500 list of global supercomputers and substantially increased its lead over competing vendors.

Lenovo Accelerate and Transform 3.0

The Accelerate conference for Lenovo’s business partners was held separately from the first two DCG Transform events, so it’s worth asking why the two were integrated this year. The simple answer is the Data Center Group’s increasing importance to value added resellers (VARs), system integrators and other strategic partners.

Prior to Lenovo’s 2014 acquisition of IBM’s Intel-based System x products, organization and intellectual property, the company’s modest server offerings mainly focused on small business applications and use cases. The IBM assets immediately supercharged Lenovo’s system portfolio, but the company stumbled a bit out of the gate and failed to capitalize on the global enterprise clients IBM cultivated. That faltering slowed and then reversed course after the 2016 hiring of Kirk Skaugen as president of Lenovo DCG.

It would be difficult, if not impossible, to find a senior executive with a deeper understanding of, or experience with, Intel-based systems and markets. Prior to joining Lenovo, Skaugen spent 24 years with Intel, including a decade managing organizations in the company’s data center group. He put that deep experience to good use by assembling an exceptional management team and overseeing a build-out of the DCG portfolio, including substantial expansions of both traditional and leading-edge server and storage solutions, and key strategic partnerships.

The two previous Transform events highlighted substantial expansions of Lenovo’s server (2017) and storage (2018) portfolios, and the latter also included the announcement of a high-profile strategic partnership with NetApp. By growing DCG’s solution portfolios and effectively pursuing new and emerging markets, Skaugen and his team have enabled Lenovo to engage with businesses of every kind and size, in virtually any global market. In turn, those same efforts have considerably expanded commercial opportunities for Lenovo’s strategic partners to pursue.

So, what was new and different at this year’s conferences? Central to both events was a notable broadening of Lenovo’s vision across all workspaces, data centers and clouds, to the edge of enterprise IT infrastructures. In fact, Kirk Skaugen’s keynote highlighted expanded edge-to-cloud portfolio and service offerings, led by a new wireless-enabled edge solution, the ThinkSystem SE350 Edge Server.

Designed for easy installation and management, the ThinkSystem SE350 can streamline the collection and analysis of data from remote sensors, cameras and other devices, including Lenovo’s customer-validated ThinkIoT offerings. The new system will also complement solutions for specific vertical markets and use cases that the company is developing with partners, like Pivot3 (video surveillance) and Scale Computing (future-ready retail).

Skaugen also discussed plans to expand commercial engagements with telco customers to take advantage of emerging 5G business opportunities. Given Lenovo’s success in hyperscale markets (the company already counts six of the top 10 global hyperscale companies as customers), developing solutions for the extreme conditions of telco data centers is a logical step.

Finally, Skaugen welcomed Franz Faerber, EVP and Global Head of SAP Technology and Innovation for Big Data, who announced that SAP had named Lenovo the Pinnacle Partner award winner in the infrastructure category for 2019. The companies have long worked closely together (SAP utilizes Lenovo’s ThinkSystem platforms as the reference architecture for its HANA in-memory database solutions) and the Pinnacle Partner award underscores the continuing value and vitality of their relationship. In addition, Skaugen announced Lenovo Intelligent Insights with SAP Data Hub, which addresses corporate challenges in managing, orchestrating and governing data.

Lenovo runs Lenovo

The Accelerate audience was both appreciative and enthusiastic. That’s not surprising, since Lenovo’s new products and services, along with its longer-term strategies, are putting more arrows into partners’ quivers and lining up new targets for them to pursue. But there was also a deeper subtext in the Lenovo keynotes that rounded out the new products/new opportunities messaging.

Art Hu, Lenovo’s SVP and CIO, touched on those points during his keynote by focusing on the profound, constant, never-ending change that is a central element of 21st century business. The central question, Hu noted, is, “How to capture value and opportunity during times of extraordinary change.” For Lenovo, an obvious focal point is to continue exploring, expanding and exploiting new technologies and opportunities associated with broader market evolution.

But Hu also noted that the company recognizes and respects “the value of trusted partners to drive the next stages of smart business transformation.” In other words, transformation isn’t merely a cliché—it is elemental to effective competition and success, and strategic partners have critical roles to play in the process. At the same time, business evolution is seldom, if ever, happenstance. Instead, it requires everyone—vendors, partners and customers—to step up, excel and become leaders or enablers of intelligent transformation.

Final analysis

Why should Lenovo be regarded as a leader in business transformation? Why should business partners trust the company to deliver on its promises?

Because Lenovo has been there and done that itself, time and again: from the company’s founding 35 years ago, to its emergence as a maker and then a leading vendor of PCs for China, to its acquisitions of IBM’s PC and Intel-based server organizations, to becoming the global leader in PC sales, to nearing annual revenues of $60B, to focusing on IT solutions that extend across workspaces, data centers, clouds, the edge and beyond.

As Hu noted in his closing remarks, “Lenovo runs Lenovo.” That is, the company clearly understands that transformation comes from within. Plus, it has the history to show what intelligent transformation, successfully intended, fully implemented and continually inspired can accomplish for a company, its business partners and customers.

Is it any wonder that the attendees at Accelerate 2019 and Transform 3.0 are ready, willing and able to trust and follow Lenovo’s lead?

Reports of the imminent demise of IBM’s Z mainframes, the company’s flagship enterprise system platform, have been floated – only to plummet ignominiously earthward – for over a quarter century or nearly half of the time the mainframe has been commercially available. Such rumors initially arose among IBM’s competitors in the early 1990s when the company was on the ropes, reeling like a punch-drunk boxer past his prime, until Lou Gerstner’s sober management got it back in fighting trim.

You can understand why some vendors would willingly spread garden-variety fear, uncertainty and doubt (FUD), attempting to undermine faith in a platform they didn’t have a snowball’s chance in hell of besting. But how and why has IBM proved them, along with countless doubtful analysts, reporters and industry experts, so wrong, so regularly, for so long? The answer is fairly simple: Along with being the industry’s most stable, resilient and secure enterprise system, the IBM Z is also more flexible and adaptable than other platforms.

In essence, the mainframe has thrived for well over a half century because IBM has reinvented it time and again to support the evolving needs and business requirements of its enterprise customers. That ability to evolve in order to support the evolution of others is clear in the Tailored Fit Pricing for IBM Z offerings that the company announced this week.

Continually evolving the mainframe

So how exactly has IBM altered the mainframe over the years? For the first three decades, the company’s path was fairly conventional. The mainframe, after all, began as a digital complement to the mechanical calculators and other transaction-focused business machines that were central to IBM’s success. Over time, new technologies, including increasingly powerful database and middleware offerings, were used to extend the mainframe’s ability to support and extend emerging business applications.

Then in the mid- to late-1990s, IBM began exploring uncharted territory with its decision to formally and financially support Linux and other open source technologies, beginning with its (then named) zSeries mainframes. The decision was not universally popular—in fact, some IBM board members believed Linux would destroy the mainframe’s value. History proved those naysayers to be as utterly wrong as they were shortsighted.

Other adaptations soon followed, including co-processors that offload Java (zAAP), integrated information workloads, including DB2 (zIIP), and cryptographic (4767) processing without measurably impacting the mainframe’s core capabilities. There were also new mainframe form factors, such as lower-capacity (and lower-priced) “midrange” mainframes (z13s and z14 ZR1), and the introduction of a Z mainframe solution sporting an Intel-based IBM BladeCenter extension (zBX) for enhancing and managing multi-platform environments.

Linux continued to be a major driver for mainframe customers, eventually becoming the operating environment of choice for over half of IBM’s Z workload sales. The company subsequently introduced LinuxONE, a mainframe provisioned entirely with and for Linux-based workloads. LinuxONE, in combination with robust new secure service containers, is central to IBM’s blockchain solutions and IBM Cloud’s blockchain services.

The need for new mainframe pricing

These points aside, one area where IBM was somewhat less forward thinking was in how it charged enterprises for using mainframe software. In short, software license costs are based on how much processing a system performs in a given time period.

For years, license costs have been calculated according to a “rolling four-hour average” (R4HA) peak, which set the monthly bill for IBM’s clients. Last year, IBM also began offering flexible “container” pricing for less predictable mainframe use cases, like development/testing efforts and modern applications.
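
The mechanics of an R4HA peak are straightforward to illustrate. The sketch below, with invented hourly MSU (million service unit) figures and a hypothetical `r4ha_peak` helper (actual sub-capacity billing is calculated via IBM’s SCRT reporting tooling, not code like this), shows why a single brief spike can set the whole month’s bill:

```python
# Illustrative sketch: find the highest rolling four-hour average (R4HA)
# in a series of hourly MSU readings. The figures below are invented.

def r4ha_peak(hourly_msus):
    """Return the highest rolling 4-hour average in a list of hourly MSU readings."""
    window = 4
    if len(hourly_msus) < window:
        raise ValueError("need at least four hourly readings")
    peak = 0.0
    for i in range(len(hourly_msus) - window + 1):
        avg = sum(hourly_msus[i:i + window]) / window
        peak = max(peak, avg)
    return peak

# A hypothetical day: modest overnight usage, a sharp mid-afternoon spike.
msus = [120, 110, 100, 105, 115, 130, 180, 240,
        310, 420, 480, 510, 530, 560, 540, 480,
        390, 300, 220, 180, 150, 140, 130, 125]

print(r4ha_peak(msus))  # → 535.0: the 4-hour window covering the afternoon spike
```

Because a single four-hour window determines the charge, IT staff have a strong incentive to throttle or reschedule work around peaks rather than let the system run freely.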

Neither of these models was fully satisfactory for mainframe customers because they did not address existing production workloads, which remained R4HA-based. Not surprisingly, clients’ IT staff focused on rigorously managing mainframe usage to avoid peak R4HA consumption and charges, reducing the systems’ availability for critical work and projects. Plus, the monthly billing model made budgeting unpredictable at best—a critical issue for IT departments under constant pressure to rein in or reduce costs.

Finally, these models also inhibited customers from fully leveraging their mainframes for modern applications and related commercial opportunities, like mobile payment processing.

Tailored Fit Pricing – Dressed to thrill Z customers

The last thing any vendor wants to do is to become an impediment to customers pursuing new business. So how is IBM addressing these issues? With Z mainframe software pricing that one company executive described as “Two sizes fit most.” They are:

Enterprise Capacity Model – Designed for clients that require operational simplicity and complete cost predictability but who also expect substantial workload growth. Pricing is based on past usage and growth patterns and is charged at a consistent monthly rate. Essentially, this model is discounted full capacity.

Enterprise Consumption Model – Designed for clients with variable or less predictable workloads. Rather than paying for peak capacity, clients are billed in a cloud-like, usage-based arrangement that reflects the compute resources they actually consume.

Moreover, IBM is offering considerable flexibility in how its new pricing models are designed and configured. For example, a Consumption Model deployment can support separate containers supporting specific use cases. So, a customer might utilize one container/pricing model for legacy applications leveraging IBM middleware solutions (DB2, IMS, WS MQ) and another for modernized workloads.
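
To make the difference between peak-based and consumption-based charging concrete, here is a rough sketch comparing the two for the same hypothetical month. The rates, usage figures and function names are assumptions invented for illustration; actual Tailored Fit Pricing terms are negotiated per client:

```python
# Hypothetical comparison of peak-based vs consumption-based charging for
# one month of usage. All rates and figures below are invented.

PEAK_RATE = 40.0       # $ per MSU of monthly R4HA peak (assumed)
CONSUMED_RATE = 0.08   # $ per MSU-hour actually consumed (assumed)

def peak_based_bill(r4ha_peak_msus):
    """Bill driven entirely by the month's highest rolling 4-hour average."""
    return r4ha_peak_msus * PEAK_RATE

def consumption_based_bill(total_msu_hours):
    """Bill driven by the resources actually consumed across the month."""
    return total_msu_hours * CONSUMED_RATE

# A workload that idles most of the month but spikes briefly:
peak = 535.0             # monthly R4HA peak, in MSUs
total_usage = 110_000.0  # MSU-hours consumed across the whole month

print(peak_based_bill(peak))                # charged on the spike: 21400.0
print(consumption_based_bill(total_usage))  # charged on actual use: 8800.0
```

The point of the sketch is behavioral, not financial: under consumption-based terms, a brief spike no longer dominates the bill, so there is far less reason to police workloads around the peak window.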

IBM noted that it has deployed Tailored Fit Pricing solutions during the past year for 25 Z mainframe customers (16 Consumption Models and 9 Capacity Models). Though customers chose the new pricing offerings for a variety of technical, budgeting, financial and market reasons, most saw benefits in simplified operations, greater flexibility in dealing with usage spikes and easier provisioning of resources for dev ops. Moreover, some reported notably improved peak performance and enhanced planning capabilities.

Final analysis

With its new Tailored Fit Pricing models, IBM is clearly onto something good. A notable point about both models is that discounted growth pricing is offered on all workloads – whether they be 40-year-old Assembler programs or 4-day-old JavaScript apps. This contrasts with previous models, which primarily rewarded only brand-new applications with growth pricing. By thinking outside the Big Iron box, the company has substantially eased its largest clients’ biggest mainframe-related headaches. That’s important from a customer satisfaction perspective, but it also impacts the competitive and business value of IBM’s flagship enterprise computing platform.

By making access to Z mainframes more flexible and “cloud-like,” IBM is making it less likely that customers will consider shifting Z workloads to other systems and environments. As cloud providers become increasingly able to support mission critical applications, that’s a big deal. The new pricing models should also help companies derive greater value from their IT organizations. That’s great financially but it is also a critical point considering the shortages in and demand for workers with enterprise IT skills.

Overall, the Enterprise Capacity Model and Enterprise Consumption Model offer more proof of IBM’s ability and willingness to adapt the Z mainframe to support the changing requirements of its clients. Sometimes that results in substantial new and improved technologies, such as those we’ve seen time and again in emerging generations of IBM Z. Other times, like now, it simply involves IBM listening to its customers and then determining how the mainframe can be evolved to meet their needs.

There was a time when most personal computers (PCs) for business had a specifically utilitarian look and feel: clunky, durable, built for the long haul—not for speed. It was more about practicality than a dedication to any specific design aesthetic. While consumers tended to replace their PCs every three to four years, it wasn’t unusual to see commercial organizations squeezing four to five or even six years out of workplace PCs.

Things began to change in the mid-2000s with the advent of Bring Your Own Device (BYOD) trends among younger workers who preferred highly mobile solutions to tethered office PCs and phones. Their employers and IT vendors followed close behind with generations of ever more powerful, sleek client devices, including notebooks, tablets and smart phones. But it would be a mistake to think that client devices alone define workplace computing. Equally or even more important are the related deployment and PC lifecycle management (PCLM) services vendors offer commercial customers.

Dell Latitude – a longitudinal view

With over two decades of successful mobile PCs under its belt, Dell has a deep understanding of what businesses need from commercial notebooks. This longitudinal viewpoint has allowed the company to both respond to and to anticipate customers’ requirements as it develops new Latitude notebooks, including those announced last week. So, what are customers looking for in mobile PCs?

Ever lighter notebooks that still deliver maximum performance in a range of form factors and price points

Longer battery life, faster charging times and wireless features that enable workers to stay fully productive wherever they happen to be

Integrated security features and services that keep notebooks and business data secure from increasing numbers and kinds of threats

Durable and functional, yet stylish designs

In essence, organizations want mobile PCs to deliver maximum business value, enabling employees to be fully productive. But they also want the latest, greatest features in terms of performance, security and good looks.

Dell’s response: Not a problem.

Dell’s 10th gen Latitude features and models

The new Dell Latitudes offer numerous new features and technology options. All support the latest 8th gen Intel Core processors, as well as optional Intel Core vPro chips and Intel Wi-Fi 6 (Gig+) solutions. The new systems also feature Dell’s ExpressCharge (providing up to 80% battery charge in one hour), ExpressCharge Boost (up to 35% charge in 20 minutes) and ExpressConnect (which intelligently chooses and connects to the strongest available Wi-Fi network) technologies.

Security features include optional (for some models) fingerprint readers built into the power buttons and Windows Hello-capable IR cameras for biometric authentication. Many systems can be equipped with Dell SafeScreen (which allows more privacy in public settings), new camera privacy shutters, FIPS 201 contact Smart Card readers or contactless Smart Card readers with SafeID to protect user credentials. New Latitudes also support Dell’s new SafeBIOS utility, which verifies systems’ firmware integrity via a cloud-based service.

Finally, the 10th gen Latitude portfolio is optimized for Dell’s new Unified Workspace service. More on that in a bit.

The new systems include:

Latitude 7000 series – These include the Latitude 7400 2-in-1 Dell announced at CES in January, new 13- and 14-inch Latitude notebooks and the Latitude 7200, a 12-inch detachable 2-in-1. All can be equipped with up to 32GB of memory. Select configurations deliver up to 20 hours of battery run time—up to 25% more than previous systems. The 7000 series also offers the industry’s first narrow-border 4x4 CAT16 cellular antenna for gigabit LTE connectivity.

Latitude 5000 series – According to Dell, its new 5000 series offers the smallest mainstream business notebooks in their class. Systems are available in 13-, 14- and 15-inch configurations and offer up to 20 hours of battery run time. Available displays include narrow border HD, Full HD and touch screen configurations. Dell is also introducing the new Latitude 5300 2-in-1, which features a 360° hinge and a Corning Gorilla Glass touch screen with anti-glare coating. The 5300 can be configured with up to 32GB of memory and up to 1TB of storage.

The Latitude 3000 series – These are entry-level notebooks with enterprise capabilities. The 3000 series is available in updated 14- and 15-inch models, along with a new 13-inch solution that Dell calls “the world’s smallest and lightest essential business notebook.”

Three new commercial modular docking stations that offer upgradeable connectivity options, including Thunderbolt 3, dual USB-C or single USB-C. The new solutions support Dell’s new ExpressCharge and ExpressCharge Boost technologies. The upgradeable power and connectivity options are designed to enable customers to adapt to and support the changing needs of their workforce across several generations of Latitude systems.

Dell Technologies Unified Workspace

Dell’s Unified Workspace offering integrates solutions across Dell’s device and service portfolio, as well as solutions provided by VMware, Secureworks and CrowdStrike, to give workers highly personalized, secure endpoint devices and services while also simplifying device lifecycle processes. In other words, Dell’s new offering is designed to take traditional PCLM processes to an entirely new level.

Unified Workspace qualifies as a significant expansion and enhancement of the Provisioning for VMware Workspace ONE services that Dell announced last fall. That solution enabled customers to have Dell notebooks, desktop PCs and workstations preconfigured at the factory with specific applications and settings, so that systems are ready to be put to work as soon as they are unboxed, with minimal effort required by a company’s IT staff.

How is Unified Workspace different? It starts in the planning stage with Dell analytics providing insights on how individual employees are using PCs to help customers choose the right systems and applications. After PCs are deployed, an array of new Dell solutions can be implemented to help secure them and the customers’ data resources.

These include Dell SafeBIOS, an off-host BIOS verification utility integrated with VMware Workspace ONE, Secureworks and CrowdStrike (and also available as a standalone download). The solution stores untampered BIOS information away from devices so that security operations teams can compare settings and quickly detect and defend against BIOS attacks.
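
The off-host verification concept is simple to sketch. The toy example below compares a device’s current firmware measurement against a known-good baseline stored away from the device; the function names and hash-based approach are illustrative assumptions, not Dell’s actual SafeBIOS implementation:

```python
# Toy sketch of off-host firmware verification: the trusted measurement
# lives away from the device, so a compromised machine can't forge it.

import hashlib

def measure_firmware(firmware_image: bytes) -> str:
    """Hash a firmware image to produce a comparable measurement."""
    return hashlib.sha256(firmware_image).hexdigest()

# The off-host service stores the untampered measurement, captured at provisioning.
known_good = measure_firmware(b"factory BIOS image v1.2.3")

def verify(current_image: bytes) -> bool:
    """Compare a device's current firmware against the off-host baseline."""
    return measure_firmware(current_image) == known_good

print(verify(b"factory BIOS image v1.2.3"))            # True: firmware untouched
print(verify(b"factory BIOS image v1.2.3" + b"\x90"))  # False: tampering detected
```

The key design point the sketch captures is that the comparison happens against data the attacker cannot reach: even if malware rewrites the BIOS, it cannot also rewrite the baseline held off-host.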

Finally, Dell Unified Workspace deployment, management, security and support solutions can be extended across and integrated with business environments regardless of the devices, operating systems and cloud providers that customers prefer. Just as importantly, customers can freely choose which Unified Workspace elements and services they prefer, as well as when and how to implement them.

Final analysis

So, what are we to make of all this? There are several points worth considering. First, Dell’s Latitude announcements demonstrate how fully its commercial client organization continues to develop and drive innovations that matter deeply to the company’s business customers. As workers and workplaces evolve, vendors need to provide PC offerings that help their commercial customers adapt to and profit from those changes.

Dell’s new solutions clearly fit into this mold with notebooks that are considerably more powerful and more power-efficient than the previous nine generations of Latitude systems. With three levels of offerings—the 7000, 5000 and 3000 series—the company has produced a unified portfolio of mobile PC endpoints and docking solutions that can address, support and fulfill virtually any business process or challenge.

An associated but little discussed issue is the degree to which Dell’s consumer PC division has become an engine of innovation that also drives the company’s commercial PCs. How so?

The aesthetic and materials innovations that have been central to the XPS line’s notable success have steadily found their way into Dell’s Latitude and Inspiron solutions, resulting in client portfolios that reflect broader trends in business and consumer PCs, and resonate with the people who use them at work and at home. I hope to write more on this topic at a future date.

Finally, Dell’s new Unified Workspace shows how the company is driving workplace innovations whose impact extends well beyond individual endpoints. By vastly simplifying PC lifecycle management, personalizing worker endpoints and ensuring that PCs and the data they contain are secured against external attack, the company is helping its business customers efficiently address and effectively manage their top-of-mind issues and concerns.

Moreover, the ability of Dell’s Unified Workspace to agnostically support heterogeneous devices and cloud platforms demonstrates the depth of Dell’s understanding of modern work environments and its dedication to putting its customers and their workplaces first. That customer-focused approach is central to Dell’s new Latitude and Unified Workspace solutions and was a core message that reverberated throughout Dell Technologies World 2019.

At Dell Technologies World 2019 last week, Virtustream—Dell Technologies’ enterprise-class cloud business—announced a pair of new initiatives worthy of consideration. The first was an expanded collaboration between Virtustream and Equinix, a provider of private networking solutions for directly connecting enterprise customers with cloud computing platforms.

The second was a major update of the Virtustream Healthcare Cloud designed to greatly simplify the planning, deployment and migration of electronic healthcare records (EHR) systems hosted in the cloud. However, both announcements reflect more substantial issues: what constitutes “enterprise-class” cloud computing and how does it differ from commonplace cloud services? Moreover, why or when do organizations need these services?

Let’s consider how Virtustream’s announcements reflect on these larger points.

The case for enterprise-class cloud

So exactly what does “enterprise-class” mean in terms of cloud? In large part, it concerns the ability of a platform or vendor to support mission-critical workloads and information. That is, the applications and data without which large enterprises would be dead in the water.

Numerous internal issues can negatively impact mission-critical performance, ranging from simple operator errors or inexperience to system errors and network glitches to faulty or postponed patches. Then toss in external factors, including weather events, seasonal or unusual network traffic, natural and manmade disasters, criminal and governmental cyber-attacks and Murphy’s Law events that are impossible to anticipate.

Plus, let’s not forget that applications and data in highly regulated industries, like healthcare and finance, and in some global markets must follow compliance rules that load further complexity onto already-strained mission-critical processes.

Enterprise system vendors know how to build solutions with the reliability, availability and serviceability (RAS) and security features necessary to assure the five 9s (99.999%) or more of system availability that enterprises typically demand in quality of service (QoS) agreements. If they didn’t, they would have gone out of business long ago.
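
The arithmetic behind those nines is worth spelling out, since each additional 9 cuts the allowable downtime by a factor of ten. A quick sketch (the helper name is an invention; the math is simple percentages over a 365-day year):

```python
# How much downtime per year each availability level permits.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year

def allowed_downtime_minutes(availability_pct):
    """Minutes of downtime per year permitted at a given availability percentage."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for label, pct in [("three 9s", 99.9), ("four 9s", 99.99), ("five 9s", 99.999)]:
    print(f"{label} ({pct}%): {allowed_downtime_minutes(pct):.1f} minutes/year")
```

At five 9s, the budget works out to roughly 5.3 minutes of downtime per year, which is why mission-critical QoS commitments at that level demand serious engineering rather than marketing claims.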

Cloud platforms and providers are in a somewhat similar, somewhat different position. Though most work with specific work groups or organizations within enterprises, and some claim to be able to support enterprise-class functionality, their customers aren’t exactly rushing to deploy mission-critical workloads on those public cloud platforms. In fact, it’s arguable that the evolution of hybrid and multi-cloud is at least partly due to businesses’ distrust of public clouds and their preference for limiting the data and applications they employ cloud platforms to support.

That said, a handful of cloud vendors do develop and deliver enterprise-class, mission-critical services and solutions and have the history and experience to prove it. Virtustream is one of those select vendors and numerous enterprises have entrusted the company with their mission-critical applications.

Those include NIBCO which wanted to modernize its IT infrastructure and redirect resources to projects that added value to the business. That was a challenging proposition given the time and resources allocated to data center operations and SAP support. After collaborating with SAP experts at Virtustream, NIBCO chose Virtustream Enterprise Cloud as its solution. By turning over responsibility for day-to-day SAP operations to Virtustream, NIBCO was able to achieve greater flexibility, agility and speed of execution.

Virtustream and Equinix

So, what’s new about Virtustream and Equinix? While the companies have worked together in the past, their new expanded collaboration enables Virtustream nodes in the U.S. and Europe to directly connect to the Equinix Cloud Exchange (ECX) Fabric, thus increasing functionality, automation, speed-to-deployment and other service options for Virtustream customers. The new enhancements cover all workloads, including mission-critical applications supporting sensitive data, such as financial details, and personal information, like healthcare records.

Healthcare Cloud

Virtustream’s major update to its enterprise-class Healthcare Cloud features advanced architecture components that improve the platform’s flexibility and scale. Customers can employ improved automation tools to simplify the deployment and migration of EHR systems hosted in the Virtustream Healthcare Cloud. Virtustream also now supports the use of VMware Horizon for secure and flexible application access. Overall, the update should improve healthcare customers’ business agility and will also enable them to consider and adopt new solutions and services from Dell Technologies.

Final analysis

Like most technology products and processes, enterprise-class computing didn’t arrive in the market fully-formed and functional. Instead, what we now consider to be enterprise-class or mission-critical computing evolved in stages as large-scale organizations adopted computational tools and adapted them to their discrete business requirements. As a result, enterprise-class isn’t so much a product or service as it is a statement of a vendor’s capabilities—assuring large customers that the vendor has their back and can support their needs however sizable and complex, whatever their requirements and wherever they do business.

Similarly, since enterprises and their businesses are always evolving, there is little rest for the vendors that supply such services, including cloud platform providers. In order to thrive themselves, vendors need to ensure that their customers have access to the technologies, tools and services that best suit their requirements. That’s the central message behind Virtustream’s expanded collaboration with Equinix and the major update to its Healthcare Cloud.

Both will enable Virtustream’s cloud customers to achieve measurably better results than they did before. Both will enable enterprises to become measurably better businesses. Those are the central tenets and values of enterprise-class solutions, including those provided by Virtustream.

Corporate acquisitions nearly always float on rising tides of optimism. The soon-to-be-merged businesses and their executives believe their lives, futures and potential will be brighter together than apart, and they spend significant resources to that end. However, there is still considerable work ahead after a deal is done.

Products, divisions, leadership positions great and small all need to be considered, rejiggered and sometimes replaced. Old faces depart, new hires arrive. At the end of what is often a years-long process, the whole is, hopefully, greater than the sum of the separate parts.

But how often is that true? In the case of large-scale tech industry deals, middling results or outright failure are all too common. Just look at HP’s Compaq, EDS and Autonomy acquisitions. But there have been notable successes, including Dell’s 2016 purchase of EMC for $67B, the largest such deal in tech industry history.

The fully mature results of Dell’s effort were on display in Las Vegas this week at Dell Technologies World 2019. Let’s consider what the company has accomplished, where it is going and what that means for its customers, partners and competitors.

Artificial Intelligence (AI) is a cause célèbre inside and outside the IT industry, inspiring often heated debate. However, a point that many—especially AI focused vendors—make is that cloud-based computing offers the best model for supporting AI frameworks, like Caffe, PyTorch and TensorFlow, and related machine learning processes.

But is that actually the case?

Gyrfalcon Technology (GTI) would argue that delivering robust AI at the far edges of networks and in individual devices is both workable and desirable for many applications and workloads. In fact, the company offers a host of AI inference accelerator chips for those scenarios, as well as cloud-based server solutions for AI applications.

Now GTI is licensing its proprietary circuitry and intellectual property (IP) for use in System on Chip (SoC) designs. As a result, silicon vendors will be able to enhance and customize their own offerings with GTI’s innovations.

Let’s take a closer look at what Gyrfalcon Technology is up to.

AI in the cloud

Why do most AI solutions focus on cloud-based approaches and architectures? You could call it an extreme case of “When all you have is a bulldozer, everything looks like a dirt pile” syndrome. The fact is that until fairly recently, the cost of AI far outweighed any practical benefits. That changed with new innovations, including cost-effective technologies like GPUs and FPGAs.

Some of the most intriguing and ambitious AI projects and commercial offerings, like human language processing, were undertaken by cloud vendors and infrastructure owners, including Amazon, Google and IBM, supported on the silicon side by NVIDIA, Intel and other chipmakers. They had the compute power and brain power to take on large-scale efforts where data accumulated by edge devices, like smartphone conversations and commands, is relayed to cloud data centers.

There, the data is used for training and enabling AI-based services, such as language translation and transcription, and products like smart home speakers.

Are there any problems with this approach? Absolutely, with data privacy and security leading the charge. AI vendors uniformly claim that they are sensitive to their customers’ concerns about privacy and have tools and mechanisms in place to ensure that data is anonymized and safe. But Facebook, Google and others have been regularly dinged for mishandling or cavalierly maintaining customer data.

Cloud-based AI can also suffer latency issues, especially if network traffic is snarled. That might not be a big deal when you’re asking Alexa to recommend a good restaurant but it’s more problematic if it involves AI-enabled self-driving cars. There’s also the matter of using energy wisely. With the percentage of electricity consumed by data centers continuing to rise globally, building more IT facilities to support occasionally frivolous services seems like a literal waste.

AI at the edge

Gyrfalcon Technology would argue that while cloud-based AI has an important role, it isn’t needed for every application or use case. Instead of a bulldozer, some jobs require a shovel or even a garden trowel. To that end, GTI offers a range of AI inference accelerator chips that support AI Processing in Memory (APiM) via ultra-small, energy-efficient cores running GTI’s Matrix Processing Engine (MPE).

As a result, GTI’s solutions, like its Lightspeeur 2801 AI Accelerator, can deliver 2.8 TOPS while consuming just 300mW of power. That makes them a great choice for edge-of-network devices, including security cameras and smart home locks. After set-up, the chips’ adaptive training functions allow devices to learn from their surroundings. For example, a smart lock might use arrival and departure patterns to identify the residents of a home.

Enabling AI at the edge means that devices will be able to perform many functions autonomously or, if cloud connectivity is required, will be capable of vastly reducing the amount of data that needs to be transmitted. That lowers the costs, complexity and network traffic of AI implementations.
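The bandwidth savings can be dramatic. The numbers below are invented for illustration (not GTI figures): compare a security camera that streams continuous video to the cloud with one that runs inference locally and only transmits detection events.

```python
# Illustrative arithmetic only: the traffic figures below are hypothetical,
# chosen to show why on-device inference slashes network load.
stream_mbps = 4.0                # hypothetical camera streaming H.264 video
events_per_hour = 20             # hypothetical detections sent instead
event_kb = 2.0                   # small JSON payload per detection event

hourly_stream_mb = stream_mbps * 3600 / 8            # continuous upload, MB per hour
hourly_events_mb = events_per_hour * event_kb / 1024  # event-only upload, MB per hour
reduction_factor = hourly_stream_mb / hourly_events_mb

print(f"Streaming: {hourly_stream_mb:.0f} MB/h; events only: {hourly_events_mb:.3f} MB/h")
print(f"Edge inference cuts transmitted data by roughly {reduction_factor:,.0f}x")
```

Even if the real-world ratio is far smaller, the direction of the effect explains the cost and traffic claims above.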

For cloud-based applications, GTI offers the Lightspeeur 2803 AI Accelerator, which is used in concert with GTI’s GAINBOARD 2803 PCIe card. A single GAINBOARD card delivers up to 270 TOPS at 28 watts, or 9.6 TOPS/watt, about 3X greater efficiency than competing solutions offer.
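The efficiency claims are simple arithmetic to verify from the published figures:

```python
def tops_per_watt(tops: float, watts: float) -> float:
    """Inference efficiency: tera-operations per second, per watt of power."""
    return tops / watts

# Figures as published by GTI for its two accelerator products
print(f"GAINBOARD 2803:   {tops_per_watt(270, 28):.1f} TOPS/W")
print(f"Lightspeeur 2801: {tops_per_watt(2.8, 0.3):.1f} TOPS/W")
```

Notably, the 300mW edge chip and the 28-watt data center card land in roughly the same efficiency band, which is the point of scaling the same core design across form factors.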

Final analysis

The IT industry rightfully focuses on the value that innovative technologies and products provide to both consumers and businesses. Such solutions regularly come from massive Tier 1 vendors with decades of experience and billions of dollars in annual R&D funding. But oftentimes, innovative products and approaches are the brainchildren of smaller vendors like Gyrfalcon Technology that are unawed by conventional wisdom.

With its AI Processing in Memory (APiM) and Matrix Processing Engine (MPE) technologies, GTI has enabled clients, including LG, Fujitsu and Samsung, to reimagine how artificial intelligence can be incorporated into new consumer and business offerings. By licensing its Lightspeeur 2801 and 2803 AI Accelerator circuitry and intellectual property (IP) for use in System on Chip (SoC) designs, GTI is offering existing and future clients remarkable autonomy in determining how AI can best serve their own organizations and their end customers.

Continual product evolution is one of the tech industry’s best and longest-running selling points. It’s the foundational truth underlying technical chestnuts, like Moore’s Law, and provides the subtext for innumerable marketing and promotional campaigns. But an often unaddressed yet valuable point to consider is the top-down way in which this evolution usually proceeds.

Developing new products costs money – lots, in fact, when it comes to business solutions. So not surprisingly, new products are initially designed to address the needs of large enterprises and other organizations that can afford to foot the bill and are willing to pay a premium for the new features, capabilities and benefits those solutions provide.

But eventually – often, fairly quickly – what were once enterprise-specific technologies find their way into more affordable, yet still innovative products designed for smaller companies and the channel/business partners that serve them. These points are clear in the new and updated additions IBM recently made to its Storwize V5000 family of solutions.

Issues of trust seldom arise in discussions about modern computing systems. It’s not that hardware and software are perfect. In fact, publications and online forums contain tens of thousands of posts hashing out the relative merits of various PCs, workstations and servers. But those products have been so commonplace for so long that their essential “rightness” as well as the results they provide are hardly ever questioned.

However, that wasn’t always the case, and a similar dynamic applies to almost all emerging technical and scientific breakthroughs, including commercial artificial intelligence (AI) solutions designed for businesses and other organizations. Considering the inherent complexity of machine learning, neural networks and other AI-related processes, customers’ confusion about AI isn’t all that surprising. But what can be done to assuage their misgivings and bring AI into the mainstream?

Vendors, including IBM, are tackling the problem with solutions designed to make AI processes and results more explainable, understandable and trustworthy. That should satisfy clients’ doubts and accelerate the adoption of commercial offerings, but explainable AI also yields other significant benefits. Let’s consider why explainable AI is so important and how IBM’s innovations are impacting its customers.

The problem of “black box” AI

A lack of clarity or understanding is usually problematic. When it comes to inexplicable artificial intelligence, three potential issues can arise:

Most importantly, a lack of transparency leaves users uncertain about the validity and accuracy of results. That is, the essential value of AI projects and processes is undermined.

In addition, if AI projects are inexplicable, it’s possible that their results might be contaminated by bias or inaccuracies. Call this a problem that you can’t really be sure you have.

Finally, when AI processes are not explainable, troubleshooting anomalous results is difficult or even impossible. That is, a lack of transparency leaves organizations unable to fix what’s broken.

How are AI-focused vendors addressing these issues? Unfortunately, often with fixes that worsen the situation, including “black box” solutions. These purport to deliver all the benefits of AI but fail to provide adequate transparency into how they work, how customers can determine the accuracy of results or how problematic issues can be addressed.

These solutions also encourage perceptions of AI as a mystery whose capabilities can’t be understood by mere mortals. In other words, whatever modest benefits “black box” AI may offer, leaving customers in the dark is detrimental to their work and goals.

The benefits of explainable AI

Is there a better way to proceed? Absolutely. How can organizations explain AI successfully? With a holistic approach that addresses several stages of the AI lifecycle:

Central to making AI projects more explainable is making AI models explainable instead of the black boxes many currently are.

Organizations must clearly and transparently articulate where and how AI is being used in business processes and for what ends.

They must also allow for analysis of AI outcomes and provide hooks to alter and override those outcomes as necessary.

To these ends, expert users and managers should employ technologies and solutions that vendors have designed to enhance AI transparency. These methodologies can speed the understanding of AI, a critical issue for IT, marketing, sales and customer care organizations, especially those in highly regulated industries, such as banking and insurance.

This process can occur organically, as people experience AI and come to understand how it affects the business and them personally, thus impacting the very culture of an organization. Or it can be pursued proactively with the best tools and solutions currently available. Whichever way a company proceeds, people need to keep in mind the vast potential of AI. Why so? Because a time will come when AI is as essential to an organization’s success as the business technologies that are commonplace today.

The business benefits of explainable AI

Why is explainable artificial intelligence such an important issue and undertaking? It goes to the practical roots of how organizations do business. If they are to adopt and adapt to AI processes, they need to know that results are accurate. Otherwise, how can they assure customers and partners that AI-impacted decisions are valid and dependable? Consider two examples:

In financial services, accurate results are obviously critical for maximizing business outcomes and customer interactions. However, like other businesses in highly regulated industries, banks and other financial organizations must be able to prove that AI-impacted processes comply with government and industry rules or risk significant sanctions and penalties. That’s bad enough, but inexplicable AI might also damage client relationships. If customers seeking loans, credit cards or other services are denied by an AI-related system, company officials must be able to explain why that determination was made and how the client might address or correct problematic issues.

Global supply chain management is another promising area for AI because the complexity, volume and diversity of supply chain data make it extremely difficult for people to effectively track and adjust for real-time changes in demand. AI can enhance Forecast Value Added (FVA) metrics—learning from past successful and unsuccessful forecasts to help planners make better adjustments. But unless supply chain teams can easily monitor the accuracy of AI models, they can’t be certain that systems are really delivering the benefits they promise.
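To make the FVA idea concrete, here is a minimal sketch assuming a common MAPE-based definition: a forecast "adds value" if its error is lower than a naive baseline's. The function names and sample numbers are invented for illustration.

```python
def mape(actuals, forecasts):
    """Mean absolute percentage error, in percent."""
    return 100 * sum(abs(a - f) / a for a, f in zip(actuals, forecasts)) / len(actuals)

def forecast_value_added(actuals, candidate, naive):
    """FVA: how much a candidate forecast improves on a naive baseline,
    in points of MAPE. Positive values mean the candidate adds value."""
    return mape(actuals, naive) - mape(actuals, candidate)

actuals = [100, 120, 110, 130]
naive   = [100, 100, 120, 110]   # e.g., simply repeat last period's actual
model   = [105, 118, 112, 128]   # AI-assisted forecast
print(f"FVA: {forecast_value_added(actuals, model, naive):.1f} points of MAPE")
```

Monitoring exactly this kind of delta over time is how supply chain teams could verify that an AI model is genuinely outperforming the planners' simpler alternatives.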

In light of these and other points, it’s difficult to see why vendors would develop or customers would consider inexplicable AI solutions.

What IBM is doing to open “black box” AI

IBM is working in numerous areas to develop and deliver explainable AI and advanced analytics solutions. The impetus for the company’s efforts was underscored in a recent blog by Ritika Gunnar, VP of IBM’s Watson Data and AI organization. “As humans, we’re used to the idea that decisions are based on a chain of evidence and logical reasoning anyone can follow. But if an AI system makes recommendations based on different or unknown criteria, it’s much more difficult to trust and explain the outcomes.”

Central to the company’s efforts is the Watson OpenScale platform that IBM launched in 2018. Designed to “break open the ‘black box’ at the heart” of AI models, Watson OpenScale simplifies AI processes, including detailing how recommendations are being made and automatically detecting and mitigating bias to ensure that fair, trusted outcomes are produced.

IBM is leveraging both existing open source technologies and proprietary algorithms developed at IBM Research to explore, enhance and explicate AI decision-making.

LIME (Locally Interpretable Model-Agnostic Explanations) is a widely used open source algorithm designed to explain predictions made by AI systems by comparing an explanation to an easily interpretable model.
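LIME's core intuition can be sketched with a toy local surrogate: perturb an input, weight the perturbations by proximity, and fit a simple weighted linear model whose coefficients act as per-feature explanations. The code below is an illustrative simplification of that idea, not the actual lime library API; the "black box" model and all numbers are invented.

```python
import numpy as np

def lime_like_weights(predict_fn, instance, n_samples=500, scale=0.1, seed=0):
    """Toy local surrogate in the spirit of LIME: perturb an instance,
    weight perturbations by proximity, and fit a weighted linear model
    whose coefficients serve as per-feature explanations."""
    rng = np.random.default_rng(seed)
    X = instance + rng.normal(0.0, scale, size=(n_samples, instance.size))
    y = predict_fn(X)
    w = np.exp(-((X - instance) ** 2).sum(axis=1) / (2 * scale ** 2))  # proximity kernel
    A = np.hstack([np.ones((n_samples, 1)), X])                        # add an intercept column
    coef = np.linalg.solve(A.T @ (A * w[:, None]), A.T @ (w * y))      # weighted least squares
    return coef[1:]                                                    # drop the intercept

# A "black box" that secretly depends almost entirely on feature 0
black_box = lambda X: 3.0 * X[:, 0] + 0.01 * X[:, 1]
weights = lime_like_weights(black_box, np.array([1.0, 1.0]))
print(weights)  # the surrogate exposes feature 0 as dominant
```

The recovered weights reveal which features drive the prediction near that specific instance, which is exactly the kind of transparency the "black box" discussion above calls for.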

Developed by IBM Research, MACEM (Model Agnostic Contrastive Explanations Method) goes well beyond the capabilities of LIME by identifying both pertinent features that are present in a piece of data and those that are absent, enabling the construction of “contrastive explanations”.

One scenario for contrastive explanations is in banking, where it could be used to analyze loan application data. The system would alert the bank to issues, including poor credit ratings, but it could also spot and highlight missing documents, like an incomplete credit report. The bank could then notify the customer about the reasons for its decision and provide constructive advice.
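In rule-based miniature, the present/absent distinction looks like the sketch below. This is a toy illustration of the banking scenario only; the field names are invented and it does not reflect how MACEM actually works internally.

```python
# Toy sketch of a contrastive check on a loan application.
# Pertinent positives = required items that are present;
# pertinent negatives = required items that are absent.
REQUIRED = {"credit_report", "income_proof", "id_document"}

def contrastive_explanation(application):
    """Split a set of submitted items into present and missing
    requirements, the raw material for a contrastive explanation."""
    present = sorted(application & REQUIRED)
    absent = sorted(REQUIRED - application)
    return present, absent

present, absent = contrastive_explanation({"income_proof", "id_document"})
print(f"Present: {present}; missing: {absent}")
```

The value of the contrastive framing is the "missing" list: it tells the customer not just why the decision went against them, but what to supply to change it.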

In essence, solutions that deliver more accurate, transparent and trustworthy AI results, such as IBM Watson OpenScale, can help businesses make better decisions and enhance their services for and relationships with customers.

Final analysis

People are often concerned about new technologies, especially those that are highly complex or difficult to understand. Overcoming those doubts is central to technologies becoming widely trusted and commercially successful. In fact, without fostering understanding of and insights into emerging technologies, it’s unlikely that new solutions will find a place among the people and organizations they might otherwise benefit.

By making technologies like artificial intelligence and AI-based solutions and services clearly explainable, vendors can reduce the time required for new offerings to enter the mainstream. That’s why explainable AI offerings, like IBM Watson OpenScale, are so important.

By breaking open the “black box at the heart of AI” to make processes and results fully explainable, IBM is aiding its customers and partners and furthering its own market strategies. More importantly, IBM’s explainable AI efforts should help establish the essential “rightness” of these solutions as entirely valid and wholly valuable business technologies.

Overall, IBM’s work in explainable AI should improve the mainstream understanding, acceptance and adoption of artificial intelligence among individuals and organizations worldwide.

The tech industry so reflexively worships shiny new startups that folks often forget or even disparage larger, well-established vendors. That’s unfortunate for any number of reasons, but high on the list is that such attitudes miss a critically salient point: self-reinvention, not stasis, is the key to long-term survival in technology.

PR-minded venture firms continually trumpet the “unique” innovations of whatever IPO-bound company stands to fill their pockets. In contrast, established firms develop and deliver more substantial innovations month after month, year after year, decade after decade. At least the successful ones do.

That point was in clear view during Intel’s recent launch of its 2nd generation Xeon Scalable processors (aka Cascade Lake) and other new “data-centric” solutions. Let’s consider these Intel offerings, why they are important and how they will help keep Intel and its OEM customers riding high in data center markets.

The view from Cascade Lake

So, what exactly did Intel reveal during its launch?

General availability of its new 2nd-Generation Xeon Scalable processors (aka Cascade Lake), including over 50 workload-optimized solutions and dozens of custom processors

The new offerings include the Xeon Platinum 9200 processor, which sports 56 cores and 12 memory channels. According to Intel, the Xeon Platinum 9200 is designed to deliver the socket-level performance and memory bandwidth required for high performance computing (HPC), artificial intelligence (AI) and high-density data center infrastructures. Both IBM Cloud and Google Cloud announced plans to deliver services based on Xeon Platinum 9200 silicon.

Among the custom silicon offerings are network-optimized processors built in collaboration with communications service providers (SPs). These solutions are designed to support more subscriber capacity and reduce bottlenecks in network function virtualized (NFV) infrastructures, including 5G-ready networks.

The Xeon D-1600 processor, a system on chip (SoC) designed for dense environments, including edge computing, security and storage applications.

New features built into the 2nd-Gen Xeon Scalable processors include integrated Deep Learning Boost (Intel DL Boost) for AI deep learning inferencing acceleration, and hardware-enhanced security features, including protections against side channel attacks, such as Spectre and Foreshadow

Next generation 10nm Agilex FPGAs that will support application-specific optimization and customization for edge computing, networking (5G/NFV) and data center applications

Pricing for the new solutions was not revealed during the launch. Product availability details can be found at intel.com.

Why it matters

Intel’s launch of its 2nd-Generation Xeon Scalable processors and other data center solutions comes at an odd time for the company. Conventional wisdom often depicts Intel as an overly complacent behemoth harassed and bloodied by swifter, more agile (and often far smaller) foes. Critics claim the company is reacting too slowly to marketplace shifts and is being surpassed by more innovative technologies and vendors.

In some cases, Intel didn’t do itself any favors. The company unexpectedly dismissed its CEO, Brian Krzanich, then took months to find a new chief executive within its own ranks (Robert Swan, who joined Intel in 2016 as its CFO and led the company on an interim basis after Krzanich departed).

But in other cases, the company was at the top of its game. For example, while discovery of the Spectre, Foreshadow and other side channel vulnerabilities could have been a public relations nightmare, Intel’s willingness to take responsibility and transparently detail its efforts to repair the issues kept the situation from blowing out of proportion.

Despite these challenges, Intel continued to deliver steady, often impressive financial performance. While the company was no investor’s idea of a high flyer, it also had less distance to fall, as well as considerable padding to help mitigate unplanned impacts. The value of Intel’s conservative approach became apparent when GPU-dependent NVIDIA and AMD saw their fortunes swoon and share prices plummet when crypto-currency markets unexpectedly tumbled.

So, what do the new 2nd-gen Xeon Scalable processors and other solutions say about Intel and how it sees its customers and competitors? First, consider the sheer breadth of technologies involved. Along with the latest/greatest features and functions you’d expect in a next-gen Xeon announcement, the inclusion of new SoC, FPGA and Optane memory and storage solutions, as well as workload-specific technologies like Intel DL Boost, demonstrates how, and how effectively, Intel is spreading its data center bets.

In addition, the depth of available solutions is impressive. That’s apparent in the 50+ SKUs that feature 2nd gen Xeon Scalable silicon and is also highlighted by the 56 core/12 memory channel Xeon Platinum 9200. It’s interesting that both IBM Cloud and Google Cloud proactively announced plans to develop offerings based on the chips since both focus on the needs of enterprise cloud customers and are sticklers for top-end hardware performance. Their support suggests that Intel’s claims about the Xeon Platinum 9200’s HPC, AI and high-density capabilities are fully justified.

Finally, the flexibility and customizability of the new chips is worth considering. Intel’s focus on workload-optimized solutions is one example of this since it reflects OEMs’ (and their customers) increasing focus on integrated, optimized systems. In fact, the new network-optimized processors that resulted from Intel’s collaboration with communications service providers offer intriguing insights into how Intel can and likely will continue to add discrete new value to its silicon portfolio.

Final analysis

Overall, the 2nd-Generation Xeon Scalable processors and other new data center technologies demonstrate both Intel’s ability to deliver substantial new innovations and its lack of complacency. That’s sensible from a tactical standpoint since the company’s competitors aren’t standing idly by – for one, AMD’s launch of its new Epyc “Rome” data center chips is just weeks away. However, it’s also strategically important for Intel to show how it became a leader in data center solutions in the first place and why it deserves to remain there.

The tech industry’s passion for perky start-ups and shiny new objects isn’t likely to fade any time soon. However, it would be a mistake to assume that leading-edge imagination and innovation reside solely in smaller organizations. Intel’s new "data-centric" 2nd-Generation Xeon Scalable, Agilex and Optane solutions prove that oftentimes, the most innovative vendor you can find is the one you’re already working with.

A little-discussed benefit of industry standard microprocessors and related components is the predictability they provide system vendors and enterprise customers. By leveraging the “tick-tock” cadence of Moore’s Law-derived innovations that Intel (and, less frequently, AMD) provided, vendors could focus their attention and investments on enhancing server design, operational functions and facilities issues.

But that predictability also created room for doubt: what would happen to the industry and to Intel when steady improvements hit the brick wall of material limitations? Intel answered those concerns pretty clearly in this week’s launch of its new 2nd-Gen Intel Xeon Scalable platform (aka Cascade Lake). I’ll have more to say about that subject in next week’s Pund-IT Review. For now, let’s consider what 2nd Gen Xeon Scalable means to Lenovo’s Data Center Group (DCG) and its customers.

Refreshed/enhanced

Like other vendors, Lenovo announced that it will use Intel’s new Xeon solutions to refresh its DCG portfolio, including 15 ThinkSystem servers and five ThinkAgile appliances. As a result, Lenovo can offer clients the incremental-to-significant performance, power efficiency and workload-specific enhancements that are a predictable part of Intel’s next-gen silicon launches.

Those are important for supporting traditional business applications and conventional processes. However, they will also facilitate the development and adoption of new, rapidly evolving workloads, including advanced analytics, artificial intelligence and high-density compute infrastructures.

In addition, one specific feature of 2nd-Gen Intel Xeon Scalable chips—their support for Intel’s Optane DC persistent memory technology—could be particularly critical for and valuable to Lenovo customers. Why so?

According to Intel, Optane DC will enable customers to transform critical data workloads – from cloud and databases to in-memory analytics and content delivery networks – by:

Reducing system restarts from minutes to seconds

Supporting up to 36 percent more virtual machines per system

Increasing system memory capacity by up to 2X, or as much as 36TB of memory in an eight-socket system
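The capacity claims above reduce to quick arithmetic. The DRAM-only baseline below is inferred from the "up to 2X" claim, not an Intel-published number:

```python
# Quick arithmetic on Intel's cited Optane DC figures for an
# eight-socket system with up to 36TB of total memory.
total_tb, sockets = 36, 8
per_socket_tb = total_tb / sockets   # memory available per socket
dram_baseline_tb = total_tb / 2      # "up to 2X" implies roughly this DRAM-only ceiling

print(f"{per_socket_tb} TB per socket; implied DRAM-only baseline: {dram_baseline_tb} TB")
```

For in-memory workloads like SAP HANA, that per-socket figure is the headline: it determines how large a database can be held entirely in memory on a single system.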

That last point is particularly important for Lenovo due to the company’s longstanding leadership in memory-intensive computing. Lenovo has long provided the reference architecture for SAP’s HANA in-memory database technologies and is a leading provider of SAP HANA solutions. The importance of this point was reflected in Lenovo’s announcement that its ThinkSystem SR950 will be the industry’s first eight-socket server to support Optane DC and its 36TB memory capacity option, making it ideal for demanding SAP HANA environments.

Lenovo is also using 2nd Gen Xeon Scalable silicon to develop new engineered systems for key workloads, including SAP HANA, Microsoft SQL Server and Red Hat OpenShift Containers. These solutions will be verified as Intel Select Solutions, signifying their ability to support superior workload performance, ease of deployment and simplified evaluation. The company also expects to introduce new Intel Select Solutions for workloads, including VMware vSAN, Network Function Virtualization Infrastructure (NFVI), Blockchain Hyperledger Fabric and Microsoft Azure Stack HCI.

Compliments of Lenovo

This is not to suggest that the benefits of Lenovo’s refreshed portfolio are due entirely to Intel. The new solutions all benefit from the company’s ThinkShield, which secures Lenovo devices with oversight of development, supply chain and lifecycle processes. Along with having unique control over its own global supply chain, the company is also aligned with Intel’s Transparent Supply Chain, allowing customers to trace the source of components in their new systems. In addition, Lenovo oversees the security of suppliers that build intelligent components, making sure they conform to Trusted Supplier Program guidelines and best practices.

Finally, while the new and refreshed solutions can be purchased directly, they are also available through Lenovo’s TruScale, the company’s recently announced consumption-based as-a-service program. TruScale enables customers to use and pay for Lenovo data center hardware solutions without having to purchase the equipment.

Final analysis

To some folks in IT, predictability is a mundane topic that is easily superseded by whatever shiny new object falls off the industry turnip truck. That attitude ignores the fact that for customers, especially businesses that depend on data center systems and other solutions, IT predictability can mean the difference between succeeding, faltering or failing.

Knowledgeable vendors deeply understand that point and do their best to utilize their own and their strategic partners’ innovations to ensure that their products are fully, predictably capable of supporting both existing applications and emerging workloads. Lenovo obviously isn’t the only vendor benefitting from Intel’s 2nd-Gen Xeon Scalable chips. However, Lenovo’s new/refreshed ThinkSystem and ThinkAgile offerings, and the company’s creative use of Intel’s Cascade Lake enhancements, provide excellent examples of how this process works and will deliver often profound benefits to Lenovo customers.