Building a best-of-breed multicloud strategy

Best-of-breed strategies have long since fallen out of favor in the enterprise, because the work required to stitch together the components proved to be too difficult. But best of breed is back with cloud. Companies today are hell-bent on buying the ideal SaaS, PaaS, and IaaS cloud services for the job, and while APIs make the integration work easier, the resultant cloud silos create a new challenge: How do you assure service performance in this multicloud world?

The short answer: By maintaining global knowledge of what is happening (and where) across IT infrastructure, applications, and services. But we’ll get back to that.

Companies use eight cloud providers on average, according to IHS Markit Ltd., a research firm in London. IHS’ survey of 155 companies in a range of industries shows that number swelling to 11 within two years. When you include any and all SaaS services, the average number of cloud applications that companies use explodes to almost 1,500, by some counts.

While those numbers will vary, it is safe to assume that most companies already wrestle with multiple clouds. Consider Volkswagen. In a recent Wall Street Journal interview, Volkswagen CIO Martin Hofmann said the company uses public cloud services from all of the big guys—Amazon, Google, IBM, and Microsoft. “The idea is we’ve always had a policy of vendor independence. We want to be the ones picking the cloud providers, so we’re investing heavily in cloud brokerage and technology that allows us to switch instantly from one provider to another. But we’ll always keep our private cloud for sensitive data.”

If you include the private stack, Volkswagen has five prominent clouds. Hofmann didn’t even mention any SaaS tools, and it is a fair bet the company also has a hefty portfolio of those.

But GE starts with SaaS. “The more SaaS we can buy the better off we are, especially for non-differentiated applications like HR, scheduling, administrative, bill paying, taxes, compliance, customs, etc.,” Drumgoole says. “The world can’t get to SaaS fast enough for us.”

Anecdotally speaking, cloud adoption seems to go something like this: When needs arise, enterprises first look for the best SaaS solution for the job, and, failing that, consider PaaS or IaaS options. In the worst-case scenario—or if security or compliance concerns dictate it—companies address the need internally using a fungible private cloud composed of commodity, off-the-shelf components.

The good news is, commonly available APIs make it easier to integrate these new cloud silos compared with yesterday’s best-of-breed efforts, and there are even cloud-based integration services, known as integration platform as a service (iPaaS). But multicloud management will never be a walk in the park, so it is a safe bet that as the cloud ranks swell in any given organization, there will be a round of rationalization and consolidation to simplify the process.

That will be helped by the fact that prominent cloud players will keep fleshing out their offerings, and maybe the whole industry will start to contract at some point. But even this “help” translates to a boatload of change that the enterprise buyer must contend with as the cloud era continues to unfold.

So how do you maintain governance in this shifting and rapidly evolving multicloud world?

One of the most basic requirements is retaining visibility into what is going on where. Unfortunately, the tools offered by the cloud providers won’t be of much help because they are inwardly focused. Amazon CloudWatch, for example, is a tool used to monitor AWS cloud resources, providing information about virtual machine CPU utilization, memory usage, transaction volumes, etc. The tool is all about the AWS infrastructure and doesn’t give you a sense of how the application is performing, to say nothing of a bigger picture view about service performance across the organization’s combined on-premises and cloud resources.
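To make the CloudWatch point concrete, here is a minimal sketch of what an infrastructure-level metric pull looks like. The parameter names match boto3’s `get_metric_statistics` call; the instance ID is a placeholder. Note how everything in the query is scoped to AWS infrastructure—nothing here says anything about the end-to-end service the instance supports.

```python
from datetime import datetime, timedelta, timezone

def cpu_metric_query(instance_id, minutes=60, period=300):
    """Build parameters for a CloudWatch CPUUtilization query.

    Field names follow boto3's get_metric_statistics API; the
    instance_id passed in is illustrative only.
    """
    now = datetime.now(timezone.utc)
    return {
        "Namespace": "AWS/EC2",
        "MetricName": "CPUUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "StartTime": now - timedelta(minutes=minutes),
        "EndTime": now,
        "Period": period,           # seconds per datapoint
        "Statistics": ["Average"],
    }

params = cpu_metric_query("i-0123456789abcdef0")

# With AWS credentials configured, the actual call would be:
#   import boto3
#   cloudwatch = boto3.client("cloudwatch")
#   datapoints = cloudwatch.get_metric_statistics(**params)["Datapoints"]
```

Every dimension in that query is an AWS construct—namespace, instance ID, EC2 metric name—which is exactly the inward focus the article describes.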

Microsoft Azure has a similar tool that also enables you to do bytecode instrumentation of the application so you can see what it is doing. But that view only takes into account the cloistered Azure world, while what is needed is a holistic, end-to-end view of performance in the real world, taking into account the application, the infrastructure, the DNS server, the AAA server, and the various network components.

Then you need that for all of your other cloud silos, too. And good luck if the performance of a given business service relies on resources from different clouds spanning disparate geographies. You’ll be left using multiple tools to try to piece together a big picture view. The result? An IT race across disciplines to establish mean time to innocence while service performance degrades and users don’t get what they expect.

Instead, the nirvana vision is to have a single way to gauge the health and performance of service levels across these various environments. You need to be able to instrument any cloud environment, all virtual and physical resources (in the data center and out in the branches), define the service dependencies, and efficiently monitor and correlate traffic flows between all the pieces. That will make it possible to achieve a holistic view that enables you to proactively identify service degradation, triage problems, and quickly get to the root of service issues.
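The “define the service dependencies” step can be sketched as a toy model, with the caveat that all service and component names here are illustrative and not tied to any real monitoring product: each business service depends on components that may live in different clouds, and a service is flagged as degraded if any dependency is unhealthy.

```python
# Hypothetical dependency map: business services -> components,
# where components may sit in different clouds or on premises.
DEPENDENCIES = {
    "checkout": ["web-frontend", "payments-api", "aws-rds", "azure-cache"],
    "reporting": ["web-frontend", "gcp-bigquery"],
}

def degraded_services(health):
    """Return services with at least one unhealthy dependency,
    listing the offending components to speed up triage."""
    report = {}
    for service, deps in DEPENDENCIES.items():
        bad = [d for d in deps if not health.get(d, False)]
        if bad:
            report[service] = bad
    return report

# Component health as a monitoring system might report it.
health = {
    "web-frontend": True, "payments-api": True,
    "aws-rds": True, "azure-cache": False, "gcp-bigquery": True,
}
print(degraded_services(health))  # {'checkout': ['azure-cache']}
```

The point of the sketch is the mapping itself: once dependencies are declared in one place, a single unhealthy component in one cloud can be traced directly to the business services it affects, rather than reconstructed across per-cloud tools.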

The reality is that the cloud world is only going to get more complex. First, there's the growth of microservices, a development architecture that simplifies scalability by breaking applications down into independent but linked modular services. While that may be a great step forward for development, it will further complicate matters when the operations side of the house needs to find the cause of a service degradation or outage. Meanwhile, technologies like IoT and machine learning pour a tsunami of data across complex cloud environments, powering digital transformation, but also ratcheting up complexity. As the pressure grows, it's clear that the time to grab the bull by the horns is now.

~Written by John Dix. John is an IT industry veteran who has been chronicling the major shifts in IT since the emergence of distributed processing in the early ‘80s. An award-winning writer and editor, he was the editor-in-chief of Network World for many years and an analyst for research firm IDC.