Five years ago, companies considering moving workloads to cloud environments questioned whether the economics of cloud were compelling enough to justify the move. The bigger question at the time was whether those economics would force a tsunami of migration from legacy environments to the cloud. Would it set up a huge industry, much like Y2K, of moving workloads from one environment to another very quickly? Or would it evolve more like the client-server movement, which played out over 5 to 10 years? Answering that requires understanding the cloud migration strategy companies are actually pursuing today.

We now know the cloud migration did not happen like Y2K. Enterprises judged the risk and investment of moving workloads too great, given the modest cost-savings returns. Of course, there are always laggards that choose not to adopt new technology, but enterprises now broadly accept both public and private cloud.

The strategy most companies adopt is to put new functionality into cloud environments, often public cloud. They do this by purchasing SaaS applications rather than traditional software and by doing new development in a Platform-as-a-Service (PaaS) environment. Both approaches make sense. They then build an API or microservices layer that connects their legacy applications to the cloud applications.

When workloads do move from legacy environments into the cloud, it's usually through replacement tools rather than migration of the workloads themselves. Most companies implemented 60 to 80 percent of their current applications in the last seven years, continually reimplementing applications as new software versions arrive with new functionality. When a company upgrades its ERP system, for example, it upgrades into a cloud version rather than a legacy one.

Even if a company spends only 20 percent of its IT budget on new applications, over time that results in a steady refresh of the portfolio. Most organizations have exceptions: a small portion of applications that are 30 to 40 years old. These are highly stable, require little new functionality and don't migrate to the cloud.
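The arithmetic behind that steady refresh can be sketched with a simple compounding model. The 20 percent figure comes from the discussion above; the assumption that the refresh compounds on the remaining un-refreshed share each year is illustrative, not from the article:

```python
# Back-of-the-envelope sketch (illustrative assumption): if a company
# refreshes 20% of its remaining un-refreshed applications each year,
# the share still on old versions after n years is (1 - 0.20) ** n.
refresh_rate = 0.20
years = 7

still_legacy = (1 - refresh_rate) ** years  # share never touched
refreshed = 1 - still_legacy                # share reimplemented at least once

print(f"Refreshed after {years} years: {refreshed:.0%}")  # -> 79%
```

Under this rough model, about 79 percent of the portfolio turns over in seven years, consistent with the 60-to-80-percent range cited above.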

Many third-party service providers advise clients to reengineer their legacy environment and redeploy it in the cloud to get the desired end-to-end application performance. Some of this is happening, but it's typically very selective. Enterprises prefer to develop an API and microservices layer, which defers the need to redeploy and lets them move a legacy application only when its underlying functionality changes enough or when a SaaS or cloud version becomes available.
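A minimal sketch of that layering pattern, using entirely hypothetical names (`LegacyOrderSystem`, `OrderAPI` are illustrative, not from any real product): cloud applications call only the API layer, so the legacy backend can later be swapped for a SaaS implementation without touching any callers.

```python
class LegacyOrderSystem:
    """Stand-in for an existing on-premises application."""

    def fetch(self, order_id):
        # In practice this might wrap a mainframe transaction or a SQL query.
        return {"id": order_id, "status": "SHIPPED"}


class OrderAPI:
    """Thin API layer: cloud applications depend on this interface only.

    If the legacy system is later replaced by a SaaS version, only the
    backend passed in here changes; callers are untouched.
    """

    def __init__(self, backend):
        self._backend = backend

    def get_order_status(self, order_id: str) -> str:
        return self._backend.fetch(order_id)["status"]


api = OrderAPI(LegacyOrderSystem())
print(api.get_order_status("A-100"))  # -> SHIPPED
```

The design choice is the deferral the article describes: the API contract is the stable investment, while the implementation behind it can stay legacy until there is a compelling reason to move it.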

Another factor in today's cloud migration strategy is that capital is limited and business units increasingly drive IT spending. They want new functionality and see little or no gain in putting money, time and risk into duplicating, in a new environment, functionality that already works.

So instead of making a substantial investment in a quick, Y2K-style migration, companies are making substantial investments in APIs and microservices, which allow hybrid environments (legacy and cloud combined) to work well together. That doesn't mean companies won't have to redevelop some of their applications in the cloud. But it buys time to be selective about which ones to redevelop, and to time each move to a compelling cost-savings case or to the point where the hybrid environment can no longer deliver the desired performance and a homogeneous one is needed.

This strategy of investing in APIs and microservices layers to connect legacy apps to cloud environments is much less risky, and cheaper, than moving legacy workloads.

This article is published as part of the IDG Contributor Network.