For better cloud performance, either go deep or go wide

In many instances, the issue is application design. On dedicated hardware, the application runs fine, but on a multi-tenant cloud platform, the application does not use the platform correctly.

Most people would rather not modify their applications when porting to the cloud — the lift-and-shift approach. But by not refactoring the application to use at least some cloud-native features, you'll likely encounter performance problems.

Many enterprises toss more cloud machine instances at the problem, which might seem to make it go away. But your cloud bill will be larger than it needs to be, and the application still won't deliver optimized performance.

The issue boils down to a question of going deep or going wide with your use of resources.

Going deep
You focus on using as few machine instances as possible, but you use those instances in much more efficient ways. In many respects, this is like on-premises computing because you're limiting yourself to a fixed number of servers. It forces the application developers to be much more efficient and effective.
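To make the "more efficient ways" concrete, here is a minimal, purely illustrative sketch: batching round trips (to a database, a queue, or an API) is a classic deep optimization that lets a single instance absorb the same load with far less overhead. All of the cost figures and function names below are assumptions for illustration, not measurements from any real platform.

```python
# Hypothetical sketch: "going deep" squeezes more work out of a fixed
# number of instances. Batching writes amortizes the per-call overhead,
# so one instance handles the same load with far fewer round trips.
# The millisecond costs are illustrative assumptions.

PER_CALL_OVERHEAD_MS = 5   # fixed cost of each round trip (assumed)
PER_ITEM_COST_MS = 1       # marginal cost of each record (assumed)

def naive_cost(n_items: int) -> int:
    """One round trip per record: overhead paid n times."""
    return n_items * (PER_CALL_OVERHEAD_MS + PER_ITEM_COST_MS)

def batched_cost(n_items: int, batch_size: int = 100) -> int:
    """Records grouped into batches: overhead paid once per batch."""
    n_batches = -(-n_items // batch_size)  # ceiling division
    return n_batches * PER_CALL_OVERHEAD_MS + n_items * PER_ITEM_COST_MS

print(naive_cost(10_000))    # 60000 ms of work
print(batched_cost(10_000))  # 10500 ms -- same load, one leaner instance
```

The refactoring effort lives in the application code, not the infrastructure, which is exactly why going deep requires developer investment rather than a bigger cloud bill.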

Going wide
You fire up more machine instances as you need them. This approach increases performance by launching more compute or storage resources. This plan requires little, if any, application modification, so it's a popular strategy at enterprises that are OK with tossing money at an application to ensure performance. The downside is the extra cost, as well as the fact that applications often become unwieldy — and difficult to manage — when they get too widely distributed.
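The going-wide math is simple, which is part of its appeal: capacity scales linearly with instance count, and so does the bill. The sketch below is hypothetical; the per-instance capacity and hourly price are made-up figures, not any real provider's numbers.

```python
# Hypothetical sketch of "going wide": instead of tuning the app, launch
# enough instances that their aggregate capacity covers the load.
# Capacity and price figures are illustrative assumptions.
import math

CAPACITY_PER_INSTANCE = 200   # requests/sec one instance sustains (assumed)
HOURLY_PRICE = 0.10           # dollars per instance-hour (assumed)

def instances_needed(load_rps: float) -> int:
    """Scale out: run as many instances as the load requires."""
    return max(1, math.ceil(load_rps / CAPACITY_PER_INSTANCE))

def hourly_cost(load_rps: float) -> float:
    """The bill grows in lockstep with the instance count."""
    return instances_needed(load_rps) * HOURLY_PRICE

print(instances_needed(1500))        # 8 instances
print(f"{hourly_cost(1500):.2f}")    # 0.80 per hour
```

Note what's missing: nothing in this model rewards making the application itself more efficient. If a deep optimization doubled `CAPACITY_PER_INSTANCE`, the instance count and the bill would both halve, which is the trade-off the rest of this article is about.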

The cloud supports both approaches. With the availability of almost unlimited resources, you can pretty much ensure performance of even the most poorly designed applications and databases. But focusing on the efficient design of applications, using cloud-native features, provides the best bang for the buck in the long run.