To build a business case for IaaS, remember that application testing, high-performance computing and disaster recovery are just some IT scenarios where public cloud is a fit.

Still in its early years, the public infrastructure-as-a-service market continues to grow.

Before a move to the public cloud, however, organizations need to fully assess how IaaS can address their business and IT needs. There are many common drivers behind public IaaS adoption, ranging from application development to reduced investments in on-premises infrastructure and disaster recovery (DR).

Here's a look at these common uses for public IaaS and more.

Application development and testing

With a public IaaS cloud, organizations can rent a cheap sandbox for application development and testing. They can totally isolate this sandbox environment from any other workflows or systems and easily dispose of it when finished.

In general, you can rent a decent-sized IaaS instance for around $110 per month, and that's a tremendous incentive to use the public cloud for short-term development jobs.

A cloud sandbox enables developers to test an application up to full-cluster scale and with dummy -- but sizable -- workloads. It also allows developers to experiment; for example, they could try out a bigger instance size or assess the value of in-memory database performance and GPUs.

Another benefit of the cloud sandbox approach is that you can build and test a set of instance types in parallel. This allows you to perform a total cost of ownership comparison -- something you wouldn't be able to do with in-house hardware.

In addition, with a single physical cluster on premises, you can only run one test at a time, but the cloud allows you to create multiple sandboxes and shrink testing time dramatically. Another benefit of a cloud sandbox is that you can duplicate clusters in minutes -- in case two developers want to access the same one -- and then determine whether the developers can share networked storage or need a clone of that as well.

In general, you can rebuild cloud sandboxes rapidly and couple them with other services to perform heavier testing -- and in a shorter timeframe -- than what you could do in-house.
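The parallel-sandbox TCO comparison described above can be sketched in a few lines. This is a minimal illustration, not a provider's API: the instance types, hourly prices and throughput figures below are hypothetical placeholders for numbers you would pull from your provider's price list and your own sandbox benchmarks.

```python
# Hypothetical instance types with assumed hourly prices and measured
# throughput; real figures come from your CSP and your own load tests.
CANDIDATES = {
    "small":  {"hourly_usd": 0.05, "requests_per_hour": 10_000},
    "medium": {"hourly_usd": 0.10, "requests_per_hour": 25_000},
    "large":  {"hourly_usd": 0.40, "requests_per_hour": 60_000},
}

def cost_per_million_requests(hourly_usd: float, requests_per_hour: float) -> float:
    """Normalize price by throughput so different instance types are comparable."""
    return hourly_usd / requests_per_hour * 1_000_000

def cheapest_type(candidates: dict) -> str:
    """Pick the instance type with the lowest cost per unit of work."""
    return min(
        candidates,
        key=lambda t: cost_per_million_requests(
            candidates[t]["hourly_usd"], candidates[t]["requests_per_hour"]
        ),
    )

print(cheapest_type(CANDIDATES))  # the mid-size type wins here at $4 per million requests
```

The point of the exercise is that the raw hourly price is misleading; normalizing by measured throughput is what turns parallel sandbox runs into a real TCO comparison.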

Scalable production

Another common driver of public IaaS adoption is the ability to scale production workloads more flexibly.

There are few enterprise workloads with steady demand. In fact, wild spikes are often the norm, and factors such as statistical clumping, planned runtimes and variations in customer demand make capacity planning difficult.

With an on-premises model, IT teams have to size up infrastructure to handle peak loads at any time. In many cases, these peaks are predictable, but it's still necessary to juggle resources and inevitably buy more gear than what the business might need for the average workload. This model is expensive and, in many cases, has led to cloud bursting, where admins use on-demand instances in the cloud to supplement in-house infrastructure during demand spikes.

In a pure IaaS model, capacity planning for production workloads is different. Let's say your sandbox test indicated you need 40 cloud instances to meet the average workload. It's a good idea to add a couple of spares. Next, build a module into your app that can crank up more instances on demand, using those spares as a buffer. This module calls a standard API within a cloud orchestrator to create or delete instances. Though VM startup can take a few seconds, meeting service-level requirements should be easy after a little trial and error. And the end result is an autoscaling app that can handle spiky workloads and exhibits high availability.
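The scaling module described above can be sketched as follows. This is a hedged illustration, not a real orchestrator client: the `Orchestrator` class stands in for whatever create/delete-instance API your cloud exposes, and the thresholds and instance counts (40 plus two spares, per the example above) are illustrative, not recommendations.

```python
# Stand-in for a cloud orchestrator's create/delete-instance API.
class Orchestrator:
    def __init__(self, running: int):
        self.running = running

    def create_instances(self, n: int):
        self.running += n

    def delete_instances(self, n: int):
        self.running -= n

def rebalance(orch, avg_load_per_instance: float,
              target_load: float = 0.7, min_instances: int = 42):
    """Scale out when instances run hot, scale in when they idle,
    but never drop below the baseline plus spares (40 + 2 here)."""
    if avg_load_per_instance > target_load:
        orch.create_instances(2)  # add capacity ahead of the spike
    elif avg_load_per_instance < target_load / 2 and orch.running > min_instances:
        orch.delete_instances(min(2, orch.running - min_instances))

fleet = Orchestrator(running=42)
rebalance(fleet, avg_load_per_instance=0.9)  # hot: grows the fleet to 44
rebalance(fleet, avg_load_per_instance=0.2)  # idle: shrinks back to the 42-instance floor
```

A production version would call the orchestrator's real API over the network and smooth the load signal over a window, which is where the trial and error mentioned above comes in.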

This process does presuppose that you can parallelize the app. That's easy to achieve for new apps, but legacy apps, based on single-thread x86 code or even COBOL, are a different challenge and require a lot more effort and potential redesign. The microservices model can help, since it fragments monolithic apps into smaller, more manageable chunks.

Batch processing is another good use case for the cloud, since these types of workloads have frequent fluctuations or variations in demand. The challenge, however, is that you need to create parallelism within the code and then exhaustively test for mission-critical apps.
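The parallelism requirement above can be sketched with Python's standard executor pool. The per-record work function here is a placeholder for your real batch step; the structural point is fanning independent records out across workers.

```python
from concurrent.futures import ThreadPoolExecutor

def process_record(record: int) -> int:
    return record * record  # placeholder for real per-record batch work

def run_batch(records):
    # Fan independent records out across workers; map preserves input order.
    # For CPU-bound work, a process pool would replace the thread pool.
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(process_record, records))

print(run_batch(range(5)))  # [0, 1, 4, 9, 16]
```

For a mission-critical batch job, this restructuring is the easy part; the exhaustive testing the article calls for is what proves that records really are independent.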

Reduced investment in in-house infrastructure

Perhaps the easiest way to demonstrate the need for public IaaS is to show it saves money.

Moving part, or all, of an IT shop's workloads to the public cloud can drastically reduce capital expenditures, as it minimizes the need to buy on-premises IT systems. It should eventually lead to a reduction in operational expenses as well.

A reduction of in-house hardware also leads to cost savings around data center power, space and cooling. In some cases, a migration to the public cloud might lead to a discussion about resizing an enterprise data center. For example, if you have small islands of gear in an otherwise vast and empty facility, this extra space is a huge cost burden. Software license and hardware maintenance costs are also reduced with a move to public IaaS.

Smaller in-house operations might see a reduction in admin staff to run and maintain gear after a move to the public cloud. Cloud environments are heavily automated, which means manual efforts are not required for simpler tasks, such as adding new drives and assigning individual servers to workloads. That said, this automation can also allow admins to shift their focus to more important business processes and spend less time on manual system management.

Overall, while public IaaS reduces capital and other expenses, organizations should be careful not to spin up more IaaS resources than their workloads need, as this can lead to high cloud costs.

High-performance computing

High-performance computing (HPC) is a specialty area of IT, but for many companies, public IaaS makes it more accessible.

To support leading-edge technologies like HPC, compute clusters have to be fast and tightly interconnected. Most HPC applications are already designed for parallel processing, and most large HPC systems are built on x86 or other commercial off-the-shelf hardware. These are hugely powerful systems that often only make sense for very large jobs.

The cloud addresses this challenge by serving as a mechanism to fragment large HPC compute environments into multi-tenant miniclusters. This has completely changed the computing profile for scientific experimentation and modeling. With cloud, organizations can size these HPC miniclusters to the job and rent them by the hour, dramatically lowering the entry price for supercomputing.

There are still some challenges that remain when you need to move HPC workloads to the public cloud. Most HPC systems rely on very large dynamic RAM spaces and GPUs to boost performance. InfiniBand or Ethernet remote direct memory access networks are also valuable. That said, most of the large cloud service providers (CSPs) now offer large instance sizes with GPU support.

Hosting your web-facing services

All public clouds offer small instances with modest I/O performance and limited CPU and memory, which are ideal for web-facing services. They meet the needs of a typical web server better than the common in-house iron alternative, a 1U half-wide server; in the cloud, that same physical server might host 64 such virtual instances.

Drawbacks do exist, though. Media service providers typically forward-cache content with companies like Akamai to reduce the cost and latency of content delivery. The public cloud model doesn't handle forward caching, only central content distribution. This might become less of an issue as networks speed up.

In general, it is easy to create the required instances for a web service, since cloning from a tested script is straightforward, and CSPs offer load-balancing and scaling features as well.
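The load-balancing feature mentioned above boils down to spreading requests across a pool of identical clones. As a toy illustration (the instance names are hypothetical, and CSP load balancers are managed services doing far more than this), a round-robin dispatcher looks like:

```python
from itertools import cycle

class RoundRobinBalancer:
    """Toy dispatcher that hands requests to cloned instances in turn."""
    def __init__(self, instances):
        self._next = cycle(instances).__next__  # endless rotation over the pool

    def route(self) -> str:
        return self._next()

lb = RoundRobinBalancer(["web-1", "web-2", "web-3"])
print([lb.route() for _ in range(4)])  # ['web-1', 'web-2', 'web-3', 'web-1']
```

Because every instance is a clone of the same tested script, any of them can take any request, which is exactly what makes round-robin distribution safe.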

DR and redundancy

A public IaaS cloud can help orchestrate DR in various ways. Most clouds operate within different geographic zones, which correspond to the sites of the CSPs' data centers.

It's a good practice to operate in multiple cloud zones and keep copies of necessary data files in each. That way, if a router code update takes out a whole zone, you'll only see a temporary dip in service. How much of a dip and how long will depend on whether you keep standby instances ready in each zone and how long it takes to recover any transaction in progress.
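The multi-zone failover logic above can be sketched in a few lines. This is an illustrative skeleton only: the zone names and health flags are hypothetical, and a real implementation would probe the CSP's health endpoints rather than read a dict.

```python
def pick_zone(zone_health: dict, preferred: str) -> str:
    """Return the preferred zone if it is healthy, else the first healthy standby."""
    if zone_health.get(preferred):
        return preferred
    for zone, healthy in zone_health.items():
        if healthy:
            return zone  # failover target with standby data copies
    raise RuntimeError("no healthy zone available")

zones = {"us-east-1": False, "us-west-2": True}  # primary zone is down
print(pick_zone(zones, preferred="us-east-1"))   # fails over to "us-west-2"
```

How brief the service dip is then depends, as noted above, on whether standby instances are already warm in the failover zone.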

Note that the CSPs offer geodiversity, the geographic separation of each replica, for object storage as a default and as an option for block storage. Using this capability, and erasure coding when it is available, greatly enhances your ability to start up in an alternative zone.

Data management is a critical factor here. For example, organizations that handle data on European Union residents must comply with the General Data Protection Regulation (GDPR). Encryption is vital, and CSPs already offer services to help achieve compliance. Good data management practices, such as moving old data to a cheaper cloud storage tier, should also save on operational costs.

Editor's note

With extensive research into public IaaS, TechTarget editors focused this series of articles on vendors who provided the following functionalities: user management, enterprise integration, automation and access to emerging technology, as well as capabilities around scaling, security, uptime and resilience. Our research included Gartner and TechTarget surveys.
