Optimizing cloud computing server architecture requires cloud providers to classify applications by their resource needs and evaluate them against their cloud business plans.

A server is one of those industry terms whose definition is broadly understood yet at the same time ambiguous. Yes, "server" means a computing platform on which software is hosted and from which client access is provided. However, the generalizations end there. Not only are there many different vendors that manufacture servers, but there are also a variety of server architectures, each with its own requirements. A mail server, a content server, a Web server and a transaction server might all need a different mixture of compute, network and storage resources. The question for many providers is: What does a cloud computing server need?

The answer will depend on the target market for the cloud service and how that market is reflected in the applications users will run. Servers provide four things: compute power from microprocessor chips, memory for application execution, I/O access for information storage and retrieval, and network access for connecting to other resources. Any given application will likely consume each of these resources to varying degrees, meaning applications can be classified by their resource needs. That classification can be combined with cloud business plans to yield a model for an optimum cloud computing server architecture.
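The classification described above can be sketched in code. This is a minimal illustration, not a real provider tool; the resource names mirror the four server resources named in the text, and the profile figures are invented for the example.

```python
# Hypothetical sketch: classify an application by its dominant resource
# demand. Profiles and thresholds here are illustrative, not vendor data.

def classify_app(profile):
    """Return the resource a hypothetical app consumes most heavily.

    profile: dict mapping each of the four server resources to a
    demand fraction between 0.0 and 1.0.
    """
    resources = ("cpu", "memory", "storage_io", "network")
    # Pick the resource with the highest demand fraction.
    return max(resources, key=lambda r: profile.get(r, 0.0))

# Example profiles (invented for illustration):
web_server = {"cpu": 0.6, "memory": 0.5, "storage_io": 0.1, "network": 0.4}
bi_report = {"cpu": 0.3, "memory": 0.7, "storage_io": 0.9, "network": 0.2}

print(classify_app(web_server))  # cpu
print(classify_app(bi_report))   # storage_io
```

A provider could group applications by this dominant-resource label and then match each group to a server configuration weighted toward that resource.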

For a starting point in cloud computing server architectures, it's useful to consider the Facebook Open Compute project's framework. Facebook's social networking service is a fairly typical large-scale Web/cloud application, and so its specific capabilities are a guide for similar applications. We'll also discuss how these capabilities would change for other cloud applications.

Cloud computing server needs may not align with Facebook Open Compute

The Open Compute baseline is a two-socket design that allows up to 12 cores per socket in the Version 2.x designs. Memory capacity depends on the dual inline memory modules (DIMMs) used, but up to 256 GB is practical. The design uses a taller tower for blades to allow for better cooling with large, lower-powered fans. Standard serial advanced technology attachment (SATA) interfaces are provided for storage, and Gigabit Ethernet is used for the network interface. Facebook and the Open Compute project claim a 24% cost of ownership advantage over traditional blade servers. Backup power is provided by 48-volt battery systems, familiar to those who have built to the telco Network Equipment Building System (NEBS) standard.

The Open Compute reference has a high CPU density, which is why a taller tower and good fans are important. However, many cloud applications will not benefit from such a high CPU density, for several reasons:

Some cloud providers may not want to concentrate too many users, applications or virtual machines onto a single cloud computing server for reliability reasons.

The applications running on a cloud computing server may be constrained by the available memory or by disk access, and the full potential of the CPUs might not be realized.

The applications might be constrained by network performance and similarly be unable to fully utilize the CPUs/cores that could be installed.

If any of these constraints apply, then the extra cooling capacity of the Open Compute design may be unnecessary, and shorter towers may be easier to install, supporting a higher overall density of cloud computing servers.
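The constraint argument above can be made concrete with a capacity estimate: the number of instances a server can host is bounded by whichever resource runs out first, and if that resource is not the CPU, extra cores go unused. The figures below are hypothetical, not Open Compute specifications.

```python
# Illustrative sketch: estimate how many concurrent application
# instances a server can host, and which resource binds first.
# All capacity and per-instance numbers are invented.

def capacity_and_bottleneck(server, per_instance):
    """server and per_instance: dicts mapping resource name -> amount.

    Returns (max_instances, binding_resource).
    """
    limits = {r: server[r] // per_instance[r] for r in server}
    binding = min(limits, key=limits.get)
    return limits[binding], binding

# Hypothetical server and per-VM footprint:
server = {"cores": 24, "memory_gb": 256, "disk_iops": 5000, "net_mbps": 1000}
per_vm = {"cores": 1, "memory_gb": 16, "disk_iops": 100, "net_mbps": 50}

n, bottleneck = capacity_and_bottleneck(server, per_vm)
print(n, bottleneck)  # 16 memory_gb
```

Here memory caps the server at 16 instances, so 8 of the 24 cores can never be loaded; a shorter, cooler, lower-core-count design would serve the same population.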

How storage I/O affects cloud computing server needs

The next consideration for cloud computing server architecture is storage. Web applications typically don't require much storage and don't typically make large numbers of storage I/O accesses per second. That's important because applications that are waiting on storage I/O are holding memory capacity while they wait.

Consider using larger memory configurations for cloud applications that are more likely to use storage I/O frequently to avoid having to page the application in and out of memory. Also, it may be difficult to justify the maximum number of CPUs/cores for applications that do frequent storage I/O, as CPU usage is normally minimal when an application is waiting for I/O to complete.
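The trade-off described above can be sketched numerically: an instance that spends most of its time blocked on storage I/O needs only a small fraction of a core, but its working set stays resident the whole time. The timing and sizing figures below are invented for illustration.

```python
# Hedged sketch of the I/O-bound sizing trade-off: CPU demand shrinks
# with I/O wait, while memory demand does not. Numbers are illustrative.

def resources_for(instances, working_set_gb, cpu_ms, io_ms):
    """Estimate cores and memory for I/O-bound application instances.

    cpu_ms / io_ms: average compute vs. I/O-wait time per request.
    """
    busy_fraction = cpu_ms / (cpu_ms + io_ms)  # share of time on CPU
    cores = instances * busy_fraction          # cores actually needed
    memory_gb = instances * working_set_gb     # all instances stay resident
    return cores, memory_gb

# 40 I/O-heavy instances: 5 ms of compute per 45 ms of disk wait.
cores, mem = resources_for(40, 4, cpu_ms=5, io_ms=45)
print(cores, mem)  # 4.0 160
```

In this invented case, 160 GB of memory but only about 4 cores' worth of CPU are needed, which is why the text suggests more memory and fewer CPUs/cores for storage-heavy workloads.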

A specific storage issue cloud operators may have with the Open Compute design is its storage interface. Web applications are not heavy users of disk I/O, and SATA is best suited for dedicated local server access rather than storage pool access.

Additionally, it is likely that a Fibre Channel interface would be preferable to SATA for applications that demand more data storage than typical Web servers -- including many of the Platform as a Service (PaaS) offerings that will be tightly coupled with enterprise IT in hybrid clouds. Software as a Service (SaaS) providers must examine the storage usage of their applications to determine whether more sophisticated storage interfaces are justified.

You will need more sophisticated storage interfaces and more installed memory, but likely fewer CPUs/cores, for applications that do considerable storage I/O. This means that business intelligence (BI), report generation and other applications that routinely examine many data records based on a single user request will deviate from the Open Compute model. Cloud providers may also need more memory for these applications to limit application paging overhead.

Cloud providers will need more CPUs/cores and memory for applications that use little storage -- particularly simple Web applications -- because only memory and CPU cores will limit the number of users that can be served in these applications.

Pricing models that prevail for Infrastructure as a Service (IaaS) offerings tend to discourage applications with high levels of storage, so most IaaS services can likely be hosted on Open Compute model servers with high efficiency.

PaaS services are the most difficult to map to optimum server configurations, due to potentially significant variations in how the servers will utilize memory, CPU and especially storage resources.

For SaaS clouds, the specific nature of the application will determine which server resources are most used and which can be constrained without affecting performance.

The gold standard for server design is benchmarking. A typical mix of cloud applications running on a maximum-sized, high-performance configuration can be analyzed for resource utilization. The goal is to avoid having one resource type -- CPU capacity, for example -- become exhausted when other resources are still plentiful. That imbalance wastes resources and power, lowering your overall return on investment (ROI). By testing applications where possible and carefully monitoring resource utilization to make adjustments, cloud providers can sustain the best ROI on cloud computing servers and the lowest power consumption. That's key to meeting competitive price points while maximizing profits.
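The benchmarking analysis described above can be sketched as a simple imbalance check: find the resource that exhausts first at peak load and report the headroom stranded on the others. The utilization samples are invented, not real benchmark output.

```python
# Illustrative benchmarking check: flag the resource that exhausts
# first and the headroom stranded on the remaining resources.
# The peak-utilization samples below are invented.

def imbalance_report(utilization):
    """utilization: resource name -> peak fraction used (0.0-1.0)."""
    bottleneck = max(utilization, key=utilization.get)
    stranded = {r: round(utilization[bottleneck] - u, 2)
                for r, u in utilization.items() if r != bottleneck}
    return bottleneck, stranded

# Sampled at peak load on a hypothetical test configuration:
peak = {"cpu": 0.95, "memory": 0.60, "storage_io": 0.30, "network": 0.55}
bottleneck, stranded = imbalance_report(peak)
print(bottleneck)  # cpu
print(stranded)    # how far each other resource lags the bottleneck
```

A large stranded value -- storage I/O at 65 points of idle headroom here -- signals that the configuration should shed cost in that resource or add capacity in the bottleneck before the design is frozen.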

About the author: Tom Nolle is president of CIMI Corporation, a strategic consulting firm specializing in telecommunications and data communications since 1982. He is the publisher of Netwatcher, a journal addressing advanced telecommunications strategy issues. Check out his SearchCloudProvider.com networking blog, Uncommon Wisdom.
