February 15, 2005 | To some extent, the whole point of using Linux clusters — or any open-source platform — for high-performance computing is to achieve consistently high compute speeds at modest price points. But when the speed falters, or the system's full performance potential isn't reached, the price point doesn't look like such a bargain.

Fortunately, many tools, both free and commercial, are available to improve performance. But tools bolted on after deployment, or grabbed quickly with no thought to integration, rarely solve performance bottlenecks. In fact, there is no surefire way to boost Linux cluster performance, says Bill Magro, product line manager for the Intel MPI Library in Intel's Parallel and Distributed Solutions Division.

"Performance issues can exist in the application, the network, the operating system, or even the platform itself," Magro says, "so getting to the root of a performance problem isn't easy."

Only a detailed analysis of an application's requirements will yield the desired performance, agrees Tom Quinn, vice president of East Coast operations for Linux Networx. Every aspect of an application must be considered, from its memory footprint and bandwidth requirements to its floating-point/integer mix and data requirements. One approach is to run application-based benchmarks on each of the proposed architectures, but that isn't feasible for many organizations. Another is to combine a smaller set of benchmarks with a full system review.

Start Early, Finish Strong
Performance should sit at the top of the project list as companies start procuring the system pieces, Quinn says: "The biggest mistake is that organizations don't take performance into account when they're building the architecture or mapping out the hardware plan. You have to consider the system as a whole to get strong performance." That means spending time to determine each application's slow points, and truly understanding all the various pieces that need to work together.

As Magro explains, enterprises must understand the algorithms in their applications. "Do the algorithms themselves contain latent parallelism? If not, can you replace them with more parallel algorithms? If the algorithms are already parallel, determine the most appropriate method to express the parallelism. Thinking directly in terms of the problem being solved, rather than the details of its current implementation, will often yield the best results," he says.

And that's just the first step.

"You have to do your homework. You have to consider the language, the hardware, each application," Quinn says, and if a vendor is doing the work, make sure it is taking into account all the issues. And be forewarned, Quinn adds: Some vendors can't do the performance work needed, since many are focused on just the services aspect.

Another common mistake, Quinn says, is thinking that performance optimization can be done once or only intermittently. "The cluster is a huge stack of moving parts that needs to be taken care of, fed, and maintained. Performance isn't something you just do once," Quinn says.

Juan Jose Porta, associate director of the CEPBA-IBM Research Institute at IBM Boeblingen Laboratory, and an architect of the world's fastest life science Linux cluster, says, "It all comes down to a finely tuned application on every node."

Porta helped design and create the MareNostrum cluster, which sits at number 4 on the Top 500 supercomputer list. MareNostrum is built entirely of commercially available components, including 2,282 IBM eServer BladeCenter JS20 blade servers housed in 163 BladeCenter chassis, 4,564 64-bit IBM PowerPC 970FX processors, and 140 TB of IBM TotalStorage DS4100 storage servers.

Porta says organizations don't realize how seemingly innocuous cluster elements — such as the power source, the system's language, and the actual physical size of a cluster — can affect performance.

"Efficiency is tied to all these things as well as exploiting every node and the processor architecture and building a system that can scale," he says.

Another big mistake is choosing the wrong parallel programming method for a particular application. Four standard methods are in widespread use today: MPI (the Message Passing Interface), PVM (Parallel Virtual Machine), OpenMP, and POSIX threads (Pthreads). Most applications can be parallelized with at least one of them.

"Even when choosing the correct method, users often focus on the wrong level of parallelism for their applications, leading them to obtain poor performance results or performance results incommensurate with the amount of effort applied to parallelize the application," Magro says.

Enterprises sometimes draw incorrect conclusions about the models themselves, based on the results they achieve. "More often than not, it's the application implementation rather than the model at fault. Often the hardest part of fixing performance problems is locating the cause," Magro notes.

High-Impact Tools
To achieve high performance, a developer needs to choose a good compiler, use highly tuned math and primitives libraries whenever possible, and use a message-passing library that supports the interconnect, Magro advises. "Further improving performance requires you to take the time to analyze both the serial and parallel performance. There could be bottlenecks in the code that limit performance," he says.

When tools work well, the rewards may be substantial. Compilers and debuggers can have a drastic and immediate impact on cluster productivity and performance. Beyond decreasing design, development, and test time, programming tools such as advanced optimizing compilers and HPC-tuned libraries can improve run-time performance by 5 percent to 30 percent, according to Linux Networx.

Some vendors, such as Linux Networx, have their own cluster management tools that are integrated into production systems. The beauty of a vendor solution is that it allows in-house IT and cluster management teams to focus on the compute effort rather than on server operations and maintenance.

Intel's developer tools include the Intel Compilers and Intel VTune Performance Analyzer to achieve and further tune serial performance, the Intel Threading Tools to tune multi-threaded performance, and the Intel Trace Collector/Analyzer to tune MPI. In addition, the Intel MPI Library has a multi-fabric support feature that allows users to easily extract performance from fast interconnects such as InfiniBand.

"Developers should choose the compiler that gives the best blend of performance, usability, and compatibility on a given processor. Support for parallel models like OpenMP is another important consideration if the cluster is built from SMP nodes," Magro says, adding that the compiler must, of course, support the Linux distribution installed on the cluster.

If a cluster uses a network other than Ethernet, then for MPI codes an organization should probably choose a message-passing library that simplifies deployment across a multitude of interconnects.

"Cluster networks other than Ethernet are becoming quite common, so it pays to look at products like the Intel MPI Library, which supports a variety of fabric interconnects for the same executable," Magro says.