HPCwire » 451 Research
To Unify or Not to Unify? The Question for Datacenters
Tue, 09 Apr 2013

The Top500 list is dominated each year by unified systems: the same vendor provides most of the hardware, the same CPUs are used across the system, and so on. If HPC systems are built this way, why aren't many datacenters?

Senior analyst Peter ffoulkes of TheInfoPro (a division of 451 Research) wondered the same thing, noting that datacenters could achieve better efficiency and optimization in a uniform environment. Specifically, it is easier to run tests on a uniform system and determine what is malfunctioning; repairing those malfunctions would, hypothetically, be simpler too.

“If you want to have something you know will behave in a consistent way, then knowing it’s running on an identical environment means you can test it once and know it will behave the same way on every machine,” ffoulkes said, explaining the difference between maintaining uniform and non-uniform systems. He went on to mention that without that uniformity, finding the cause of a problem within the datacenter becomes much more difficult and requires more work.

The notion rests on the idea that uniformity in general, even in non-computing applications, eases company-wide maintenance. For example, ffoulkes pointed to Southwest Airlines, which flies only Boeing 737s, while United and American operate a mix of Boeing and Airbus models.

“They only train people once, they’ve got one set of parts, and if there’s a problem at an airport, they know they’ve got the parts there. Any airline with more variants of aircraft has a bigger problem,” said ffoulkes. Relating airlines back to the virtualized environments in datacenters, he argued that the same standardization should, hypothetically, breed more successfully maintained facilities. “That’s why people want to standardize on hardware. You can get to a root cause of a problem faster.”
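The maintenance argument can be made concrete with a small sketch (the inventory format, field names, and `audit_uniformity` helper here are illustrative, not from the article): every distinct hardware profile in a fleet is one more configuration that must be tested, stocked with spares, and understood by staff.

```python
from collections import Counter

def audit_uniformity(fleet):
    """Group servers by hardware/hypervisor profile and count each variant.

    `fleet` is a list of dicts with 'vendor', 'cpu', and 'hypervisor' keys.
    Returns a Counter mapping each distinct profile to its machine count;
    the number of keys is the number of configurations to test and support.
    """
    return Counter((s["vendor"], s["cpu"], s["hypervisor"]) for s in fleet)

# Hypothetical inventory: two identical machines and one odd one out.
fleet = [
    {"vendor": "Dell", "cpu": "Xeon E5", "hypervisor": "ESXi"},
    {"vendor": "Dell", "cpu": "Xeon E5", "hypervisor": "ESXi"},
    {"vendor": "HP",   "cpu": "Opteron", "hypervisor": "KVM"},
]
profiles = audit_uniformity(fleet)
print(len(profiles))  # 2 distinct configurations, so two sets of parts, tests, training
```

A fully unified datacenter would report a single profile; each additional entry in the counter is, in ffoulkes's framing, another aircraft variant in the hangar.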

Andi Mann, vice president of Strategic Solutions at CA Technologies, agrees with ffoulkes to a certain extent, calling the idea a best practice that is not always applicable in the real world. “I think it’s a good idea, a best practice, to standardize on a hardware build and hypervisor,” he explained. “It reduces the fragility of the environment and gives you the opportunity to have stability.”

However, datacenters are frequently built out over time, and over that time the relationships between vendors and facility owners/developers are fluid. Further, a particular vendor sometimes offers a better cost model for a certain workload. “There are a lot of real world circumstances where [uniformity] is not a good idea,” Mann said. “The cost is one. Do you need the same system for every workload?”

Indeed, ffoulkes recommended splitting up vendors across datacenters for companies that utilize multiple facilities across the world. This breeds competition among the vendors and keeps costs down.

Still, one vendor per datacenter is an interesting idea that virtualized environments can borrow from HPC systems.