The Business of Disruptive Innovation

HPCwire, November 14, 2010
Like every technology-based sector, high performance computing takes its biggest leaps by the force of disruptive innovation, a term coined by the man who will keynote this year’s Supercomputing Conference (SC10) in New Orleans. Clayton M. Christensen doesn’t know a whole lot about supercomputing, but he knows a great deal about the forces that drive it.

For the past 15 years, Christensen, a professor at the Harvard Business School, has been studying how technological innovation works, how it can drive some businesses to succeed, and how it can cause others to fail spectacularly. Today he is considered one of the leading experts on innovation. At SC10, he will attempt to impart some of this wisdom to the HPC faithful.

Not a techno-geek by any means, Christensen focuses on the business end of disruptive innovation. In 1997 he penned his first book on the subject, The Innovator’s Dilemma, wherein he describes the challenges of managing innovation. Since then he’s developed a set of well-respected theories on innovation and has published a number of other books that explore different aspects of the subject. HPCwire recently got the opportunity to speak with Christensen about his work and how his theories apply to the high performance computing industry.

From Christensen’s perspective, disruptive innovation is not a technical idea; it encompasses a business model that is at the heart of how technology is delivered to the marketplace. In a nutshell, a disruptive innovation represents new value to the marketplace, and it usually emerges as a simpler and less expensive alternative to established technologies. But it is not a market-specific concept. Christensen has done his research by studying how the innovation process works in a generic sense, not by studying an industry, like high performance computing, and then developing a theory specifically applicable to it.

According to Christensen, there’s a basic problem with the way the world is designed: data is only available about what happened in the past. And it’s convincingly available only about the distant past. So when managers make predictions about the future using historical data, those predictions tend to be very unreliable.

So how is one to predict the future? The answer is theory, says the Harvard professor. “A really good theory gets down to the fundamental insight on why the world works the way it does,” explains Christensen. “You guys are scientists and engineers and use theories all of the time in the technical dimensions. But now there is a set of theories about the business side that are very valuable.”

The group Christensen works with at Harvard has spent years developing business management models that can help predict which kind of product, service or company is likely to succeed and which is likely to fail. Some of his students have had remarkable success applying this framework to real-life situations. For example, one of Christensen’s students successfully predicted the demise of Google’s Wave communication platform, an all-encompassing web-based communication tool that the search giant put on the shelf after just four months of user trials.

The HPC business, of course, lives and breathes in a world of disruptive technologies. From the “Attack of the Killer Micros” that all but wiped out custom processor-based supercomputing in the 1990s, to today’s emergence of general-purpose GPU computing, HPC seems especially prone to being reshaped by simpler technologies from below.

Which may explain why even established HPC players like IBM, Cray, and HP often struggle to make their supercomputing businesses profitable. The challenge for the industry leaders is that they need sustaining technologies to maintain their business model, says Christensen. Disruptive technologies are not good fits for market leaders, since these companies tend to cater to customers high up the food chain. In other words, the IBMs of the world need to continually create higher value products to feed their best clients. Alternatively, they can acquire other companies whose products match their existing customer base.

Christensen’s theories actually predict this type of business interaction quite well. For example, in the 1960s, X-ray machines were the only devices that let doctors peer inside the body. But in 1971, a British company called EMI launched computed tomography (CT), a high-end technology that delivered superior imaging since it revealed soft tissues as well. Within a year the leaders in X-ray technology — GE, Siemens and Philips — developed better CT machines than EMI and eventually drove the company out of the business.

The next medical imaging technology was Magnetic Resonance Imaging (MRI), which turned out to be an even better way to look at certain structures inside the body. But again, the early developers of MRI technology were overtaken by GE, Siemens, and Philips. For both CT and MRI devices, the established companies found they could sell them for even better profits than X-ray machines.

On the other hand, when ultrasound technology was developed, it was a different story. Ultrasound didn’t produce crystal-clear images, but the devices were inexpensive and simple to operate, so they could be purchased and used as standard equipment in doctors’ offices. GE, Siemens and Philips bypassed the ultrasound market because the financial incentives were wrong for their business structure. So a whole new set of vendors emerged for ultrasound products. It was a true disruptive innovation.

If Christensen’s models had been applied to startups like ClearSpeed or SiCortex, they might have revealed that the technologies those companies developed, as good as they were, did not fit the disruptive profile and did not offer a sustaining technology for larger vendors either. His theories might also have predicted the recent rash of HPC software tool acquisitions: Cilk Arts, Interactive Supercomputing, RapidMind, TotalView Technologies, Visual Numerics, and Acumem. All of these tool companies had sustaining technologies of value to the larger buyers, in this case, Intel, Microsoft, and Rogue Wave Software.

So what’s the next big disruptive technology? Christensen thinks it could very well be cloud computing. According to him, the cloud is setting itself up to be a countervailing force that will cut across mainframe and high-end computing. As such, it has the potential to usurp the established business model of HPC. “The supercomputer leaders should watch out,” he warns.

Startup Makes Liquid Cooling an Immersive Experience

HPCwire, August 31, 2010

There’s nothing like a blazing hot summer to focus one’s attention on the best ways to keep cool. That goes for datacenter operators as well, who are equally worried about keeping their servers properly chilled. While there is no shortage of innovative cooling solutions being proffered by various vendors, a new liquid immersion cooling solution from startup Green Revolution Cooling could end up being the best of them all.

The stakes for more efficient datacenter cooling are already high. In a traditional air-cooled facility, cooling eats up a third to more than half of the energy bill. Making cooling more efficient leaves more money available for computing, which, after all, is the central purpose of the datacenter. Efficient cooling is an especially important consideration in high performance computing, since this class of users gravitates toward faster and denser (and thus hotter) server configurations. If the cooling setup is not optimal, you end up sacrificing a lot of FLOPS.
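
In the terms datacenter engineers usually use, that fraction maps onto power usage effectiveness (PUE). The article does not quote PUE numbers, so the following minimal Python sketch is just an illustrative translation of "a third to more than half":

```python
# If cooling and other overhead take a fraction f of the total energy bill,
# IT gear gets the remaining (1 - f), so PUE = total / IT = 1 / (1 - f).
for f in (1 / 3, 1 / 2):
    print(f"overhead fraction {f:.0%} -> PUE of roughly {1 / (1 - f):.2f}")
# a third of the bill -> PUE ~1.50; half of it -> PUE ~2.00
```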

With the increasing density of servers, storage, switches and other equipment, facility managers are taking an extra hard look at liquid cooling. Water-cooled servers have been around for decades, and direct-cooled CPUs are now being offered by a handful of vendors. Submerged liquid cooling, too, has been around since the days of the Cray-2, but this technology may be poised for a big comeback.

Servers Take a Bath

Green Revolution Cooling (GRC), a two-year-old company based in Austin, Texas, is offering a general-purpose liquid immersion cooling solution that they introduced at SC09 in Portland last November. It was selected as one of the “Disruptive Technologies of the Year” for the 2009 conference, an award they’ve recaptured for SC10.

In a nutshell, the system consists of a 42U rack enclosure tipped on its back and filled with an inert mineral oil mixture in which you immerse the server hardware. A pump is used to circulate the oil to an external heat exchanger, typically located outside the building.

The big advantage is that, unlike water, the oil formulation is not electrically conductive, yet it has about 1,200 times the heat capacity of air. And since the oil is in direct contact with all the components, it only needs to be cooled down to about 104F (40C) to be effective. (CPUs can operate at 75C and hard drives at 45C.) Unless your datacenter happens to be located in Yuma, Arizona, cooling a liquid to 40C is relatively easy with a simple heat exchanger or cooling tower. The solution is advertised to reduce cooling energy by 90 percent and cut overall power consumption in the datacenter by up to 45 percent. The pitch is that a single 10kW server rack at 8 cents per kWh will save over $5,000 per year on energy costs alone.
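
The arithmetic behind that pitch is easy to reproduce. The Python sketch below assumes a conventional cooling overhead of 0.8 watts per watt of IT load (roughly a PUE of 1.8); the article states only the rack size, the electricity rate, and the 90 percent figure, so the overhead is an assumption:

```python
# Hypothetical check of the ">$5,000/year" claim for a 10 kW rack at $0.08/kWh.
# The conventional cooling overhead below is an assumption, not from the article.
it_load_kw = 10.0
rate = 0.08                            # dollars per kWh
hours_per_year = 8760

air_cooling_kw = 0.8 * it_load_kw      # assumed air-cooled overhead (PUE ~1.8)
grc_cooling_kw = 0.1 * air_cooling_kw  # the advertised 90 percent cooling reduction

saved_kw = air_cooling_kw - grc_cooling_kw
print(f"annual savings: ${saved_kw * hours_per_year * rate:,.0f}")  # ~$5,046
```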

According to Green Revolution co-founder Christiaan Best, basically any piece of datacenter equipment — rackmount server, blade, switch — that adheres to the standard 19-inch form factor can be slid into the GRC enclosure. The only equipment modifications required are the removal of the internal fans (you don’t need air cooling any more) and the sealing of any hard drive units, with an epoxy coating, to make them airtight. Typically this procedure takes a few minutes per server.

Because the GRC enclosure is laid on its back, it does take up more floor space than a regular vertical rack. But since you no longer need hot aisles, chillers, and CRAC units, there is extra square footage to play with. Also, because there is no need to run cold air beneath the equipment anymore, the raised floor is now superfluous. “Essentially you could run it in a barn,” says Best. “All you need is a level floor.”

If you’re looking for performance, the GRC rack allows you to overclock the processors without worrying about melting the server. An NSF-funded study found that cranking up the clock on an Intel E5520 “Nehalem” CPU inside a GRC-cooled server yielded a 54 percent performance boost on Linpack, while keeping the CPU temperature at 76C. The server cost per gigaflop was reduced by about 50 percent.
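
As a sanity check on the cost-per-gigaflop figure, here is a minimal Python sketch; the server price and stock Linpack score are placeholder assumptions, since the study's raw numbers are not given in the article:

```python
# Effect of the 54% Linpack boost on cost per gigaflop.
server_cost = 3000.0              # assumed server price (not from the article)
stock_gflops = 70.0               # assumed stock Linpack score (not from the article)
oc_gflops = stock_gflops * 1.54   # the 54 percent overclocking gain from the study

reduction = 1 - (server_cost / oc_gflops) / (server_cost / stock_gflops)
print(f"cost-per-gigaflop reduction from the boost alone: {reduction:.0%}")  # 35%
```

The clock boost alone accounts for roughly 35 percent of the reduction (1/1.54 is about 0.65); the rest of the reported 50 percent presumably comes from the cheaper cooling infrastructure folded into the study's cost model.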

It’s not just for overclocking. Theoretically, you could throw almost any sort of artificially dense board — multi-GPU servers, custom blades with 10 CPUs on the motherboard, etc. — into the oil bath and realize the additional cost benefit of shrinking down your hardware footprint.

One possible roadblock to widespread adoption is the lack of warranty support from the OEMs. Warranties don’t typically allow the customer to take a server apart and dunk it into foreign liquids. According to Best, they’ve been talking with all the major OEMs to get their solution qualified under the original warranties, but so far none have committed to supporting the GRC setup. Since many of the big system vendors have their own liquid cooling solutions they’d like to sell, they are likely to be less than enthusiastic about qualifying a third-party alternative.

In any case, Best says they’ve retained third-party support that will honor the original equipment warranties, so customers can be covered for any mishaps. GRC has logged over a quarter million server hours on their in-house test system and has yet to encounter a failure (with the exception of hard drive mechanical failures). Although there is no data to support it, Best is fairly certain that their solution will extend the life of the servers, given the more stable thermal environment, the lack of vibration from internal fans, and the elimination of oxidation on the electrical contacts.

Looking for a Few Brave Customers

Austin-based Midas Networks, a colocation firm, is the company’s first customer. Midas has purchased four of the GRC racks, and the systems are scheduled to be up and running later this year. Best says they also have a number of other customers in the pipeline, including some with HPC facilities, but no checks are in the bank just yet.

With the exception of Green Revolution itself, the Texas Advanced Computing Center (TACC) has acquired the most experience with the technology. TACC installed a pre-production GRC unit back in April and has been putting the system through its paces for the past five months.

Even in oil-rich Texas, energy is not cheap, so power savings has become a big priority at TACC. “We’re really, really chill-water limited where we are now,” says Dan Stanzione, TACC’s deputy director. According to him, they don’t have the ability to add any more chilled water capacity, but do have plans to expand computing capability over the next several years.

The TACC experiment started with immersing some older 1U servers in the GRC enclosure, and since then they’ve added other equipment including InfiniBand switches, GPU-powered servers, and blades. According to Stanzione, all the hardware has performed flawlessly, with no failures to date. They’ve even overclocked some of the server CPUs by 30 to 40 percent, without incident.

At present they have about 10kW of equipment in the rack, and are using just 250 watts to power the GRC solution. That’s more than a 90 percent reduction compared to the 3,000 to 4,000 watts they would have consumed with a conventional air-cooled system. Stanzione estimates the total power consumption for the whole system (equipment plus cooling) has been reduced by 25 to 30 percent. “The overall power consumption has been fantastic,” he says.
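
Those figures hold up to a quick back-of-the-envelope check, sketched below in Python using the endpoints of the 3,000-to-4,000-watt air-cooled estimate (any shortfall against the 25 to 30 percent figure likely reflects chiller-plant overhead this simple model omits):

```python
# Sanity check of TACC's reported rack numbers.
it_w = 10_000   # equipment load in the rack (from the article)
grc_w = 250     # power drawn by the GRC cooling solution (from the article)

for air_w in (3_000, 4_000):  # conventional air-cooled power estimate
    cooling_cut = 1 - grc_w / air_w
    total_cut = 1 - (it_w + grc_w) / (it_w + air_w)
    print(f"vs {air_w} W air cooling: cooling down {cooling_cut:.0%}, "
          f"total down {total_cut:.0%}")
# cooling drops by 92-94 percent; total rack power by roughly 21-27 percent
```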

The TACC crew is going to continue collecting data with the GRC system for the rest of the year. If everything checks out, Stanzione would like to start putting some production units into the upcoming datacenter buildout. They’re already thinking about loading 30 to 40 kW of compute equipment into a single rack, and GRC cooling would make that level of density quite practical. Further into the future, Stanzione is thinking about the cost savings they could accrue by immersing all 140 racks of the center’s equipment. “I think this has a tremendous amount of potential,” he says.

Barring some unforeseen technological breakthrough, datacenter computing is only going to get denser and hotter in the years ahead. And since the cooling capacity of air isn’t going to change, the move to liquid-cooled systems appears all but inevitable. “You may not buy liquid cooling from us,” concludes Best, “but you will buy it from someone.”