Six cool innovations for the data centre

Sure, consumer gadgets are getting most of the attention these days, but data centres are getting some love too. These new products and technologies promise to solve real data centre problems or are already working to make enterprise operations run more smoothly. How many are on your wish list?

1. Fibre optics with a twist

The success of the HDMI cable in consumer electronics has proved that having a common cable that works with Blu-ray players, HDTV sets and just about any set-top box helps remove clutter and confusion. Intel has developed Light Peak following this same approach. It's a fibre-optic cable technology that will first be used with laptop and desktop computers to reduce clutter and to speed transmission, but it could also make its way to the data centre as a way to connect servers and switches.

The 3.2mm cable, which is about as thin as a USB cable, can be up to 100 feet long. Intel has designed a controller that will sit inside a computer, and cables are currently in production. Third parties are expected to start making computers with Light Peak fibre-optic cables in them by 2011.

For data centres, Light Peak presents a few interesting possibilities. Fibre optics have been in the data centre since the early 1990s, when IBM introduced its Escon (Enterprise Systems Connection) product line, which connects mainframes at 200Mbit/sec. Light Peak differs in that it runs at 10Gbit/sec., and Intel claims that the components will be less expensive and lighter than existing fibre-optic products.

"Intel claims Light Peak will be less complex and easier to manage by eliminating unnecessary ports, and deliver the higher throughput required by high performance e-SATA and DisplayPort systems," says Charles King, an analyst at Pund-IT in Concord, Mass. "If the company delivers on these promises, Light Peak could simplify life for data centre managers plagued by installing, managing and troubleshooting miles of unruly optical cables."

Success here will depend on "how willingly developers and vendors" embrace Light Peak and build products around it, King explains.

2. Submerged liquid cooling and horizontal racks

Liquid cooling for data centres is not a new concept, of course, but Green Revolution Cooling has added a new twist. For starters, the rack is turned on its side, which helps with cable management and makes it easier for administrators to access equipment, and the horizontal rack is surrounded by liquid. A new coolant, called GreenDEF, is made from mineral oil that is nontoxic, costs less than other liquid-cooling methods and, unlike water, is not electrically conductive, according to a GR Cooling spokesman.

"The liquid actually moves through the floor and circulates up through all of the computing nodes," says Tommy Minyard, director of advanced computing systems at the Texas Advanced Computing Center, part of the University of Texas at Austin. This means more-effective cooling because heat is drawn away from the processors by the coolant circulating along the sides and under the rack, he explains. Minyard is installing GR Cooling systems in his data centre and expects 30 to 40 percent savings compared to traditional air-cooled systems.

Green Revolution Cooling's design turns racks horizontal and submerges them in a new type of coolant to reduce energy costs in the data centre.

Minyard says liquid cooling has made a rebound lately, recalling the days when Cray offered submerged cooling systems, and he notes that even IBM is moving back to chilled-liquid cooling for some compute nodes.

Pund-IT's King says a major issue is that enterprises have fought the return of liquid cooling in the data centre because of the high costs of implementing the technology and because it is unproven as a widespread option.

"Liquid cooling usually costs much more to install upfront than air cooling," says Mark Tlapak, GR Cooling's co-founder. "Compared to air, every liquid cooling system has some added nuance, such as electric conductivity with water-based cooling systems." But, he says, "spring a leak in the water systems, and you lose electrical equipment." Still, for Minyard, GR Cooling is an ideal fit: His data centre gravitates toward dense, powerful systems that pack intense computing into small spaces, such as IBM blade servers and the latest Intel processors. The Ranger supercomputer, for example, uses 30kW of power per rack.

3. Several broadband lines combined into one

Enterprises can spend many thousands of dollars on fibre-optic lines and multiple T1 connections, but at least one emerging technology is aiming to provide a lower-cost alternative.

Mushroom Networks' Truffle Broadband Bonding Network Appliance creates one fast connection out of up to six separate lines, a technique known as bonding. The Truffle combines the bandwidth of all available broadband lines into one giant pipe, with download speeds of up to 50Mbit/sec., the company says. Internet access may be through a DSL modem, cable modem, T1 line or just about any broadband connection.

This increases overall throughput and acts as a backup mechanism, too. If one of the "bonded" lines fails, the Truffle connection keeps running on the remaining lines.
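The bonding behaviour described above can be sketched in a few lines of Python. This is a conceptual illustration, not Mushroom Networks' actual algorithm: traffic is striped across whichever lines are currently healthy, so aggregate throughput is roughly the sum of the live lines, and losing one line degrades capacity rather than dropping the connection.

```python
# Conceptual sketch of broadband bonding (not the Truffle's real algorithm):
# packets are round-robined over healthy lines; a failed line is simply
# skipped, so the bonded connection keeps running.
from itertools import cycle

class BondedLink:
    def __init__(self, lines):
        # lines: dict of line name -> throughput in Mbit/sec (illustrative figures)
        self.lines = dict(lines)
        self.up = set(lines)

    def fail(self, name):
        # Mark a line as down; it stops carrying traffic
        self.up.discard(name)

    def capacity(self):
        # Aggregate bandwidth of all healthy lines
        return sum(self.lines[n] for n in self.up)

    def schedule(self, n_packets):
        # Round-robin packets over healthy lines only
        healthy = cycle(sorted(self.up))
        return [next(healthy) for _ in range(n_packets)]

bond = BondedLink({"dsl1": 12, "dsl2": 12, "cable": 20, "t1": 1.5})
print(bond.capacity())   # 45.5 Mbit/sec combined
bond.fail("cable")
print(bond.capacity())   # 25.5 -- the connection keeps running, just slower
```

Real appliances also have to reorder packets that arrive over lines with different latencies, which is where much of the engineering effort lies.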

Steve Finn, a television producer in Kenya, uses Mushroom Networks' appliance for a program called Africa Challenge that is broadcast to eight African countries. He relies heavily on broadband to produce the content and at one time paid as much as $4,000 per month for connectivity. Speeds vary depending on latency and traffic, but he says the bonded speed is generally about four times faster (four lines times the speed of each individual line), at about half the cost of one equivalent high-speed line.

Frank J. Bernhard, an analyst at Omni Consulting Group, says Mushroom Networks fills a niche for companies that do not want to pay the exorbitant fees for multiple T1 or T3 connections but still need reliable and fast Internet access. Other companies, including Cisco Systems, offer similar bonding technology, but at greater cost and with more installation complexity, which means the technique has not yet been widely adopted.

4. Multiple data centres more easily connected

In a very large enterprise, the process of connecting multiple data centres can be a bit mind-boggling. There are security concerns, Ethernet transport issues, operational problems related to maintaining the fastest speed between switches at branch sites, and new disaster planning considerations due to IT operations running in multiple locations.

Cisco's new Overlay Transport Virtualization, or OTV, connects multiple data centres in a way that seems remarkably easy compared with the roll-your-own process most shops have traditionally used. Essentially a transport technology for Layer 2 networking, the software updates network switches, including the Cisco Nexus 7000, to connect data centres in different geographic locations.

The OTV software costs about US$25,000 per license and makes maximum use of the bandwidth and connections already established between data centres.

There are other approaches for linking multiple data centres, a Cisco technical spokesman acknowledges, including those involving MPLS or, before that, frame relay and Asynchronous Transfer Mode protocols.

But unlike some of the older approaches, the spokesman explains, Cisco OTV does not require any network redesign or special services in the core, such as label switching. OTV is simply overlaid onto the existing network, inheriting all the benefits of a well-designed IP network while maintaining the independence of the Layer 2 data centres being interconnected.
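The "overlay" idea can be illustrated with a toy model. This is not Cisco's implementation, and the MAC table and addresses below are invented for illustration: a Layer 2 Ethernet frame destined for a machine at a remote site is wrapped in an ordinary IP packet addressed to that site's edge device and routed over the existing network, which is why no core redesign or label switching is needed.

```python
# Toy model of MAC-in-IP overlay transport (conceptual only, not Cisco OTV's
# actual encapsulation format). The mapping table and addresses are
# hypothetical examples.

# MAC-to-site table, as would be learned by an overlay control plane
mac_table = {
    "aa:aa:aa:aa:aa:01": "10.0.1.1",    # edge device at data centre A
    "bb:bb:bb:bb:bb:02": "172.16.2.1",  # edge device at data centre B
}

def encapsulate(frame_dst_mac, frame_bytes, local_edge_ip):
    """Wrap a Layer 2 frame in an IP 'envelope' addressed to the
    edge device fronting the remote site that owns the MAC."""
    remote_edge = mac_table[frame_dst_mac]
    return {"ip_src": local_edge_ip,
            "ip_dst": remote_edge,
            "payload": frame_bytes}

# A frame from site A bound for a MAC at site B rides the existing IP core
pkt = encapsulate("bb:bb:bb:bb:bb:02", b"ethernet-frame", "10.0.1.1")
print(pkt["ip_dst"])   # 172.16.2.1 -- delivered over the routed network
```

The point of the model: the core routers only ever see ordinary IP packets between edge devices, so the Layer 2 domains at each site stay independent.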

Terremark, a cloud service provider based in Miami, uses Cisco OTV to link 13 data centres in the U.S., Europe and Latin America. The company says there are significant savings compared with taking a "do-it-yourself" approach to linking data centres, owing to reduced complexity and OTV's automated failover system that helps multiple data centres act as one if disaster strikes.

Virtual machines from one location can now use VMware's VMotion, for instance, to automatically move to another physical location in the event of a failure.

5. Priority-based e-mail storage

Communication is what drives a business, but too often the bits and bytes of an e-mail exchange are treated in the data centre as just another data set that needs to be archived. Messagemind automatically determines which e-mails can be safely archived.

The tool analyzes all company communication -- tracking which messages end users read, delete or save -- and then groups them according to priority level.

Data centre administrators can use that information to store e-mail based on priority level, which in turn can save money. For example, instead of storing all e-mails in one high-cost archive, messages marked as low priority -- based again on the end user's clicking behavior -- can be stored in lower-cost storage systems. High-priority e-mail can be stored on higher-performance, and higher-cost, media.
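The tiering decision described above can be sketched simply. The scoring rules here are invented for illustration and are not Messagemind's actual model: messages the user read and saved rank high and stay on fast media, while ignored or deleted mail scores low and is routed to cheaper storage.

```python
# Hypothetical sketch of priority-based e-mail tiering (the scoring
# weights are made up for illustration): user behaviour -- read, save,
# delete -- determines which storage tier a message lands in.

def priority(msg):
    score = 0
    if msg["read"]:
        score += 1
    if msg["saved"]:
        score += 2
    if msg["deleted"]:
        score -= 2
    return score

def storage_tier(msg):
    # High-priority mail -> fast, costly media; the rest -> cheap archive
    return "high-performance" if priority(msg) >= 2 else "low-cost-archive"

inbox = [
    {"id": 1, "read": True,  "saved": True,  "deleted": False},
    {"id": 2, "read": False, "saved": False, "deleted": True},
]
for m in inbox:
    print(m["id"], storage_tier(m))
# 1 high-performance
# 2 low-cost-archive
```

A production system would aggregate this behaviour across whole conversation threads rather than scoring single messages, but the cost-saving mechanism is the same.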

That same behind-the-scenes analysis can be used outside the data centre, rolled up into a dashboard that managers and end users can view to help them with projects. For example, business units can view e-mail diagrams that show who is communicating effectively on a project and who seems to be lagging behind and rarely contributing.

Pund-IT's King says Messagemind is an intriguing prospect because e-mail has become such a wasteland of broken conversations and disconnected project discussions. Managing e-mail becomes even more painful if a company is subject to litigation, and e-mail becomes part of the legal discovery process.

"Even the best e-mail solutions require employees to manage their messages," says King. "If it works as advertised, I could see this catching hold in enterprises. By managing e-mail more effectively -- and automatically -- Messagemind's solution could take a great deal of weight off the shoulders of data centre admins struggling under ever-increasing volumes of stored messages."

6. User accounts virtualized for easier restoration

Virtualization has become the buzzword of the past decade, but it usually means abstracting an operating system from a server or data from storage. AppSense applies virtualization to user accounts: it extracts user profile settings from Windows applications and maintains them separately. That means that if an application is updated or changed, the user information is still available, and if user settings are corrupted or lost, administrators can restore them with a minimum of bother.

Landon Winburn, a software systems specialist at the University of Texas Medical Branch in Galveston, Texas, uses AppSense to virtualize user-account profiles for his 3,000 students. Winburn says the university used to manage user settings manually, fielding about 40 to 60 calls per week related to log-ins. It also saw five to 10 profile corruptions per day related to user settings.

"Before AppSense, the only solution for a corrupt profile was to delete the profile and have the user start again from scratch for all applications," says Winburn.

But now, with AppSense restoring these settings automatically, the university no longer has to address the problems directly. By virtualizing accounts, the university was also able to double the number of user profiles per Citrix XenApp server, from 40 to about 80.
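The core idea, keeping user settings in a store separate from the application so they survive upgrades and corruption, can be sketched in a few lines. This is a minimal illustration of the pattern, not AppSense's actual mechanism, and the user, application and setting names are hypothetical.

```python
# Minimal sketch of user-profile virtualization (conceptual, not
# AppSense's implementation): per-user application settings live in a
# store decoupled from the application itself, so a corrupt in-app
# profile can be restored instead of rebuilt from scratch.

profile_store = {}   # (user, app) -> last known-good settings

def save_settings(user, app, settings):
    # Capture a copy of the user's settings outside the application
    profile_store[(user, app)] = dict(settings)

def restore_settings(user, app):
    # Hand back the saved settings; empty dict if nothing was captured
    return dict(profile_store.get((user, app), {}))

save_settings("student42", "XenApp", {"theme": "dark", "homedir": "H:"})
# ...the in-application profile later gets corrupted...
fixed = restore_settings("student42", "XenApp")
print(fixed)   # {'theme': 'dark', 'homedir': 'H:'}
```

Because the store is keyed by user and application rather than tied to one application version, the same saved settings can follow the user across upgrades, which is what eliminates the start-from-scratch rebuilds Winburn describes.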

John Brandon is a veteran of the computing industry, having worked as an IT manager for 10 years and a tech journalist for another 10. He has written more than 2,500 feature articles and is a regular contributor to Computerworld US.

Copyright 2017 IDG Communications. ABN 14 001 592 650. All rights reserved. Reproduction in whole or in part in any form or medium without express written permission of IDG Communications is prohibited.