Companies that specialize in data recovery are still getting many calls for help from businesses and institutions whose equipment was damaged by the effects of Hurricane Sandy.

Services firms have multiple recovery efforts underway for servers that were submerged during storm surges or damaged by power surges in the New York metro area.

Many businesses underestimated the power of the storm that hit the East Coast last week, and many of those that did anticipate it weren't able to get key computer equipment out of harm's way in time, according to companies that specialize in data recovery.

Data center equipment exists in a controlled environment of steady temperatures and relative humidity. But storm-caused flooding took some data centers offline, and in some cases caused generators to fail. One data center even reported temperatures rising above 100 degrees Fahrenheit, as its staff scrambled to make generator repairs.

In breaking the environmental cocoons protecting IT equipment in many East Coast data centers, the storm may have wounded some servers and set them up for component failures weeks or months from now.

"Everybody just underestimated the strength of the hurricane," said Todd Johnson, vice president of operations of Kroll Ontrack, which provides data recovery services.

Johnson compared Sandy to Katrina, the 2005 hurricane that devastated the U.S. Gulf Coast and caused significant damage to data centers at many businesses. "People underestimated the power," he said.

Both Kroll Ontrack and data recovery firm Drive Savers report a sharp increase in calls from people seeking help with flood-damaged equipment.

Johnson said his firm has customers who reported servers in water 10 to 13 feet deep. Data can be recovered from flood-damaged systems, but Johnson recommends that users act swiftly to minimize damage.

Similarly, Michelle Taylor, a spokeswoman for Drive Savers, said the company is helping many storm-affected customers recover data, including a large home furnishings retailer in New Jersey with a nine-drive RAID system storing two terabytes per drive.

Taylor said the company expects more work as companies continue to dig out from Sandy.

Less certain is whether the environmental problems in data centers will lead to problems down the road for servers.

The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) recommends that data centers operate between 64.4 and 80.6 degrees Fahrenheit (18 to 27 degrees Celsius).

ASHRAE has studied manufacturing data to determine what happens to servers that work in high temperatures, according to Don Beaty, a consulting engineer and publications chair of the society's Technical Committee 9.9 for mission-critical facilities, technology spaces and electronic equipment.

"Using a baseline of 68 degrees Fahrenheit as the benchmark for failures, a temperature of 104 degrees represents a 66% increase in failures. This seems like a big increase. However, if the average failure rate is 4%, then operating at 104 degrees would result in the failure rate rising from 4% to 7%," said Beaty.

However, he noted that the failure rate is also based on duration.

There are 8,760 hours in a year, he said. "If 10% or 87.6 hours were at 104 degrees and the remainder at 68 degrees, the total failure rate for the year would be a ratio or weighted average. This means the 66% rise would be 6.6% rise in failures. At a 4% failure base, this means 4.66% failures rather than 4%," said Beaty.
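Beaty's duration-weighting reasoning can be sketched as follows. This is a rough illustration using only the numbers from the quotes above (a 4% baseline annual failure rate and a 66% relative increase at 104 degrees); it is not ASHRAE's exact model.

```python
# Sketch of the duration-weighted failure-rate arithmetic described above.
# Assumptions (from the article's quotes): a 4% baseline annual failure
# rate at 68 F, and a 66% higher relative failure rate at 104 F.

BASELINE_RATE = 0.04    # assumed average annual failure rate (4%)
HOT_MULTIPLIER = 1.66   # relative failure rate at 104 F vs. the 68 F baseline

def weighted_failure_rate(fraction_hot: float) -> float:
    """Blend the two rates by the fraction of the year spent at 104 F,
    with the remainder of the year at the 68 F baseline."""
    return BASELINE_RATE * ((1 - fraction_hot) + fraction_hot * HOT_MULTIPLIER)

# A full year at 104 F: 4% * 1.66 = 6.64%, roughly the 7% Beaty cites.
print(round(weighted_failure_rate(1.0) * 100, 2))  # -> 6.64

# 10% of the year at 104 F and the rest at 68 F: a blended rate of ~4.26%.
print(round(weighted_failure_rate(0.1) * 100, 2))  # -> 4.26
```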

Equipment can also be built to withstand much higher temperatures. At minimum, equipment is manufactured to ASHRAE's Class A1 standard, which has an upper limit of 89.6 degrees, and increasingly it is being made to meet the Class A2 standard, which allows up to 95 degrees.

There has been a trend to raise data center temperatures as part of a push to use less energy, and it is becoming more common for data centers to operate at 72 to 75 degrees, said Beaty.

IT managers have experimented with putting servers in sheds and tents, exposed to temperature and humidity extremes. More often than not, these limited efforts have surprised people with the equipment's durability.

Nonetheless, equipment operating at higher than recommended temperatures, such as in a data center hot spot, could see failures, said Scott Kinka, CTO of cloud services company Evolve IP.

"Heat equals age in the computer component world," said Kinka. Equipment that has been exposed to high temperatures, such as in a data center hot spot, may be at higher risk of component failure at some point, and a manager may see an uptick in component problems.

But it may be hard to trace the root cause of a failure back to the storm, because the failure could happen months later, said Kinka. "The hard part about this one is you are just not going to know," he said.