are not at risk. The need for a robust WAN often means it is the most expensive part of a good DR strategy. Again, de-dupe can eliminate up to 97 percent of your replication traffic for disaster recovery, and including continuous data protection will improve the recovery times from hours or days to just a few minutes. When you calculate the savings from the cost of a single outage, it will provide the business case for the solution purchase.

Chris Poelker is VP of enterprise solutions, FalconStor Software. For more on FalconStor Software solutions: www.rsleads.com/203ht-203
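The WAN-sizing arithmetic behind that claim is easy to sketch. In the snippet below, the 97 percent reduction is the figure cited above; the daily change rate and replication window are illustrative assumptions, not figures from the article.

```python
# Rough sizing of replication bandwidth with and without de-duplication.
# The 97% reduction figure is from the text; the nightly change rate and
# replication window below are illustrative assumptions.

def required_mbps(changed_gb_per_day: float, window_hours: float,
                  dedupe_reduction: float = 0.0) -> float:
    """WAN throughput (megabits/s) needed to replicate the daily
    change set within the given window."""
    effective_gb = changed_gb_per_day * (1.0 - dedupe_reduction)
    bits = effective_gb * 8 * 1000 ** 3        # decimal gigabytes to bits
    return bits / (window_hours * 3600) / 1_000_000

# Example: 500 GB of changed data, 8-hour replication window.
baseline = required_mbps(500, 8)                       # no de-dupe
deduped = required_mbps(500, 8, dedupe_reduction=0.97)
print(f"without de-dupe: {baseline:.0f} Mb/s, with de-dupe: {deduped:.1f} Mb/s")
```

Under these assumptions, the same change set drops from roughly a 139 Mb/s link to just over 4 Mb/s, which is the kind of difference that moves DR replication from a dedicated circuit to an ordinary business connection.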

In the event of a disaster, clients need to be able to connect to new servers as if they were connecting to the old servers, so infrastructure servers – such as directory services, network routers and phone lines, if required – need to be available. The right infrastructure must be at the DR site to handle these kinds of operations. You need to understand how to bring up applications that have crashed so that they work again. There are typically many interdependencies between application servers and databases in most modern applications, and they need to be brought up on the right platform in the correct sequence. The ability to bring applications up using consistency grouping for the replicated data is extremely important.
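Bringing applications up in the correct sequence is a dependency-ordering problem. A minimal sketch, using a hypothetical set of services and Python's standard-library topological sorter:

```python
# Sketch of bringing services up in dependency order. The service names
# and dependency map are hypothetical; a real runbook would issue actual
# start commands instead of printing the order.
from graphlib import TopologicalSorter

# Each service lists the services that must be up before it can start.
deps = {
    "dns":       set(),
    "directory": {"dns"},
    "database":  {"directory"},
    "app":       {"database", "directory"},
    "web":       {"app"},
}

def bringup_order(dependencies):
    """Return a start order that respects every dependency."""
    return list(TopologicalSorter(dependencies).static_order())

order = bringup_order(deps)
print(" -> ".join(order))   # dns -> directory -> database -> app -> web
```

The same idea extends to consistency groups: replicate the volumes behind interdependent services as one group, then walk the dependency order at bring-up time.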

A phased approach to smart recovery

Smart recovery isn’t as elusive as some IT professionals are led to believe. Following the right sequence of steps can help you get the most out of your backup and recovery processes.

Determining the target site’s ability

Just like the WAN, your target site must be able to handle peak workloads. All of the critical applications’ requirements need to be met, even if in a degraded fashion. Data must also be available quickly enough to support business continuity with minimal impact. When determining DR infrastructure requirements for your organization, remember that healthcare companies can lose approximately $640,000 for every hour of downtime. That being the case, you can still reduce the costs of DR by virtualizing your servers and storage so that you don’t get locked in to the same vendor or equipment at both sites. Storage virtualization provides the ability to replicate from expensive large storage arrays at the production site to fast modular storage for DR, which can save up to 50 percent of the storage cost at the target site. Implementing rapid physical-to-virtual recovery enables virtual servers at the DR site to run on server blades, which reduces footprint and consolidates server costs.
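Those figures make the business case easy to run. In the sketch below, the $640,000-per-hour and 50 percent numbers come from the text above; the outage duration and array cost are illustrative assumptions.

```python
# Back-of-the-envelope business case using the figures cited above:
# roughly $640,000 per hour of downtime and up to 50% storage savings at
# the DR site. The outage duration and hardware costs are assumptions.

DOWNTIME_COST_PER_HOUR = 640_000

def outage_cost(hours: float) -> float:
    """Revenue at risk from a single outage of the given length."""
    return hours * DOWNTIME_COST_PER_HOUR

def dr_storage_cost(production_array_cost: float, savings: float = 0.5) -> float:
    """DR-site storage cost when virtualization lets you replicate to
    cheaper modular storage instead of a matching array."""
    return production_array_cost * (1.0 - savings)

# A single 4-hour outage versus the DR storage spend that prevents it.
print(f"4-hour outage: ${outage_cost(4):,.0f}")
print(f"DR storage for a $1M production array: ${dr_storage_cost(1_000_000):,.0f}")
```

On these assumptions, one avoided four-hour outage ($2.56 million) pays for the DR-site storage several times over.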

Transforming your recovery into smart recovery

The first step is to add de-duplication to your backup process. In addition to being an affordable necessity, a de-dupe solution can replicate data offsite, which eliminates offsite tape movement and recall costs, removes the need for array-based replication licenses for that data, and reduces the WAN bandwidth required to replicate it by up to 97 percent.
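The core mechanism of de-duplication can be sketched in a few lines: split the data into blocks, index each block by a cryptographic hash, and store a block only the first time it is seen. This is a simplification; production de-dupe products typically use variable-size, content-defined chunking rather than the fixed blocks shown here.

```python
# Minimal sketch of block-level de-duplication: fixed-size blocks indexed
# by SHA-256 digest, storing each unique block once. Real products use
# content-defined (variable-size) chunking, but the principle is the same.
import hashlib

BLOCK_SIZE = 4096

def dedupe(data: bytes):
    """Return (store, recipe): the unique-block store and the list of
    digests needed to reassemble the original data."""
    store, recipe = {}, []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)   # keep only the first copy
        recipe.append(digest)
    return store, recipe

def restore(store, recipe) -> bytes:
    return b"".join(store[d] for d in recipe)

# Highly repetitive data (like successive full backups) de-dupes very well.
data = b"A" * BLOCK_SIZE * 100 + b"B" * BLOCK_SIZE * 100
store, recipe = dedupe(data)
assert restore(store, recipe) == data
print(f"{len(recipe)} blocks written, {len(store)} stored")
# 200 blocks written, 2 stored
```

Because only the recipe and the handful of new unique blocks need to cross the WAN, repetitive backup streams shrink dramatically before replication.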

Next, snapshot-based backup implementations will reduce the backup window to a couple of seconds. Moving from traditional bulk backup to snapshot backup eliminates the physical movement of data from one point to another. The changed processes can back up more often, and data recovery is typically much faster. Also, the use of continuous data protection (CDP) makes recovery happen at lightning speed and eliminates data loss. As previously established, hospitals cannot afford to lose information, and CDP can reduce data-loss figures to zero.

Finally, look at implementing virtualization into your storage process to enable server consolidation, data mobility, cost reduction and more.
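The CDP idea described above, journaling every write so the volume can be rolled back to any point in time rather than only to the last backup, can be sketched as a toy model. The in-memory "volume" and block addressing here are simplifications.

```python
# Toy sketch of continuous data protection: every write is appended to a
# journal, so any past state of the volume can be reconstructed, not just
# the state at the last backup.

class CDPVolume:
    def __init__(self):
        self.journal = []            # (seq, block_no, data) entries

    def write(self, block_no: int, data: bytes):
        self.journal.append((len(self.journal), block_no, data))

    def view_at(self, seq: int) -> dict:
        """Reconstruct block_no -> data as of journal position `seq`."""
        state = {}
        for s, block_no, data in self.journal:
            if s > seq:
                break
            state[block_no] = data
        return state

vol = CDPVolume()
vol.write(0, b"v1")      # seq 0
vol.write(0, b"v2")      # seq 1 -- overwrites block 0
vol.write(1, b"x")       # seq 2
print(vol.view_at(0))    # {0: b'v1'}  -- state before the overwrite
print(vol.view_at(2))    # {0: b'v2', 1: b'x'}
```

Recovering to the moment just before a corruption event is then a matter of picking the right journal position, which is why CDP recovery times are measured in minutes rather than hours.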

Smart recovery isn’t rocket science, but it does take work. It is better to optimize your storage, backup and recovery systems before a disaster than to scramble after one. The cost of losing data is too high for healthcare IT professionals to do anything else.