Not long ago, disaster recovery was a luxury only the largest companies could afford, due to the prohibitive cost and effort required. Even these large companies frequently could not justify the investment and went without a disaster recovery plan. Today, virtualization and the cloud enable companies of all sizes to implement a scalable, highly efficient disaster recovery plan without a huge investment.

COSTLY AND RESOURCE-INTENSIVE DISASTER RECOVERY OF YESTERDAY

At one time, investment in disaster recovery came in one of two forms: build a replica or subset of the production computing environment at a secondary site, or contract with a disaster recovery provider. These providers maintained data centers equipped with compatible computing platforms on which a company could restore its environment when a disaster was declared. The latter was often the more feasible option, since the service provider could spread its hardware investment across a pool of customers, lowering its per-unit cost and passing some of the savings along. Even so, I have heard many companies complain about the $50,000-$400,000 per month they paid to maintain a contract for a secondary disaster recovery site. These exorbitant fees did not even cover the customer's annual testing costs to simulate a disaster and exercise the recovery process, which often included IT staff members rolling through airports with cases of backup tapes.

Beyond the expense, disaster recovery services were also very resource intensive, with long recovery point and recovery time objectives. The Recovery Point Objective (RPO) is the maximum tolerable period in which data might be lost from an IT service. Because nightly backup tapes were often used for disaster recovery, the RPO could be as long as 24 hours. The Recovery Time Objective (RTO) is the duration of time, and the service level, within which a business process must be restored after a disaster or disruption in order to avoid unacceptable consequences of a break in business continuity. Recovery could easily exceed 48-72 hours once travel to the recovery site, possibly out-of-date documentation and tape-by-tape restores were factored in. Due to the cost and effort, tests were normally performed on an annual basis, which was not frequent enough to keep the plans up to date.
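To make the two objectives concrete, here is a minimal Python sketch. The dates and the nightly-backup schedule are illustrative assumptions, not figures from any particular deployment:

```python
from datetime import datetime, timedelta

def rpo_hours(last_backup: datetime, disaster: datetime) -> float:
    """Worst-case data loss window: time elapsed since the last usable backup."""
    return (disaster - last_backup).total_seconds() / 3600

def meets_rto(restore_started: datetime, service_restored: datetime,
              rto: timedelta) -> bool:
    """True if the service came back within the agreed recovery window."""
    return (service_restored - restore_started) <= rto

# Nightly tape backups: a failure just before the next backup loses ~24 hours.
last_tape = datetime(2015, 6, 1, 2, 0)   # 2:00 AM nightly backup
failure   = datetime(2015, 6, 2, 1, 30)  # disaster strikes just before the next one
print(rpo_hours(last_tape, failure))     # 23.5 hours of data at risk
```

The same arithmetic explains why moving from nightly tapes to near-continuous replication collapses the RPO from hours to seconds.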

MORE AFFORDABLE AND EFFICIENT DISASTER RECOVERY OF TODAY

Virtualization, now a prevalent technology in the data center, has driven a major consolidation of server hardware. It is not uncommon to see 10 or more virtual servers running on a single physical server acting as a host. This disruptive technology has not only changed the way our data centers are designed but has also laid the groundwork for a more effective, efficient disaster recovery solution. A secondary recovery center no longer requires a one-to-one physical inventory mirroring the production site. A data center that once held 100 physical servers may now run 100 virtual servers across 10 physical servers. As you can imagine, equipping a secondary data center with 10 physical servers rather than 100 is a huge cost savings, not to mention the additional savings from the reduced rack-space footprint and decreased power and cooling consumption.
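The consolidation arithmetic above is easy to sketch. The per-server hardware cost below is a made-up illustrative figure, not a quote:

```python
# Hypothetical numbers matching the 10:1 consolidation described in the text.
physical_servers = 100
consolidation_ratio = 10                 # VMs per host; an assumed ratio
dr_hosts_needed = physical_servers // consolidation_ratio

cost_per_server = 8_000                  # illustrative hardware cost per unit (USD)
traditional_dr = physical_servers * cost_per_server   # one-to-one mirror site
virtualized_dr = dr_hosts_needed * cost_per_server    # consolidated DR site

print(dr_hosts_needed)                   # 10 hosts instead of 100
print(traditional_dr - virtualized_dr)   # 720000 saved on server hardware alone
```

And this counts only the servers; rack space, power and cooling scale down with the host count as well.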

ROBUST, FLEXIBLE DISASTER RECOVERY WITH VIRTUALIZATION

The ecosystem built around virtualization, specifically backup and replication software, has helped create a more affordable and efficient disaster recovery plan. There are many new backup and replication solutions to choose from, but some of the most interesting are the ones that operate at the hypervisor level.

Hypervisor-based replication offers the following benefits:

Hardware-agnostic—Hypervisor-based replication supports all storage arrays, so organizations can replicate from anything to anything. In today’s increasingly heterogeneous IT environments, this allows users to mix storage technologies such as Storage Area Network (SAN) and Network-Attached Storage (NAS), and virtual disk types such as Raw Device Mapping (RDM) and VMware File System (VMFS).

Faster and More Efficient—Hypervisor-based replication solutions can achieve RPOs measured in seconds and RTOs measured in minutes.

Centralized Management—With no guest-host requirements or additional hardware footprint, a hypervisor-based solution is easy to manage. It simply resides in the hypervisor, enabling centralized management.
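To illustrate how hypervisor-level replication keeps RPOs so short, here is a toy Python sketch of a continuous changed-block replication loop. The `snapshot_fn` and `send_fn` callables are hypothetical stand-ins for a hypervisor's change-tracking and WAN-shipping interfaces; real products track changes at the block I/O layer, not on Python dicts:

```python
import time

def replicate_changed_blocks(snapshot_fn, send_fn, interval_s=5, cycles=3):
    """Toy continuous-replication loop: every few seconds, capture the blocks
    that changed since the last cycle and ship only those to the recovery
    site. The worst-case RPO is roughly one interval, i.e. seconds."""
    previous = snapshot_fn()
    for _ in range(cycles):
        time.sleep(interval_s)
        current = snapshot_fn()
        delta = {blk: data for blk, data in current.items()
                 if previous.get(blk) != data}
        send_fn(delta)            # only the changed blocks cross the WAN
        previous = current
```

Because the loop runs continuously at the hypervisor, every guest on the host is protected without installing agents inside each virtual machine.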

Combining virtualization with hypervisor-based replication set the stage for a very robust, flexible and cost-effective disaster recovery solution. The missing component was a deployment platform to match in flexibility and cost effectiveness. And then there was the Cloud.

SCALABLE, ON-DEMAND DISASTER RECOVERY AS A SERVICE

Cloud Computing as defined by the National Institute of Standards and Technology is “a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.”

Cloud Computing, or more accurately the Infrastructure as a Service delivery model, has enabled companies to cost-effectively set up a secondary data center to which they can replicate their mission-critical systems in a scalable, flexible and on-demand manner.

By leveraging Cloud services, customers eliminate many of the remaining high costs associated with disaster recovery. Notably, there is no longer a large capital expense to purchase the physical servers, storage and network hardware required to build a secondary data center, or to pay a disaster recovery provider to do so. Companies can now fulfill this hardware need through a Cloud or Disaster Recovery as a Service (DRaaS) provider. These providers deliver virtual servers, storage, replication software and management services, providing a comprehensive disaster recovery solution that scales up or down, on demand, for a monthly cost.
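The capex-versus-subscription trade-off can be framed as a simple break-even calculation. All dollar figures here are hypothetical, chosen only to illustrate the comparison:

```python
def draas_breakeven_months(capex_usd: float, monthly_fee_usd: float) -> float:
    """Months of DRaaS subscription that equal the upfront capital cost
    of building and equipping a secondary site. Illustrative only."""
    return capex_usd / monthly_fee_usd

# A hypothetical mid-size shop: $600k to build out a DR site,
# versus a $10k/month DRaaS subscription.
print(draas_breakeven_months(600_000, 10_000))   # 60.0 months (~5 years)
```

Even before the break-even point, the subscription model shifts the spend from a large upfront capital outlay to a predictable operating expense that can be scaled down if requirements shrink.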

By Marc Malizia,

Marc is the Chief Technology Officer and a founding partner of RKON Inc. As CTO, he is responsible for designing and enhancing RKON's Professional and Cloud Service offerings. During his 15 years growing RKON, Marc has served as a pre-sales subject matter expert on technologies ranging from application delivery and security to Cloud and managed services. Marc earned a B.S. in Computer Science from the University of Illinois in 1987 and an M.S. in Telecommunications from DePaul University in 1992.

Established in 2009, CloudTweaks.com is recognized as one of the leading authorities in cloud computing information. Most of the excellent CloudTweaks articles are provided by our own paid writers, with a small percentage provided by guest authors from around the globe, including CEOs, CIOs, Technology bloggers and Cloud enthusiasts. Our goal is to continue to build a growing community offering the best in-depth articles, interviews, event listings, whitepapers, infographics and much more...

