One of Sungard’s workplace recovery centres. The firm believes that because many cloud-based recovery products still depend on physical IT, customers have only bought a “partial” BC&DR solution. It says what they lack are the people with the range of skills needed to recover a business in line with the board’s expectations.

If there is one thing that is certain in life (apart from the fabled death and taxes) it is the fact that IT never stands still. Each year seemingly brings with it a new set of acronyms – after UC and BYOD dominated the headlines in recent times, we are now in the realms of XaaS and IoT, with AI looming large on the horizon.

So are the challenges for enterprises the same as they have always been when it comes to business continuity and disaster recovery? Or are the latest technology trends presenting fresh issues for IT teams when it comes to BC&DR? (Yes, another acronym.)

One company that is arguably well placed to answer such questions is BC&DR specialist Sungard Availability Services. It believes that the combination of legacy systems and new cloud-based models makes it difficult for businesses to recover in the event of a calamitous network event.

“Trends such as IoT and BYOD are becoming an increasing challenge for companies,” says Sungard’s business continuity expert Daren Howell. “The greater number of endpoints and devices exposes more entry points and vulnerabilities to attack. Networks, communications and telephony have also tracked steadily upwards as a source of system failure and why customers invoke our services.”

Furthermore, with the number of cyber attacks growing exponentially, Howell believes that a business’s greatest competitive strength, IT, has now become its greatest weakness.

Sue Mosovich, data continuity manager with business continuity solutions provider Datto, would likely agree. While acknowledging that ransomware is nothing new, she says it is now the number one disaster IT managers will face, more common than accidental data loss and hardware failure.

Citing data from security firm Malwarebytes, Mosovich says about 40 per cent of enterprise organisations get hit with ransomware each year (although she reckons this figure is likely to be higher as many companies are reluctant to report such cyber breaches). “The number of companies experiencing an attack is growing fast. Back in 2014, ransomware was a $25m business; in 2015 it jumped to over $100m, in 2016 it eclipsed $1bn, and we’re on the path to this being a five billion dollar industry by the end of this year.”

Mosovich adds that while we constantly hear about how much of an impact ransomware has on on-premise services and servers, SaaS apps are just as vulnerable to attacks. As a result, she says the fundamentals of backup and recovery still apply.

In that sense, new technologies don’t present entirely new challenges for IT teams: they still need to go through the processes of identifying the systems and data that need to be backed up, and what needs to be available in the event of a disaster. Echoing Howell above, Sean McAvan, Europe MD of managed services provider Navisite, says it is the increasing number of devices that can store and process more information that makes things trickier.

“Deciding how to secure those devices, and even whether or not those devices need to be backed up, adds a layer of complication. However, the answer to the complication is also in cloud-based solutions. There are tools that will let you manage many thousands of devices. In addition, cloud-based tools and advances like micro segmentation allow you to firewall individual assets down to a very granular level.

“So, I think that while the proliferation of new end devices does potentially paint a more complex picture, advances in cloud-based solutions mitigate some of that.”

Ditching complexity

So does greater use of cloud-based platforms actually make business continuity and disaster recovery easier for organisations?

“Potentially, yes,” says McAvan. “Cloud platforms have the ability to simplify access to reasonably priced business continuity solutions. Plus, the ability to replicate to the cloud and hydrate infrastructure and apps only when disaster recovery is invoked reduces the cost. On top of that, the replication technologies are advancing all the time, so you can reduce your RPO [recovery point objective] and RTO [recovery time objective], making these kinds of solutions more viable.”
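The two objectives McAvan mentions are worth pinning down with numbers. A minimal sketch (hypothetical timestamps, not any vendor’s tooling): the RPO actually achieved in an incident is the age of the newest replica taken before the failure, while the achieved RTO is simply the elapsed downtime.

```python
from datetime import datetime, timedelta

def achieved_rpo(replicas, failure_time):
    """Worst-case data loss: time since the newest replica taken before the failure."""
    prior = [r for r in replicas if r <= failure_time]
    if not prior:
        raise ValueError("no replica predates the failure")
    return failure_time - max(prior)

def achieved_rto(failure_time, restored_at):
    """Downtime: time from the failure until service is restored."""
    return restored_at - failure_time

# Hypothetical timeline: hourly replication, failure at 10:30, service back at 11:15
replicas = [datetime(2017, 9, 1, h) for h in (8, 9, 10)]
fail = datetime(2017, 9, 1, 10, 30)
print(achieved_rpo(replicas, fail))                      # 30 minutes of data at risk
print(achieved_rto(fail, datetime(2017, 9, 1, 11, 15)))  # 45 minutes of downtime
```

Tightening the replication interval shrinks the first number; automating failover shrinks the second – which is exactly why advancing replication technology makes cloud DR “more viable”.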

The answer to the question is also a “potential yes” from Sungard. Howell believes XaaS providers are most likely to be relatively new businesses or ones with standardised contemporary IT. As such, he says it is highly probable that their systems will be based on virtual or cloud platforms and not constrained by legacy infrastructure found in more mature enterprises. “You have therefore done away with a great deal of potential recovery complexity and conflicts, and enabled the automation of recovery.”

But of course, and as Mosovich points out, while adopting XaaS-based applications enables organisations to avoid the complexity and cost of on-premise infrastructure, it doesn’t mean that cloud solutions are fully protected from loss. “Cloud may be remote, but it’s still comprised of physical infrastructure somewhere and, as such, can be affected by device failure, software corruption, malicious attack, and more.”

Eric Jahn, VP of IT infrastructure and operations at Rocket Software, adds to this by saying that data in XaaS platforms may (“or may not”) have good resiliency, business continuity and disaster recovery management. But in reality, he says the perimeter of your data centre has moved beyond the walls you control. “As a result, BC&DR has become diversified, so the likelihood of a ‘complete’ disaster is less likely but the impact of losing your CRM system or demo servers may have consequences at a micro level.”

Druva, which lays claim to the industry’s first data management-as-a-service platform, says the issue of availability is largely mitigated when you use cloud-based services. “Microsoft should be better at running email services at scale than you are, while AWS can build and implement IaaS at much lower cost than individual companies can,” says Dave Packer, the company’s VP of product and alliances marketing. “However, the public cloud companies often don’t provide full disaster recovery within their services. That is still left up to companies themselves.”

Clearly then, in-house IT teams cannot let the cloud lull them into a false sense of security or become complacent when it comes to BC&DR best practices.

“Look at the terms and conditions in your cloud apps,” says Packer. “Some Office 365 products include data recovery based on versioning of files, but there’s no model of backup/recovery that provides time-based restoration or en masse recovery within the majority of Office 365. Salesforce can get your data back after an error or a deletion – but it will take a month for them to recover that data, alongside additional cost.

“For companies, making sure that their existing policies around data protection and management can be replicated in the cloud is more important than the ones and zeroes that make up files or data getting moved to the cloud.”

Mosovich has a similar view, adding that Google, Microsoft and Salesforce handle data responsibly for many companies and that a breach or outage at their end is “highly unlikely”. And she reckons that when it comes to user error, malicious attacks, compliance issues and user management, a cloud-to-cloud backup and recovery solution is your only hope for preventing data loss, downtime and the related financial demise that follows.

Having said that, she reiterates Packer’s point above: “Many users – and even IT professionals – assume that if data is in the cloud, it is automatically protected. But SaaS applications like Office 365 or G Suite aren’t necessarily backed up by Microsoft or Google. These vendors operate under the shared responsibility model; basically, they are going to provide you [with] a secure operating environment that they’ve built to be highly available, redundant and scalable. However, they leave the protection of what is put into their cloud up to the customer. Essentially, if you put data in their cloud, it’s your problem if something goes wrong that’s not environmental.”

According to Mosovich this is where most organisations start to feel the pain of not having a backup. “Nothing is stopping users from deleting data, changing data or doing stupid things like clicking on ransomware. And Google or Microsoft has no responsibility to provide protection from these occurrences or help you with recovering.”

If you fail to plan…

Single suppliers, such as managed services providers, often give businesses the oft-cited ‘one throat to choke’ in the event of something going wrong. But even with the advent of cloud services, in-house network managers should make sure that the throat isn’t theirs. For instance, Druva’s Packer says that while the public cloud can be better than internal IT, it’s not perfect or foolproof. “AWS saw a big outage this year due to human error during an update, for example.”

Databarracks – which claims it launched the UK’s first managed online backup service more than 10 years ago – agrees. The company has been running an annual survey of IT decision makers since 2008, and every year it says the top cause of data loss is human error. “Regardless of the new trends, that doesn’t change,” says technical operations manager Oscar Arean.

According to Databarracks, a good disaster recovery solution does more than just protect technology – it accounts for and mitigates against the disruption to an organisation’s people, processes and assets.

Arean goes on to state that one of the biggest weaknesses when it comes to organisations arranging BC&DR plans tends not to come from the technology itself, but the process. “It’s quite common to hear a business say ‘we have a business continuity plan’, but what they actually have is an IT runbook detailing the technical steps for recovering servers. There’s been no business impact analysis, no critical service mapping from business functions to the IT assets, and if there has been testing, it has been to recover servers but not to really see if users can continue to function.”

The latter point about testing is one that unites the industry. For example, Howell says that while the IT people Sungard meets are all “highly skilled and inherently good at their jobs”, DIY disaster recovery testing is, at best, done four times a year or even just once annually. “Without practice, it’s extremely difficult to gauge how a business will react under pressure during a disaster. [Also] bear in mind that the recovery process will have changed by the time you perform the next recovery.”

Navisite’s McAvan agrees that enterprises and organisations do not commonly trial and test their BC&DR plans on a regular basis. “We often see people set up disaster recovery or business continuity arrangements and platforms which they test during the initial implementation, but then ongoing testing falls by the wayside. Replication tools working with a managed service provider can mitigate against that, so that the tools and monitoring around the platforms can tell you if replications occurred correctly.”
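The monitoring McAvan describes can be boiled down to a staleness check. A hypothetical sketch (job names and the four-hour threshold are illustrative, not Navisite’s actual configuration): flag any replication job whose last successful run has exceeded the allowed lag, so a quietly failing replica raises an alert rather than being discovered during a disaster.

```python
from datetime import datetime, timedelta

def stale_jobs(jobs, now, max_lag=timedelta(hours=4)):
    """Return names of replication jobs whose last successful run exceeds the
    allowed lag -- candidates for an alert rather than a silent failure."""
    return [name for name, last_ok in jobs.items() if now - last_ok > max_lag]

now = datetime(2017, 9, 1, 12, 0)
jobs = {
    "crm-db":   datetime(2017, 9, 1, 11, 30),  # replicated 30 minutes ago: fine
    "file-srv": datetime(2017, 8, 31, 22, 0),  # 14 hours stale: should alert
}
print(stale_jobs(jobs, now))  # ['file-srv']
```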

According to McAvan, Navisite can build disaster recovery testing into SLAs: “We will have a bi-annual disaster recovery test put into the contractual obligations. It’s worth noting that it’s also best practice to run testing after any major system changes, so if you change a system or grow it, testing the responses should be part of the change process.”

He continues by highlighting the issue of plans not being updated. For example, if a critical member of staff is named in the plan and subsequently leaves, problems may arise when it comes to implementing the procedures.

Howell agrees and says that keeping up with changes and knowing what your IT estate looks like is vital for businesses: “IT staff come and go, and unless changes and dependencies for recovery are captured, documented or automated, you have had it at time of disaster.”

Another weakness Howell identifies is ‘shadow IT’ which can creep into the estate and lead to chinks in security defences. “This means that you can’t protect and recover what you don’t know about. Doing an inventory of your estate is a laborious task that can now be automated, so there are no excuses. When you know what your IT estate looks like, then you can start to paint a more accurate picture of what your recovery requirements are likely to be.”

We’re gonna need backup

For Datto, another pitfall that applies to all BC&DR policies is failing to keep redundant copies of data, both locally and off site. As stated above, Mosovich says cloud data has the additional option of being replicated to a secondary cloud location, which is ideal. “Original plus two copies is standard, but original plus three is optimal,” she advises.
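Mosovich’s “original plus two” baseline is easy to check mechanically. A sketch (the location labels are illustrative): count the copies held in addition to the original and require at least one to live away from the primary site.

```python
def meets_copy_policy(copies, min_copies=2, min_offsite=1):
    """copies: locations of backup copies held *in addition to* the original,
    e.g. ['local', 'cloud']. Enforces the 'original plus two' baseline with
    at least one copy away from the primary site."""
    offsite = sum(1 for loc in copies if loc in ("offsite", "cloud"))
    return len(copies) >= min_copies and offsite >= min_offsite

print(meets_copy_policy(["local"]))                      # False: one copy, none off site
print(meets_copy_policy(["local", "cloud"]))             # True: the standard baseline
print(meets_copy_policy(["local", "cloud", "offsite"]))  # True: Mosovich's optimum
```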

There are additional considerations for organisations that adopt SaaS-based applications such as email. “When users are removed from an organisation, there is a risk that critical business data will be lost when that user’s account is deactivated. It’s important that organisations remove the ex-employee’s access, but not lose the data. A good cloud-to-cloud backup application can help here,” says Mosovich.

Furthermore, she is critical of small businesses that often protect data with “antiquated” file/folder backup solutions: “These do nothing to protect applications, nor can they restore full systems when they fail. Working with a system snapshot-based solution will ensure restoration of full machines yet allow for file/folder restores and application recovery.”

So who backs up the backup? While data centres may have redundant facilities to protect themselves and their clients, is all this foolproof?

Databarracks’ Arean says the key consideration for disaster recovery of IaaS, PaaS or SaaS is where responsibility lies between the service provider and the customer organisation. “Some cloud services include no backup or recovery options at all. In these instances, the cloud provider (usually IaaS providers) offers an SLA for uptime. But if there is downtime and data is lost, the provider accepts no responsibility at all and you may lose your data completely.

“AWS is the perfect example of this. They provide the infrastructure and it is your responsibility to both build in your resilience and create your backups. In this case, it isn’t just the data centres that have redundant facilities, it is the platform itself.”

McAvan agrees that there has to be a clearly delineated responsibility for what is done with backup data. “For example, for some of our clients, we back up the backup, either by replicating it to another data centre or by storing backup data on media which is then sent off-site to either the client or to a third party to be stored safely.”

Sungard often serves as the backup to a number of disaster recovery service providers and, according to Howell, some traditional forms of recovery are also experiencing a renaissance: “We’re seeing physical IT DR environments that aren’t connected to the net. This allows businesses to rebuild IT knowing they can scrub it clean of malware and then connect it back up again in full confidence they won’t re-infect the [network]. This is proving to be a great way to beat ransomware attacks.”

What are the pitfalls to avoid when choosing BC&DR solutions and what should net managers look for?

McAvan says businesses should steer clear of large, fixed and inflexible solutions that require high capex. “Historically, disaster recovery solutions tended to be based around facilities and the storage. That is capital intensive and very inflexible.”

Because the technology landscape is changing so quickly, McAvan maintains that cloud-based platforms offer greater flexibility. But Rocket’s Eric Jahn appears to sound a more cynical note when he says: “Outsource DR vendors make a living on charging customers for the reservation, and then copious amounts more if you dare to show up and use the hotel room. The temptation is to move it outside, so you can say you are safe (but be warned that you’re the only one who will get fired if it doesn’t work).”

Nonetheless, if analysts such as Gartner are to be believed, 50 per cent of all company data will live outside the corporate data centre by 2020. Looking at cloud-based DR, Packer therefore says it’s important to be aware of where your data really gets created and stored over time: “Is it in cloud apps? Is it on mobile devices or laptops? Are there file servers used at remote offices? The data centre is no longer the centre of data. Yet all that data will still have to be managed and adequately protected.”

He encourages cloud customers to go back and scrutinise their SLAs: “While cloud services can provide better availability, resiliency and reduce your costs around data loss due to storage failures, it always pays to read the small print when it comes to data protection in regards to corruption, accidental deletion or malicious attacks and how you might recover.”

Jahn also recommends re-acquainting yourself with your supplier’s agreement: “If you choose to go the DR route (or are already there), go back and read every line of the contract. It’s likely [the provider is] doing very little for a lot of money. The advent of cloud storage connectors, which hybridise your data centre and keep a significant number of data backups in multiple vendors with geographic diversity, is the game changer that enables any organisation to bring their DR back in-house.”

GDPR

For those who feel the need for yet another acronym, there is one that has been dominating the IT agenda of late: GDPR.

The EU’s General Data Protection Regulation was ratified in 2016. Businesses were then given two years to become compliant or face potential multimillion-pound fines if they fall foul of the legislation, which will apply in the UK from 25 May 2018. The government has confirmed that Brexit will not affect this start date.

Under the regulation, businesses will need to take adequate measures to ensure the security of personal data, actively demonstrating that they comply with the GDPR and have implemented “privacy by design”. So how does all this play into BC&DR?

“GDPR is going to deliver a framework that will drive people to protect personal data,” says McAvan. “Part of that framework/guidance will be geared around ensuring that the data is securely protected and also that backups and copies are stored and available. In some ways, these changes are welcome, and I imagine that it will drive some organisations to review their current policies and current solutions.”

He adds that some elements of GDPR, such as the right to be forgotten, will potentially mean that many organisations will need to review the tools they use to back up data: “They will need to identify the records affected by those elements and be able to extract individual ones from potentially huge amounts of data media, and then be able to erase that.”

Jahn is also concerned about this: “IT organisations may get stuck with a significant challenge until application service providers build the necessary, granular data retention tooling that can easily identify PII [personally identifiable information] and act on requests such as the ‘right to be forgotten’. How do you go back and purge one user’s data from seven years of backups and replicas? In the long term, storage vendors and enterprise search capabilities need to evolve for constituents in a ‘reverse legal hold’ scenario where data may still be resident but obfuscated or blocked on access.”
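The “reverse legal hold” Jahn describes can be sketched simply (field names and IDs here are hypothetical): rather than rewriting seven years of immutable backups, keep a deny-list of erased subjects and apply it whenever data is restored or accessed, so forgotten individuals never re-enter live systems.

```python
def filter_on_restore(records, erased_subjects):
    """'Reverse legal hold' sketch: historical backups stay untouched, but any
    record belonging to an erased data subject is dropped at restore time."""
    return [r for r in records if r["subject_id"] not in erased_subjects]

backup = [
    {"subject_id": "u1", "email": "a@example.com"},
    {"subject_id": "u2", "email": "b@example.com"},
]
restored = filter_on_restore(backup, {"u2"})
print(restored)  # only u1's record survives the restore
```

Whether regulators accept blocking-on-access as equivalent to erasure is the open question Jahn raises; the mechanism itself is straightforward.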

Meanwhile, Packer also believes GDPR will force many companies to look at the data they create and how it is managed. With the increasing volumes and storage locations of data, he says a single control plane for visibility and policy management increasingly becomes an imperative. “Achieving this and being able to track sensitive data across the business will be important for aligning to compliance initiatives. However, this is not just an ‘IT project’ – it goes to how your business uses customer data and how you manage their records as a resource.”

According to Howell, there are four explicit areas to focus on in terms of the GDPR and BC&DR.

Firstly, there is what he describes as the “pseudonymisation” and encryption of personal data. This involves processing personal data in such a way that it can no longer be attributed to a specific data subject without the use of additional information. But he adds that businesses must hold the pseudonymised data and the additional information separately to prevent possible identification, since the data only becomes identifiable when both elements are held together.

Secondly, there is data protection where the ongoing confidentiality, integrity, availability and resilience of processing systems and services has to be ensured.

Thirdly, restoration is crucial so that the data is provided in a timely manner in the event of a physical or technical incident.

The fourth element concerns management and implementing a process for regularly testing, assessing and evaluating the effectiveness of technical and organisational measures for ensuring the security of the BC&DR policy.
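Howell’s first point, pseudonymisation, can be illustrated with a keyed hash (the field names and key-management details here are illustrative, not a compliance recipe): the key is the “additional information” that must be stored separately, since the data only becomes attributable when both elements are held together.

```python
import hashlib
import hmac

def pseudonymise(record, key):
    """Replace the direct identifier with a keyed hash. The key is the
    'additional information': held separately, it allows authorised re-linking;
    without it, the record can no longer be attributed to a person."""
    token = hmac.new(key, record["email"].encode(), hashlib.sha256).hexdigest()
    out = dict(record)
    out["email"] = token
    return out

key = b"held-separately-from-the-data"  # illustrative; store apart from records
rec = pseudonymise({"email": "jane@example.com", "order": 42}, key)
print(rec["order"])                         # payload is untouched
print(rec["email"] != "jane@example.com")   # True: identifier replaced
```

The same input and key always yield the same token, so pseudonymised datasets can still be joined for analysis without exposing identities.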

Everyone’s accountable

Rocket Software’s Jahn believes too many organisations remain confused when it comes to BC&DR. He reckons they fail to distinguish between: application/service resiliency (making something highly available as a service); disaster recovery (what you do in the event of a disaster); and business continuity (what you do in advance to plan for the disaster). “The entire business needs to understand the difference and importance of each layer so that companies develop integrated strategies where everyone is accountable. Ask your team members ‘is BC or DR (solely) the IT department’s responsibility?’ If anyone answers ‘yes’, you have work to do.”

Perhaps that’s no bad thing given all the digital transformation that is currently under way and looks set to continue over the next few years; few would disagree that paradigm shifts and new ways of thinking are needed at every level in an enterprise.

But for now, and in the words of Mosovich, data resides on real hardware and software, whether it’s in your network or the cloud. She says: “It’s just as likely to be affected by all the threats we’re familiar with wherever it lives. Don’t get complacent. Get knowledgeable.”