Jon Toigo, founder and CEO of Toigo Partners International, is a frequent speaker at our Storage Decisions seminars and conferences -- and he’s also a big proponent of keeping tape in your organization's disaster recovery policy. This week, we discussed tape’s role in backup and DR today, how tape is evolving, and whether cloud storage will replace tape for long-term data retention.

“The most compelling development in tape is growing capacities,” Toigo said. “It won’t be long before you are going to be looking at LTO cartridges that can store up to 32 terabytes of data,” he added, referring to the published LTO roadmap.

Currently, LTO-5 tapes offer a native capacity of 1.5 TB (3.0 TB compressed). However, the LTO Ultrium roadmap indicates that LTO-6, which will be released later this year, will offer 3.2 TB (8.0 TB compressed).
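The compressed figures are simply the native capacity multiplied by the compression ratio assumed on the roadmap (2:1 for LTO-5, 2.5:1 for LTO-6). A quick sketch of the arithmetic:

```python
# Advertised "compressed" LTO capacities assume a fixed compression ratio
# applied to the native capacity: 2:1 for LTO-5, 2.5:1 for LTO-6.
def compressed_capacity_tb(native_tb, ratio):
    """Return the advertised compressed capacity for a cartridge, in TB."""
    return native_tb * ratio

print(compressed_capacity_tb(1.5, 2.0))  # LTO-5: 3.0 TB
print(compressed_capacity_tb(3.2, 2.5))  # LTO-6: 8.0 TB
```

Real-world yield depends entirely on how compressible your data is; already-compressed or encrypted data will land much closer to the native figure.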

Another big development was last year’s release of the Linear Tape File System (LTFS). LTO-5 tapes can be partitioned into two segments, and LTFS uses the first partition to store a self-contained hierarchical file system that indexes the data stored in the other partition. The LTFS specification is openly published, so any application provider or tape user can download it from the LTO Ultrium website.
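The point of the two-partition layout is that a reader can rebuild the file tree from the tape alone, with no external catalog. A toy illustration of that idea (this is not the actual LTFS on-tape format, which stores its index as XML; the paths and extents here are made up):

```python
# Toy model of a self-describing tape: "partition 0" holds an index that
# maps file paths to extents in "partition 1", where the data lives.
# Real LTFS records this index as an XML document on the index partition.
index_partition = {
    "/projects/report.pdf": {"start_block": 0, "blocks": 4},
    "/projects/raw.tar":    {"start_block": 4, "blocks": 120},
}

def locate(path):
    """Find a file's extent on the data partition using only the tape's own index."""
    return index_partition[path]

print(locate("/projects/report.pdf"))
```

Because the index travels with the cartridge, any LTFS-aware system can mount the tape and browse it like a removable drive.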

“Tape is also being well-positioned to become a sort of NAS on steroids,” Toigo said. “In other words, all of your older files that aren’t being accessed very frequently can be written to a tape library, which is fronted by a small disk cache running LTFS. That gives you the ability to store massive amounts of data -- we are talking on the scale of petabytes -- on a single raised floor tile and consume the power of a couple of light bulbs.”

While that might be a bit of an exaggeration, tape’s energy efficiency is a strong benefit for long-term retention of data.

Tape isn’t the answer for everything

Of course, disk backup has some obvious advantages over tape. The number-one benefit of backing up to disk is restore time: disk is random-access, while a tape must be loaded and positioned before a single file can be read. Even with LTFS, it is unlikely tape will ever be able to compete with disk in terms of restore time.

“I always recommend that people back up 30 days of data to disk on site,” said Toigo. “Finding an individual file on disk is going to take a hell of a lot less time than going through a bank of tapes, reloading the tapes and then finding the file. That takes too long for a minor crisis.”

This combination -- disk for fast operational restores, tape for long-term retention -- is widely regarded as a best practice, and many people in the IT world agree that long-term data retention is tape’s niche. However, cloud-storage vendors are doing their best to chip away at that tape stronghold.

Will cloud replace tape for long-term storage?

We frequently hear cloud storage vendors pitch their services as a replacement for tape. Cloud storage offers some compelling benefits -- pay-as-you-go pricing, effectively unlimited capacity, no hardware investment. And now that many of the major backup software providers allow you to back up directly to a cloud service, it is becoming more and more convenient to do so.

However, there are a number of lingering concerns. “I wrote a book about application service providers in the late '90s, and you could blow the dust off the cover and change ASP to cloud,” Toigo said. “You see all of the same problems as we did with ASPs -- lack of security, lack of service-level reliability -- things that limit organizations’ ability to outsource certain activities.”

Toigo went on to note that tape isn’t the answer for every company’s disaster recovery policy. “There are a lot of smaller companies that wouldn’t do anything about backing up their data if they didn’t have access to a resource somewhere on the internet,” he said. “However, we hear a lot of promises from cloud backup vendors that they are replicating your data behind the scenes and that they have Class A data centers, but nobody ever goes out and checks that stuff.”

Latency, jitter and delay can also be big issues when replicating data to the cloud, according to Toigo. “You have all of the same issues you see with going over a WAN,” he said. “If you are doing replication over a WAN, you have to factor in the latency and jitter that is going to occur. If you really need to have zero downtime, chances are the state of the data in your remote site -- whether cloud or a site you own -- is not going to be the same as the data in your primary site. That can cause major problems in restoring mission-critical applications.”
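The gap Toigo describes can be put in rough numbers: whatever your primary site writes during the replication lag is data the remote copy does not yet have, and therefore data at risk if the primary fails. A back-of-the-envelope sketch (the change rate and lag figures are illustrative):

```python
def data_at_risk_mb(change_rate_mb_per_s, replication_lag_s):
    """Estimate data written at the primary but not yet at the remote site.

    This is roughly what you stand to lose if the primary fails right now,
    i.e. a lower bound on your effective recovery point objective (RPO).
    """
    return change_rate_mb_per_s * replication_lag_s

# Illustrative: 5 MB/s of changes, 60 s of WAN-induced replication lag
print(data_at_risk_mb(5, 60))  # 300 MB behind the primary
```

The arithmetic is trivial, but it makes the point: unless lag is zero, "zero downtime" over a WAN still means recovering to a slightly older state of the data.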
