Backup Databases: The Data Security Achilles' Heel

The same sensitive information on production databases resides on backups -- protect them accordingly

If production databases contain regulated information or valuable intellectual property, then it follows that backup copies of those data stores carry the same sensitive information and the same risk. And yet, while many organizations invest heavily in hardening live databases, they frequently fail to adequately protect their backups.

"So many people invest heavily into ruggedizing a database and spend little time tracking where their backup data goes," says Ken Pickering, development manager of security intelligence at Core Security. "A breach of critical information can occur just as easily from a backup file as it can from the production database."

It's a common scenario that plays out at countless organizations large and small, says John Mensel, director of security services at IT service provider Concept Technology, who says his firm frequently sees customers making the mistake.

"A lot of companies have databases that contain things like credit card information, and yet they just leave those backups hanging out wherever anyone can get at them," Mensel says. "It's really simple to secure them, but it's often overlooked because of laziness or a lack of a sense of urgency about protecting them."

"Just as organizations calibrate production database protection based on the risk and compliance priorities around the data contained within them, backup databases should be secured in accordance with the sensitivity of the information they contain," says Dr. Stan Stahl, president of Citadel Information group, an infosec management firm, and president of the Los Angeles chapter of the Information Systems Security Association (ISSA).

So any backup databases containing personal healthcare information have to live up to HIPAA and HITECH standards, for example.

"The only 'good enough' solution is one that protects sensitive information in accordance with applicable laws and regulations, the company's competitive opportunities, and its fiduciary responsibilities," Stahl says.

Paramount to the backup database protection plan is an adequate encryption mechanism.

"Encrypt, encrypt, encrypt. It is important to implement granular encryption that will encrypt the data and not just the database," says Tsion Gonen, chief strategy officer for SafeNet. "That way, if data is stolen, it is rendered useless."

But at the moment, not even a quarter of organizations encrypt all of their database backups, according to the 2012 IOUG Enterprise Data Security Survey. Just under half either don't encrypt backup databases at all or don't even know whether they encrypt any.

This inconsistent application could come down to logistics: sometimes simple full-disk encryption isn't an option. Backup databases in the cloud, for example, pose special issues, says Fred Thiele, COO of consultancy Laconic Security, who explains that the Amazon AWS Relational Database Service (RDS) allows administrators to snapshot databases and store them for quick restores. The caveat is that there's no option to encrypt the snapshots.

"So if you're not doing field- or column-level encryption on your database, the data your snapshots [are] in [reside in] Amazon-land in plaintext," Thiele says. "Same with their 'backup to point in time.'"

That is why organizations should consider field- or column-level encryption, even for cloud-hosted databases, he says. Even outside of the cloud, an organization's disaster-recovery processes may stand in the way of simpler encryption mechanisms. In critical systems with little margin for downtime -- often the very same systems that contain the most sensitive information -- IT is pressed into configuring backups for rapid recovery in disaster situations.
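Thiele's column-level approach can be sketched as follows: if the sensitive column is encrypted before it reaches the database, then any snapshot or backup of the database file carries only ciphertext for that column. This is a minimal hypothetical example using SQLite and a toy XOR cipher in place of a real algorithm; table and column names are invented for illustration.

```python
import sqlite3
import secrets

def toy_cipher(data: bytes, key: bytes) -> bytes:
    # Toy XOR stand-in for a real cipher such as AES-GCM; not for production.
    return bytes(a ^ b for a, b in zip(data, key))

# Key would come from a KMS, never stored beside the database.
key = secrets.token_bytes(32)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (id INTEGER, name TEXT, ssn BLOB)")

ssn = b"078-05-1120"
# Encrypt the sensitive column before insert; other columns stay plaintext.
conn.execute("INSERT INTO patients VALUES (1, 'Alice', ?)",
             (toy_cipher(ssn, key),))

# A snapshot of this database file now contains only ciphertext in `ssn`.
stored = conn.execute("SELECT ssn FROM patients WHERE id = 1").fetchone()[0]
assert stored != ssn
assert toy_cipher(stored, key) == ssn
```

Because the encryption happens in the application layer, it survives any copy the storage platform makes -- snapshots, point-in-time backups, or replicas.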

"In order to maximize the recovery time objective (RTO), a standby copy of all data and their applications may be stored in their native format in a secondary location," says Josh Mazgelis, senior product marketing manager at Neverfail, a disaster recovery and business continuity management firm. "Recovering data from an encrypted and deduplicated backup takes time, so short RTO requirements mandate the data be stored in its ready-to-use native format."

In these scenarios, if the data and the surrounding environment are exactly the same as production, then the exact same protections need to be put in place around them, Mazgelis says.

"User access control lists in the disaster-recovery copy will usually be identical to production due to the nature of the disaster-recovery replication, but don't assume this is always the case," he says. "Instance-level or storage-based replication will usually provide this, but database or record-level replication can easily move information into a less-secure environment."

Mazgelis also warns that one concern can loom larger for backup databases than for production databases: physical security. Depending on the backup medium, disaster-recovery copies face the additional risks posed by portability. Tapes fall off the backs of trucks all the time, so the security strategy needs to account for this factor.

"Even encrypted copies can be compromised once a hacker has access to the physical copy. Be mindful of storage media when it is retired," he warns. "Old tapes and old hard drives may contain old data that can be valuable to all wrong people. Old media should be properly wiped or physically destroyed."


Published: 2015-03-31