Posted by Soulskill on Saturday October 04, 2008 @05:14AM
from the grandma-needs-those-pictures-of-her-cat dept.

RichiH writes "Most of you are the free IT staff of friends and family, just as I am. One of my largest headaches is backing up their data. What I am looking for allows for off-site storage on multiple server machines running Linux, has Linux & Windows clients that Just Work and require zero everyday effort (although a large-ish effort to set them up is just fine), allows for granular access control, is versioned and will, ideally, allow me to grab data automagically (think photo pool for your family where your mother, sister, etc., share each other's photos). This is something I've been trying to find for years, but I've never seen anything even closely resembling what I want. With the Wall Street Journal handing out its Technology Innovation Award to Cleversafe recently, I was once again reminded of this particular itch which needs scratching. Before I deploy it, I want to ask the Slashdot community for its opinion on that piece of software, and on potential alternatives. How do you solve this problem?"

Yes, because storing thousands of jpg images and other binary data is exactly what git was intended for.
Get people to store their data on Samba fileservers. Set up home directories in their names, as well as shared directories accessible by everybody or by Samba groups. Use ACLs if you need to.
To back up, use rsync and OpenSSH, write a few batch scripts and hey presto! An instant solution that'll even work with cheapo webhosts and your home Linux box as backup servers. Versioning can be layered on top for however much history you care to keep.
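A sketch of what one of those scripts might look like in Python (the host and paths are invented; rsync's --backup-dir option gives you a crude form of versioning by shunting changed files aside on each run):

```python
import subprocess

def build_rsync_cmd(src, dest, backup_dir=None):
    """Assemble an rsync-over-ssh command line."""
    cmd = ["rsync", "-az", "--delete", "-e", "ssh"]
    if backup_dir:
        # Files that change or disappear get moved here instead of
        # being lost, which gives you crude versioning for free.
        cmd += ["--backup", "--backup-dir", backup_dir]
    cmd += [src, dest]
    return cmd

if __name__ == "__main__":
    # Hypothetical host and paths; actually running this needs ssh keys in place.
    cmd = build_rsync_cmd("/home/", "backup@example.net:/srv/backups/home/",
                          backup_dir="/srv/backups/old/")
    print(" ".join(cmd))
```

Run it from cron nightly and the everyday effort really is zero.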

Actually, for my own digital-assets repo (see signature) I see two features of git which might be handy: atomicity of commits, and hashes which avoid storing duplicates. Git has "plumbing" commands which might help. I still haven't explored it.

BTW, if you have enough bandwidth you could get by with a doxroom instance on a host. Don't forget to back up the files and the DB, and remember it's alpha quality.

AFAIK, git supports meta/recursive repos, where one master repo contains many subrepos. Thus, it would be best to have a master repo that contains all the other repos; that will make replication easier.

The only other requirements would be that it adds all files in a given directory to repo foo and pulls repos bar, baz and quux. Preferably, it would happen automagically & regularly over a throttled connection. Requiring them to click a button in a butt-ugly app is fine as well.
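The pull-everything-regularly part can be sketched in a few lines, assuming each shared repo is a plain subdirectory with its own checkout (the ~/family-repos path is made up):

```python
import os
import subprocess

def find_repos(root):
    """Subdirectories of root that are git checkouts."""
    return sorted(d for d in os.listdir(root)
                  if os.path.isdir(os.path.join(root, d, ".git")))

def pull_all(root):
    """Run 'git pull' in every checkout under root - e.g. from a cron job."""
    for name in find_repos(root):
        subprocess.run(["git", "pull", "--quiet"],
                       cwd=os.path.join(root, name), check=False)

if __name__ == "__main__":
    root = os.path.expanduser("~/family-repos")  # made-up location
    if os.path.isdir(root):
        pull_all(root)
```

Throttling would have to come from the transport layer rather than from a script like this.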

I'm perfectly serious. It's a useful app and a pretty easy problem. If you'll email me at CTO@Openmigration.net and let me know more about your specific requirements (number of remote hosts, total archive size, etc) I can start figuring out what the best way to do this is. Also, I'll need to know all of the platforms you're running on (will you need support on cell phones? Xbox?), the level of redundancy you're comfortable with, will you need a web interface, etc.

Python + wxPython for the GUI (with wxGlade as the GUI builder). If you follow the wxWidgets standards, the UI will have native look and feel on Windows, GNOME and Mac machines (including bigger differences like the menubar being in the window on Windows/GNOME and at the top of the screen on Macs).

I can tell you how I solve it in a business context, but whether or not it could be scaled down to personal I'm not sure.

The problem: 2 sites, each with 70-100GB of data, need offsite backup with similar criteria to your own. Bandwidth available to these sites is 2-4Mbps. The only OS involved is Linux, though I'm sure Windows could be shoehorned in somehow. A third site which has a tape streamer and someone to take tapes offsite is available. Data protection legislation means that storing it with a hosted service is illegal unless I encrypt it myself before sending it offsite - I'm only aware of one tool which claims to be able to do this and still send data as a binary delta (it uses the rsync library), and that tool is still not particularly common in Linux distributions and not very widely used. I'm nervous of trusting my backups to a tool that isn't in heavy use, particularly if strong encryption is being employed.

The solution: A server in the third site and some judicious scripting with rsync allows it to mirror the data in the other two sites. The first sync is fairly painful, of course, but provided you don't have too much data regularly changing, subsequent syncs aren't too bad. The server is backed up to tape, which provides versioning capability, so if someone only realises that they lost a file a week after the fact it can still be restored.

The initial effort to set up was pretty great, but now it's done it JFW and requires no brain power whatsoever to run on a daily basis. I can make the data available over the VPN (of course the access speed will be dog slow) more-or-less immediately, and I can make it available at LAN speed by copying it to a hard disk and couriering it to the remote office in under 48 hours. A full restore of 100GB across a 2Mbps connection will take at least 4-5 days.
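That 4-5 day figure checks out on the back of an envelope:

```python
def transfer_days(gigabytes, mbps):
    """Days to push a payload through a link, ignoring protocol overhead."""
    bits = gigabytes * 8 * 10**9
    return bits / (mbps * 10**6) / 86400

print(round(transfer_days(100, 2), 1))  # roughly 4.6 days
```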

For storing permissions and such, are you using a .tar container? My biggest stumbling block with my backup scheme is storing ACLs and permissions.

I've got a few ideas about doing it, but they're all kludgy or force me to walk away from my rsync scripts, which are really fairly mature at this point. Furthermore, I need to get deltas downstream, and packing everything into one file pretty much defeats that purpose at the several-gig level unless I'm running an rsync server to calculate the diffs.

Recent versions of rsync fully support POSIX ACLs (including, if asked, setting up ACLs on the receiving end that don't make any sense because they refer to uids that don't exist - though you could work around that one with a common authentication mechanism such as LDAP) - I've not tried to get Windows working so I'm not sure how well that would work.

Be warned that full POSIX ACL support hasn't made it into every Linux distribution's rsync yet - IIRC Debian Etch's doesn't, for instance.

Even better, recent versions of rsync allow you to shoehorn all metadata into xattrs on files, so you can (for example) store Mac OS X metadata and ACLs on a Linux box with no special file system setup. You can even store the files as an unprivileged user and have the real perms stored in xattrs as well.

Be warned that Mac OS X metadata gets stored under the same filename as the original with "._" prefixed - I'm not sure what happens if a file with that name already exists.

Also, if you want to make full use of rsync options, you need the same version on both ends of the tunnel.
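One way to sanity-check that is to compare the banner line from `rsync --version` on each end (the banner format here is an assumption based on common builds):

```python
import re

def rsync_version(banner):
    """Extract a comparable (major, minor, patch) tuple from rsync's banner."""
    m = re.search(r"version (\d+)\.(\d+)\.(\d+)", banner)
    return tuple(int(x) for x in m.groups())

local = rsync_version("rsync  version 3.0.4  protocol version 30")
remote = rsync_version("rsync  version 2.6.9  protocol version 29")
if local != remote:
    print("mismatched rsync versions:", local, "vs", remote)
```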

(That being said, props to Mr. Tridgell; rsync is an absolutely awesome tool which has saved me I-don't-know-how-much in terms of time and effort. I really must make a donation to the project at some point.)

You don't necessarily need to make that first backup painful. Rsync while you've got both servers in the same room over a LAN, and from then on you just have to deal with the delta and don't need to worry so much about bandwidth.

You don't necessarily need to make that first backup painful. Rsync while you've got both servers in the same room over a LAN, and from then on you just have to deal with the delta and don't need to worry so much about bandwidth.

The source server was in another country and in daily use; the destination server was bolted into a cabinet, weighed about 40kg and was also in daily use.

Data protection legislation means that storing it with a hosted service is illegal unless I encrypt it myself before sending it offsite - I'm only aware of one tool which claims to be able to do this and still send data as a binary delta (it uses the rsync library) and that tool is still not particularly common in Linux distributions and not very widely used.

Based on my limited understanding of crypto, when you encrypt data it should turn into pseudo-random noise, so if *any* bits change the whole thing changes (unless you're using a block cipher, but if it's cipher-block chaining then every portion *after* the change will also change). So for large files, this seems like the delta would end up being practically the entire file, wouldn't it?

I'm not sure how it works, but I can think of a few ways you could work around that in theory.

The most obvious is to encrypt every file individually and then ship a tar of the whole lot up. Though for best results, you'd need to download each file, decrypt it, perform a binary delta against the source file, encrypt the delta and ship that up.
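The chaining effect is easy to demonstrate with a toy chained "cipher" (hashlib as a keystream, emphatically not real crypto): flip one input byte and everything from the next block onward changes, which is exactly why per-file encryption is the delta-friendly workaround.

```python
import hashlib

def toy_chain_encrypt(data, key):
    """Toy CBC-style chaining: each block's keystream depends on the
    previous ciphertext block. Illustration only - NOT real crypto."""
    out, prev = bytearray(), key
    for i in range(0, len(data), 32):
        stream = hashlib.sha256(prev).digest()
        block = bytes(a ^ b for a, b in zip(data[i:i + 32], stream))
        out += block
        prev = block
    return bytes(out)

key = b"secret"
a = toy_chain_encrypt(b"x" * 128, key)
b = toy_chain_encrypt(b"y" + b"x" * 127, key)  # flip just the first byte
changed = sum(x != y for x, y in zip(a, b))
print(changed, "of", len(a), "ciphertext bytes changed")
```

Encrypting each file separately confines that scrambling to the one file that actually changed, which is what keeps the deltas small.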

At the end of the day, though, it sounded rather too complicated for my liking. I'd get the benefit of offsite backups stored with someone like Amazon, but I'd be using a tool which isn't in wide use.

You're asking two questions. The first is that you want backup, so that all their data just gets thrown somewhere and they don't lose the last few days' work when their hard drive dies. You don't even necessarily want this on the network; just back up to a DVD-R every so often, and take every month's DVD-R offsite (a friend's house, a bank's vault, whatever). There's lots of backup software for this. Most can do fancy stuff like incremental backups. You can probably find something open source you can host for your friends and family on a decently-available server.

The second question is networked file storage, where you don't care about automatically archiving files, but you do want frequent access and a good UI. For this I recommend something like Dropbox [getdropbox.com], which has good support for OS integration and a web interface.

If you try to roll backup and distributed file-storage into the same application, you're not going to get anything useful. Aunt Sally is going to want every single file including her OS and her tax returns backed up, in case her hard drive dies, but only wants the photos -- and only some of the photos, actually -- to be visible to Grandma Suzie. If Suzie can see every file on Sally's computer, and the entire history of each file, she's not going to be able to browse the photos in a way that's at all intuitive.

And worse yet, if Sally wants to send out links to her photos to fifteen of her friends by e-mail, she needs some sort of interface to mark parts of her backup as world-readable but the rest (like her passwords and e-mail) not. If the network backup program even lets you do this, it won't give Sally a UI that she'll be able to figure out.

You can certainly get network backup services: Mozy was mentioned in an earlier comment.

If you rethink your requirements in terms of your goals, you'll probably find that both rolled into one isn't what you want, and not just because a product doesn't exist at the moment that does that — a product that does that can't possibly have a good UI. If they shouldn't notice or care about how backups are being made, how are they going to figure out how to share photos with each other?

Not sure if it suits your situation, but you could take a look at DRBD. Current stable versions only support 2-node mirroring, but support for more nodes is planned in future versions.

Personally I've used it for shared-device semantics for backing storage on Xen VMs (and prefer it over my previous iSCSI-in-VM config). It is also, however, eminently suitable for remote-site mirroring of block devices. It isn't too difficult to build a stack with backing devices remote-mirrored over DRBD and shared out over iSCSI.

Dropbox is absolutely fantastic as a sync tool (and also has some degree of versioning), but there's no practical way as of yet to make it into a full-system backup. When 'watch folders' show up, it'll get a lot closer, but like any web-based system, it becomes impractically slow for anyone dealing with a lot of data. Even digital snapshots add up quickly at the resolution of today's point-and-shoot cameras, never mind if there's an actual photographer shooting RAW.

If you want backup, use Mozy.com. It's already been around and perfected for many years, it integrates into "Shadow Copy" on Windows and "Time Machine" on OS X, it's cheap and effective. My media PC is backed-up to Mozy, over 250 GB with no problems.

Dropbox cooperates with government and law enforcement officials and private parties to enforce and comply with the law. We will disclose any information about you to government or law enforcement officials or private parties as we, in our sole discretion, believe necessary or appropriate to respond to claims and legal process (including but not limited to subpoenas), to protect the property and rights of Dropbox or a third party, to protect the safety of the public or any person, or to prevent or stop any activity we may consider to be, or to pose a risk of being, illegal, unethical, inappropriate or legally actionable.

If I read this correctly, your data is anything but secure or private, as Dropbox can use any arbitrary reason to give your data to any party.

I don't think I'd want any of my backups on a service that clearly has access to my data. All such a service should be able to see is utterly opaque encrypted binary blobs they don't have the key for. Dropbox clearly think that's too hard, and prefer to err on the side of making their implementation easier.

I think Dropbox has the right idea, with one glaring flaw - you need to trust your data to a third party (Amazon S3), and that third party needs to continue to exist (and offer the service) for Dropbox to keep working.

For most purposes, I wouldn't consider that a major problem - I doubt Amazon really cares about the contents of my personal collection of apps to which I'd like to have access anywhere I go, or my family photos, or the contents of my to-do list.

Have you considered the JungleDisk client that works with the Amazon S3 storage cloud? It has backup clients for Windows, Linux and Mac, and with suitable configuration of 'buckets' it would allow you to do most of what you are trying to achieve. Okay, so it's a pay-for service (albeit cheap), but it does provide the all-important off-siting, strong security/encryption and unlimited capacity.

From a privacy perspective, Jungle Disk [jungledisk.com] encrypts your data with a key you control prior to upload - no one else can read it. From a security perspective, you can read their Security Whitepaper here [amazonaws.com], but suffice it to say they take security really seriously.

As far as redundancy goes, your data gets stored in multiple Amazon datacenters around the country, which provides redundancy and high availability. At the end of the day, it's a far superior solution to anything you can cook up at home.

I looked at Cleversafe, trying to get through the PR bubblespeak. It seems they are emulating disks, not offering integrated _backup_. As saving from my mom's SD card to a distributed online disk via a DSL line is not feasible, I will most likely need to scratch that idea.

Backup isn't the same as sharing. And do you want actual replication or merely fault tolerance to node failure? Actual n-fold replication means you're going to pay n times the amount of money for storage. And why do you insist on one application to do everything?

My suggestion: set up automatic backups to one of the many backup services on the net. They worry about how to replicate your data, you don't have to. For the same service to support both backup and sharing is hard and it's probably a bad idea. It's much easier if you know that the backup service simply cannot access the contents of any of your files.

For sharing, use services designed for that: Flickr Pro, Picasa, Google Docs, whatever. They are designed for sharing, they know about users and permissions, and they can only publish what you actually upload to them.

As for Cleversafe, the idea is as old as forward error correction, but the economics and management never seem to quite work out. And basically, you're getting the same functionality from hosted storage: Amazon, Google, Box.NET, etc. are already figuring out how to keep your data available and secure, and are probably doing a better job than you could do with a homebrew system.
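For what it's worth, the dispersal idea is simple at its core. Here's a miniature version with plain XOR parity (two data slices plus one parity slice, so losing any single node is survivable - real systems use Reed-Solomon-style codes over more nodes):

```python
def xor_bytes(a, b):
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def disperse(slice_a, slice_b):
    """Produce three slices such that any two can rebuild the third."""
    return slice_a, slice_b, xor_bytes(slice_a, slice_b)

a, b, parity = disperse(b"hello wo", b"rld!!!!!")
# Suppose the node holding slice b dies:
print(xor_bytes(a, parity))  # prints b'rld!!!!!'
```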

I want distributed backups with several, for lack of a better word, working copies checked out on different machines.

Aha, now I figured out why we're all misunderstanding you. Those aren't backups. "Backups" to my ears means that you copy the entire contents of your disk or your Documents folder nightly onto tape or some other archival medium, so that in case of hardware failure you have something to restore from. Potentially you also keep prior versions around. The tapes are stored in a corner somewhere because they're never actually accessed except in an emergency, and they're destroyed after a few months.

What you want isn't backups, since it doesn't make sense for different people to share backups any more than it makes sense for different people to share a single networked hard disk or networked home directory. You just want a distributed file storage system, with automatic syncing / commits.

What you want isn't backups, since it doesn't make sense for different people to share backups any more than it makes sense for different people to share a single networked hard disk or networked home directory.

Although I agree that the GP has something other than just "backups" in mind, I would still consider the result quite a decent form of backup (more so even than the notorious "works perfectly until you need to recover" tape archive) - if you have copies available both locally and at a remote repository.

If you had only Windows and Mac, I'd opt for Mozy (http://www.mozy.com) which is owned by EMC. It's $50/year for unlimited storage and their agent is unobtrusive and backs up even open files.

The downside is that it limits upstream bandwidth to 1Mb/s, so your initial backup might take a week. But after that, it takes 3 minutes a night and it does it without prompting. I've strong-armed my immediate family into using it because it also allows me to monitor remotely the status of all backups.

4 disks and RAID 6? That makes little sense. If you have 4 drives and are willing to give up 50% to redundancy (which is not out of the question), RAID 10 (two mirrored pairs, striped together) is much less complex.

Runs pretty tight (low bandwidth), supports channel encryption and datastore encryption, and can even create Bare Metal Recovery disks. I have a server room with LTO3 tape drives that I use to back up my clients' incremental data changes nightly, including Linux, Mac and Windows clients and servers. I have VPNs out to each client, so I don't use the built-in channel encryption, but I maintain a keypair for each client.

Backup only, but I *could* present a maintained volume as a share over the VPN. Bacula supports disk and tape volumes as backup stores. I've personally had no need to do that to date.

We're not talking terabytes here - my ISP would pwn me if that was going on - but I do circa 20G of data changes every night from clients. Some of them are laptops that are not always on or connected. Most are friends' and family PCs, so it backs up when it can. I have to do almost no maintenance apart from changing a tape occasionally. The backup client is tiny and unobtrusive, even when running. On Windows it uses VSS, so it is reliable.

I have had a number of panic phone calls (esp from my kids at Uni) who have lost a thesis or the like and are utterly amazed when, after a few clicks over the phone they look at their webmail and yesterday's version is in their inbox. That's what it's all about! I am the god of lost data! Which, of course, works for me.

There are a bunch of people offering this sort of service (or you can build your own) on Amazon's S3. It has the advantage of being accessible to everyone, has the security built in, and you only have to worry about the data, not server availability.

Backup not on the cloud just doesn't make much sense to me these days.

AFS is only about 20 years old, and supported on Windows, Mac, and most flavours of *NIX, so it might not be sufficiently mature for your needs, however it does provide the following capabilities:

Remote storage with local caching.

Snapshots, allowing coarse-grained versioning.

Replication on the server.

As well as all of the standard things you'd expect from a networked filesystem (ACLs, authentication, and so on).

If you set up an AFS cell with your volumes replicated across a few remote servers and get your clients to connect to this cell then it should be fine. Set a cron job to take regular snapshots, and dump them to some offline medium periodically.

It sounds as if the author of the opening post is looking for a Network-Attached Storage device that will function as a server, is based on Linux, and comes with pre-loaded applications.

I found and tested the predecessor of the following device (which I can recommend on the basis of a year-long test of a sample with N=1): Bubba (see http://excito.com/bubba/about-bubba.html [excito.com]), a Swedish NAS device. I have to note that it's certainly not "distributed" in the sense that it's easy to mirror data across multiple devices.

BackupPC is a high-performance, enterprise-grade system for backing up PCs. BackupPC is disk based, not tape based. This particularity allows features not found in any other backup solution:
* Clever pooling scheme minimizes disk storage and disk I/O. Identical files across multiple backups of the same or different PC are stored only once (using hard links), resulting in substantial savings in disk storage and disk I/O.
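That pooling scheme is easy to sketch: hash each file's contents and hard-link identical files to a single content-addressed pool copy (this is an illustration, not BackupPC's actual code):

```python
import hashlib
import os

def pool_file(path, pool_dir):
    """Replace path with a hard link to a content-addressed pool copy,
    so identical files across backups occupy disk space only once."""
    with open(path, "rb") as f:
        digest = hashlib.sha1(f.read()).hexdigest()
    pooled = os.path.join(pool_dir, digest)
    if not os.path.exists(pooled):
        os.link(path, pooled)   # first occurrence seeds the pool
    else:
        os.remove(path)         # duplicate: relink to the pooled copy
        os.link(pooled, path)
    return pooled
```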

BackupPC is nice. Its pooling strategy is very good, it works brilliantly and painlessly when backing up Linux -> Linux (though I have to re-try it Windows -> Linux), and its UI is what a lot of the other solutions need for people to browse/restore their own data using a web browser.
Its devs are responsive, too!

I think that the issue is faced by far more people than is readily apparent... it's the need for a VERY easy to use tool to share Our Stuff with Our Family. If my Mom and sisters were able to share all their photos with each other by carrying a USB drive around when they see each other... the most important thing they have on their computers would be backed up... the need for social file sharing is huge... we just don't have the tools to do it well yet.
Something that does auto-discovery of stuff, remembers previous decisions, and just goes to work making copies in the right directions is what we need.

I just didn't want to deal with it.
I use cloudbackup.openrsm.com [openrsm.com] and have them buy an account. It can do a whole network of Linux, Mac and Windows machines with one account, or just a laptop. The client software is free and presents the backup space as a network drive too.
I figure easy and my friends paying for it works. It's saved my butt a couple times too.

AhSay's free version of their Offsite Backup Server (http://www.ahsay.com/en/freeedition/ahsay_free_edition_index.html) does versioning and, well, everything you're really asking for. I use this at work with about 20 clients, and it's rock solid.

It supports rsync, ssh, tar and SMB, and performs pooling, which reduces the number of stored files. The only issue is that it uses the local account password file, so you'd have to set up an account for each user you wanted to give direct access to.
http://backuppc.sourceforge.net/ [sourceforge.net]

I use SVN to back up my sister's important stuff to my home server. It was easy to teach them to commit changes and add new files to be versioned because I installed TortoiseSVN on their Windows computers. It has full versioning and can use an encrypted link if that's important.

Don't most businesses already do this? On laptops, I used roaming profiles, and synched My Docs with the user's home directory on the server. All additional backups, versioning, etc. were handled on, and by the server.

The downside is that it's not a complete solution, as any data stored in Program Files or Common Files dirs wasn't mirrored. The upside is that it's simple network management, and it even lets you use login scripts.

The current state of open source backup technology is abysmal. Currently, I'd say the reliable option would be rsyncing to a large, removable hard drive, and then couriering it to a remote location or a "secure" physical storage service.

For "long term" backup, get a DLT tape drive, and selectively backup to tape. The tape, if properly stored, will be more likely to recover data than a hard drive. Also note, this is a few hundred dollar investment, with large capacity DLT tapes going for a hundred a pop as well.

While it is true that if the initial border zone on a DVD becomes unreadable the disc cannot be accessed normally, there are recovery techniques that will allow the data to be read from the disc. No, there isn't any way to "demand" access to the data - you have to go through the drive's normal protocol. But you can play some tricks.

No, there aren't any "alternate forms" of DVD encoding. DVD discs are rather complicated, and the drive and chipset are very, very much required as an intermediary.

I'm surprised no one has mentioned Wuala - www.wua.la - which is a distributed online storage system. You agree to store (encrypted) bits of others' files in exchange for the ability to do so on others' machines across the wuala network. It's free and pretty damn cool. They can explain it better than I can: http://wua.la/en/learn/why [wua.la]

I watched their CTO's Google Talks presentation and it was really interesting. I got all excited, joined their beta only to realize that they - IMO - misused the technology they had and designed a rather mediocre product. Wuala wants to be a backup tool, a sharing tool, a social networking medium as well as few other things. In other words it lacks focus and wants to do everything - an approach that rarely works.

ObStdDisc: I work for the company I mention here... but suffice it to say that I left a very stable job to do so - so's to indicate that I do actually believe in the excellence of the product.

Keep an eye on Rebit [rebit.com]. It doesn't do what you're asking about as of this moment... but (without treading into realms of "I'm not allowed to talk about that") I can safely say that the future holds some interesting things along this sort of direction.

I use carbonite. Small app, I can have multiple machines within the same account, unlimited data for something like $49/year. I got it for a work machine - and it has already been used to retrieve deleted files (very painless process), liked it so much that I got it for a couple of the family machines that I support. I set it up for them and the only instructions they have to remember is "don't save tax returns under c:\windows\system32, save them under My Documents".

http://www.crashplan.net/ [crashplan.net] has done exactly what you describe. everyone in your 'backup network' backs up to each other, and for free. They make money from selling their own offsite backup.
--Sam

http://trac.manent-backup.com/ [manent-backup.com]
Easy: yes, after a first setup. Reliable: yes. Versioned: you bet! Actually, every backup you do is accessible as a different version, with very little overhead.

No Linux client, AFAIK (though I do run it on my MBP). It's become rather impractical for me as a photographer though, as sometimes I'll shoot enough photos that my internet connection would be completely maxed out for days on end trying to sync up the new data - and I have a decent-for-cable 1Mbps upload rate.

rsync to Amazon S3 might be an option, if only for cross-platform capabilities. No versioning though, but outside of Apple's Time Machine (obviously useless for Windows and Linux), you're not going to get that without some major headache. Any remote system is going to be horribly slow for the first sync with any typical internet connection, and quite possibly problematically slow for photographers, media hoarders, and in general people with big hard drives.

+1 vote for JungleDisk. I use it on my Windows and *nix machines and couldn't be happier. I really like the idea of paying for the software once (use on as many machines as you like, with free upgrades forever) and paying Amazon for storage "at cost." So many other internet services rely on oversubscribing limited resources, with heavy users eventually getting ejected in favor of more profitable clientele. With Amazon, I know I'm getting exactly what I pay for, and they're not going to disappear with my data.

Either you have approximately three Libraries of Congress worth of data, or a very cheap cell phone bill. S3 storage is pretty cheap considering the redundancy and offsite-ness and all that good stuff - 15c/GB-month, and 10c/GB for transfer in. So up to about 30GB or so of stored data, it's cheaper than Mozy ($5/mo), but I'd need to be storing over 400GB-month of data, plus a good chunk of rsync transfer bandwidth, before it would cost as much as my cell line.
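Plugging the quoted 2008 prices into a quick cost function:

```python
def s3_monthly_cost(stored_gb, uploaded_gb):
    """Rough monthly S3 bill at the 2008 rates quoted above:
    15c/GB-month stored plus 10c/GB transferred in."""
    return stored_gb * 0.15 + uploaded_gb * 0.10

# The crossover against Mozy's flat $5/month is right around 30 GB stored:
print(s3_monthly_cost(30, 2))  # just under 5 dollars
```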

rsync to Amazon S3 might be an option, if only for cross-platform capabilities. No versioning though, but outside of Apple's Time Machine (obviously useless for Windows and Linux), you're not going to get that without some major headache.

A server running OpenSolaris/*BSD with ZFS; rsync to that, and create a snapshot every day.

rsync to Amazon S3 might be an option, if only for cross-platform capabilities. No versioning though, but outside of Apple's Time Machine (obviously useless for Windows and Linux), you're not going to get that without some major headache

Um, there are plenty of incremental backup tools dotted about, just upload the dumps?

Alternatively tarsnap [tarsnap.com] is currently in beta testing, uses Amazon S3, and the client is written by the top FreeBSD security bod, with the client coming as source (though the service isn't free).