And also for s10u2, ZFS Automatic Snapshots

Wes was kind enough to point out that the zenity-based GUI in the last release of the ZFS Automatic Snapshots SMF Service didn’t work under earlier versions of JDS. After a bit of digging, we discovered that this was because some zenity features I was relying on weren’t in the older GNOME 2.6-based zenity.

So, rather sooner than I expected, I’ve got another version of the software for you to try – zfs-auto-snapshot-0.4.tar.gz (link now deleted, see Update at the bottom of this post).

We now detect which version of zenity is on the system, and do the right thing. I’ve included a README to make things easier for the first-time user.
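Roughly speaking, the check boils down to something like this (just a sketch; the variable name is made up and this isn’t the exact code in the tarball):

ZENITY_VERSION=$(zenity --version)
case "$ZENITY_VERSION" in
    2.6.*) # the GNOME 2.6 zenity lacks some of the options we use,
           # so fall back to simpler dialogs
           USE_OLD_DIALOGS=1 ;;
    *)     USE_OLD_DIALOGS=0 ;;
esac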

I also cleaned up some of the logic that creates the name of the SMF instance for each snapshot schedule. In the earlier code, if you had two ZFS filesystems tank/foo-bar and tank/foo/bar with separate snapshot schedules (not strictly required, since you could have one schedule for tank and use the “snapshot all child datasets” option), SMF would have failed to import the second instance: we can’t use ‘/’ in SMF instance names, so I was escaping them with ‘-’ characters, hence the namespace clash. This is fixed now, so all feedback is welcome!
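To illustrate the clash, and one way a mapping like this can be made unambiguous (this is just a sketch, not the exact code in the new tarball):

# naive mapping: translate '/' to '-'
echo "tank/foo-bar" | tr '/' '-'    # -> tank-foo-bar
echo "tank/foo/bar" | tr '/' '-'    # -> tank-foo-bar  (clash!)

# escaping any existing '-' characters first keeps the names distinct
escape_fs_name() {
    echo "$1" | sed -e 's/-/--/g' -e 's,/,-,g'
}
escape_fs_name tank/foo-bar    # -> tank-foo--bar
escape_fs_name tank/foo/bar    # -> tank-foo-bar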

They say the best backups are the ones you don’t have to worry about, and while I realise that ZFS snapshots are only the first step in a “proper” backup solution, in my day-to-day work testing ZFS, this stuff has already saved me from going off looking for tapes…

Perhaps the next step is to extend the current functionality to provide an option to use zfs send/receive, so along with taking snapshots on a schedule, we would also store the snapshots incrementally on a remote machine (and perhaps do a bit of email notification! Zawinski’s Law? Bring it on!).
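The core of that would be something along these lines (the hostname and snapshot names are made up for illustration):

# send only the changes between the previous snapshot and the new one
zfs send -i tank/data@zfs-auto-snap-2006-06-30-11:00:00 \
    tank/data@zfs-auto-snap-2006-06-30-12:00:00 | \
    ssh backuphost zfs receive backup/data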

However, today it’s a gorgeous day in Dublin, which is quite a rarity, so I’m going outside to play :-) Have a nice weekend, folks!

Update June 30th: Joe pointed out a bug where the recursive snapshotting wasn’t respecting the retention limit properly: this was because I was using the same variable name in a shell function as in the function that called it (I thought ksh used local scoping for variable names). You can get the fixed bits in zfs-auto-snapshot-0.5.tar.gz.
Thanks again for the bug report Joe!
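For anyone curious, here’s a made-up example of the gotcha (not the actual service code). In ksh, variables are global by default; it’s typeset, inside a function defined with the ksh ‘function’ keyword, that gives you a local copy:

function destroy_older_snapshots {
    typeset keep=$1     # without 'typeset', this assignment clobbers the caller's 'keep'
    print "child datasets keep $keep snapshots"
}

function take_snapshots {
    keep=10
    destroy_older_snapshots 1
    print "parent dataset keeps $keep snapshots"   # prints 10 with the typeset, 1 without it
}

take_snapshots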

10 thoughts on “And also for s10u2, ZFS Automatic Snapshots”

Perhaps I’ve stumbled on a bug. When I set up snapshots for the zfs pool with various filesystems, it is correctly retaining all copies for the primary pool, but the “children” for which I’ve also requested snapshots only keep one, deleting previous snapshots on every run of the service. Here’s the cron output and my settings:

svc:/system/filesystem/zfs/auto-snapshot:shares-hourly
shares/vol1@zfs-auto-snap-2006-06-25-04:00:00 being destroyed as per retention policy.
shares/vol2@zfs-auto-snap-2006-06-25-04:00:00 being destroyed as per retention policy.
shares/vol3@zfs-auto-snap-2006-06-25-04:00:00 being destroyed as per retention policy.
shares/vol4@zfs-auto-snap-2006-06-25-04:00:00 being destroyed as per retention policy.
shares/vol5@zfs-auto-snap-2006-06-25-04:00:00 being destroyed as per retention policy.
shares/vol6@zfs-auto-snap-2006-06-25-04:00:00 being destroyed as per retention policy.
shares/vol7@zfs-auto-snap-2006-06-25-04:00:00 being destroyed as per retention policy.

Further analysis shows that the retention limit is counted across all the children instead of being applied per filesystem. In other words, if you have 12 filesystems in the pool and 10 hourlies so far, but want to retain 12, you’ll see at most 10 hourlies on “tank” and then at most 1 per child filesystem. I’m seeing it retain this one per child filesystem (aka tank/vol1), but it won’t let you keep 10 per filesystem when defining a single snapshot instance for the entire pool.

Hey Joe, Thanks loads for the analysis – that does sound like a problem alright. I’m just back from vacation today, but as soon as I’ve caught up on my email, I’ll try to re-whack this code. Thanks again for giving it a spin!
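The direction I have in mind is roughly this: count and prune snapshots per dataset rather than across the whole pool. A sketch only, with made-up names and retention count, not the code that will actually ship:

KEEP=12
for fs in $(zfs list -H -o name -t filesystem -r shares); do
    # look only at this dataset's auto-snapshots, oldest first
    snaps=$(zfs list -H -o name -t snapshot | grep "^${fs}@zfs-auto-snap" | sort)
    count=$(echo "$snaps" | grep -c .)
    excess=$(expr $count - $KEEP)
    if [ "$excess" -gt 0 ]; then
        echo "$snaps" | head -$excess | while read snap; do
            zfs destroy "$snap"
        done
    fi
done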

Would there be a way to add a user-specified identifier for each instance, such that you could have multiple instances for the same filesystem? I was thinking something along the lines of a setup where:
hourly: run every hour, keep 24
daily: run every day, keep 31
monthly: run every month, keep all

hey Roger, great minds think alike! I was actually thinking of adding that, as it’s the sort of thing I’d find quite useful (e.g. very frequent snapshots during working hours, keeping only one day’s worth, but also allowing more coarse-grained snapshots on a monthly basis).

I also thought that an option to zfs send incremental backups via ssh to a remote server might be nice, again with an option to take complete backups of each snapshot on a longer timeframe.

I’ll see if I can implement these for the next version in a few weeks.
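As a very rough idea of the shape this could take, each schedule would become its own SMF instance, something like the following (the instance and property names here are illustrative guesses, not what the manifest actually defines):

svccfg -s svc:/system/filesystem/zfs/auto-snapshot add tank-hourly
svccfg -s svc:/system/filesystem/zfs/auto-snapshot:tank-hourly addpg zfs application
svccfg -s svc:/system/filesystem/zfs/auto-snapshot:tank-hourly setprop zfs/fs-name = astring: tank
svccfg -s svc:/system/filesystem/zfs/auto-snapshot:tank-hourly setprop zfs/interval = astring: hours
svccfg -s svc:/system/filesystem/zfs/auto-snapshot:tank-hourly setprop zfs/keep = integer: 24
svcadm enable svc:/system/filesystem/zfs/auto-snapshot:tank-hourly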

Hi Tim,
A bit curious…
When you do a zfs send/recv to a remote machine with a completely different storage pool size, how does ZFS store the bits on the remote machine?
Scenario: a mirrored pool of six drive pairs (12 drives in total) on the main machine, running our applications, where /data is the only filesystem to be zfs send/recv’d. But the backup machine has just a single mirrored pair (2 drives in total). Will this work?
Or would I need exactly the same hard drive configuration as on the main machine?
Thanks in advance

Hey Amit, yes – that’ll work, so long as there’s capacity in the pool on the backup server to contain all the data being sent from the client. There’s no requirement that the pools on the client and server have the same size or topology. The SMF service I wrote should move to maintenance mode if it can’t complete a backup job successfully (for example, if we run out of space on the server).
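To make that concrete (device names, and a smaller pool than yours, purely for illustration):

# main machine: a pool built from several two-way mirrors
zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0 mirror c1t4d0 c1t5d0
# backup machine: a single two-way mirror is fine, as long as it has the capacity
zpool create backup mirror c0t0d0 c0t1d0
# send the filesystem across; the pool layout plays no part here
zfs snapshot tank/data@migrate
zfs send tank/data@migrate | ssh backuphost zfs receive backup/data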