
I had decided to try out Citrix XenServer at home, since I work a lot with VMware during my working week and felt like a change. It all seemed well… that is, until I had to deal with snapshots. I suppose I have taken for granted that almost all other virtual host software provides a simple "revert to snapshot" option. From what I can tell, this is totally absent from Citrix XenServer 5.5.

There are comments from within Citrix that they are working on this feature, but it has yet to come to fruition. Unfortunately for me, with the type of work I do (testing, proof of concept, etc.) this is a deal breaker. Looks like I'm going to have to try out vSphere at home. (Currently only using 3.5 at work.)

I've actually got no choice but to stay with Citrix Xen for now; it looks like the SATA controller and network chip on my motherboard are not supported by either 3.5 U4 or vSphere. Doh! (I should have checked the HCL, but sometimes I just like to try my luck.)

This problem I came across after playing with crontab. It looks like the ZFS snapshot service uses an account called "zfssnap", and if it doesn't have access to cron it will have issues creating and checking snapshots. Check the file /etc/cron.d/cron.allow and ensure that "zfssnap" is in there. The issues I had looked like this in the log… (check the logs via the log file viewer)

Error: Unable to take recursive snapshots of rpool/ROOT@zfs-auto-snap:frequent-2009-03-16-09:06.

Moving service svc:/system/filesystem/zfs/auto-snapshot:frequent to maintenance mode.
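The cron.allow check mentioned above can be scripted like this (a sketch, run as root; the file path is the one given in the post):

```shell
# Ensure the zfssnap account is allowed to use cron;
# append it to cron.allow only if it isn't already listed.
grep -q '^zfssnap$' /etc/cron.d/cron.allow || \
    echo 'zfssnap' >> /etc/cron.d/cron.allow
```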

Here is a bit from this site – "This problem is being caused by the old (IE: read non-active) boot environments not being mounted and it is trying to snapshot them. You can't 'svcadm clear' or 'svcadm enable' them because they will still fail."

Apparently this is a bug with the ZFS snapshots on /root/opensolaris-type pools. Anyhow, to fix it I've just used a custom setup in Time Slider: clear all the services set to "maintenance", then launch time-slider-setup and configure it to exclude the problem pools.

Update: As per John's comments below, you can disable the snapshots on the offending ZFS filesystem using the following command…

zfs set com.sun:auto-snapshot=false rpool/ROOT
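To confirm the property took, you can read it back with `zfs get` (the standard way to inspect ZFS properties):

```shell
# Verify that automatic snapshots are now disabled for rpool/ROOT;
# the VALUE column should read "false" with SOURCE "local".
zfs get com.sun:auto-snapshot rpool/ROOT
```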

As above, to clear the "maintenance" status on the affected services, run the following commands…

svcadm clear auto-snapshot:hourly

svcadm clear auto-snapshot:frequent

Now run this to ensure all the SMF services are running without issue…
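The command isn't shown in the post; on OpenSolaris the usual health check is `svcs -x`, which lists any SMF services that aren't running cleanly:

```shell
# List impaired SMF services (maintenance, degraded, etc.)
# along with the reason and the relevant log file.
# On a healthy system this prints nothing.
svcs -x

# You can also eyeball the snapshot services directly:
svcs -a | grep auto-snapshot
```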

There are some funky ways of modifying the default Time Slider services to do the work for you, but I like to be a bit more hands-on, mainly so I know what is happening in the background. Time Slider can also be overkill, creating snapshots every 15 minutes if not configured properly.

On a side note, I've yet to get my head around the SMF stuff properly… Anyhow, onto creating snapshots.

I've decided to snapshot both my unprotected and protected zpools.

I've created three scripts; this is what my snapdaily.sh script looks like:
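The script body didn't survive in this copy of the post; based on the description that follows (a recursive snapshot named after the @ symbol, covering both zpools), a snapdaily.sh sketch might look like this. The pool names and date format are assumptions:

```shell
#!/bin/sh
# snapdaily.sh -- hypothetical reconstruction of the daily snapshot script.
# Pool names below are placeholders; substitute your own zpools.

SNAPNAME="daily-$(date +%Y-%m-%d)"   # text after the @ becomes the snapshot name

# -r is recursive, so every ZFS filesystem beneath the named one
# gets a snapshot too.
zfs snapshot -r "unprotected@${SNAPNAME}"
zfs snapshot -r "protected@${SNAPNAME}"
```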

The other two are similar, but weekly and monthly. The name of the snapshot is whatever follows the @ symbol, as above. The -r switch is recursive, so snapshots are also created for all ZFS filesystems beneath the named one.

Next I've saved this script and added it to crontab (as root, since ZFS commands are usually restricted):
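The actual crontab entries aren't shown in this copy of the post; a sketch of what root's crontab might contain for the three scripts, where the paths and run times are assumptions:

```shell
# Example entries for root's crontab (edit with: crontab -e).
# Paths and schedule times below are assumptions.

# daily at 23:30
30 23 * * * /root/scripts/snapdaily.sh

# weekly on Sunday at 23:45
45 23 * * 0 /root/scripts/snapweekly.sh

# monthly on the 1st at 23:55
55 23 1 * * /root/scripts/snapmonthly.sh
```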