It seems to me that if I remove snapshots up to and including zroot/vm/inet17/disk0@2018-08-01_00.00.00--2y then I should recover approximately 400 GB. If so, that should put utilisation back at about 75%.

That doesn't add up. Those figures work for RAID-Z, not RAID-Z2. With 4 x 3 TB disks in RAID-Z you get around 8.16 TB of usable space; with RAID-Z2 that drops to around 5.44 TB.
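The arithmetic behind those figures can be sketched as follows (a hypothetical 4 x 3 TB pool; RAID-Z spends one disk's worth of space on parity, RAID-Z2 two). The 8.16/5.44 figures above come from rounding each disk down to 2.72 TiB first; computing on raw bytes gives slightly higher numbers:

```python
# Approximate usable capacity of a RAID-Z pool, ignoring metadata and
# allocation overhead. Disk sizes are as sold (metric bytes); the result
# is in binary TiB, which is what most tools label "T".
def usable_tib(disks: int, disk_bytes: int, parity: int) -> float:
    data_disks = disks - parity
    return data_disks * disk_bytes / 2**40

raidz1 = usable_tib(4, 3_000_000_000_000, parity=1)
raidz2 = usable_tib(4, 3_000_000_000_000, parity=2)
print(f"RAID-Z:  {raidz1:.2f} TiB")   # ~8.19
print(f"RAID-Z2: {raidz2:.2f} TiB")   # ~5.46
```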

Also keep in mind the difference between metric and binary prefixes. Disks are sold using metric prefixes, not binary prefixes. So a 3 TB drive is 3,000,000,000,000 bytes, not 3,298,534,883,328 bytes. The OS (for both UFS and ZFS) uses binary prefixes, so a 3 TB disk actually shows up as about 2.72 TB (and that's excluding the obvious overhead of formatting and disk layout).
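The prefix difference alone accounts for a sizeable gap. A minimal illustration:

```python
TB = 10**12   # metric terabyte, as used on disk packaging
TiB = 2**40   # binary tebibyte, as reported by the OS

disk_bytes = 3 * TB
# ~2.73 TiB (2.728...; truncating gives the 2.72 quoted above)
print(f"{disk_bytes / TiB:.2f}")
```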


Another point in addition to SirDice's explanation: the "SIZE" and "ALLOC" columns in zpool list's output do not take redundancy into account. With a 4-disk RAID-Z2 only 50% of the raw space is usable, so ALLOC effectively shows your data doubled.
Also keep in mind that when using compression (depending on the content), you can sometimes fit much more on the storage than the displayed figures suggest.
Another point: snapshots also take space. If you have lots of content changing and snapshots keeping the old versions, it adds up quickly. Check the "REFER" column to see how much space is taken, including snapshots.

That may or may not be the case. While the "REFER" column shows the size this specific snapshotted filesystem would occupy if it stood on its own, the "USED" column shows, well, something else:
Files that were created, snapshotted and then deleted again are certainly contained in the respective snapshot(s) and do consume space, but they are not visible in the figures for individual snapshots, only in the "usedbysnapshots" total. The space is freed once one deletes all of the snapshots taken during the lifetime of such a file, but there seems to be no figure to predetermine how much that might be.
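There is one way to get such a figure in advance, assuming a ZFS version that supports dry-run destroys: `zfs destroy -n` simulates the destroy and `-v` reports the space that would be reclaimed. The snapshot range below is hypothetical, using the `%` range syntax:

```shell
# Dry run: report how much space destroying this snapshot range would
# reclaim, without actually destroying anything.
zfs destroy -nv zroot/vm/inet17/disk0@oldest-snap%2018-08-01_00.00.00--2y
```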


Thank you for the explanation. We (I) use zfsnap to automatically create snapshots, and in my naivete I adopted a rather aggressive snapshot schedule which has turned around and bitten me. We were creating monthly snapshots with a two-year expiry; I have changed that to one year and have destroyed snapshots older than one year. This has gotten us back to 80% utilisation so far.

Our problem appears to be our IMAP server, which is a bhyve VM (inet17). It seems to be consuming the vast majority of the snapshot space, provided that I am reading the report correctly:

Well, that would not surprise me: it is a mail service, so your snapshots probably keep lots of spam.
As a medium-term approach I would consider separating that application's installation and configuration from its payload (i.e. the mails), and keeping the latter only for as long as needed.

Adjusting for raidz2, I take it this is telling me that I have ~5.3 TB, of which 4.26 TB is allocated and 1.05 TB is available. Am I close?

Yes, that's how I would read it, regarding the zpools.
I don't think your inet17 snapshots take up most of the space, though. The snapshots seem to refer to 392G (it's the same value; you don't have to add up the REFER column!), so it seems your ZVOL changes relatively little over time and the snapshot consumption does not grow significantly.

Can you maybe try the command I saw at the URL below? It should show you the space consumption per dataset:
zfs list -t all -o space -r zroot