11:39
<srhb>
Also indecipherable. I think maybe I should just draw this by hand. But ugh, the purpose of this exercise was to make it actually editable when revisions are needed, and we all know how images get updated (never)

13:43
<eyJhb>
Yeah, but I just hate seeing it in configs too! Each time I need to actually use the nick for anything... And the WORST... the structure of Golang src: `~/go/src/github.com/eyJhb/<project>`. But true, I never type it that often

23:21
<elvishjerricco>
So `man zpool` says it's ok to have an `ashift` that would imply a sector size greater than the disk's actual sector size. In that case, how can ZFS ensure the atomicity of writing a single sector? Isn't that the whole basis for its guaranteed consistency?

23:37
<elvishjerricco>
Oh I see, it's somewhat journal-like, kinda. There are actually four copies of the superblock on each vdev. When importing a pool, ZFS just scans for the one with the highest transaction number whose contents match its checksum. So if you crash while the new superblock is being written, it'll just fall back to one of the others holding the old superblock
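
A toy illustration of that self-validation, with made-up struct fields and a trivial checksum standing in for ZFS's real on-disk format: a superblock copy that was only partially written fails its own checksum, so an older, complete copy wins.

```c
#include <stdint.h>
#include <stdio.h>

struct sb {
    uint64_t txg;       /* transaction group number */
    uint64_t root;      /* stand-in for the root block pointer */
    uint64_t checksum;  /* covers txg and root */
};

/* Trivial stand-in checksum; real ZFS uses a much stronger one. */
static uint64_t sum(const struct sb *s) {
    return 0x5a5a5a5a5a5a5a5aULL ^ s->txg ^ (s->root * 2654435761ULL);
}

static void write_sb(struct sb *s, uint64_t txg, uint64_t root) {
    s->txg = txg;
    s->root = root;
    s->checksum = sum(s);
}

static int valid(const struct sb *s) {
    return s->checksum == sum(s);
}

int main(void) {
    struct sb copies[4];
    for (int i = 0; i < 4; i++)
        write_sb(&copies[i], 100, 0xaaaa);  /* txg 100 fully committed */

    /* Simulate losing power partway through writing txg 101 to copy 0:
     * the new txg field reached disk, the matching checksum did not. */
    copies[0].txg = 101;

    for (int i = 0; i < 4; i++)
        printf("copy %d: txg %llu %s\n", i,
               (unsigned long long)copies[i].txg,
               valid(&copies[i]) ? "valid" : "torn");
    return 0;
}
```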

23:37
<elvishjerricco>
So they don't rely on block-level atomicity at all

23:42
<clever>
elvishjerricco: I've asked about similar things in #zfs before, and when you modify any directory, it will recursively modify all parent directories, all the way up to the superblock
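
A minimal sketch of that copy-on-write propagation, using a made-up binary tree rather than ZFS's actual object layout: changing one leaf allocates new copies of every node on the path up to the root, and the old tree stays fully intact until the new root is published.

```c
#include <stdio.h>
#include <stdlib.h>

struct node {
    int value;              /* payload for leaves */
    struct node *child[2];  /* NULL for leaves */
};

static struct node *mknode(int value, struct node *l, struct node *r) {
    struct node *n = malloc(sizeof *n);
    n->value = value;
    n->child[0] = l;
    n->child[1] = r;
    return n;
}

/* Return a NEW root in which the leaf at `path` holds `value`; every node
 * along the path is copied, everything else is shared with the old tree. */
static struct node *cow_set(struct node *n, const int *path, int depth, int value) {
    if (depth == 0)
        return mknode(value, NULL, NULL);
    struct node *copy = mknode(n->value, n->child[0], n->child[1]);
    copy->child[path[0]] = cow_set(n->child[path[0]], path + 1, depth - 1, value);
    return copy;
}

int main(void) {
    struct node *old_root =
        mknode(0, mknode(0, mknode(1, NULL, NULL), mknode(2, NULL, NULL)),
                  mknode(0, mknode(3, NULL, NULL), mknode(4, NULL, NULL)));

    int path[2] = {1, 0};                     /* right subtree, left leaf */
    struct node *new_root = cow_set(old_root, path, 2, 99);

    /* The old tree still reads 3; the new tree reads 99. Crashing before
     * the "superblock" points at new_root leaves the old tree consistent. */
    printf("old: %d  new: %d\n",
           old_root->child[1]->child[0]->value,
           new_root->child[1]->child[0]->value);
    printf("left subtree shared: %s\n",
           old_root->child[0] == new_root->child[0] ? "yes" : "no");
    return 0;
}
```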

23:47
<elvishjerricco>
clever: Yea. It's apparently 4x128, not just 4. Kinda. There's an array of 128 superblocks, and every new superblock goes to the next index in that array, wrapping around at the end. If your most recent superblock isn't completely written, it uses the previous slot. For redundancy, this array is stored in four places on each vdev. So it'll look at all copies and just pick the newest verifiable slot out of them all.
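
Here's a self-contained toy of that scheme, with invented field names and a trivial checksum in place of the real format: the writer lands txg N in slot N % 128 of all four label copies, and the import-side scan picks the newest slot that verifies.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

#define LABELS 4
#define SLOTS  128

struct sb { uint64_t txg; uint64_t checksum; };

static uint64_t sum(const struct sb *s) { return s->txg ^ 0xdecafbadULL; }
static int valid(const struct sb *s) { return s->checksum == sum(s); }

/* Writer: txg N goes to slot N % SLOTS in every label copy, so the 127
 * previous superblocks always survive a torn write of the newest one. */
static void publish(struct sb ring[LABELS][SLOTS], uint64_t txg) {
    for (size_t l = 0; l < LABELS; l++) {
        ring[l][txg % SLOTS].txg = txg;
        ring[l][txg % SLOTS].checksum = sum(&ring[l][txg % SLOTS]);
    }
}

/* Import: newest slot that verifies, across all labels and all slots. */
static const struct sb *pick_active(struct sb ring[LABELS][SLOTS]) {
    const struct sb *best = NULL;
    for (size_t l = 0; l < LABELS; l++)
        for (size_t s = 0; s < SLOTS; s++)
            if (valid(&ring[l][s]) &&
                (best == NULL || ring[l][s].txg > best->txg))
                best = &ring[l][s];
    return best;  /* NULL only if no slot verifies at all */
}

int main(void) {
    static struct sb ring[LABELS][SLOTS];  /* zeroed: nothing verifies yet */
    for (uint64_t txg = 1; txg <= 300; txg++)
        publish(ring, txg);

    ring[0][300 % SLOTS].checksum ^= 1;    /* tear txg 300 in label 0 only */
    printf("active txg: %llu\n",
           (unsigned long long)pick_active(ring)->txg);  /* prints 300 */
    return 0;
}
```

If the newest superblock were torn in all four copies, the scan would simply fall back to txg 299, which still sits intact in the previous slot of every ring.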

23:51
<elvishjerricco>
Now I'm trying to figure out how often ZFS commits its transaction groups... It isn't per fsync or even per write, I don't think. I tried making a small pool, filling it with a large file, and overwriting that file with new data, and it didn't claim to be out of space; so presumably, if I lost power in the middle of that write operation, the file would be left in a half-old, half-new state.
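
For reference, OpenZFS commits a transaction group on a timer (the `zfs_txg_timeout` tunable, 5 seconds by default) or earlier once enough dirty data accumulates; `fsync` durability is handled separately by the intent log (ZIL) rather than by forcing a commit. A toy model of that policy, with an invented dirty-data threshold and made-up rates:

```c
#include <stdint.h>
#include <stdio.h>

#define TXG_TIMEOUT_SECS 5             /* mirrors zfs_txg_timeout's default */
#define DIRTY_SYNC_BYTES (64ULL << 20) /* invented threshold for the sketch */

static uint64_t now, last_commit, dirty, txg = 1;

static void commit_txg(void) {
    printf("t=%3llus: commit txg %llu (%llu MiB dirty)\n",
           (unsigned long long)now, (unsigned long long)txg,
           (unsigned long long)(dirty >> 20));
    txg++;
    dirty = 0;
    last_commit = now;
}

int main(void) {
    /* Simulate 30 seconds of writes at 4 MiB/s: commits land on the
     * 5-second timer, since dirty data never reaches the threshold.
     * Anything written between commits only exists in memory, which is
     * why an overwrite spanning several txgs can be half applied after
     * a crash. */
    for (now = 1; now <= 30; now++) {
        dirty += 4ULL << 20;
        if (now - last_commit >= TXG_TIMEOUT_SECS || dirty >= DIRTY_SYNC_BYTES)
            commit_txg();
    }
    return 0;
}
```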