Brendon, you were right. All it took was the right arguments, "zfsadm -b knight -l", instead of trying to modify the zfsadm script itself. It built just fine this time. Is there anything in particular I should watch for? Any stats I should poll? Any settings I should tweak?
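(For reference, the ARC counters can be polled through sysctl on O3X; I believe the kstat names look roughly like the ones below, but verify them against your own build:

    sysctl kstat.zfs.misc.arcstats.size
    sysctl kstat.zfs.misc.arcstats.c_max
    sysctl kstat.zfs.misc.arcstats.arc_meta_used

The first is the current ARC size in bytes, the second the configured maximum, and the third how much of the ARC is metadata.)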

"Knight" is performing MUCH better. Right now kernel task is staying steady at under 8 GB after 10 hours of uptime and I removed the previous configuration script I was using. System is much more stable. I still have over 70 GB memory free and I've not seen this ever with ZFS.

I just installed Knight too. I have the ARC capped at 12 GB and ARC meta at 9 GB, but it doesn't seem to be gobbling up that memory as it did before. And I think it's running better, with no zero-read-speed episodes during the scrub.
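(For anyone wanting the same caps: I set mine through the O3X tunables, which I believe are exposed under kstat.zfs.darwin.tunable, so roughly:

    sudo sysctl -w kstat.zfs.darwin.tunable.zfs_arc_max=12884901888
    sudo sysctl -w kstat.zfs.darwin.tunable.zfs_arc_meta_limit=9663676416

Those values are 12 GB and 9 GB in bytes. The same key=value lines can apparently also go in /etc/zfs/zsysctl.conf so they survive a reboot, but double-check the exact tunable names against your installed version.)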

Super grateful to the people developing OpenZFS on OS X! RAID-Z2 is working well for storing my data, with an OS X Server share for Time Machine, FCPX, iTunes, Photos, etc. on my Hackintosh.

OK, so I had a crazy experience this morning. Last night I left Server.app running overnight because it wanted to change permissions on a share I have. I removed a user and added a user. Server.app hung, I assume while applying access permissions to the ZFS volume I have set up on that tank (I also have it set to be HFS+ compatible). When I came back to the system this morning, I found it completely out of memory. After killing processes over the course of a very patient and nightmarish hour, I managed to ssh in and run top, which showed kernel_task using 113 GB of memory. No matter how many processes I killed, the system stayed at the upper bound with only about 45 MB of memory free. A sudo reboot finally saved the day (I didn't want to do it until I found out what process was consuming so much memory).

I have to think ZFS reacted badly to all of the permission/ACL changes from Server on a large share like this and went memory-hog crazy. Has anyone seen this kind of behavior before, and is there a solution?