smh retitled D11278 from "Fixed bsdinstall location of vfs.zfs.min_auto_ashift: vfs.zfs.min_auto_ashift is a sysctl only, not a tunable, so updated bsdinstall to use the correct location /etc/sysctl.conf instead of /boot/loader.conf" to "Fixed bsdinstall location of vfs.zfs.min_auto_ashift".
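For reference, the corrected placement looks like the fragment below (the value 12, i.e. 4 KiB sectors, is just an example):

```
# /etc/sysctl.conf -- the correct location: vfs.zfs.min_auto_ashift is a
# runtime sysctl applied by rc(8) at boot, not a /boot/loader.conf tunable.
vfs.zfs.min_auto_ashift=12
```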

Feb 15 2017

Essentially, L2ARC periodically writes over every block on the device, and that in itself allows the data to be physically moved around.
That's exactly the reason why I think that TRIM is not needed for L2ARC.
TRIM is useful when we no longer need the data in some area but are not going to overwrite that area, so we need a way to tell the storage system that it can reuse the physical cells without worrying about the data in them. But if we overwrite that area anyway, then the storage system is automatically aware that the data in those physical cells is obsolete. It is free to choose either those same cells or any different cells for the new data according to its wear-leveling algorithms, but that is beside the point.
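The distinction can be sketched with a toy flash-translation-layer model (purely illustrative; the class and method names are invented, not ZFS or any real FTL code). It shows that an overwrite lets the controller reclaim the old physical cells on its own, which is exactly what TRIM would otherwise have to tell it:

```python
# Toy FTL: logical blocks map to physical cells; a rewrite always goes
# to a fresh cell, and the previously mapped cell becomes reusable.

class ToyFTL:
    def __init__(self, nphys):
        self.free = set(range(nphys))   # physical cells holding no live data
        self.map = {}                   # logical block -> physical cell

    def write(self, lba):
        """Overwrite: remap to a fresh cell; the controller automatically
        learns the old cell is stale -- no TRIM needed."""
        old = self.map.get(lba)
        self.map[lba] = self.free.pop()
        if old is not None:
            self.free.add(old)

    def trim(self, lba):
        """TRIM: declare a block dead *without* rewriting it -- only
        needed when no overwrite is coming."""
        old = self.map.pop(lba, None)
        if old is not None:
            self.free.add(old)

ftl = ToyFTL(nphys=8)
ftl.write(0)
before = len(ftl.free)   # 7 cells free: one holds live data
ftl.write(0)             # overwrite reclaims the old cell by itself
after = len(ftl.free)    # still 7: old cell freed, new cell in use
```

Since L2ARC keeps overwriting the whole device, it is always on the `write` path of this model, never on the path where `trim` would add information.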

My understanding of how L2ARC writing works is this. The code maintains a "hand" (like a clock hand) that points to a disk offset. At regular intervals a certain amount of space in front of the hand is freed by discarding the L2 headers that point to that space, then new buffers are written there, and the hand is moved forward by the appropriate amount.
There is also some freeing of L2 headers when ARC headers are freed, and so on. In any case, after some uptime almost the whole cache disk is usually filled with data, and the hand inevitably moves forward, so every block gets written over sooner or later. I do not see how TRIM helps in that case.
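The clock-hand behavior described above can be sketched as follows (a minimal toy model, not the real arc.c feed logic; the device and write sizes are made up):

```python
# Toy L2ARC feed cycle: the cache device is a ring of blocks; each feed
# interval frees the region in front of the hand, writes new buffers
# there, and advances the hand.

DEV_BLOCKS = 16          # size of the toy cache device, in blocks
WRITE_SZ = 4             # blocks written per feed interval

hand = 0
writes = [0] * DEV_BLOCKS

def l2arc_feed():
    global hand
    for i in range(WRITE_SZ):
        blk = (hand + i) % DEV_BLOCKS
        # ...drop any L2 headers pointing at blk, then write a new buffer...
        writes[blk] += 1
    hand = (hand + WRITE_SZ) % DEV_BLOCKS

for _ in range(DEV_BLOCKS // WRITE_SZ):   # one full lap around the device
    l2arc_feed()
```

After one lap every block has been overwritten exactly once and the hand is back at the start, which is why TRIM buys little here: nothing behind the hand stays "dead" for long before it is rewritten anyway.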
The only scenario where it makes a difference, in my humble opinion, is the one that @mav described: a worn-out disk where the "cheese holes" behind the hand can make some difference for writing new blocks at the hand. But I think that is too marginal to be important.

In scenario #1 the performance of TRIM is also generally good, mitigating the need to avoid doing it.

Is there such a thing as good TRIM performance? On my new Samsung 950 NVMe I had to disable TRIM as unusable. That said, the NVMe driver probably still needs to aggregate TRIM requests to get better numbers.

This could cause an excessive slowdown as the capacity of the disk is reached; however, it could be argued that a better mitigation for L2ARC devices would be to use an under-provisioned slice, ensuring the SSD controller always has free space to work with.
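Under-provisioning can be done at partition time, along these lines (a sketch only; the device name `ada1`, the label, the pool name `tank`, and the sizes are placeholders):

```
# Leave part of the SSD unallocated so the controller always has spare
# erase blocks, e.g. ~200 GB of a larger disk for the cache device.
gpart create -s gpt ada1
gpart add -t freebsd-zfs -s 200G -l l2cache ada1   # rest of the disk stays free
zpool add tank cache gpt/l2cache
```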