It looks like the services do not load the ZFS kernel modules at startup.
Having studied the sources, you can see that the module is loaded automatically only once a zpool is defined.
We will not rely on this; instead we will force-load the zfs module at every boot, following Fedora's recommendations:
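A minimal sketch of that recommendation, assuming the standard systemd modules-load.d mechanism is in use:

```
# Ask systemd-modules-load to load the zfs module at every boot:
echo zfs | sudo tee /etc/modules-load.d/zfs.conf
```

After the next reboot, lsmod | grep zfs should show the module loaded.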

ZFS snapshots use redirect-on-write technology (often mistakenly called copy-on-write).
The schedule installed by the scripts creates a lot of snapshots
that in practice do not help, while spending a lot of resources.
The oldest snapshot can take up a lot of disk space,
and rotating frequent tiny snapshots requires a lot of computing resources.
So I keep only the daily snapshots and have deleted the rest of the schedule.
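As an illustration, assuming the schedule comes from the zfs-auto-snapshot package, which installs one cron script per interval (the exact paths may differ on your distribution), the cleanup could look like:

```
# Keep only the daily job; remove the other intervals (paths are assumptions):
sudo rm -f /etc/cron.d/zfs-auto-snapshot        # the 15-minute "frequent" job
sudo rm -f /etc/cron.hourly/zfs-auto-snapshot
sudo rm -f /etc/cron.weekly/zfs-auto-snapshot
sudo rm -f /etc/cron.monthly/zfs-auto-snapshot
# /etc/cron.daily/zfs-auto-snapshot stays in place
```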

Creating pool

A ZFS pool is the place where file systems (or volumes) are created.
The pool spreads data across the physical disks and takes care of redundancy.
Although you can create a pool without any redundancy, this is not common.
We will create a RAID5-like raidz vdev from the third partition of each of our disks.

NOTE: If you have many disks and want to control the size of each raid group,
simply repeat the keyword "raidz" after the desired number of disks.

I used the -m none option so that the pool itself is not mounted.
"export" is the name of the created pool.
I plan to mount its file systems under the /export hierarchy, hence the name.
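A sketch of the pool creation; the partition names are examples and must match your disks:

```
# RAID5-like raidz vdev from the third partition of three disks, pool not mounted:
sudo zpool create -m none export raidz sda3 sdb3 sdc3

# With more disks, repeating "raidz" closes one group and starts the next:
#   sudo zpool create -m none export raidz sda3 sdb3 sdc3 raidz sdd3 sde3 sdf3

sudo zpool status export   # verify the layout
```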

The amount of deleted data is subtracted from the used space of the FS (as shown by "df")
and added to the snapshot usage ("USEDSNAP" column).
A more detailed listing shows that this space belongs to the snapshot "etc_copied".
The initial snapshot still uses almost no space, because the deleted data did not yet exist when that snapshot was created.
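Such listings can be reproduced like this (the dataset name export/etc is an example):

```
# Space accounting per FS, including the USEDSNAP column:
zfs list -o space

# Per-snapshot usage for a single FS:
zfs list -t snapshot -r export/etc
```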

You can revert the whole FS only to the latest snapshot.
If you want to revert to an earlier snapshot, you have to remove all later snapshots first.
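A sketch of both cases, with example dataset and snapshot names; note that "zfs rollback -r" removes the later snapshots for you:

```
# Revert to the latest snapshot:
sudo zfs rollback export/etc@etc_copied

# Revert to an earlier snapshot; -r destroys all snapshots taken after it:
sudo zfs rollback -r export/etc@initial
```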

The hidden .zfs directory automatically mounts the required snapshot for you,
and from there you can copy a single file.
The "zfs diff" command proves that this is not a real revert:
the snapshot still holds the deleted data blocks, and the file (with exactly the same name and metadata)
is newly created in fresh data blocks of the FS.
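Restoring a single file this way might look as follows (the mount point, snapshot, and file names are examples):

```
# The snapshot contents appear read-only under the hidden .zfs directory:
ls /export/etc/.zfs/snapshot/etc_copied/

# Copy one file back into the live FS:
cp /export/etc/.zfs/snapshot/etc_copied/passwd /export/etc/passwd

# Shows the file as newly created rather than reverted:
sudo zfs diff export/etc@etc_copied
```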

Working with clones

First, we find all the snapshots belonging to the FS we need to clone.
The zfs list command can be very slow on a loaded system.
A much faster way to check the snapshot names is to list the .zfs/snapshot pseudo directory.
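The two listing methods, plus the clone itself, can be sketched as follows (dataset and snapshot names are examples):

```
# Slow under load - walks the pool metadata:
zfs list -t snapshot -r export/home

# Much faster - just reads the pseudo directory:
ls /export/home/.zfs/snapshot

# Clone the chosen snapshot into a new writable FS:
sudo zfs clone export/home@today export/home_clone
```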

Remote replication

We will use the same ZFS system as both origin and target,
so the sending process is piped directly into the receiving process.
You can equally replicate data to another ZFS system over the network:
SSH can be used as the channel if you want additional protection on the wire,
or netcat if you want maximum copy efficiency.
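A sketch of the three variants; the snapshot, dataset, and host names are made up for illustration:

```
# Local loopback: sender piped straight into receiver:
sudo zfs send export/etc@etc_copied | sudo zfs recv export/etc_backup

# Over SSH, for an encrypted channel:
sudo zfs send export/etc@etc_copied | ssh backup1 sudo zfs recv tank/etc_backup

# Over netcat, for raw throughput:
#   on the target:  nc -l 9000 | zfs recv tank/etc_backup
#   on the origin:  zfs send export/etc@etc_copied | nc backup1 9000
```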

Now reconnect the disconnected disk.
ZFS does not see the reconnected disk; it apparently has to be rescanned somehow.
I was too lazy to read the manual, so I just rebooted the server.
Everything returned to normal:
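For the record, the manual route would presumably use zpool itself rather than a reboot (the device name is an example; untested here):

```
# Tell ZFS the device is available again and clear the error counters:
sudo zpool online export sdb3
sudo zpool clear export

# Verify that the pool is ONLINE / resilvering:
sudo zpool status export
```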