After many, many reboots and attempts, I've finally managed to get my system to boot a ZFS rootfs all the way into GNOME without error.
Since I'd had so many issues, I figured the best test would be to reboot and make it do it again.
Yeah... not so much.

The error that has plagued me through this entire process strikes again
(paraphrasing, from the initramfs):
Cannot import pool "OS" because it looks like it's in use elsewhere; try with -f.
From there it cascades through a missing rootfs and ends in a stack trace.

I know that -f will force the import, which works just fine (once I boot into the Live DVD to do it), but this was a clean reboot, so I shouldn't have to force anything.
Then again, I've NEVER been able to get zpool import to work without first doing a zpool export (which makes sense).
export only ever works when the pool is not in use, and in fact zfs.service only does a zfs umount -a (no export, though it wouldn't work anyway, since / is still mounted).
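For the record, the recovery dance from the Live DVD looks something like this (pool name "OS" as above; -R gives the pool a temporary altroot so its datasets don't mount over the live system; this is a sketch of my manual steps, not a script anyone ships):

```shell
# From the Live DVD: force-import the pool under a temporary altroot,
# then export it cleanly so the next boot can import it without -f.
zpool import -f -R /mnt/gentoo OS
zpool export OS
reboot
```

The clean export is the whole point: it clears the "in use from another system" state so the next initramfs import succeeds.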

Since I can't seem to find endless posts about this, I have to think I'm missing something.
My guesses are:
1) This somehow works under OpenRC.
2) There is some flag I'm missing somewhere that makes the initramfs force (-f) the import, and everyone is using it. (I'd question the safety of that.)

[ side note ]
It would seem that zfs.service starts far too late to allow /var/log to be its own gzip-compressed volume, and try as I might I could not get it to move up early enough to be mounted before things started getting dumped into it.
Pity, really.
So I had to abandon that design to get the one full boot that I did.
Also odd, because I know it 'seemed' to work OK in my experiments in VirtualBox using OpenRC.
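If anyone wants to chase the systemd ordering problem, a drop-in override is the usual tool. This is only a sketch, assuming the mounting is done by zfs-mount.service and that the early writer is the journal; the file name and the exact ordering targets here are my guesses, not something I've verified to work:

```shell
# Hypothetical drop-in: try to pull ZFS mounts ahead of anything
# that writes to /var/log. DefaultDependencies=no is required to
# order before journald, but it also strips safety ordering, so
# treat this as an experiment.
mkdir -p /etc/systemd/system/zfs-mount.service.d
cat > /etc/systemd/system/zfs-mount.service.d/early-var-log.conf <<'EOF'
[Unit]
Before=systemd-journald.service syslog.target
DefaultDependencies=no
EOF
systemctl daemon-reload
```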

Moving back to my VirtualBox playground and back to OpenRC, I can confirm that setup has the same issue.
Namely, after a reboot you get "cannot import 'XX': pool may be in use from another system."
However, OpenRC does indeed start the zfs init script early enough for /var/log to be its own volume.

But back to the other issue.
On shutdown, all the volumes get unmounted, then / is remounted read-only.
However, an export never happens, because it can't: we're still inside /.

Reading around, I've managed to confirm that most of what I assumed should happen is indeed what should happen.
Namely, the start-to-stop process flow for a complex rootfs (RAID, ZFS, LVM) goes something like this:

1) The initramfs does its thing, mounting the rootfs.
2) switch_root into the rootfs (newroot), executing init/systemd.
3) Full boot and usage.
4) Shutdown; unmount everything; / goes back to read-only.
5) At this point it's supposed to jump back into the initramfs to do a clean teardown of the rootfs (newroot), e.g. zpool export.
I think the stickler is step 5; I get the feeling that whatever genkernel generates (or even Gentoo as a whole) does not do that last bit.

That's all fine. The unmounted datasets are the zpools and containers. But the following might be causing the error (shortened and commented by me):

Code:

NAME               USED  AVAIL  REFER  MOUNTPOINT
bigvo             1.01T  9.36T   198K  /      # <- No! Do not give the pool a mountpoint!
bigvo/HOME         221K  9.36T   221K  /home  # No. Make this a container and add datasets under it.
bigvo/ROOT/gentoo 1.01T  9.36T  1.01T  /      # This is the real mount.
bigvo/SWAP        2.06G  9.36T   116K  -      # I'd rather not put swap into ZFS.

So it looks like 'bigvo' and 'bigvo/ROOT/gentoo' are rivals here.

Maybe this causes the problem? bigvo/ROOT/gentoo and bigvo claim the same mountpoint, so neither can be unmounted because the other keeps it busy.

Note on bigvo/HOME: I wouldn't put /home itself in it, but rather a dataset for each thing under /home. I did it this way myself, and yes, I know that I have exaggerated 'a bit'.
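If the overlapping mountpoints are indeed the culprit, the usual fix is to stop the pool's top-level dataset from mounting at all. A sketch, using the dataset names from the listing above:

```shell
# Stop bigvo itself from competing with bigvo/ROOT/gentoo for /.
# canmount=off is the safer of the two: the dataset keeps its
# mountpoint (which children inherit) but is never mounted itself.
zfs set canmount=off bigvo

# Alternatively, remove the mountpoint entirely. Note that child
# datasets inherit mountpoint, so only do this if the children
# (HOME, ROOT/gentoo) all have explicit mountpoints of their own.
# zfs set mountpoint=none bigvo
```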

Thanks for all the pointers!

Partially Solved
My problem of mounting the system is partially gone, as it now boots with no need to force anything. What I had to do was set the cachefile so my initramdisk knows about previous mounts.

Code:

zpool set cachefile=/etc/zfs/zpool.cache bigvo

I read somewhere that in some cases this is a problem, and I had not set it previously. When I say it is partially solved, I mean that the system mounts and boots, but I do receive an error message during the process, coming from the /etc/init.d/zfs script, saying that it could not mount the system. It is likely that this message appears because of the two competing mountpoints. The question is: how do I safely remove the first one?
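One thing worth checking after changing the cachefile: the copy the initramdisk sees has to be the fresh one, which may mean regenerating the initramfs. With genkernel that would be something like the following (assuming the initramfs was built with ZFS support in the first place; the exact invocation is my guess for this setup):

```shell
# Point the pool at the standard cachefile location, then rebuild
# the initramfs so the updated zpool.cache is baked into it.
zpool set cachefile=/etc/zfs/zpool.cache bigvo
genkernel --zfs initramfs
```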

As far as SWAP is concerned, I have read in quite a few places that it has to be in ZFS in case ZFS needs to use it, which in my 16 GB RAM system should not be a problem anyway, even when 2 GB is used for tmpfs.

EDIT: Moving HOME was easy. I moved the little data that was there to a different directory and then

. Now all my problems with zfs not being able to mount a partition are gone.
On a side note, one also needs to have "zfs" in the boot runlevel (rc-update), as it mounts partitions other than /boot.
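Concretely, that's:

```shell
# Add the zfs init script to the OpenRC boot runlevel so datasets
# are mounted early during boot.
rc-update add zfs boot
```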