You should only have to follow the archiving and restoring steps when you need to do bare-metal recovery.
One exception: you should do the archiving steps to a remote system as part of routine root pool backups,
in case bare-metal recovery is ever needed, and then hope that you never need to do a full recovery.

As long as a root pool disk is intact, you can attach, detach, replace, and so on while the system is running and
everything is online. ZFS should make most management tasks easier.

Thanks Cindy, I think you've already helped a great deal by pointing me to the correct approach(es).

Although I couldn't open the link to the bug, I preemptively added the line to the bottom of /etc/system.

The existing rpool is on a SAS mirror on an (old and slow) onboard controller.
The SATA SSD is on a newer fast HBA via a breakout cable.
One SSD runs a dedup pool of non-global zones hosting VirtualBox instances, and another SSD caches a RAIDZ1 storage pool; both are currently on the new HBA.

Since the rpool mirror was upgraded from S11/11, I think I may have to:
add an SMI label and GRUB before I can bring the SSD into the mirror as bootable,
then resilver,
then zpool split the mirror,
then destroy the old mirror (I'll just shut down and pull the disks at first ;) ),
then rename the rpool2 on the SSD that resulted from the split to rpool,
then tell the BIOS to boot from the SSD in question (or at least make sure it knows it can),
then boot and enjoy.

Is this correct?
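For what it's worth, those steps could be sketched as commands like the following. The device names are purely hypothetical stand-ins (substitute your actual SAS and SSD device paths), and this assumes the SSD already has an SMI label with a slice 0:

```shell
# Hypothetical devices: existing SAS mirror side on c0t0d0s0,
# new SSD (SMI-labeled, slice 0 created) on c2t0d0s0.
zpool attach rpool c0t0d0s0 c2t0d0s0    # makes a 3-way mirror; resilver starts
zpool status rpool                      # wait until the resilver completes
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c2t0d0s0
zpool split rpool rpool2 c2t0d0s0       # split the SSD side off as rpool2
# Shut down, pull the SAS disks, point the BIOS at the SSD, and then
# (booted from other media) rename the split pool at import time:
zpool import rpool2 rpool
```

Renaming happens as a side effect of `zpool import oldname newname`, which is why the split pool has to be imported while not in use as the active root.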

On edit:

Can I just:

1. Attach the new SSD to the existing mirror and let it resilver,
2. pull the SAS drives
3. destroy the mirror
4. profit
???
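That simpler path might look roughly like this (device names are again hypothetical). Because the SSD joins the existing pool, the pool name never changes, so no split or rename is needed:

```shell
zpool attach rpool c0t0d0s0 c2t0d0s0    # SSD becomes a third side of the mirror
zpool status rpool                      # wait for the resilver to finish
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c2t0d0s0
zpool detach rpool c0t0d0s0             # drop the SAS sides of the mirror
zpool detach rpool c0t1d0s0
```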

Or...since I can't read the details of the bug...is it a better use of resources to leave rpool on the SAS mirror and simply use the SSD as a cache device?

I would agree that the better approach is to leave the SSD for pools that could
take advantage of a performance enhancement, rather than using it for rpool.

I think you are also correct that if you have upgraded from Solaris 11, then you
would need to add a VTOC label to the SSD and create a slice 0 before you
attach it to the existing rpool.
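If I have the labeling dance right, it is roughly the following (the interactive menu choices and the device name are assumptions, not an exact recipe):

```shell
# Relabel the SSD with an SMI (VTOC) label and give slice 0 the space:
format -e c2t0d0      # choose "label" -> SMI, then "partition" to size s0
# Verify the new label and slice layout:
prtvtoc /dev/rdsk/c2t0d0s2
# Then attach the slice (not the whole disk) to the root pool:
zpool attach rpool c0t0d0s0 c2t0d0s0
```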

Splitting and renaming the root pool is too much trouble. You should be able
to attach a new disk or replace an existing root pool disk and let the rpool
contents resilver automatically. We do this all the time when an existing rpool disk
is too small.
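A minimal sketch of that replace path, with hypothetical device names:

```shell
zpool replace rpool c0t0d0s0 c2t0d0s0   # resilvers, then detaches the old disk
zpool set autoexpand=on rpool           # let the pool grow if the new disk is bigger
```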

Make sure you have done the math with dedup to determine that your
data is dedupable and that you have enough memory to support it.
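As a back-of-the-envelope version of that math, here is a sketch of DDT memory sizing. The ~320 bytes per in-core DDT entry and the 128 KiB average block size are commonly cited ballpark figures, not measurements from any particular pool; check `zpool status -D` for real entry counts.

```python
# Rough in-core dedup table (DDT) memory estimate.
# Assumption: ~320 bytes of RAM per DDT entry (a commonly cited figure).
BYTES_PER_DDT_ENTRY = 320

def ddt_ram_bytes(unique_blocks: int) -> int:
    """RAM needed to keep the whole DDT in core."""
    return unique_blocks * BYTES_PER_DDT_ENTRY

def ram_for_pool(pool_bytes: int, avg_block: int = 128 * 1024) -> int:
    """Worst case: every block in the pool is unique."""
    return ddt_ram_bytes(pool_bytes // avg_block)

# 1 TiB of unique 128 KiB blocks -> 2^23 entries -> 2.5 GiB of DDT
tib = 1 << 40
print(ram_for_pool(tib) / (1 << 30))   # -> 2.5
```

Smaller average block sizes inflate the entry count (and the RAM bill) proportionally, which is why dedup on small-record workloads gets expensive fast.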

Thanks again for the pointers, and the metaphysic.
I was complicating a thing made to be simple. :D
You're right: simply watching the activity lights shows that my rpool drives aren't hit very often. It would take a long, long time to notice anything other than a fast boot and scrub.

And thank you for the dedup warning. memstat shows 49% of RAM devoted to ZFS after 31 days of uptime; zonestat shows only about 16GB physical used of the 32GB available.
<8GB to maintain the current dedup tables (1.58 ratio) seems about right under the "32GB/TB" rubric I've encountered regarding ZFS deduplication RAM allowance for storage.
Although it's all running from RAM right now, the fact is that I could do most of what those VMs do much more efficiently in zones on pools without dedup, using native Solaris applications. That is the plan.
In the future, the dedup pool will help manipulate LiDAR and other point-cloud data, which may prove a much better use of the power of ZFS dedup.

Solaris really is a brilliant OS.
Thanks again for your patient help, Cindy. ZFS will solve the rpool question itself at such time as it actually needs solving. :D
I went ahead and used the SSD in question to properly mirror that dedup pool and run a 60-second scrub. It took less than 4 minutes of my time from start to finish. LOL, I don't touch-type. :D