Strange ZFS behaviour when a drive fails - FreeBSD

Hi, I have a zpool containing two raidz sets, and one of the drives died.
After a reboot the controller removed the disk and renumbered the others:
NAME        STATE     READ WRITE CKSUM
media       DEGRADED     0     0     0
  raidz1    DEGRADED     0 ...
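
(For anyone following along: the layout above is from zpool status. A quick way to see the whole device tree, including the slot the dead disk left behind, is something like the following; media is the pool name from above.)

# Show the full pool layout and any per-device errors.
# The mfid* names come from the mfi(4) RAID controller and can be
# renumbered after a reboot, which is what bit me here.
zpool status -v media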

I have since gotten the drive back online (as mfid8), and now, strangely, ZFS reports that it is resilvering:
scrub: resilver in progress, 1,11% done, 307445734561825779h49m to go
But to which drive?
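
(If it helps anyone else hitting this: as far as I know, zpool status tags the device actually being rebuilt, so you can pick it out of the config section. The placement of mfid8 under raidz1 below is my guess at how it would look, not verbatim output.)

# The target of the resilver carries a "(resilvering)" note
# next to it in the config section:
zpool status media
#     raidz1   DEGRADED   0   0   0
#       mfid8  ONLINE     0   0   0  (resilvering)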
zpool iostat shows that it reads a lot from all the drives in the first raidz.
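
(For reference, something like the following shows that per-device breakdown; -v splits the numbers by vdev and leaf device, and the trailing 5 resamples every five seconds.)

# Watch per-device read/write activity on the pool.
zpool iostat -v media 5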