We had to restore from local tape to bring back the client. Note that the disks do not have any hardware problem, so we suspected a mismatch of the system versions used:

We are using an SRT with Solaris Update 9, and the node we want to restore has the same patch installed (Solaris with Generic_147440-12). I have tried to add all the missing patches to the SRT, but when installing some of them the installation hangs and the SRT becomes invalid.

Could anyone help with how to add the following patches to the SRT:

142933-03

144526-01

147061-01

125555-11

144500-19

142933-04

147440-12

Successful installation of each of the above patches is very random: a patch succeeds one time and fails the next.
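For reference, Solaris patches can be applied to an SRT's root with patchadd -R, which installs relative to an alternate root instead of the running system. The paths below are assumptions for illustration (your SRT location will differ). Note also that 142933-04 supersedes 142933-03, so attempting to install -03 after -04 will be refused, which may explain some of the apparently random failures:

```shell
# Sketch only: install each patch into the SRT's alternate root,
# leaving the running OS untouched. Both paths are hypothetical.
SRT_ROOT=/export/srt/sol10_srt     # wherever your SRT root lives
PATCH_DIR=/var/tmp/patches         # unpacked patch directories

for p in 142933-03 144526-01 147061-01 125555-11 144500-19 142933-04 147440-12
do
    # -R installs relative to the alternate root; check the return
    # code so a hung or failed patch is visible immediately.
    patchadd -R "$SRT_ROOT" "$PATCH_DIR/$p" || echo "patch $p failed"
done
```

Applying the patches one at a time, in order, also makes it easier to see which specific patch hangs and invalidates the SRT.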

During a Bare Metal Restore, a temporary ZFS mount fails. This issue occurs if
any ZFS file system is not mounted, or its canmount value is set to off, during
a backup.

To restrict the disk or the disk pool, edit the Bare Metal Restore configurations.
The edits ensure that the disk is not overwritten and the data that it contains
is not erased during the restore process.

For more information on how to edit the configurations, refer to the following
sections of the Bare Metal Restore Administrator's Guide:

Just to update this thread: I worked on this issue along with "eomaber", and it seems that the I/O error occurred because the disks weren't partitioned; all slices had 0 cylinders assigned to them, as you can see:
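An empty slice table can be confirmed with prtvtoc(1M), which prints the disk's VTOC; the device name below is a placeholder for whatever disk the restore targets:

```shell
# s2 is the conventional "backup" slice covering the whole disk.
# If the disk is unpartitioned, the slice entries show 0 sectors.
prtvtoc /dev/rdsk/c0t0d0s2

# To repair, relabel the disk interactively with format(1M):
# format -> partition -> modify -> label, then retry the restore.
```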

Can you please confirm which SRT you used for the restore? Was it prepared using Solaris 10 Update 10?
If not, please try the restore with a Solaris 10 Update 10 SRT. If you still see issues, please log a support case along with the restore log.

The rpool/var and rpool/var/opt datasets are not mountable; they are just containers for the rpool/var/opt/fds file system, so you can't see them in the output of the df -h command:

bash-3.00# zfs get canmount
NAME               PROPERTY  VALUE   SOURCE
rpool              canmount  on      local
rpool/ROOT         canmount  on      local
rpool/ROOT/SDP5    canmount  noauto  local
rpool/dump         canmount  -       -
rpool/swap         canmount  -       -
rpool/var          canmount  off     local
rpool/var/opt      canmount  off     local
rpool/var/opt/fds  canmount  on      local

When we perform a backup, everything is present in the backup image, but when we perform a restore, we get errors:

1- First, the error that "eomaber" mentioned about the restore script being unable to mount rpool/var. We got around this error by editing the configuration file and setting the canmount property of rpool/var and rpool/var/opt to "on".

2- The second issue was that after the restore completed, rpool/var and rpool/var/opt were mountable, which caused many important services to go into maintenance state, especially svc:/system/filesystem/minimal:default.

This happened because the system was unable to mount the /var/run file system.
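On Solaris 10, /var/run is normally a tmpfs whose mount point lives under /var, so if rpool/var is not mounted the mount point is missing and svc:/system/filesystem/minimal:default faults. A quick check, assuming standard file locations:

```shell
# The usual Solaris 10 vfstab entry for /var/run is:
#   swap  -  /var/run  tmpfs  -  yes  -
grep /var/run /etc/vfstab     # confirm the entry exists
mount | grep /var/run         # confirm it is actually mounted
```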

When we entered single-user mode we found rpool/var, rpool/var/opt, and rpool/var/opt/fds not mounted, so there were two options that we tried:

First, we left the canmount options as they were and mounted all three file systems (zfs mount -a).

This brought the system up to run level 3 after we cleared the svc:/system/filesystem/minimal:default service, but we were still getting many errors and the system seemed unstable.
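The first option, expressed as commands run from single-user mode (a sketch of what is described above, not an endorsed procedure):

```shell
# Mount every ZFS dataset whose canmount allows it, including the
# rpool/var hierarchy, then clear the faulted service so boot continues.
zfs mount -a
svcadm clear svc:/system/filesystem/minimal:default
```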

The second option was to set the canmount properties back to their original values. In this case only rpool/var/opt/fds is mounted, while /var and /var/opt remain empty, which causes the system to reach run level 3 with many errors on the console. Luckily, we were able to restore the /var and /var/opt content again, and the errors stopped. This solution seems to work better than the first: the system was stable and we were able to run all our applications without problems.
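The second option, as commands (dataset names taken from the zfs get output earlier in the thread):

```shell
# Restore the original canmount values so only the leaf dataset mounts:
zfs set canmount=off rpool/var
zfs set canmount=off rpool/var/opt
zfs set canmount=on  rpool/var/opt/fds

# After restoring the /var and /var/opt content, clear the faulted service:
svcadm clear svc:/system/filesystem/minimal:default
```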

Finally, I think that NetBackup is misinterpreting our file system configuration, perhaps because the ZFS file system was migrated from a UFS file system, which according to Symantec is not supported, although they didn't give the reason.

Anyway, we figured out how to perform a restore, but a more straightforward procedure would be appreciated, so if anyone has better ideas we would be grateful.