Now I had to replace a defective hard drive on an IBM x3650 server running Solaris 10. Different hardware - different story.

First of all: the IBM server and its RSA II did not detect the failed disk. The defective disk was instead found by check_zpools.sh, the Nagios plugin that monitors the health and usage of ZFS pools.
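
For reference, a typical invocation of the plugin checks one pool (or all of them) against usage thresholds - the exact options below are an assumption and may differ between plugin versions:

(solaris91 ) 0 # ./check_zpools.sh -p rpool -w 80 -c 90

Besides the capacity thresholds, the plugin also alarms when a pool is not in ONLINE state, which is what flagged the DEGRADED rpool here.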
zpool status showed the following output:

(solaris91 ) 0 # zpool status
  pool: rpool
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-2Q
 scrub: none requested
config:

On the HP servers, the physical disks simply had to be replaced and then activated with hpacucli, HP's command line utility for the RAID controller. The equivalent of hpacucli for Adaptec RAID controllers is arcconf.
arcconf can be downloaded from the Adaptec website. I downloaded and installed (well, unzipped) arcconf v1_2_20532 from http://www.adaptec.com/en-us/speed/raid/storage_manager/arcconf_v1_2_20532_zip.htm.
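
To see what the Adaptec controller knows about the attached disks, the physical devices can be listed with GETCONFIG. The controller number (1) below is an assumption for this single-controller server:

(solaris91 ) 0 # ./arcconf GETCONFIG 1 PD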

Well - interesting. There are only three devices/disks (plus the enclosure) shown in the output. The defective disk seems to be missing entirely (note the 'Reported Channel,Device' rows).

So far I have the following information: the defective disk is in zpool "rpool" and its size is 70GB. The problem: there are two disks of that size, and because the server did not detect the failed disk as failed, no LED points out the bad disk for me.
Well, arcconf can help here, too: its IDENTIFY command makes a disk's LED blink, so I can at least identify the working disk.
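
A sketch of the invocation - the controller number (1) and the Channel,Device pair (0,0) of the known-good disk are assumptions here:

(solaris91 ) 0 # ./arcconf IDENTIFY 1 DEVICE 0 0

The disk whose LED stays dark is therefore the defective one and can safely be pulled and replaced.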

So finally there are 4 disks detected. But the new disk's state is READY, not ONLINE like the others. To bring the device/disk online, a logical drive/simple volume needs to be created - remember that ZFS is handling RAID on this server, not the hardware RAID controller. The arcconf help shows how to do it.
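
Based on that help output, a simple volume (a single-disk logical drive without hardware RAID) can be created like this - the size keyword MAX uses the whole disk, and the Channel,Device pair (0,1) for the new disk is an assumption:

(solaris91 ) 0 # ./arcconf CREATE 1 LOGICALDRIVE MAX Volume 0 1

The Volume "RAID level" keeps the controller out of the redundancy business, leaving the mirroring entirely to ZFS.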

Now that the new disk is detected, it still needs to be formatted and partitioned: a Solaris ZFS root pool sits on an SMI-labeled disk with the pool data in slice 0, so the new disk should get the same layout as the surviving one.
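
A common way to do the partitioning is to clone the VTOC from the surviving disk with prtvtoc and fmthard - a sketch, assuming c0t0d0 is the healthy disk and c0t1d0 the replacement:

(solaris91 ) 0 # prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2

With the disk formatted and partitioned, we can replace it in the zpool: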

(solaris91 ) 0 # zpool replace rpool c0t1d0s0

The zpool status output now shows the resilvering (ZFS speak for RAID resynchronization) of the disks in rpool:

(solaris91 ) 0 # zpool status
  pool: rpool
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress for 0h0m, 1.62% done, 0h5m to go
config:
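
One last step worth mentioning for a mirrored root pool on x86: the boot blocks are not copied by the resilver. To be able to boot from the new disk as well, GRUB should be installed on it after the resilver completes - a sketch, assuming the new disk is c0t1d0:

(solaris91 ) 0 # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0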