After a planned power outage and restart of an Open-E DSS V6 cluster (with primary and secondary RAID), the primary came up with no problems. The secondary, however, initially had two failed HDDs, and during the resync (at 94%) a third one died, so the RAID6 entered the failed state. The primary is working fine.

The first thing, of course, is to replace the disks, then rebuild the RAID6 volume set, define a volume group and its logical volume drives just as they were before, and finally define the volume replication tasks. None of this should be a problem, I guess.
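For reference, the rebuild steps above correspond roughly to the following generic Linux mdadm/LVM commands. This is only a hypothetical sketch of what the DSS V6 GUI does under the hood, not the supported procedure; the device names (/dev/sd[b-i], md0, vg00, lv00), disk count, and sizes are placeholder assumptions and must match your original layout:

```shell
# 1. Recreate the RAID6 array from the replaced disks (8 member disks assumed)
mdadm --create /dev/md0 --level=6 --raid-devices=8 /dev/sd[b-i]

# 2. Recreate the volume group on top of the array
pvcreate /dev/md0
vgcreate vg00 /dev/md0

# 3. Recreate the logical volume(s) with the same names and sizes as before,
#    so the replication tasks can be matched up again (500G is a placeholder)
lvcreate -L 500G -n lv00 vg00

# 4. Watch the initial RAID resync finish before re-enabling failover
cat /proc/mdstat
```

On a DSS appliance you would do all of this through the web GUI instead; the sketch is only meant to show the order of operations (array, then volume group, then logical volumes, then replication).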

Next I would try to re-activate and re-integrate the "repaired" secondary (passive) RAID simply by clicking "Start" in the Setup -> Failover menu of the secondary system.

Is this the right way to go, and should it work? Or is it impossible to bring back a failed secondary without restarting failover as a whole on both primary and secondary (causing a loss of all iSCSI connections)?

You will need to make sure that the replication task is deleted and recreated, and that it is synced from the primary side. You will need to start Failover again, though, so be aware of this. In DSS V7 you don't have to: you can hot-add to the cluster without any downtime.

Thank you very much for your answer.

You recommended deleting the replication tasks. Did you mean any task that might still exist on the secondary (broken) Open-E box, or also on the healthy primary?
If I have to delete the replication tasks on the primary as well, how can I recreate them? As far as I remember, these tasks were auto-created when defining the logical volumes, but on the primary I cannot delete and recreate volumes, since that would delete all my data.

If the 2nd node has to be completely rebuilt, then you will need to delete the replication tasks from the primary and, once the secondary is ready, reconnect and recreate the tasks. Deleting and recreating the tasks does not delete the data; the data resides on the logical volumes / volume group / RAID array.