Getting started with Windows Volume Replication?

Question

I'm well aware that the Technical Preview of "Windows Server 10" has just been released, but I was wondering if there are any resources for getting started with Windows Volume Replication (one of the more exciting features, IMHO).

I just need a few pointers to get going in the right direction and then I should be able to get this up and running.

Do I need a cluster - even for asynchronous replication? Is there a GUI for this or do I have to rely on PowerShell? What PowerShell cmdlets are there for setting this up? Can I replicate between servers in different domains/workgroups? And so on...

If any of you can help me (and others) to get started I'd very much appreciate it.

Answers

There are two main scenarios for Storage Replica in the Windows Server Technical Preview:
- Using Storage Replica to create Server to Server replication using Windows PowerShell
- Using Storage Replica to create a Hyper-V Stretch Cluster using Failover Cluster Manager

Using Storage Replica to create Server to Server replication using Windows PowerShell

1. Prerequisites for the Server to Server scenario

1a. Windows Server Active Directory domain (does not need to run Windows Server Technical Preview).
1b. Two servers (one for each site) with Windows Server Technical Preview installed.
1c. Two disks on each server using local storage (DAS), Fibre Channel SAN, or iSCSI SAN.
1d. At least one 10GbE network connection on each server.
1e. A network between the two sets of servers with at least 8Gbps throughput and an average round-trip latency of ≤5ms when sending non-fragmented 1472-byte ICMP packets for at least 5 minutes.
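The latency requirement in 1e can be checked with a plain ping between the sites. The server name below is a placeholder; 300 echo requests at the default one-second interval cover roughly the five minutes asked for:

```powershell
# ~5 minutes of non-fragmented 1472-byte ICMP echoes (-f sets Don't Fragment,
# -l sets the payload size, -n the request count). Replace SR-SRV02 with the
# name of the server at the other site.
ping SR-SRV02 -f -l 1472 -n 300
```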

2. Storage requirements

2a. Four disks are required: a source data disk, a source log disk, a destination data disk, and a destination log disk.
2b. The data disks must be formatted as GPT, not MBR.
2c. The data disks must be of identical size.
2d. The log disks should be of identical size.
2e. The log disks should be on SSD storage with mirrored Storage Spaces, RAID 1, RAID 10, or similar resiliency.
2f. The data disks can be on HDD, SSD, or tiered storage, using mirror Storage Spaces, parity Storage Spaces, RAID 1, RAID 10, RAID 5, RAID 50, or equivalent configurations.
2g. The data disks should be no larger than 10TB. We recommend testing with less than 1TB to reduce initial replication time.
2h. The log volumes must be at least 10% of the size of the data volumes or at least 2GB, whichever is larger.
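As a quick sanity check for 2h, the minimum log size is simply the larger of the two figures; the 500GB data volume below is only an example:

```powershell
# Minimum log volume size: the larger of 10% of the data volume or 2 GB.
$dataSizeGB = 500
$minLogGB   = [math]::Max($dataSizeGB * 0.1, 2)   # 50 GB for this example
```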

3. Pre-Installation steps (to be performed on both servers)

3a. Install the following features and reboot: File Server and Windows Volume Replication
3b. Enable the inbound firewall rule: File and Printer Sharing
3c. Provision the storage as described in item 2.
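A sketch of steps 3a–3c in PowerShell. The feature names for the preview build are assumptions (check Get-WindowsFeature for the names on your build), and the disk numbers and drive letters are placeholders for your own storage:

```powershell
# 3a. Install the required features and reboot (feature names may differ
#     in the Technical Preview build).
Install-WindowsFeature -Name FS-FileServer, WVR -Restart

# 3b. Enable the inbound File and Printer Sharing firewall rules.
Enable-NetFirewallRule -DisplayGroup "File and Printer Sharing"

# 3c. Provision one data disk and one log disk as GPT volumes.
Initialize-Disk -Number 1 -PartitionStyle GPT
New-Partition -DiskNumber 1 -UseMaximumSize -DriveLetter D |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "Data"
Initialize-Disk -Number 2 -PartitionStyle GPT
New-Partition -DiskNumber 2 -UseMaximumSize -DriveLetter E |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "Log"
```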

4. Configuration Steps

4a. On the source node, use the New-SRPartnership cmdlet to create replication between the two servers.
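A hedged example of the New-SRPartnership call, assuming D: is the data volume and E: the log volume on both servers. The server and replication group names are placeholders, and the parameter names follow the cmdlet as later released, so they may differ slightly in the preview build:

```powershell
New-SRPartnership -SourceComputerName SR-SRV01 -SourceRGName rg01 `
    -SourceVolumeName D: -SourceLogVolumeName E: `
    -DestinationComputerName SR-SRV02 -DestinationRGName rg02 `
    -DestinationVolumeName D: -DestinationLogVolumeName E:
```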

Using Storage Replica to create a Hyper-V Stretch Cluster using Failover Cluster Manager

1. Prerequisites for the Hyper-V Stretch Cluster scenario

1a. Windows Server Active Directory domain (does not need to run Windows Server Technical Preview).
1b. Four servers (two for each site) with Windows Server Technical Preview installed. Each server should be capable of running Hyper-V, have at least 4 cores, and have at least 8GB of RAM. You will need more memory for more virtual machines.
1c. Two sets of asymmetric shared storage (2 nodes see one set, 2 nodes see the other set), using Shared SAS JBODs, Fibre Channel SAN, or iSCSI SAN.
1d. At least one 10GbE network connection on each server.
1e. A network between the two sets of servers with at least 8Gbps throughput and an average round-trip latency of ≤5ms when sending non-fragmented 1472-byte ICMP packets for at least 5 minutes.

2. Storage requirements

2a. Four disks are required: a source data disk, a source log disk, a destination data disk, and a destination log disk.
2b. The data disks must be formatted as GPT, not MBR.
2c. The data disks must be of identical size.
2d. The log disks should be of identical size.
2e. The log disks should be on SSD storage with mirrored Storage Spaces, RAID 1, RAID 10, or similar resiliency.
2f. The data disks can be on HDD, SSD, or tiered storage, using mirror Storage Spaces, parity Storage Spaces, RAID 1, RAID 10, RAID 5, RAID 50, or equivalent configurations.
2g. The data disks should be no larger than 10TB. We recommend testing with less than 1TB to reduce initial replication time.
2h. The log volumes must be at least 10% of the size of the data volumes or at least 2GB, whichever is larger.
2i. Add a volume label to each volume that identifies its site and purpose, such as "Data 1 Redmond", to make it easy to identify the disks when they become CSVs.

3. Pre-Installation steps (to be performed on all nodes)

3a. Install the following features and reboot: Failover Clustering, Multipath IO, Hyper-V, and Windows Volume Replication
3b. Enable the inbound firewall rule: File and Printer Sharing
3c. Provision the storage as described in item 2 for each of the two asymmetric storage sets.
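Steps 3a and 3b can be scripted on each node. As before, the WVR feature name is an assumption for the preview build; the clustering, MPIO, and Hyper-V names match the released Install-WindowsFeature names:

```powershell
# Run on every node, then let the servers reboot.
Install-WindowsFeature -Name Failover-Clustering, Multipath-IO, Hyper-V, WVR -Restart
Enable-NetFirewallRule -DisplayGroup "File and Printer Sharing"
```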

4. Installation Steps (to be performed on only one of the nodes)

4a. Using Failover Cluster Manager (FCM), configure a cluster of the four nodes.
4b. Configure the cluster quorum to use a file share witness or an Azure cloud witness (do not use a disk witness).
4c. In the Disks pane, make the source data disk a CSV or a member of a role (it cannot remain in Available Storage).
4d. Ensure all storage is owned by the node where you are running FCM.
4e. Right-click the source disk and click Replication > Enable. Follow the wizard to select the source log disk, destination data disk, and destination log disk. Choose the unseeded disk option.
4f. At the end of the wizard, replication is configured and replication starts.
4g. You can change the source of replication by moving the storage using FCM to a node in the other site.
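Steps 4a, 4b, and 4d can also be done from PowerShell; the cluster name, node names, and witness share path below are all placeholders:

```powershell
# 4a. Create the four-node cluster.
New-Cluster -Name SR-CLU -Node SR-N1, SR-N2, SR-N3, SR-N4

# 4b. Use a file share witness, not a disk witness.
Set-ClusterQuorum -NodeAndFileShareMajority \\FS01\Witness

# 4d. Move all storage to the node where you are running FCM.
Move-ClusterGroup -Name "Available Storage" -Node SR-N1
```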

Note 2: You can verify that the replication is complete by checking events in the WVR Admin event log.

On the source server, check for events 5002, 2200, and 5015.
On the destination server, check for events 5015, 5001, and 5009.
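A quick way to look those events up from PowerShell. The log name below is an assumption for the preview build; run Get-WinEvent -ListLog *WVR* to confirm the exact name on your servers:

```powershell
# On the source server:
Get-WinEvent -FilterHashtable @{ LogName = 'Microsoft-Windows-WVR/Admin'; Id = 5002, 2200, 5015 }

# On the destination server:
Get-WinEvent -FilterHashtable @{ LogName = 'Microsoft-Windows-WVR/Admin'; Id = 5015, 5001, 5009 }
```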

Note 3: Removal of replication via FCM does not work in the Technical Preview. Use the following Windows PowerShell commands instead.
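The usual cleanup removes the partnership first and then the replication groups on each side. The cmdlet names below follow the module as later released and may differ in the preview build:

```powershell
# Remove the partnership, then the replication groups.
Get-SRPartnership | Remove-SRPartnership
Get-SRGroup | Remove-SRGroup   # run on both the source and the destination
```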

It appears that there are no possible owners according to the cluster, which suggests that the cluster or FCM never updated after the cleanup. Do you see the same behavior after restarting FCM or rebooting the node?

Nice work on the pointers, Ned, thanks!! I got both scenarios working. With the cluster, the replication status only stays "unknown", and I had one blue screen when reversing the replication in the 1-on-1 scenario.

Now, to my understanding it's "active-passive": what I mean is that I can use the source disk but can't use the destination disk until I break the replication. Is that correct, or am I missing something?

I'm trying to figure out how this would work in the Azure Cloud. Without Physical Disk Resources this is really a non-starter for Azure Cloud deployments. Am I correct?

Good job David! The only confusion I have now is: why does Ned say DAS is supported in his server-to-server scenario, while at the same time we need something with a "Physical Disk Resource" tag on it? If replication is done over Ethernet, why should the same disk have paths to both nodes of the cluster?! If it does, and we have that many disks, we could build Clustered Storage Spaces instead... Do I miss anything obvious here? Thanks again for your great blog post! :) We'll continue with our own experiments on Monday.

Thank you for wrapping things together! A few questions so far, if you don't mind :)

Q1. What types of DAS are really supported? Can we use a pair of physical servers with SATA spindles or PCIe flash to build a 2-node Hyper-V cluster with no physical shared storage (no SAS JBODs) using the first listed scenario? What about a simple 2-node Scale-Out File Server cluster (same config, SATA all around and nothing physically shared)?

Q2. Scalability? Can there be more than 2 servers in the first scenario? More than 4 with the second?

Thanks :)

P.S. Am I correct that what we have here is a kind of "DFS on steroids" rather than a further evolution of Clustered Storage Spaces?

Greetings, Robert Smit. Follow me @clustermvp http://robertsmit.wordpress.com/

However, I am currently having issues detecting the correct states: the cluster UI and the SR PowerShell cmdlets disagree.

Without checking the event log for 5009, one might be lost...

Also, upon installation, the CSV shows as redirected...

The volume transfer to the remote side seems to be a full copy and not thin-aware: my thin iSCSI LUN on the remote side filled up the complete space.

When removing a CSV that is not part of the replica but has a lower number (e.g. ClusterStorage\Volume1), upon the next reboot the replicated data volume (and only the data volume) gets moved into the "free" position.

My data CSV was ClusterStorage\Volume3 and became ClusterStorage\Volume1.

The GUI config works, but it is not always "automatic". In order to select the target disk, it needs to be online. I think I have seen Available Storage automatically move to the secondary server as part of this process, but not always. Instead, I just make sure Available Storage is online on the secondary server before I have to choose the target. I'm pretty sure that process is meant to be automated, but it seems to not work all the time.

I'm testing the "Server to Server" scenario, and everything went okay with the install and initial setup. I also did a couple of tests changing the direction of source and destination, and that also went okay. But then I wanted to test a real DR scenario in which the source goes down suddenly. From that point on, I was stuck and unable to recover the destination volume. The last messages I see in the event log are:

.. Connection lost to computer hosting the primary replication group
.. WVR secondary entered stand-by state

I could not revive the disk to make it available, and I also cannot remove the SRPartnership or SRGroup.