
EMC MirrorView configuration on EMC VNX arrays.

Companies building their own Disaster Recovery solutions often reach for data replication between storage arrays. One such solution (and, let us add, the cheapest) is EMC MirrorView. It is a very simple, easy-to-set-up service that fully cooperates with VMware Site Recovery Manager (SRM). LUN replication can be done synchronously or asynchronously; for the theory and terminology I refer you to the StorageFreak blog, where my colleague Tomek has described everything in detail. Here we will focus on MirrorView configuration directly on the VNX arrays; in my case these are a VNX 5200 and a VNX 5300.

As part of the preparations, create a SAN connection between the arrays. We connect the ports described as MirrorView: port A-0 on SPA of the first array to port A-0 on SPA of the second array (and correspondingly for SPB). Ports that will take part in replication cannot be used in hosts' Storage Groups. If you have been using these ports to communicate with hosts, remove them from the Storage Group before connecting the arrays (otherwise a restart of the SP controllers and a lot of nasty messages await us).

After the arrays are connected, verify that they see each other correctly: go to Hosts -> Initiators.

VNX 5200:

VNX 5300:

As you can see, the connection is set up correctly. To be able to perform Mirror operations, both arrays must know about each other, i.e. be in the same domain or in two different domains (local and remote).

This operation is carried out from the array with the newer, higher-numbered firmware; in my case I add the VNX 5300 from the VNX 5200 (the other way around it will not work).

At this point the VNX 5200 has two domains, Local and Remote, while the VNX 5300 has only the Local domain.

From the VNX 5200 both arrays can now be managed simultaneously, switching seamlessly between them at the Unisphere client level.

Next, if you do not already have them, we will create LUNs for the "write intent logs". This log helps the array recover from problems that might occur during replication (something like a transaction log). The LUN itself does not have to be big: the recommended size is 2GB (the hard minimum is only 128MB, see the comments below), but we cannot create it as part of a Pool; it must live on a RAID group. Additionally, there must be two of these logs, one for each SP. Under Storage -> Storage Configurations -> RAID Groups create two new groups and create the new LUNs.
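These rules are easy to get wrong, so here is a small illustrative Python sketch of the constraints described above (two LUNs, one per SP, RAID group rather than Pool, 128MB minimum / 2GB recommended). The `WilLun` structure and the function are my own model for illustration, not any real VNX or naviseccli API.

```python
# Illustrative sketch only: encodes the write-intent-log rules from the
# text above. This is NOT a real VNX/Unisphere/naviseccli API.
from dataclasses import dataclass

MIN_WIL_MB = 128           # hard minimum (per older FLARE / CX docs)
RECOMMENDED_WIL_MB = 2048  # 2GB per SP, recommended for VNX

@dataclass
class WilLun:
    owner_sp: str        # "A" or "B"
    size_mb: int
    on_raid_group: bool  # must not live in a Pool

def check_wil(luns):
    """Return a list of problems with a proposed write-intent-log layout."""
    problems = []
    if len(luns) != 2 or {l.owner_sp for l in luns} != {"A", "B"}:
        problems.append("need exactly two WIL LUNs, one per SP")
    for l in luns:
        if not l.on_raid_group:
            problems.append(f"SP{l.owner_sp} WIL must be on a RAID group")
        if l.size_mb < MIN_WIL_MB:
            problems.append(f"SP{l.owner_sp} WIL below {MIN_WIL_MB}MB minimum")
        elif l.size_mb < RECOMMENDED_WIL_MB:
            problems.append(f"SP{l.owner_sp} WIL below recommended 2GB")
    return problems

good = [WilLun("A", 2048, True), WilLun("B", 2048, True)]
bad = [WilLun("A", 64, False)]
print(check_wil(good))  # []
print(check_wil(bad))
```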

Now, under Data Protection, click on "Configure Mirror Write Intent Log" and add our LUNs. The Write Intent Log is not strictly necessary for replication; if you do not have spare disks from which to create the RAID group, you can skip this step (its existence, however, increases safety).

Then we create a Reserved LUN Pool. The RLP is used by snapshots and to present the VMFS to ESXi during SRM testing. It is also necessary for asynchronous replication. The LUNs themselves do not have to be big (this depends on the amount of changes in the production volumes that accumulate between successive copy steps of an asynchronous copy). I created three 512GB LUNs (they cannot be Thin). Add them under Data Protection -> Reserved LUN Pool.
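As a back-of-the-envelope aid (my own rule of thumb, not an EMC sizing formula): the RLP has to absorb the changes written to a production LUN between two asynchronous update cycles, so a rough lower bound is change rate times update interval, padded with some headroom. The example numbers below are hypothetical.

```python
# Back-of-the-envelope RLP sizing sketch (not an official EMC formula):
# capacity needed ~ change rate * async update interval * headroom.
def rlp_size_gb(change_rate_gb_per_h: float, update_interval_h: float,
                headroom: float = 2.0) -> float:
    """Rough Reserved LUN Pool capacity needed for one async mirror, in GB."""
    return change_rate_gb_per_h * update_interval_h * headroom

# e.g. 30GB/h of changes, updates every 4h, 2x headroom -> 240GB,
# which fits comfortably in one of the 512GB (non-Thin) RLP LUNs above.
print(rlp_size_gb(30, 4))  # 240.0
```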

Since VMware SRM can fail over in both directions, create a similar set of LUNs on the second array.

Now we move on to setting up the replicas: create a new LUN (or choose an existing one) and from its menu choose "Create Remote Mirror".

Depending on the distance, select whether it will be a synchronous copy (latency of no more than 10ms) or an asynchronous one (latency of no more than 200ms).
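That decision rule can be sketched in a few lines of Python. The thresholds come from the text above; the function itself is purely illustrative.

```python
# Sketch of the rule of thumb above: synchronous replication wants
# round-trip latency of at most 10ms, asynchronous tolerates up to ~200ms.
def mirror_mode(rtt_ms: float) -> str:
    if rtt_ms <= 10:
        return "synchronous"
    if rtt_ms <= 200:
        return "asynchronous"
    return "unsupported: link latency too high for MirrorView"

print(mirror_mode(3))    # synchronous  (e.g. metro dark fiber)
print(mirror_mode(45))   # asynchronous (e.g. long-distance WAN link)
```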

And so on for each LUN. Now we go to the remote array and do the corresponding configuration (create a LUN). After this operation we return to the first array and check under LUN Mirrors that everything is OK (Active).

Select the LUN and click "Add Secondary". The previously prepared LUN on the remote array must be exactly the same size as the source and cannot be assigned to any Storage Groups.
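The two preconditions are worth double-checking before clicking, so here is a tiny illustrative pre-flight check. It is a plain Python model of the rules stated above, not a Unisphere or naviseccli call.

```python
# Illustrative pre-flight check for "Add Secondary": the remote LUN must
# match the source size exactly and must not sit in any Storage Group.
def can_add_secondary(source_blocks: int, secondary_blocks: int,
                      secondary_storage_groups: list) -> tuple:
    """Return (ok, reason) for a proposed secondary image."""
    if secondary_blocks != source_blocks:
        return False, "secondary must be exactly the same size as the source"
    if secondary_storage_groups:
        return False, "remove the secondary from all Storage Groups first"
    return True, "ok"

print(can_add_secondary(1048576, 1048576, []))
print(can_add_secondary(1048576, 1048576, ["ESXi_Cluster_SG"]))
```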

At this point we have a defined mirror image of our volume (enable synchronization).

If you have more volumes that are subject to synchronization and, additionally, these volumes will serve a single vSphere DRS cluster, you may want to combine them into one Mirror Consistency Group.

This ensures that all synchronization operations are carried out simultaneously on all LUNs in the group.
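To make the "all at once" property concrete, here is a toy model of a consistency group: any administrative or failure-driven fracture hits every member mirror as one unit, so no LUN is ever out of step with its siblings. The class and LUN names are hypothetical.

```python
# Toy model of a Mirror Consistency Group: operations apply to all
# member mirrors as one unit. Illustrative only, not an array API.
class ConsistencyGroup:
    def __init__(self, mirrors):
        self.mirrors = list(mirrors)
        self.state = "synchronized"

    def fracture(self):
        # A fracture (planned or caused by a link failure) is applied
        # to every member at the same moment, keeping them consistent.
        self.state = "fractured"
        return {m: "fractured" for m in self.mirrors}

cg = ConsistencyGroup(["lun_vmfs01", "lun_vmfs02", "lun_vmfs03"])
print(cg.fracture())  # all three members fractured together
```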

In addition, a Consistency Group translates directly into a VMware SRM Protection Group. At this stage the MirrorView configuration is complete; the case described here covers replication in one direction. Replication in both directions (Bi-Directional) is also possible, and the configuration is very similar. Of course, in the Bi-Directional case we are talking about two different sets of LUNs, each replicated from one array to the other (we then have two active DCs, each replicating to the other site).



Computers, always: since I got a Commodore 64 at the end of primary school, through my beloved Amiga, Linux, and an infinite number of consoles, up to today's fully virtual days. Since 2001 a Unix/Linux systems administrator, for seven years a faithful companion and protector of the Solaris system, until its sad end. In 2011 I descended into the depths of virtualization, then smoothly ascended into the clouds, and I remain there today. Professionally I work as a Systems Architect at the Polish Security Printing Works.

14 Comments

First of all, my best compliments on your articles, because they are really clear! I just read the vVNX article and I would like to know if it is possible to do the same with vVNX, that is, replication between primary and secondary over WAN combined with Site Recovery Manager.

Thank you for the compliment, very nice to read it :)
This is a very good question. VNXe and vVNX have built-in replication, and for VNXe there are SRA adapters for SRM 5.8 and 6.0. It should work, but I think the vVNX + SRM configuration is not supported. I have decided that in my free time I will check this configuration :-)

thanks for the quick and kind reply.
I still have a couple of questions:

1) vVnx replication
Ok, I'll wait for your check (and I thank you for that) but… if I have understood correctly, your doubt is that there isn't an SRA adapter for vVNX. Is that correct?

2) Storage replication and physical equipment
I know that this question is not strictly related to this article but, because I didn't find good information elsewhere, I'll try asking you:
could you briefly explain what kind of physical equipment is usually used (in enterprise environment) to replicate storage between sites?
I mean:
vnxstorage_sitea——-?——–layer2(fiber?)——–?———-vnxstorage_siteb

a) About the layer 2 link, is fiber the only option?
b) Is there some kind of switch or special device in place of the question mark?

I read something about "dark fiber" or DWDM but I'm a bit confused and I didn't find a decent schema or example of the physical devices (Brocade???)

3) EMC MirrorView alternatives
You wrote: "let us add, the cheapest"…
Could you just mention some alternatives?

Hi,
Exactly, no SRA adapter for vVNX (but vVNX is virtualized VNXe).
In EMC MirrorView all you need is Fabric license in Brocade FC (distance less than 16km) or Extended Fabric (distance greater than 16km) between data centers. In my company We have DWDM ring between all ours localizations (dark fiber is long distance FC). So my schema is VNX->Brocade FC->DWDM->Brocade FC->VNX. If you have, for example, two DC (distance 2km) with direct FC connection, you only need Fabric and MirrorView license to run replication (two storage and four switches). In VNX block storage fiber is only option, in file (VNX Unified aka Celerra) replication run through ethernet. With vVNX/VNXe replications run only over ethernet (infrastructure is therefore very simple). We are talking only about VNX or in general? MirrorView is “software” replication, next you have RecoverPoint and VPLEX. Good idea is to talk with EMC (or others) representative, they send you engineer to talk about what you need :-)
Regrds,
Piotr

Hmm, 128MB is the minimum size with older levels of FLARE and for the CX. The recommended size for VNX is 2GB, but there are really no restrictions in this matter.
I will correct my guide; 2GB is not the minimum :-)

Do you have any viable source confirming that the recommended size for VNX is 2GB? (Question: does the 2GB refer to one SP, or is it 1GB per SP?)
I checked the latest available docs and guides in the EMC knowledge base and couldn't find a recommended size at all. All docs talk about the minimum size, even the navicli reference guide :)

I spoke with a friend from EMC; the recommendation is one 2GB LUN per SP. At the same time he sent me an email with documentation referring to 128MB ;-)
Generally the size of the log does not affect performance; also, the WIL is not required for asynchronous copies.

I was recently at a VNX training, and I learned some interesting things there. Creating the WIL on a RAID group is not strictly necessary; the log is automatically created in the RAM of the SP. This log is a bitmap, one bit per block. Therefore 128MB is enough to map multiple terabytes, which means that if someone does put the WIL on a RAID group, 128MB should be entirely sufficient.

You are right. The WIL is not necessary, but the MirrorView configuration best practices recommend implementing the WIL. It is the next level of security in case of an SP failure. The log which resides in the RAM of the SPs is called the fracture log, and after an outage it can be reconstructed from the WIL.
The log is a bitmap that represents areas of the primary image called extents. It is the same logical notion that is used in database technology: an extent is a specific number of contiguous data blocks. I have allocated 128MB on a RAID group, and it's still working :) Thanks Piotr.