I've never had any experience with HP storage, so for now I can only use manuals and Google to get a basic view.

Could I ask you to clarify some details?

1. AFAIU I need to use a quorum witness inside my Management Group. Would the "Cluster" entity be created along with the creation of the MG, or vice versa?

2. AFAIU I can have only one quorum witness in the MG, and it cannot be placed on a LUN of my SV3200, so I cannot cluster the VM that will serve the NFS share. Did I get it right: if my share fails together with the server cluster node where it is located, there will be no reaction from my Network RAID cluster until its nodes lose each other? And if my share disappears, can I just create a new one and configure the MG to use it with no downtime?

3. I am considering FC8 storage with no FC switches. So basically I just need shared storage for my Hyper-V cluster. Most of my LC cables from both server nodes will be connected to the SV3200_1 node controllers. Could I put the second SV3200_2 in standby mode awaiting failover, or will I/O activity occur simultaneously on both nodes? And will I see all my physical LUNs in the management console, or will they be virtual LUNs with 2 physical ones behind each?

Re: Basic design question for 2 storage system HA configuration.

I don't understand some of your questions but will make a few comments.

For a single StoreVirtual 3200, no external quorum witness or failover manager is used. Quorum is managed internally with hardware locking that prevents split-brain. Quorum for a scale-out StoreVirtual 3200 is also managed by the controllers so no external quorum witness is needed.

You only need a quorum witness with the StoreVirtual 3200 when you have a scale-out multi-site cluster. And because of the way quorum witness works, you would only have one for a multi-site cluster.
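The reason a 2-node scale-out cluster needs a witness while a dual-controller array does not can be sketched with simple majority-vote logic. The snippet below is a conceptual illustration only (not HPE code, and `has_quorum` is a hypothetical helper): a cluster keeps serving I/O only while a strict majority of voters is reachable, so with two equal nodes the loss of either one leaves no majority, and a third vote from a witness breaks the tie.

```python
# Conceptual sketch (not HPE code): why a 2-node scale-out cluster
# needs a quorum witness. A cluster keeps serving I/O only while a
# strict majority of voters is reachable; a witness adds a third
# vote so the surviving node can win the tie after a node failure.

def has_quorum(reachable_voters: int, total_voters: int) -> bool:
    """Strict majority rule used by most quorum schemes."""
    return reachable_voters > total_voters // 2

# Two nodes, no witness: one node fails -> 1 of 2 votes -> no majority.
print(has_quorum(1, 2))   # False: cluster must stop to avoid split-brain

# Two nodes plus a witness: one node fails, but the survivor still
# sees the witness -> 2 of 3 votes -> majority, cluster stays online.
print(has_quorum(2, 3))   # True
```

In a single dual-controller SV3200 there is no such vote to take: both controllers share the same enclosure and arbitrate through internal hardware locking, which is why no external witness is involved.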

I have a blog post with a ChalkTalk that gives an overview of the StoreVirtual 3200. When we announced the new features, we said that the StoreVirtual OS 13.5 (version that will have scale-out) will be available in this calendar quarter. I'm sure that as we get closer to release 13.5, we'll have technical documentation so stay tuned.

"A management group is a collection of one or more storage systems. It is the container within which you cluster storage systems and create volumes for storage. Creating a management group is the first step in creating HP StoreVirtual storage."

You wrote:

"You only need a quorum witness with the StoreVirtual 3200 when you have a scale-out multi-site cluster"

In my head, a scale-out multi-site cluster is not just two SV3200s in one rack connected to each other directly, over an FC switch, or over a 1 Gb/s switch, right? You refer to that scheme as a "scale-out StoreVirtual 3200" that needs no quorum witness to stay in sync and work as a 2-node hardware cluster?

Sorry for the trouble with comprehension; it's probably because of my English :)

Re: Basic design question for 2 storage system HA configuration.

Sounds like you're looking at a StoreVirtual user guide for either the VSA or the StoreVirtual 4000. That is different from the StoreVirtual 3200. The StoreVirtual 3200 is a dual-controller array. The StoreVirtual 4000 and StoreVirtual VSA are based on an architecture where each node has only a single controller, and you get HA by scaling out. In the case of a 2-node StoreVirtual 4000 or VSA, you have to have a quorum witness. This can be either a lightweight VM, or we now also support an NFS share.

The StoreVirtual 3200 is NOT the same - so the manual you're looking at is not the right one. Again, you don't need a quorum witness with the StoreVirtual 3200 unless you are doing a multi-site stretched cluster.

Re: Basic design question for 2 storage system HA configuration.

You mention that you want a Fibre Channel (FC) network? Then StoreVirtual VSA is not an option, since it supports only iSCSI. So you will need to go with the SV3200, which is available with FC controllers.

Note that at this moment no direct attach is supported, so you will always need FC switches between the servers and the SV3200.

If it is only 2 servers, you might want to choose iSCSI instead, since you can use relatively cheap 2920 switches that have 4 x 10Gb ports, enough for 2 servers and the SV3200.

The SV3200 is highly available since it has 2 built-in controllers in 1 enclosure. We don't talk about nodes; nodes are in the VSA and the old P4000 world (where you needed at least 2 in a cluster). In the SV3200 we don't talk about a cluster but about a Storage Pool. Just to be clear...

I have a blog article that fully explains the deployment and configuration of the SV3200; it might help in understanding the system and the information needed to configure it.

Re: Basic design question for 2 storage system HA configuration.

HPEStorageGuy, you were definitely right. I couldn't find a direct statement in the manual, but only SV4xxx systems are mentioned there, no SV3xxx.

That clears it up, yes, thanks a lot.

So... the SV3200 is just a regular HA storage array in one enclosure with all components doubled. It can't be added to an MG, like many other SAN arrays on the market.

The SV4xxx are single-controller systems with synchronous replication over a 1G or 10G link and automatic failover via a quorum witness or FOM, but they may still be installed in one rack and used for a single VM cluster managed by vSphere or Hyper-V; it's not necessary to use them in a multi-site topology?

Re: Basic design question for 2 storage system HA configuration.

Guys, one more question on this topic, if you don't mind.

The storage vendor we worked with before recently told me that their arrays go offline after a power loss if the loss causes at least one disk in the array to fail. Even though RAID levels such as RAID 6 can tolerate the failure of 2 disks, we would still get inaccessible data until an admin arrives for manual operations. This is not how we understand fault tolerance or high availability, so I want to ask: how do HPE arrays react to such events? Would they also wait for an admin, or is it something else?

Re: Basic design question for 2 storage system HA configuration.

Apologies, I missed your follow-on question. I don't know why an array would go offline due to a drive failure. RAID 5 protects against a single drive failure, and RAID 6 and some implementations of RAID 10 protect against multiple drive failures, so I have no idea why an array would go offline in the case you described.