SnapMirror transfer rate (7 GB/hour) = slow

We're getting a transfer rate of about 6-7 GB per hour from source to destination (Austin - New Orleans) over our 45Mbps pipe. The network team reports that the pipe is only 10% utilized, with no other traffic. We called NetApp support and were asked to run some perfstats. In the end, they said the bottleneck was the source filer (FAS2040). Support said we were limited by our disks and should either add more spindles or upgrade to SAS. We currently have 1 aggregate of 22 disks (2 RAID groups, each with 11 disks), all SATA 7200rpm drives. The only thing running on this source filer is 5 virtual guests. When trying to snapmirror the volume (460 GB), it's taking a long time... much longer than anticipated.

SATA with 11 disks per RAID group?
Indeed, that is asking for trouble.
The SCSI protocol (also used by SAS disks) is designed to handle larger RAID volumes; SATA is aimed at cheap DAS storage, ideal for PCs and 2-disk RAID 0 or 1 setups.

Once the volume has been seeded (initialized), only the snapshot deltas get snapmirrored. Your snapshots should be fairly small. The initial seed mirrors your volume and its snapshots; afterwards it just replicates anything new.

What we are looking for here is how long we can run in a failed-over scenario and what to expect, in terms of timing and data transfer, when we fail back.

We're on the Gulf Coast, so failing over is not uncommon. We need to know how many days we can sustain a DR scenario and still be able to fail back within a reasonable amount of time. In other words, what's the cutoff between trying to sync back over the wire versus packing up the hardware and driving it back to do a local sync?

Without pinning down exact numbers, let's say we accumulate 1 TB worth of deltas in a week while in DR. At 7 GB/hr that would be about 143 hours to sync, plus the deltas that queue up during those 143 hours of syncing, and then the failback itself... soooo let's say 5 days to fail back once PROD comes online???
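A quick sketch of that failback arithmetic (the function name and rates here are my own illustration, not from the thread): if changes keep accruing while you sync, the backlog only drains at the transfer rate minus the delta rate, so the "deltas that queue up while syncing" stretch the estimate considerably.

```python
# Rough failback-time estimate while changes keep accruing in DR.
# Assumptions (mine, not the thread's): deltas accrue at a constant rate
# and SnapMirror drains the backlog at a constant 7 GB/hour.

def failback_hours(backlog_gb, xfer_gb_per_hr, delta_gb_per_hr):
    """Hours to drain a backlog that keeps growing while you sync.
    Converges only if the transfer rate exceeds the delta rate."""
    if xfer_gb_per_hr <= delta_gb_per_hr:
        return float("inf")  # backlog grows faster than you can sync it
    return backlog_gb / (xfer_gb_per_hr - delta_gb_per_hr)

naive = 1000 / 7                          # ignores new deltas: ~143 hours
queued = failback_hours(1000, 7, 1000 / 168)  # 1 TB/week of new change
print(round(naive), round(queued))
```

The takeaway from the formula: when the delta rate (roughly 6 GB/hr for 1 TB/week) is close to the 7 GB/hr transfer rate, the catch-up time balloons well past the naive estimate.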

That's "ok"... but not 3 weeks for 3 TB... so our cutoff would be somewhere around 3-4 days in DR mode, knowing that if we went longer we would plan on just moving the hardware back to the original source.

Even with a 45Mbps connection, 3 to 4 days for 3 TB would never happen.

You would typically get less than 45Mbps due to overhead, hops, etc., but let's run the hypothetical.

45 Mbps = 5.6 MB/s = 336 MB/min = 20 GB/hour

3000 GB (3 TB) divided by 20 GB/hr = 150 hours, which is over 6 days. Normally you would double that for real-world conditions, so figure 10 to 12 days with that type of bandwidth. Also keep in mind that if both controllers are sending data, they may be clogging up the pipe.
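The conversion above can be sketched in a few lines of Python (`link_gb_per_hour` is a hypothetical helper of mine; it assumes decimal units, 8 bits per byte, and zero protocol overhead, i.e. the theoretical best case):

```python
# Sketch of the bandwidth math above: convert a 45 Mbps link to GB/hour
# and estimate the transfer time for 3 TB. Decimal units (1 GB = 1000 MB).

def link_gb_per_hour(mbps):
    """Theoretical best case: megabits per second -> gigabytes per hour."""
    mb_per_sec = mbps / 8            # 45 Mbps ~ 5.6 MB/s
    return mb_per_sec * 3600 / 1000  # ~ 20 GB/hour

rate = link_gb_per_hour(45)   # ~20.25 GB/hour
hours = 3000 / rate           # ~148 hours, just over 6 days
real_world = 2 * hours        # thread's rule of thumb: double it, ~12 days
print(round(rate, 1), round(hours), round(real_world / 24))
```

Doubling the theoretical figure, as the reply suggests, lands right in the 10-12 day range.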
