StorSimple is a hybrid storage array from StorSimple Inc., a small company based in Santa Clara, California. In November 2012, Microsoft acquired StorSimple. The hardware portion of the solution is manufactured and distributed by Xyratex, a UK company that was acquired by Seagate in December 2013.

StorSimple basics:

2 RU rack mount unit

This is a block device that serves out storage via iSCSI only

Dual controllers running in active/passive mode

Dual hot swap power supply units

4x 1 Gbps NICs

Capacity:

Main features:

3 storage tiers:

SSD tier

SAS tier

Cloud tier

Automated tiering

Inline automated deduplication

Automated data compression

Choice of cloud provider. Major cloud providers like AWS, Azure, and Google are supported

Local and cloud snapshot. A snapshot is a point in time recovery point.

Cloud clone. A cloud clone is the ability to perform a cloud snapshot to multiple clouds in a single operation

Fast recovery. Upon restoring a volume group, only the volumes’ metadata is downloaded, and the volume is unmasked to the host(s). To the host, the volume and its files appear available and can be used immediately. As a user opens a file, StorSimple fetches its blocks from the cloud (3rd tier). This is much quicker than waiting many hours to download a 20 TB volume, for example, before any of its files can be accessed.

SSL encryption for data in transit, and AES-256 bit encryption for data at rest (in the cloud – 3rd tier)

Protection policies can be set up in StorSimple to use combinations of local and cloud snapshots to match the organization’s RPO, RTO, and DR requirements (Recovery Point Objective, Recovery Time Objective, and Disaster Recovery)

Easy administration via simple web interface

No-downtime controller failover allows firmware/software updates without service interruption.

Authentication to admin interface can integrate with LDAP directories like Active Directory.

In summary, this compact array makes it easy to take, for example, 400 TB of rarely used files and move them seamlessly to the cloud. The best part is that IT folks don’t have to redesign applications or do anything major, other than mount an iSCSI volume on an existing file server. This offloads expensive on-prem primary storage. It removes the need for secondary storage, tape backup, and off-site backup, providing a significant return on investment.

Limitations:

1 Gbps NICs (no 10 Gbps NICs) put a limit on the throughput the array can deliver. At a maximum, 3 out of the 4 NICs can be configured for iSCSI providing an aggregate bandwidth of 3 Gbps or about 10k IOPS

Maximum outbound useable bandwidth to the cloud is 100 Mbps

When planning a use case for the device, tiering to the cloud must be considered. This is desirable for a workload like a file share, but not recommended for an active workload where the entire volume is active such as SQL or Hyper-V.

Wish list items include:

A central management point for environments with several devices. Currently, each device has its own web interface.

Have you ever been in the situation where you need to run or test some Powershell commands on a server in your environment but you cannot because you don’t have the needed Powershell module? For example, you may need to run some Powershell commands against a SQL server, Exchange server, or web server, but you don’t have the SQLPS module or the WebAdministration module. One way to resolve this was to find the installation media of the application (SQL, for example), locate the Powershell module bits on it, identify, download, and install all the prerequisites, and only then use the needed PS commands.

For example, this script which truncates SQL logs uses the SQLPS module. If you don’t have it on your local machine, you need to download and install the following 3 components, in order:
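On systems with PowerShellGet (WMF 5.0 and later), a simpler option is to pull the module straight from the PowerShell Gallery instead of hunting down installation media. A sketch, noting that the SqlServer module is the Gallery successor to SQLPS:

```powershell
# Install the SQL Server module from the PowerShell Gallery
# (no installation media or application prerequisites required)
Install-Module -Name SqlServer -Scope CurrentUser

# Import it and confirm a familiar cmdlet is available
Import-Module SqlServer
Get-Command -Module SqlServer -Name Invoke-Sqlcmd
```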

In some cases you may need to checkpoint a group of related VMs at exactly the same time, such as a SharePoint farm or a multi-tiered application. The process includes pausing and saving the VMs, creating checkpoints, restarting them, and eventually restoring back to the original state. Powershell can help automate the process:
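A minimal sketch of that workflow, assuming the Hyper-V module is available and using placeholder VM names for a hypothetical SharePoint farm:

```powershell
# Placeholder names for a group of related VMs (e.g. a SharePoint farm)
$vms = 'SP-SQL01', 'SP-APP01', 'SP-WFE01'

# Save the VMs so they are all quiesced at the same point in time
Stop-VM -Name $vms -Save

# Create a checkpoint on each VM with a common label
$label = 'Farm-' + (Get-Date -Format 'yyyyMMdd-HHmmss')
Checkpoint-VM -Name $vms -SnapshotName $label

# Resume the VMs
Start-VM -Name $vms

# Later, to roll the whole farm back to that point in time:
foreach ($vm in $vms) {
    Restore-VMSnapshot -VMName $vm -Name $label -Confirm:$false
    Start-VM -Name $vm
}
```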

In Windows Server 2012 R2 a VM checkpoint in Hyper-V captures a point in time state of a virtual machine’s memory, CPU, system state, and disk contents.

Checkpoints were called VM Snapshots in Server 2012, but that was confusing because the term was too close to VSS volume snapshots, for example, and the feature was also known as Checkpoint in VMM.

When a checkpoint is invoked, an .avhd file is created for each of the VM’s .vhd(x) files, and the .vhd(x) files become read-only. All changes are then written to the .avhd files, preserving the state of the .vhd(x) files at the point of creating the checkpoint. If the VM attempts to access a block, Hyper-V will look for it in the .avhd file first; if not found, it will look in the parent .vhd(x) file. This causes performance degradation, particularly if a number of nested checkpoints are created, like:

In this example, the disk files can be observed in the VM folder:

where all the files are read-only except the latest .avhd file at the bottom. Only one checkpoint .avhd file is read-write at any given point in time. Each checkpoint has one parent checkpoint. In Powershell, we can run the cmdlet:

Get-VMSnapshot -VMName vHost17

to identify each checkpoint’s parent checkpoint:

Removing a checkpoint merges its content (blocks) with its parent checkpoint. Checkpoints do not have to be removed in order. In this example, we can remove the 7:37:31 checkpoint using this Powershell cmdlet or from the Hyper-V Manager GUI:
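A sketch of removing that checkpoint with Powershell, matching it by the timestamp in its name:

```powershell
# Remove the 7:37:31 checkpoint; Hyper-V merges its .avhd blocks into the parent
Get-VMSnapshot -VMName vHost17 |
    Where-Object { $_.Name -like '*7:37:31*' } |
    Remove-VMSnapshot -Confirm:$false
```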

This maps a drive to the Azure share, copies the Veeam 8 iso to c:\sandbox on the VM, and mounts the iso:
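A sketch of those steps; the storage account name, key, share, and iso file name below are placeholders:

```powershell
# Map a drive to the Azure File share (non-persistent)
# 'mystorageacct' and the key are placeholders for your storage account
net use z: \\mystorageacct.file.core.windows.net\installers /u:mystorageacct <storage-account-key>

# Copy the Veeam 8 iso to the VM and mount it
Copy-Item -Path z:\VeeamBackupReplication_8.iso -Destination c:\sandbox
Mount-DiskImage -ImagePath c:\sandbox\VeeamBackupReplication_8.iso
```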

These 2 drive mappings are not persistent (they will not survive a VM reboot). This is OK since we only need them during setup.

6. Setup a persistent drive mapping to your Azure Storage account:

There are a number of ways to make Azure storage available to the VM:

Attach a number of local VHD disks. The problem with this approach is that the maximum we can use is 16 TB, and we’ll have to use an expensive A4-sized VM.

Attach a number of Azure File shares. There are a number of issues with this approach:
The shares are not persistent, although we can use the CMDKEY tool as a workaround.
There’s a maximum of 5 TB capacity per share, and a maximum of 1 TB per file.

Use a 3rd-party tool such as CloudBerry Drive to make Azure block blob storage available to the Azure VM. This approach has the 500 TB Storage account limit, which is adequate for use with Veeam Cloud Connect. Microsoft suggests that the maximum NTFS volume size is between 16 TB and 256 TB on Server 2012 R2, depending on allocation unit size. Using this tool we get a 128 TB disk, suggesting an allocation unit size of 32 KB.

CloudBerry Drive has proved not to be a good fit for use with Veeam Cloud Connect because it caches files locally. For more details see this post. So, I recommend skipping the CloudBerry Drive installation and simply attaching up to 16x 1 TB drives to the Azure VM (you may need to change the VM size all the way up to Standard A4 to get the maximum of 16 disks), and setting them up as a simple disk in the VM using Storage Spaces.
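The CMDKEY workaround mentioned above caches the storage account credentials so a share mapping survives reboots; a sketch with placeholder account name, key, and share:

```powershell
# Store the storage account credentials so Windows can re-authenticate after a reboot
cmdkey /add:mystorageacct.file.core.windows.net /user:mystorageacct /pass:<storage-account-key>

# The mapping itself can then be made persistent
net use y: \\mystorageacct.file.core.windows.net\veeamrepo /persistent:yes
```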

– Under the Mapped Drives tab, click Add, type-in a volume label, click the button next to Path, and pick a Container. This is the container we created in step 3 above:

– You can see the available volumes in Windows explorer or by running this command in Powershell:

Get-Volume | FT -AutoSize

7. Add VHD disks to the VM for the Veeam Global deduplication cache:

We will be setting up Veeam’s WAN Accelerator for Global Deduplication. This uses a folder for global deduplication cache. We’ll add VHD disks to the VM for that cache folder to have sufficient disk space and IOPS for the cache.

Highlight the Azure Veeam VM, click Attach at the bottom, and click Attach empty disk. Enter a name for the disk VHD file, and a size. The maximum size allowed is 1023 GB (as of September 2014). Repeat this process to add as many disks as allowed by your VM size. For example, an A1 VM can have a maximum of 2 disks, A2 max is 4, A3 max is 8, and A4 max is 16 disks.
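The same disks can also be attached with the classic Azure PowerShell (Service Management) cmdlets rather than the portal; a sketch assuming hypothetical cloud service and VM names:

```powershell
# Attach four empty 1023 GB data disks to the VM
# 'VeeamCloud' / 'VeeamVM' are placeholder cloud service / VM names
$vm = Get-AzureVM -ServiceName 'VeeamCloud' -Name 'VeeamVM'
foreach ($lun in 0..3) {
    $vm = $vm | Add-AzureDataDisk -CreateNew -DiskSizeInGB 1023 `
            -DiskLabel "Data$lun" -LUN $lun
}
$vm | Update-AzureVM
```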

In the Azure Veeam VM, I created a 2TB disk using Storage Spaces on the VM as shown:

This is set up as a simple disk for maximum disk space and IOPS, but it can be set up as mirrored disks as well.
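A sketch of that Storage Spaces setup, assuming the empty data disks attached in step 7 are the only poolable disks on the VM (use -ResiliencySettingName Mirror instead of Simple for mirroring):

```powershell
# Pool every disk that is eligible for pooling (the empty attached data disks)
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName 'VeeamPool' `
    -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName `
    -PhysicalDisks $disks

# Simple (striped) virtual disk for maximum capacity and IOPS
New-VirtualDisk -StoragePoolFriendlyName 'VeeamPool' -FriendlyName 'VeeamData' `
    -ResiliencySettingName Simple -UseMaximumSize

# Initialize, partition as E:, and format
Get-VirtualDisk -FriendlyName 'VeeamData' | Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -DriveLetter E -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel 'VeeamData'
```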

If you’re using a VM smaller than A2 with fewer than 2 CPU cores, you’ll receive a warning during Veeam installation. This is safe to ignore.

Select to install the Veeam Powershell SDK

Accept the default installation location under c:\program files\Veeam\backup and replication

Click “Do not use defaults and configure product manually”

Change location of the Guest Catalog folder and vPower cache folder:

Veeam will automatically install SQL 2012 SP1 express on the VM:

10. Add an endpoint to the VM for TCP port 6180:

While Veeam is installing on the VM, let’s add an endpoint for Veeam Cloud Connect. In the Azure Management Portal, click on the VM, click Endpoints, and click Add at the bottom. Add a Stand Alone Endpoint:
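The same endpoint can be added with the classic Azure PowerShell (Service Management) cmdlets; a sketch with the same placeholder cloud service and VM names:

```powershell
# Add a stand-alone TCP endpoint for Veeam Cloud Connect on port 6180
Get-AzureVM -ServiceName 'VeeamCloud' -Name 'VeeamVM' |
    Add-AzureEndpoint -Name 'VeeamCloudConnect' -Protocol tcp `
        -LocalPort 6180 -PublicPort 6180 |
    Update-AzureVM
```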

Select the e: drive we created in step 7 above, enter the maximum amount of disk space available, and finish:

14. Add a certificate:

This is a one-time setup. An SSL certificate is needed for authentication and tunnel encryption.

Note: This step requires a Veeam Cloud Connect license. Without it, the Cloud Connect Infrastructure node will be hidden in the Veeam Backup and Replication 8 GUI.

For production I recommend using a certificate from a public Certificate Authority. For this exercise I will generate a self-signed certificate. In Cloud Connect Infrastructure, click Manage Certificates:

Click Generate new certificate

Type in a friendly name

Click “Copy to clipboard” and save this information. We will need the thumbprint when connecting from the client/tenant side. This is only needed because we’re using a self-signed certificate; a certificate issued by a public Certificate Authority would not require the thumbprint.

15. Add Cloud Gateway:

This is also a one-time step. Under Cloud Connect Infrastructure, click Add Cloud Gateway:

Pick a server,

Use the Public IP of the Azure VM and the default port 6180.

Select its NIC,

and Finish.

16. Add a user/tenant:

A user may represent a Cloud Connect client/tenant or a corporate site. Each user is associated with their own repository under the main repository, and a lease. The lease corresponds to the term of the agreement between the backup client and the backup cloud provider.

In the Resources tab, click Add, type a name for the user/client repository, select a Backup Repository from the drop-down menu, check Enable WAN acceleration, and pick the WAN accelerator from its drop-down menu.

From the client/corporate side, in Veeam Backup and Replication 8, under Backup Infrastructure, click the Service Providers node, and click Add a Service Provider:

Enter the IP address or DNS name of the Azure Veeam VM we created above, and click Next:

Since we used a self-signed certificate, we need to provide the thumbprint from step 14 above. Click Verify:

Notice that the certificate has been validated. The certificate will be used for both authentication and tunnel encryption. Click Add and add the user name and password created in step 16 above:

Click Next, and Finish.

Finally, create a new backup copy job using the Cloud Repository as a backup target:

StorSimple 8000 series comes with exciting new features. One of them is the ability to have a virtual device. Microsoft offers the StorSimple 1100 virtual appliance as an Azure VM that can connect to a StorSimple 8000 series array’s Azure Storage account and make its data available in case of a physical array failure. There’s no cost for the StorSimple 1100 virtual device, but the VM compute and other IaaS costs are billed separately. So, the recommendation is to keep the StorSimple 1100 Azure VM powered up only when needed.

To setup a StorSimple 1100 virtual appliance:

1. In the Azure Management portal, click StorSimple on the left, then click on your StorSimple Manager service, click the Devices link on top, and click Create Virtual Device at the bottom:

Enter a name for the new StorSimple 1100 virtual device. Pick a Virtual Network and Subnet. Pick a Storage account for the new virtual device, or create a new one. Check the “I understand that Microsoft can access the data stored on my virtual device” box:

Notes:

A Virtual Network and Subnet must be set up prior to setting up a StorSimple 1100 virtual appliance.

It’s recommended to use a separate Storage account for your StorSimple 1100 virtual appliance. If you use the same Storage account as your StorSimple 8000 series array, the data stored there will be subtracted from your array’s Storage Entitlement (more costly than a separate Storage account)

The “I understand that Microsoft can access the data stored on my virtual device” checkbox is a reminder that data is encrypted at rest only, that is, when it resides in the Storage account associated with your StorSimple 8000 series array. Once this data is accessed and used by a StorSimple 1100 VM, it is not encrypted. Stored data is encrypted; compute data is not. In other words, if data in your Storage account is accessed by unauthorized individuals, it’s still safe due to AES-256 bit at-rest encryption. However, if data in your StorSimple 1100 VM is accessed by unauthorized individuals, it’s no longer safe, since compute data cannot be encrypted.

This can take up to 15 minutes.

2. Back in the Devices page, we now have the newly provisioned StorSimple 1100 device:

To use a StorSimple 1100 virtual appliance in case of a StorSimple 8000 device failure:

1. In the Azure Management portal, click StorSimple on the left, then click on your StorSimple Manager service, click on your StorSimple 8000 device, and click Failover at the bottom:

2. Select one or more Volume Containers:

Note: It’s recommended to keep the total volume capacity in any given Volume Container under 30 TB, since a StorSimple 1100 virtual device has a 30 TB maximum capacity.

3. Select the target StorSimple 1100 virtual device to fail over to:

Check the summary screen, and check the box at the bottom:

This has created a failover job. Jobs can be viewed in the StorSimple Manager service page under the Jobs link on top.

Once this failover job is complete, we can see the Volume Container that used to be on the StorSimple 8000 series array, now on the SVA:

So far we’ve made the StorSimple 8000 series array’s volumes available on the SVA we just spun up. The volumes in this Volume Container on the SVA will have the same ACRs (Access Control Records) as the original volumes on the StorSimple 8000 series array, which makes the volumes available to the original on-prem servers.

This is how the topology looks before the failover:

and here is how it looks after the failover:

Considerations and limitations:

In this failover scenario, we have to mount the new volumes presented from the SVA onto the file server and recreate the SMB file shares.
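Once the SVA volumes are mounted on the file server, recreating the shares can be scripted; the share name, path, and group below are placeholders:

```powershell
# Recreate an SMB share on a volume presented by the SVA
New-SmbShare -Name 'Projects' -Path 'F:\Projects' -FullAccess 'CONTOSO\Domain Users'

# Verify the share
Get-SmbShare -Name 'Projects'
```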

The SVA has a maximum capacity of 30 TB, so volumes larger than 30 TB cannot be recovered without a replacement physical StorSimple 8000 series array.

More than one SVA may be needed to provide enough storage capacity for all volumes that need to be recovered.

The SVA has one vNIC; MPIO is not supported.

The SVA does not have controller redundancy, so maintenance that requires an SVA reboot or downtime will cause volumes to be unavailable.

Data on the SVA is accessible to Microsoft. If the SVA is compromised, data is unprotected.

Volume access will be slower than usual due to WAN link latency.

Microsoft suggests taking it a step further and creating new Azure VMs as file servers:

This recovery scenario suffers from all the above limitations in addition to:

Need to remap drive letters on all client computers for all recovered volumes

Need to reconfigure all applications that use UNC paths pointing to the original servers so they point to the new VMs

Azure virtual file servers will have to join the domain

Additional time, costs, and complexity of creating new Azure virtual file servers

Some of these steps may not be needed if a namespace is used to access network drives.