Archive for October, 2014

In Windows 7 and Server 2008 virtual machines and above, expanding the boot/system disk is a simple matter of expanding it in the hypervisor, then expanding it in the guest OS in the Computer Management/Disk Management GUI. In Windows XP/Server 2003 guest VMs, expanding the boot/system disk is not available via native Windows tools.

To use it: download the attached file, unblock it, adjust the PowerShell execution policy as needed, run the script to load the function into memory, then view the function's detailed help and examples:
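Assuming the script file is named Expand-C.ps1 (a placeholder; use whatever name the attached file actually has), loading the function and pulling up its help looks like this:

```powershell
# Dot-source the script to load the Expand-C function into the current session
. .\Expand-C.ps1

# Display the function's detailed help and examples
Get-Help Expand-C -Full
```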

Note: The script will shut down the VM during this process.

For example, you shut down the VM and expand the disk in the Hyper-V Manager GUI:

In the VM, you cannot expand the boot/system partition with native Windows 2003/XP tools:

You can do that with this script. On the Hyper-V host where the VM is running, run:

Expand-C -VMName MyVM1 -Size 17GB -BackupPath "d:\save"

The script will:

- Back up the VHDX file before expanding it, if 'BackupPath' is used
- Convert the file from VHD to VHDX format if it was a VHD file; in this case you'll need to delete the old .vhd file manually
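The core of that workflow can be sketched with native Hyper-V cmdlets (a simplified sketch only; paths are hypothetical, and the actual script also extends the partition inside the guest, which is the part native XP/2003 tools cannot do):

```powershell
# Simplified sketch of what the script automates (Hyper-V module, 2012 R2)
Stop-VM -Name MyVM1                                       # VM must be off

Copy-Item 'D:\VMs\MyVM1.vhd' 'D:\save\'                   # backup, if -BackupPath was given

Convert-VHD -Path 'D:\VMs\MyVM1.vhd' `
            -DestinationPath 'D:\VMs\MyVM1.vhdx'          # VHD -> VHDX, if needed

Resize-VHD -Path 'D:\VMs\MyVM1.vhdx' -SizeBytes 17GB      # expand to the requested size
```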

If the server ever had the GUI installed, then the bits are there (under C:\Windows\WinSxS by default). If this is a Core install that has never had a GUI, then the bits are likely to be missing as well.

Have you ever been in the situation where you have a dynamic VHDX disk, you cleaned up some space by deleting unneeded files, but the VHDX file size on the underlying disk remains the same? Take this example: I started with this test disk:
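Freeing space inside the guest does not shrink the file by itself; the VHDX has to be compacted. A minimal sketch with the native Optimize-VHD cmdlet (the path is hypothetical; the disk must be detached from the VM, and Full mode requires mounting it read-only):

```powershell
# Mount the VHDX read-only, compact it, then dismount
Mount-VHD -Path 'D:\VMs\Test.vhdx' -ReadOnly
Optimize-VHD -Path 'D:\VMs\Test.vhdx' -Mode Full
Dismount-VHD -Path 'D:\VMs\Test.vhdx'
```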

If using 32 Standard (spinning SAS) disks, set as a 16-column single simple storage space for maximum space and performance, we get a 32 TB data disk that delivers 960 MB/s throughput or 8k IOPS (256 KB block size).

32x 1TB GRS Standard (HDD) Page Blobs cost $2,621/month

32x 1TB LRS Standard (HDD) Page Blobs cost $1,638/month

If using 32 Premium (SSD) disks, set as a 16-column single simple storage space for maximum space and performance, we get a 32 TB data disk that delivers 3,200 MB/s throughput or 80k IOPS (256 KB block size). Premium SSD storage is available as LRS only. The cost for 32x 1TB disks is $2,379/month.
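Either way, the 32 data disks can be pooled and striped with Storage Spaces along these lines (pool and disk names are placeholders; exact parameters depend on the VM's storage subsystem):

```powershell
# Pool all poolable data disks, then carve a single 16-column simple (striped) space
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName 'VeeamPool' `
                -StorageSubSystemFriendlyName '*Storage*' `
                -PhysicalDisks $disks
New-VirtualDisk -StoragePoolFriendlyName 'VeeamPool' `
                -FriendlyName 'VeeamData' `
                -ResiliencySettingName Simple `
                -NumberOfColumns 16 `
                -UseMaximumSize
```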

If using a D14 size VM with Cloud Connect, running Veeam Backup and Replication 8, the WAN Accelerator, and the Cloud Connect Gateway on the same VM:

The 16 CPU cores provide more than adequate processing power for the WAN Accelerator, which is by far the most CPU-intensive component here. They're also more than sufficient for the SQL Server 2012 Express instance used by Veeam 8 on the same VM.

112 GB of RAM is overkill here in my opinion; 32 GB should be plenty.

The 800 GB of non-persistent temporary SSD storage is perfect for the WAN Accelerator global cache, which must reside on very fast disk. The only problem is that it's non-persistent, but this can be overcome by automation/scripting to maintain a copy of the WAN Accelerator folder on the 'E' drive (the 32 TB data disk) or even on an Azure SMB share.
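One possible sketch of that automation, assuming the cache lives under D:\VeeamWAN on the temporary drive and the copy goes to the E: data disk (both paths hypothetical), is a scheduled Robocopy mirror:

```powershell
# Periodically mirror the WAN Accelerator global cache to persistent storage;
# after the VM is reprovisioned, run the same command in reverse to reseed D:
robocopy 'D:\VeeamWAN' 'E:\VeeamWAN-Copy' /MIR /R:1 /W:1
```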

In my opinion, a cost-benefit analysis of Premium SSD storage for the 32 TB data disk versus Standard SAS storage shows that Standard storage is still the way to go for Veeam Cloud Connect on Azure. It's $741/month cheaper (31% less) and delivers 960 MB/s throughput or 8k IOPS at 256 KB block size, which is more than sufficient for Veeam.

An Azure subscription can have up to 50 Storage Accounts (as of September 2014; raised to 100 as of January 2015) at 500 TB capacity each. Block blob storage is very cheap. For example, the Azure price calculator shows that 100 TB of LRS (Locally Redundant Storage) will cost a little over $28k/year. LRS maintains 3 copies of the data in a single Azure data center.

However, taking advantage of that vast cheap reliable block blob storage is a bit tricky.

Veeam accepts the following types of storage when adding a new Backup Repository:

I have examined the following scenarios of setting up Veeam Backup Repositories on an Azure VM:

1. Locally attached VHD files:

In this scenario, I attached the maximum of 2 VHD data disks to a Basic A1 Azure VM, and set them up as a simple (striped) volume for maximum space and IOPS. This provides a 2 TB volume and 600 IOPS (2 disks at 300 IOPS each for the Basic tier), according to Virtual Machine and Cloud Service Sizes for Azure. Using 64 KB block size:

Although this option provides adequate bandwidth, its main problem is the 1 TB maximum VHD file size, which means a single backup job cannot exceed 1 TB; that's quite limiting in large environments.

SysPrep.exe is a tool located in the C:\Windows\System32\Sysprep folder. It can be used to "generalize" a Windows installation for automated deployment, instead of doing every fresh install from the ISO media.
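A typical invocation (shown as a sketch; the exact switches depend on the deployment scenario) generalizes the image and shuts the machine down so it can be captured:

```powershell
# Generalize the installation, set it to boot into OOBE, and shut down
C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown
```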

I’ve done a fresh install of Windows Technical Preview build 9879, and attempted to install RSAT normally. That just worked:

In another fresh install of WinTP 9879 I tried using DISM:

That completed successfully as well.

Some have reported errors attempting to install RSAT for Windows TP. I downloaded the latest Windows TP ISO and did a fresh install as a Gen 2 virtual machine on Hyper-V 2012 R2, then downloaded and installed RSAT without any issue; I was not able to replicate the problem. However, here's another way to try to install it:
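One common workaround (the paths and the .msu filename below are placeholders) is to extract the .cab from the downloaded RSAT .msu and add it with DISM:

```powershell
# Extract the update package, then install the inner .cab with DISM
mkdir C:\Temp\RSAT
expand -f:* C:\Temp\WindowsTP-RSAT.msu C:\Temp\RSAT
dism /Online /Add-Package /PackagePath:C:\Temp\RSAT\WindowsTP-RSAT.cab
```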

CloudBerry Drive Server for Windows Server is a tool by CloudBerry that makes cloud storage available on a server as a drive letter. I have examined 10 different tools to perform this task, and CloudBerry Drive provided the most functionality. The use case I was after is the ability to upload large files from on-prem servers to Azure VMs. Specifically, I'm testing Veeam Cloud Connect with Azure, which allows for off-site backup to Azure. The backup files are multi-TB each.

However, digging deeper into how CloudBerry drive works showed that CloudBerry Drive caches each received file to a local folder on the VM. According to CloudBerry support this is a must and cannot be turned off. This poses several problems:

It defeats the purpose of using CloudBerry in the first place. An Azure VM (as of 10/2/2014) can have a maximum of 16 TB of local storage, which is implemented as 16x 1TB VHD files (page blobs). The point of using CloudBerry Drive is to be able to access Azure block blob storage, which has a 500 TB maximum per storage account.

It puts a file size limit equivalent to the maximum amount of space on the local drive used for CloudBerry caching.

CloudBerry Drive then takes the uploaded file from the cache folder and copies it to the Azure block blob storage account.

This makes the destination file in Azure block blob storage locked and unavailable for many hours during that 2nd copy process. For example, if the Veeam cloud backup job successfully backed up 10 out of 12 VMs, and we retry the remaining 2 VMs, the job will fail since the destination file in Azure is locked by CloudBerry.

The 2nd copy consumes a great amount of read IOPS from the local drive (page blobs) and write IOPS to the destination block blob storage. This makes any other task on the VM, such as another backup job, practically impossible, even if that job uses other, unlocked files, because CloudBerry is using up all available IOPS on the VM for hours or even days.

The copy unnecessarily incurs transaction, IOPS, and bandwidth charges on the Azure VM.

There are better ways to copy data within the same Azure Storage account that are much more efficient and much less costly, such as instantaneous shadow copies.
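For example, with the Azure PowerShell module of that era, a server-side blob copy stays entirely within the storage service and never routes the data through the VM (the account, container, and blob names below are placeholders):

```powershell
# Start an asynchronous server-side copy within the same storage account
$ctx = New-AzureStorageContext -StorageAccountName 'myaccount' -StorageAccountKey $key
Start-AzureStorageBlobCopy -SrcContainer 'backups' -SrcBlob 'job1.vbk' `
                           -DestContainer 'backups' -DestBlob 'job1-copy.vbk' `
                           -Context $ctx
```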

Summary:

CloudBerry Drive Server for Windows Server caches files locally which makes it not suitable for use on Azure VMs.