Azure Files has a maximum capacity of 5TB per share, and a maximum of 1TB per file.

Use a third-party tool such as CloudBerry Drive to make Azure block blob storage available to the Azure VM. This approach is subject to the 500TB storage account limit, which is adequate for use with Veeam Cloud Connect. Microsoft suggests that the maximum NTFS volume size on Server 2012 R2 is between 16TB and 256TB, depending on allocation unit size. Using this tool we get a 128TB disk, suggesting an allocation unit size of 32KB.
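That 128TB figure follows from NTFS's limit of 2^32 clusters per volume: 2^32 clusters × 32KB per cluster = 128TB.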

– Under the Mapped Drives tab, click Add, type in a volume label, click the button next to Path, and pick a container. This is the container we created in step 3 above:

– You can see the available volumes in Windows Explorer or by running this command in PowerShell:

Get-Volume | FT -AutoSize

Add VHD disks to the VM for the CloudBerry Drive cache:

We’ll add VHD disks to the VM so that the cache folder has sufficient disk space and IOPS.

Highlight the Azure VM, click Attach at the bottom, and click Attach empty disk. Enter a name for the disk VHD file and a size. The maximum size allowed is 1,023 GB (as of September 2014). Repeat this process to add as many disks as your VM size allows. For example, an A1 VM can have a maximum of 2 data disks, an A2 can have 4, an A3 can have 8, and an A4 can have 16.
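If you prefer to script the disk attachments, here is a minimal sketch using the classic Azure PowerShell cmdlets; the cloud service name, VM name, and disk count are placeholders:

# Attach four empty 1,023 GB data disks to an existing VM (names are hypothetical)
for ($lun = 0; $lun -lt 4; $lun++) {
    Get-AzureVM -ServiceName "myCloudService" -Name "myVM" |
        Add-AzureDataDisk -CreateNew -DiskSizeInGB 1023 -DiskLabel "cache$lun" -LUN $lun |
        Update-AzureVM
}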

In the Azure VM, I created a 2TB disk using Storage Spaces, as shown:

This is set up as a simple disk for maximum disk space and IOPS, but it can be set up as mirrored disks as well.
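The same Storage Spaces setup can be scripted. A minimal sketch, assuming the pool and disk names are placeholders and all attached data disks are poolable:

# Pool the attached data disks and carve out one simple (striped) virtual disk
$ss = Get-StorageSubSystem | Where-Object FriendlyName -like "*Storage Spaces*"
New-StoragePool -FriendlyName "CachePool" -StorageSubSystemFriendlyName $ss.FriendlyName -PhysicalDisks (Get-PhysicalDisk -CanPool $true)
New-VirtualDisk -StoragePoolFriendlyName "CachePool" -FriendlyName "CacheDisk" -ResiliencySettingName Simple -UseMaximumSize
Get-VirtualDisk -FriendlyName "CacheDisk" | Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "CBCache"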

Create a folder for the CloudBerry Drive cache on the new disk, and configure CloudBerry Drive to use it:

It’s important to have enough disk space on the drive where CloudBerry Drive caching occurs. The available space on the caching drive limits the size of any single file that can be handled through CloudBerry Drive, which can be far less than the 128TB presented by a CloudBerry Drive volume with an Azure block blob back end.

One of the frustrating limitations I’ve come across when using Azure Virtual Machines is the limited amount of disk space you can have. This limitation is a particular hurdle when considering Azure storage as a backup target. The maximum amount of disk space you can have on an Azure VM is 16TB. This limitation stems from 3 issues:

Maximum 1,023 GB per disk. Azure VM disks are stored as page blobs in Azure storage accounts, and a page blob has a maximum size of 1TB.

Maximum 2,040 GB per disk. Although this limitation is superseded by the 1TB page blob limitation, it’s worth noting. It stems from the fact that Azure VMs must use the VHD disk format, which has a maximum of 2TB per disk, even though the VM operating system, such as Server 2012 R2, may support disks as large as 64TB each.

Maximum 16 data disks per VM. This is the limit that actually sets the ceiling: 16 disks at 1,023 GB each works out to roughly 16TB.

Although we can store large files in Azure as block blobs, many backup applications require a VM in the cloud to do WAN acceleration, multi-site deduplication, and similar functions.

Another annoyance is the artificial coupling of VM CPU, memory, and disk space resources in Azure VMs. For example, to have a VM with the maximum allowed 16TB of disk space, one must use one of the 5 large VM sizes:

Back to the Azure VM, I created a disk pool, 1 vDisk, and 1 volume using all 16TB of space, formatted with a 64KB allocation unit size:

The 16 disks before and after setting them up as a single vDisk using Storage Spaces
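Scripted, the formatting step with a 64KB allocation unit size would look something like this (the drive letter and label are placeholders):

# Format the 16TB volume with 64KB allocation units (65,536 bytes)
Format-Volume -DriveLetter F -FileSystem NTFS -AllocationUnitSize 65536 -NewFileSystemLabel "Backup16TB"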

There may be changes coming down the pipeline that would allow up to 30TB per VM or more. For example, the StorSimple 1100 virtual appliance, which is an Azure VM associated with a StorSimple 8000 series storage array, has a maximum capacity of 30TB.

AzCopy is a command line tool from Microsoft that allows for easy uploads/downloads to/from Azure storage. In addition to offering a non-programmatic way of transferring files to/from Azure storage, it provides the flexibility of choosing between page and block blobs in Azure blob storage. Page blobs have a maximum size of 1TB; Azure VMs’ VHD files, for example, are implemented as page blobs and suffer from that limitation. AzCopy also offers an important feature: the ability to resume timed-out or interrupted uploads/downloads.
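For example, a recursive upload to block blobs might look like this, assuming an AzCopy build with the /Source and /Dest named parameters; the account, container, and paths are placeholders:

AzCopy /Source:D:\Backups /Dest:https://myaccount.blob.core.windows.net/backups /DestKey:<storage-account-key> /S /BlobType:block

/S recurses into subfolders, and /BlobType picks block or page blobs at the destination.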

This script will continue to run the same AzCopy.exe command until the “Transfer failed” count in the output is down to zero.
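A minimal PowerShell sketch of such a retry loop; the AzCopy path, source, destination, and storage key are placeholders:

# Rerun AzCopy until its summary reports zero failed transfers
$azCopy = "C:\Program Files (x86)\Microsoft SDKs\Azure\AzCopy\AzCopy.exe"   # adjust to your install path
do {
    $output = & $azCopy /Source:D:\Backups /Dest:https://myaccount.blob.core.windows.net/backups /DestKey:$storageKey /S /Y
    # The summary includes a line such as "Transfer failed: 3"
    $match = $output | Select-String 'Transfer failed:\s*(\d+)' | Select-Object -First 1
    $failed = if ($match) { [int]$match.Matches[0].Groups[1].Value } else { 0 }
} while ($failed -gt 0)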

The AzCopy tool stores 2 journal files under %LocalAppData%\Microsoft\Azure\AzCopy, for example the C:\Users\samb\AppData\Local\Microsoft\Azure\AzCopy folder. It checks them on startup to detect whether a previous transfer did not complete, and resumes it.
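If you want the journal kept somewhere other than the default, AzCopy’s /Z option points it at a different journal folder, for example:

AzCopy /Source:D:\Backups /Dest:https://myaccount.blob.core.windows.net/backups /DestKey:<storage-account-key> /S /Z:D:\AzCopyJournal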

To be able to use PowerShell to run commands, look up file and container (folder) lists, and transfer files from your computer to your Azure storage account, you need some initial setup. This is a one-time setup that uses certificates for authentication.

Azure PowerShell cmdlets are part of the Azure PS module. Follow this link to get it.

To get started, we need a certificate. A self-signed certificate is OK.

Follow this link to download and install the SDK for Windows 8. We need this SDK because it contains the makecert.exe tool that we’ll use in the next step. Alternatively, you can make and export a certificate manually in certmgr.msc.
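A typical makecert invocation for an Azure management certificate looks like this; $certName is a name of your choosing, and the .cer file mentioned in the next step is named after it:

# Run this from the SDK's bin\x64 folder so the .cer file lands there
$certName = "AzureMgmtCert"
& "C:\Program Files (x86)\Windows Kits\8.0\bin\x64\makecert.exe" -sky exchange -r -n "CN=$certName" -pe -a sha1 -len 2048 -ss My "$certName.cer"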

Next, upload the resulting .cer file to your Azure account. In the Azure Management Interface, click Settings on the bottom left, then the Management Certificates tab, then Upload a management certificate. Browse to the “C:\Program Files (x86)\Windows Kits\8.0\bin\x64” folder and pick the .cer file created in step 2 above. It will be named after the $certName value above.
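With the certificate uploaded, point the Azure PowerShell module at your subscription. A minimal sketch; the subscription name and ID are placeholders, and the certificate subject matches the $certName used earlier:

# Associate the subscription with the management certificate, then test access
$cert = Get-ChildItem Cert:\CurrentUser\My | Where-Object Subject -eq "CN=AzureMgmtCert" | Select-Object -First 1
Set-AzureSubscription -SubscriptionName "MySubscription" -SubscriptionId "<subscription-id>" -Certificate $cert
Select-AzureSubscription -SubscriptionName "MySubscription"
Get-AzureStorageAccount   # lists your storage accounts if authentication works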