“Defcon zero. An entire Azure data center has been wiped out, billions of files have been lost.”

But not to worry, Azure will just fail over to another data center right? It’s automatic and totally invisible.

Well, not entirely. A failover doesn’t happen instantly, so there’ll certainly be some downtime. There may also be local connectivity issues outside of Microsoft’s control that prevent you from connecting. In these circumstances you might want to be able to access your replicated data until things are working properly again.

In December 2013 Microsoft previewed read-access geo-redundant replication for storage accounts, which went Generally Available in May 2014. This means blobs, tables and queues are available for read access from a secondary endpoint at any time. Fortunately, third-party tooling and configuration scripts won’t need a complete rewrite to support it: the only thing you really need to do is use a different host for API requests.

Twice the bandwidth

Those who expect high performance from their Azure storage may already be limiting reporting and other non-production operations. An additional benefit of the replicated data is that you can divert all lower-priority traffic to it, reducing the burden on the primary. Depending on the boldness of your assumptions, you could double the throughput to your storage by routing unessential, ad hoc requests to the secondary endpoint.

Configuration

Replication can be configured in the Azure Management Portal to one of three modes: off, on, and on with read access. Officially these three modes are called:

Locally redundant. Data is replicated three times within the same data center.

Geo redundant. Replication is made to an entirely separate data center, many miles away.

Read access geo redundant. Replication is geo redundant and an additional second API endpoint is available for use at any time, not just after an emergency failover.

What can’t be configured is the choice of secondary location. Each data center is ‘paired’ with another – for example, North Europe is paired with West Europe, and West US is paired with East US. This also keeps the data within the same geo-political boundary (the exception being the new region in Brazil, whose secondary is in South Central US).

Behavioural matters

In a simple usage scenario, it’s unlikely you’ll run into issues with consistency between your primary and secondary storage. For small files you might only see a latency of a few seconds. Whilst Microsoft has not issued an SLA guarantee at this time, it states that replication should not be more than 15 minutes behind. For reporting purposes, you might not care about such a small lag. In any case, you can query the secondary endpoint to find out when the last synchronisation checkpoint was made.
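The synchronisation checkpoint is exposed by the Get Blob Service Stats REST operation, which is only available against the secondary endpoint. Here’s a sketch of parsing its XML response; the payload below is illustrative (the timestamp is invented), though its shape matches the documented response body:

```python
import xml.etree.ElementTree as ET
from email.utils import parsedate_to_datetime

# Illustrative response body from Get Blob Service Stats.
sample = """<?xml version="1.0" encoding="utf-8"?>
<StorageServiceStats>
  <GeoReplication>
    <Status>live</Status>
    <LastSyncTime>Wed, 28 May 2014 10:15:00 GMT</LastSyncTime>
  </GeoReplication>
</StorageServiceStats>"""

root = ET.fromstring(sample)
# Status is 'live' when the secondary is readable and up to date enough to use.
status = root.findtext("GeoReplication/Status")
# LastSyncTime is an RFC 1123 date; everything written before it is
# guaranteed to have been replicated.
last_sync = parsedate_to_datetime(root.findtext("GeoReplication/LastSyncTime"))
print(status, last_sync)
```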

It’s worth pointing out that transactions may not be replicated in the order that they were made. The only operations guaranteed to be replayed in order are ones relating to specific blobs, table partition keys, or individual queues. Replication does respect the atomicity of batch operations on Azure Tables, though: a batch will be replicated as a single consistent unit.

Accessing the endpoint

Accessing the replicated data is done with the same credentials and API conventions, except that ‘-secondary’ is appended to the subdomain for your account.

For example, if the storage account ordinarily has an endpoint for blob access such as https://robinanderson.blob.core.windows.net then the replicated endpoint will be https://robinanderson-secondary.blob.core.windows.net. Note that this DNS entry won’t even be registered unless read access geo redundant replication is enabled. This does mean that if someone knows your storage account name, they can tell if you have this mode enabled by trying to ping your secondary endpoint, for all the good it will do them.
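Since the only change is the host name, deriving the secondary endpoint is a one-line transformation. A minimal sketch, using the post’s example account name:

```python
# Derive the RA-GRS secondary endpoint for a storage service from the
# account name; 'robinanderson' is the example account from the post.
def secondary_endpoint(account_name, service="blob"):
    """Return the read-access secondary endpoint for an Azure storage service."""
    return f"https://{account_name}-secondary.{service}.core.windows.net"

print(secondary_endpoint("robinanderson"))
# https://robinanderson-secondary.blob.core.windows.net
```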

When connecting to the secondary endpoint, authentication is performed using the same keys as for the primary. Any delegated access (for example, SAS) will also work, since SAS tokens are validated against those same keys.

Analytics

If monitoring metrics are enabled for blob, table or queue access, then those metrics will also be enabled for the secondary endpoint. This means twice as many metrics are visible via the secondary, since the primary’s metrics are replicated over as well.

Simply replace the word ‘Primary’ with ‘Secondary’ in the table name to access the equivalent metric, thus $MetricsHourlyPrimaryBlobTransactions becomes $MetricsHourlySecondaryBlobTransactions.
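That naming convention is mechanical enough to automate. A trivial sketch of the mapping:

```python
# Map a primary metrics table name to its secondary counterpart by
# swapping 'Primary' for 'Secondary' in the table name.
def secondary_metric_table(primary_name):
    return primary_name.replace("Primary", "Secondary", 1)

print(secondary_metric_table("$MetricsHourlyPrimaryBlobTransactions"))
# $MetricsHourlySecondaryBlobTransactions
```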

At the time of writing, there is no equivalent for the $logs blob container. Ordinarily, you can audit all read, write and delete operations made to your storage account. So whilst the aggregate monitoring analytics mentioned above are available for the secondary endpoint, you won’t know specifically which source IP addresses are issuing reads (though it’s unlikely you’d care).

Support for secondary storage in Azure Management Studio

Accessing the replicated data in AMS is fairly trivial if you’ve already got the original storage account registered – just right click and choose ‘Connect to geo-redundant secondary copy’ from the storage account context menu and a second, rather similar, storage account will be visible next to the first. It will behave entirely as if it were an ordinary storage account, except that it will be read-only and will display the last synchronisation time in the status bar.

Alternatively, there’s a checkbox on the ‘Add storage account’ dialog that allows you to specify access via the secondary endpoint, if you’ve not already registered the primary. Either way, once you’re looking at your data you can use the same UI features to search, query and download.

To try out this new feature download your free trial of Azure Management Studio now. Existing users can get the latest version from within Azure Management Studio (go to Help – Check for Updates).

We’ve now added better support for drag and drop in the latest version of Azure Management Studio (AMS). In this version you can drag block blobs both into and out of the AMS folder views.

So, for example, in the pictures below I drag a single selected file from AMS onto the desktop.

When you start dragging, you cannot drop to begin with (as you’d be copying the file into the folder that contains it), so the no-drop annotation is displayed. We use the Windows shell to get a suitable graphic to display next to the cursor, so here we see its representation of a PNG image file.

Once the cursor is above a target that does support a drop, the drop description changes to reflect the action that will happen if you release the mouse and start the drop.

Of course, we don’t just support drag and drop inside the tool, but also allow other applications to accept the drop. In particular the shell is happy to take the drop of a data stream that we offer it.

When the user elects to drop onto a folder, this will make AMS fetch the blob content and stream it to the shell as a byte stream. The shell can use the name information included within the transfer object to create a file of the correct name which it can then fill with the content.

Of course, we don’t just want to be able to drag blobs out of AMS. We have also improved AMS so that it can handle more types of items that are dragged on to it. Drag and drop is a little complicated, and we’ll try to give a better overview of it below, but essentially the drop target (AMS) can look at the formats of the data which the source offers. Typically a source may offer a list of files on the local file system, and we have been able to handle this kind of source for a long time in AMS. If you drag a file out of a zip file though, this is offered to the target as a byte stream (plus some metadata) and AMS now knows how to handle this kind of information.

When you drag one or more of the files contained in a .zip file:

AMS happily accepts that as a drop target:

And dropping leads to a transfer executing, which is logged in the transfer panel:

As we’ll discuss in a moment, copy and paste uses a fairly similar mechanism behind the scenes so will work in the same way.

So how does Drag and Drop work then?

Drag and drop has been around since the old days and relies on COM interfaces to do its work. It revolves around the IDataObject interface, which essentially describes a dictionary that interested parties can both query for various properties (corresponding to different renderings of the data) and set properties on to reflect the progress of any data transfer that is happening.

When a drag operation is started, the source creates an object implementing this interface, populates it with the relevant data and then calls into a shell helper method, passing the DataObject as one of the arguments. This shell helper method takes care of executing the drag as the cursor moves across the screen, interacting with the drop targets that are passed over in the process, until the drop happens on a particular target or the drag is cancelled (by pressing the Escape key). If you drag from AMS then we put at least two renderings of the data into the DataObject – one is a serialized .NET object that only AMS understands, which it will use if you drag from AMS into itself, and a second data format offers the data as a stream. In this second format, the data is offered as a set of metadata about the name of the item together with an OLE stream which the target can use to pull the data in blocks of bytes.

The DataObject is also used to reflect the semantics of the action itself. The target can set values to say whether it wants the action to be a move or a copy, whether the drop was successful, and whether the source needs to carry out the delete part of any move. The source, for its part, populates the DataObject with the drag image which the shell shows in a window next to the cursor while you are dragging, and potentially a piece of description text describing the operation.

When the cursor moves over a potential drop target, this target gets a callback and can then freely interrogate the DataObject to determine if it contains suitable data for it to process. It can return a result back to the shell, which can use this to determine which cursor it displays – one showing that the drop is available or the no entry sign which reflects that the target isn’t able to handle the data that is being dragged. The target is also free to change the displayed text.
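The negotiation described above can be modelled very roughly in plain Python. This is a toy sketch only – the real mechanism uses COM clipboard formats and marshalled streams – and the AmsBlobRef format name below is invented for illustration (CF_HDROP and FileContents are real shell format names):

```python
# Toy model of the DataObject negotiation: the source offers several
# renderings keyed by format, and the target inspects which formats are
# present before accepting the drop.
CF_HDROP = "CF_HDROP"              # list of file-system paths
CF_FILECONTENTS = "FileContents"   # byte stream plus descriptor metadata
CF_AMS_INTERNAL = "AmsBlobRef"     # invented stand-in for AMS's private format

class DataObject:
    def __init__(self):
        self._formats = {}

    def set_data(self, fmt, rendering):
        self._formats[fmt] = rendering

    def query_get_data(self, fmt):
        # Mirrors IDataObject::QueryGetData - can this format be fetched?
        return fmt in self._formats

    def get_data(self, fmt):
        return self._formats[fmt]

# Source side: offer both a private rendering and a generic byte stream.
dobj = DataObject()
dobj.set_data(CF_AMS_INTERNAL, {"container": "logs", "blob": "a.log"})
dobj.set_data(CF_FILECONTENTS, (b"hello", {"name": "a.log"}))

# Target side: prefer the richest format it understands.
for fmt in (CF_AMS_INTERNAL, CF_FILECONTENTS, CF_HDROP):
    if dobj.query_get_data(fmt):
        print("accepting drop as", fmt)
        break
```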

How do I do it then?

There are many useful blog posts out there that cover the rather arcane methods involved.

One ends up working at the level of COM, which is supported fairly well inside .NET. The only missing feature (as far as I know) is a way to detect that the COM object is no longer being used by external parties: in C++ one can keep an eye on the reference count, but in the .NET world there is no way to see whether the CLR-created CCW (COM callable wrapper) is still in use, so the only way to detect that the object is no longer used is to add a finalizer to its type.

You also go back to the days of managing your own memory, calling GlobalLock and GlobalUnlock yourself and allocating with Marshal.AllocHGlobal.

There are also a few extra interfaces you might want to implement – IAsyncOperation, for example, which allows the Shell to do a data transfer without blocking.

Getting all these parts to work together took some effort, and was helped a fair amount by a working implementation of some of this inside the Azure Explorer tool that we have made freely available for some time. We started with the Azure Explorer implementation and then merged in bits and pieces from various blog posts as we needed more functionality.

The good news is that you almost get cut-and-paste for free once you’ve done the work of implementing drag and drop, as this transfer process is also centred on the idea of a DataObject. The key difference is that you place the DataObject on the clipboard for other applications to find, and in order to enable your paste menu you may need to subscribe to clipboard change events to see if the clipboard contains a suitable format.

Was it worth it?

When you are dealing with the file system on your local machine and something like Blob storage, which is typically displayed using a folder-and-files metaphor, it feels more natural to drag and drop files around and have the system interpret this as a series of transfer operations.

Hopefully, our users will find it useful.


Today we’ve released version 1.4 of Azure Management Studio. We’ve added many highly requested new features to this release, including improvements to the drag and drop functionality, and support for accessing files from the secondary of a geo-redundant storage account.

This release also includes many other features to make users’ lives easier, such as improvements to blob search and the ability to kill role instances. Find out more below.

Added support for accessing a geo-redundant secondary

Azure Management Studio (AMS) now supports accessing files from the secondary of a geo-redundant storage account, enabling you to inspect storage without impacting the performance of the primary. The secondary accounts can be added to the Storage Account section of the tree by selecting the check box “Access via geo-redundant backup”:

The read-only account will be added to the list of storage accounts in the tree.

Improved drag and drop support

You can now drag block blobs both into and out of the folder views in Azure Management Studio. So if you have log files stored in a Blob Container, you can drag them out of the file explorer and drop them onto your desktop, or into Windows Explorer, to download them.

We have also improved AMS so that it can handle more types of items that are dragged into it. So you can now drop “virtual files” (for example, the files inside a .zip archive) into AMS and have them uploaded to Blob Storage. Find out more about this new feature.

Improved blob search

Blob search in AMS has been improved to include the ability to search the blob metadata. To search the metadata, type either of the following into the search box in the tool:

metadataname:NameSearchText
This will search all the child blobs and list any that have metadata with a name containing the text “NameSearchText”.

metadatavalue:ValueSearchText
This will search all the child blobs and list any that have metadata with a value containing the text “ValueSearchText”.
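The behaviour of the two prefixes can be sketched over in-memory metadata dictionaries. The blob names and metadata below are invented examples, and this is only a model of the matching logic, not AMS’s implementation:

```python
# Invented sample data: blob name -> metadata dictionary.
blobs = {
    "logs/2014-05-01.log": {"retention": "30d", "source": "webrole"},
    "logs/2014-05-02.log": {"retention": "7d"},
    "images/banner.png": {"owner": "marketing"},
}

def search(blobs, query):
    """Model the metadataname:/metadatavalue: search prefixes."""
    prefix, _, text = query.partition(":")
    text = text.lower()
    if prefix == "metadataname":
        # Match on the metadata key (substring, case-insensitive).
        return [name for name, md in blobs.items()
                if any(text in key.lower() for key in md)]
    if prefix == "metadatavalue":
        # Match on the metadata value instead.
        return [name for name, md in blobs.items()
                if any(text in val.lower() for val in md.values())]
    return []

print(search(blobs, "metadataname:retention"))
# ['logs/2014-05-01.log', 'logs/2014-05-02.log']
print(search(blobs, "metadatavalue:marketing"))
# ['images/banner.png']
```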

To help find specific blobs there is also new functionality to filter the list of blobs returned. You can filter the file list to show:

Page blobs or block blobs

Blobs which have a lease taken out on them

Blobs that have been modified within a certain time frame

Blobs within a range of sizes

Added ability to kill role instances

If you wish to kill an individual instance of a role in a hosted service you can click the Delete button in the Operations menu. Alternatively, you can find it in the right click menu on an instance.

Added ability to create A8 and A9 sized Virtual Machines

In the Create Virtual Machine dialog you can now create the new A8 and A9 sizes of Virtual Machines.

Copy a connection string from the storage account node

You can now get a connection string for a Storage Account by right-clicking on the Storage Account.
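A standard Azure storage connection string is assembled from the account name and key. Here’s a sketch of the format; the account name and key are placeholders, and the optional BlobEndpoint override for the secondary host is my assumption about addressing a read-only secondary, not a claim about what AMS itself emits:

```python
# Build a standard Azure storage connection string.
def connection_string(account, key, use_secondary=False):
    parts = [
        "DefaultEndpointsProtocol=https",
        f"AccountName={account}",
        f"AccountKey={key}",
    ]
    if use_secondary:
        # Assumed override pointing reads at the RA-GRS secondary host.
        parts.append(
            f"BlobEndpoint=https://{account}-secondary.blob.core.windows.net")
    return ";".join(parts)

print(connection_string("myaccount", "<key>"))
```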

Copy Blob URL works with multiple selections

If you select several Blobs in the Blob Explorer, you can right-click and select Copy Blob URL from the menu to put a list of URLs onto the clipboard.

Menu item to view the release notes

You can now view the release notes for Azure Management Studio from the “Release Notes” menu item on the Help menu.

To try out all these new features, download your free trial of Azure Management Studio now. Existing users can get the latest version from within Azure Management Studio (go to Help – Check for Updates).

We hope you enjoy trying out the new features – as always, we’d love to hear your feedback in the comments below.

We know how important it is to be able to navigate, understand, and start using Microsoft Azure.

So we’ve launched Just Azure, a new site from Cerebrata, providing essential technical resources and educational articles to support you – the Microsoft community – in navigating and understanding the rapidly evolving Azure platform.

Providing a range of educational content – from technical series and how-to articles, to insights into real-world uses of Azure – the site helps both new and experienced Microsoft Azure users share expertise and utilize their tried-and-tested knowledge in their daily tasks.

For all developers on Microsoft Azure

Working closely with Azure MVPs, developers and consultants, Just Azure aims to make it quicker and easier for developers and IT Pros to start using the latest Microsoft Azure technologies in their development and production environments.

As MVP Mike Wood, Just Azure Editor, explains, “Just Azure is providing a great educational resource on all topics Azure-related, and going deeper than most blog posts or Getting Started tutorials. We want readers to have an understanding of how all the features of Azure fit together, how others have used them, and more. I see it as a companion to the great work Microsoft continues to provide with the Azure Training Kit and Azure documentation, adding an essential layer of tried and tested real-world insight and techniques from the experts of the community.”

Covering the key Azure categories of Networks, Application Services, Data Services and Compute, content on the site starts with series tuned for beginners who are just getting to grips with new Azure concepts – including Diagnostics, Cloud Services, and Queues. From there, the resources range to articles exploring real life use cases for Azure features, and examples of people using the platform for everyday tasks such as automated testing. Over the next few months, additional content will continue to expand the article series depth and coverage. Many leading Azure experts are already contributing to the site, including Michael Collier on Azure Diagnostics, Roman Schacherl on Storage Queues and Sandrino Di Mattia on Cloud Services.

So head over to Just Azure to read our first articles, and follow us @justazure for more coming soon!

We hope you enjoy Just Azure, and as always we’d love to hear your feedback – share your thoughts in the comments section of the site.

If you own Azure Diagnostics Manager or Cloud Storage Studio, you can get your free upgrade to Azure Management Studio by emailing support@cerebrata.com with your existing license key(s) and/or order number(s).

If you’re upgrading to Azure Management Studio, we’d love to know what you think. How did you find getting started? Which features caught your eye first? What could we be doing better? Share with us on UserVoice, or leave your comments on this post.