Azure Storage and Azure SQL Database both play an important role in the Microsoft Azure Platform-as-a-Service (PaaS) strategy for storage. Azure Storage enables storage and retrieval of large amounts of unstructured data. You can store content files such as documents and media in the Blob service, use the Table service for NoSQL data, use the Queue service for reliable messages, and use the File service for Server Message Block (SMB) file share scenarios. Azure SQL Database provides classic relational database features as part of an elastic scale service.

In this chapter, you will learn how to implement each of the Azure Storage services, how to monitor them, and how to manage access. You’ll also learn how to work with Azure SQL Database.

There are many ways to interact with and develop against Azure Storage, including the management portal, Windows PowerShell, client libraries such as those for the .NET Framework, and the Storage Services REST API. In fact, the REST API underlies all of the other options.

Objectives in this chapter:

Objective 4.1: Implement Azure Storage blobs and Azure files

Objective 4.2: Implement Azure Storage tables

Objective 4.3: Implement Azure Storage queues

Objective 4.4: Manage access

Objective 4.5: Monitor storage

Objective 4.6: Implement SQL databases

Objective 4.1: Implement Azure Storage blobs and Azure files

Azure blob storage is the place to store unstructured data of many varieties. You can store images, video files, Word documents, lab results, and any other binary file you can think of. In addition, Azure uses blob storage extensively. For instance, when you mount extra logical drives in an Azure virtual machine (VM), the drive image is actually stored by the Blob service associated with an Azure storage account. In a blob storage account, you can have many containers. Containers are similar to folders in that you can use them to logically group your files. You can also set security on the entire container. Each storage account can store up to 500 terabytes of data.

Enter a name for the container, and select Blob for the access type, as shown in Figure 4-4.

FIGURE 4-4 The Add A Container blade

The URL for the container can be found in the container list, as shown in Figure 4-5.

FIGURE 4-5 Containers blade with a list of containers and URLs

Finding your account access key

To access your storage account, you need the account name, which forms part of the URL to the account, and the primary access key. This section covers how to find the access keys for storage accounts.

Finding your account access key (existing portal)

To find your account access key using the management portal, complete the following steps:

Click the Dashboard tab for your storage account.

Click Manage Keys to find the primary and secondary key for managing your account, as shown in Figure 4-6. Always use the primary key for management activities (to be discussed later in this chapter).

FIGURE 4-6 Manage Access Keys dialog box for a storage account

Finding your account access key (Preview portal)

To find your account access key using the Preview portal, complete the following steps:

Reading blobs via a browser

Many storage browsing tools provide a way to view the contents of your blob containers. You can also navigate to the container using the existing management portal or the Preview portal to view the list of blobs. When you browse to the blob URL, the file is downloaded and displayed in the browser according to its content type.

Reading blobs using Visual Studio

You can also use Server Explorer in Visual Studio 2013 to view the contents of your blob containers and upload or download files.

Navigate to the blob storage account that you want to use.

Double-click the blob storage account to open a window showing a list of blobs and providing functionality to upload or download blobs.

Changing data

You can modify the contents of a blob or delete a blob by calling the REST API directly, but it is more common to do this from application code, for example using the Storage Client Library.

EXAM TIP

Any updates made to a blob are atomic. While an update is in progress, requests to the blob URL will always return the previously committed version of the blob until the update is complete.

The following steps illustrate how to update a blob programmatically. Note that this example uses a block blob. The distinction between block and page blobs is discussed in “Storing data using block and page blobs” later in this chapter.
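The update itself can be sketched as follows with the .NET Storage Client Library. The connection string placeholders and the container and blob names (files, data.txt) are assumptions for illustration:

```csharp
using System.IO;
using System.Text;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

// Placeholder connection string; substitute your account name and key.
CloudStorageAccount account = CloudStorageAccount.Parse(
    "DefaultEndpointsProtocol=https;AccountName=<name>;AccountKey=<key>");
CloudBlobClient client = account.CreateCloudBlobClient();

// Hypothetical container and blob names.
CloudBlobContainer container = client.GetContainerReference("files");
CloudBlockBlob blob = container.GetBlockBlobReference("data.txt");

// Overwrite the blob's contents. Per the exam tip above, readers hitting
// the blob URL see the previously committed version until this completes.
using (var stream = new MemoryStream(Encoding.UTF8.GetBytes("updated content")))
{
    blob.UploadFromStream(stream);
}
```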

Setting metadata on a container

Blobs and containers have metadata attached to them. There are two forms of metadata:

System properties metadata

User-defined metadata

System properties can influence how the blob behaves, while user-defined metadata is your own set of name/value pairs that your applications can use. A container has only read-only system properties, while blobs have both read-only and read-write properties.

Setting user-defined metadata

To set user-defined metadata for a container, get the container reference using GetContainerReference(), and then use the Metadata member to set values. After setting all the desired values, call SetMetadata() to persist the values, as in the following example:
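A minimal sketch of setting container metadata follows; the connection string placeholders, container name, and metadata keys are assumptions for illustration:

```csharp
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

CloudStorageAccount account = CloudStorageAccount.Parse(
    "DefaultEndpointsProtocol=https;AccountName=<name>;AccountKey=<key>");
CloudBlobContainer container = account.CreateCloudBlobClient()
    .GetContainerReference("files"); // hypothetical container name

// Set one or more name/value pairs, then persist them with a single call.
container.Metadata["category"] = "images";
container.Metadata["createdby"] = "webapp";
container.SetMetadata();
```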

Blob metadata includes both read-only and read-write properties that are valid HTTP headers and follow restrictions governing HTTP headers. The total size of the metadata is limited to 8 KB for the combination of name and value pairs. For more information on interacting with individual blob metadata, see http://msdn.microsoft.com/en-us/library/azure/hh225342.aspx.

Reading user-defined metadata

To read user-defined metadata for a container, get the container reference using GetContainerReference(), and then use the Metadata member to retrieve a dictionary of values and access them by key, as in the following example:
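The read side can be sketched as shown below; note that FetchAttributes() must be called first to populate the Metadata collection from the service. The container name and metadata key are the same hypothetical values used above:

```csharp
using System;
using System.Collections.Generic;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

CloudStorageAccount account = CloudStorageAccount.Parse(
    "DefaultEndpointsProtocol=https;AccountName=<name>;AccountKey=<key>");
CloudBlobContainer container = account.CreateCloudBlobClient()
    .GetContainerReference("files"); // hypothetical container name

// FetchAttributes() populates both Metadata and Properties from the service.
container.FetchAttributes();
foreach (KeyValuePair<string, string> pair in container.Metadata)
{
    Console.WriteLine("{0}: {1}", pair.Key, pair.Value);
}

// Or access a known key directly.
string category = container.Metadata["category"];
```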

Reading system properties

To read a container’s system properties, first get a reference to the container using GetContainerReference(), and then use the Properties member to retrieve values. The following code illustrates accessing container system properties:
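A short sketch, again using an assumed container name; as with metadata, FetchAttributes() retrieves the current values before you read them:

```csharp
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

CloudStorageAccount account = CloudStorageAccount.Parse(
    "DefaultEndpointsProtocol=https;AccountName=<name>;AccountKey=<key>");
CloudBlobContainer container = account.CreateCloudBlobClient()
    .GetContainerReference("files"); // hypothetical container name

container.FetchAttributes(); // Properties is populated alongside Metadata

// Containers expose read-only system properties such as these.
Console.WriteLine("Last modified: {0}", container.Properties.LastModified);
Console.WriteLine("ETag: {0}", container.Properties.ETag);
```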

Storing data using block and page blobs

The Azure Blob service has two different ways of storing your data: block blobs and page blobs. Block blobs are great for data that is streamed sequentially, such as video and other media files. Page blobs are great for non-sequential reads and writes, like the VHDs that back the VM disks mentioned in earlier chapters.

Block blobs are blobs that are divided into blocks. Each block can be up to 4 MB. When uploading large files into a block blob, you can upload one block at a time in any order you want. You can set the final order of the block blob at the end of the upload process. For large files, you can also upload blocks in parallel. Each block will have an MD5 hash used to verify transfer. You can retransmit a particular block if there’s an issue. You can also associate blocks with a blob after upload, meaning that you can upload blocks and then assemble the block blob after the fact. Any blocks you upload that aren’t committed to a blob will be deleted after a week. Block blobs can be up to 200 GB.
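The block upload flow described above can be sketched with PutBlock() and PutBlockList(). The blob name and block contents here are placeholders; block IDs must be Base64-encoded strings of equal length:

```csharp
using System;
using System.IO;
using System.Text;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

CloudStorageAccount account = CloudStorageAccount.Parse(
    "DefaultEndpointsProtocol=https;AccountName=<name>;AccountKey=<key>");
CloudBlockBlob blob = account.CreateCloudBlobClient()
    .GetContainerReference("files")          // hypothetical container name
    .GetBlockBlobReference("large.dat");     // hypothetical blob name

// Base64-encoded block IDs of equal length.
string id1 = Convert.ToBase64String(Encoding.UTF8.GetBytes("block-000001"));
string id2 = Convert.ToBase64String(Encoding.UTF8.GetBytes("block-000002"));

// Upload blocks in any order (or in parallel); null means no MD5 check here.
blob.PutBlock(id1, new MemoryStream(Encoding.UTF8.GetBytes("first part")), null);
blob.PutBlock(id2, new MemoryStream(Encoding.UTF8.GetBytes("second part")), null);

// Committing the list assembles the blob in the order given;
// uncommitted blocks are deleted after a week.
blob.PutBlockList(new[] { id1, id2 });
```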

Page blobs are blobs comprised of 512-byte pages. Unlike block blobs, page blob writes are done in place and are immediately committed to the blob. The maximum size of a page blob is 1 terabyte. Page blobs closely mimic how hard drives behave, and in fact, Azure VMs use them for that purpose. Most of the time, you will use block blobs.

Streaming data using blobs

You can stream blobs by downloading to a stream using the DownloadToStream() API method. The advantage of this is that it avoids loading the entire blob into memory, for example before saving it to a file or returning it to a web request.
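For example, a blob can be streamed straight to a local file; the blob name and local path below are assumptions for illustration:

```csharp
using System.IO;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

CloudStorageAccount account = CloudStorageAccount.Parse(
    "DefaultEndpointsProtocol=https;AccountName=<name>;AccountKey=<key>");
CloudBlockBlob blob = account.CreateCloudBlobClient()
    .GetContainerReference("files")        // hypothetical container name
    .GetBlockBlobReference("video.mp4");   // hypothetical blob name

// Stream the blob directly to the file stream without
// buffering the entire blob in memory first.
using (FileStream fileStream = File.OpenWrite(@"C:\temp\video.mp4")) // assumed path
{
    blob.DownloadToStream(fileStream);
}
```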

Accessing blobs securely

Secure access to blob storage implies a secure connection for data transfer and controlled access through authentication and authorization.

Azure Storage supports both HTTP and secure HTTPS requests. For data transfer security, you should always use HTTPS connections. To authorize access to content, you can authenticate in three different ways to your storage account and content:

Shared Key Constructed from a set of fields related to the request. Computed with an HMAC-SHA256 algorithm and encoded in Base64.

Shared Key Lite Similar to Shared Key, but compatible with previous versions of Azure Storage. This provides backwards compatibility with code that was written against versions prior to 19 September 2009. This allows for migration to newer versions with minimal changes.

Shared Access Signature Grants restricted access rights to containers and blobs. You can provide a shared access signature to users you don’t trust with your storage account key. You can give them a shared access signature that will grant them specific permissions to the resource for a specified amount of time. This is discussed in a later section.

To interact with blob storage content authenticated with the account key, you can use the Storage Client Library as illustrated in earlier sections. When you create an instance of the CloudStorageAccount using the account name and key, each call to interact with blob storage will be secured, as shown in the following code:
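A minimal sketch of that setup follows; the account name and key are placeholders, and the 'true' argument requests HTTPS endpoints:

```csharp
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Auth;
using Microsoft.WindowsAzure.Storage.Blob;

// Placeholder credentials; 'true' selects HTTPS endpoints.
var credentials = new StorageCredentials("<account-name>", "<account-key>");
var account = new CloudStorageAccount(credentials, true);

CloudBlobClient client = account.CreateCloudBlobClient();
CloudBlobContainer container = client.GetContainerReference("files"); // assumed name

// Every request issued through this client is now signed with the account key.
container.FetchAttributes();
```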

Implementing an async blob copy

The Blob service provides a feature for asynchronously copying blobs from a source blob to a destination blob. You can run many of these requests in parallel since the operation is asynchronous. The following scenarios are supported:

Copying a source blob to a destination with a different name or URI

Overwriting a blob with the same blob, which means copying from the same source URI and writing to the same destination URI (this overwrites the blob, replaces metadata, and removes uncommitted blocks)

Copying a snapshot over its base blob, for example to promote the snapshot and restore an earlier version

Copying a snapshot to a new location, creating a new, writable blob (not a snapshot)

The copy operation is always the entire length of the blob; you can’t copy a range.

Ideally, you pass state to the BeginStartCopyFromBlob() method so that you can track multiple parallel operations.
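A sketch of that pattern follows, using the asynchronous Begin/End pair; the blob names are placeholders, and the destination blob is passed as state so the callback can finish the matching operation when several copies run in parallel:

```csharp
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

CloudStorageAccount account = CloudStorageAccount.Parse(
    "DefaultEndpointsProtocol=https;AccountName=<name>;AccountKey=<key>");
CloudBlobContainer container = account.CreateCloudBlobClient()
    .GetContainerReference("files"); // hypothetical container name

CloudBlockBlob source = container.GetBlockBlobReference("source.txt");           // assumed
CloudBlockBlob destination = container.GetBlockBlobReference("destination.txt"); // assumed

// Pass the destination blob as state so the callback can identify
// which of several parallel copy operations has started.
destination.BeginStartCopyFromBlob(source, ar =>
{
    var dest = (CloudBlockBlob)ar.AsyncState;
    string copyId = dest.EndStartCopyFromBlob(ar);
    // The copy continues server-side; poll dest.CopyState for progress.
}, destination);
```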

EXAM TIP

A storage account can have multiple Copy Blob operations processing in parallel; however, an individual blob can have only one pending copy operation.

Configuring the Content Delivery Network

The Azure Content Delivery Network (CDN) distributes content across geographic regions to edge nodes across the globe. The CDN caches publicly available objects so they are available over high-bandwidth connections, close to the users, thus allowing the users to download them at much lower latency. You may be familiar with using CDNs to download popular JavaScript frameworks like jQuery, Angular, and others.

By default, blobs have a seven-day time-to-live (TTL) at the CDN edge node. After that time elapses, the blob is refreshed from the storage account to the edge node. Blobs that are shared via CDN must support anonymous access.

Configuring the CDN (existing portal)

To enable the CDN for a storage account in the management portal, complete the following steps:

In the management portal, click New on the navigation bar.

Select App Services, CDN, Quick Create.

Select the storage account that you want to add CDN support for, and click Create.

Navigate to the CDN properties by selecting it from your list of CDN endpoints.

To enable HTTPS support, click Enable HTTPS at the bottom of the page.

To enable query string support, click Enable Query String Support at the bottom of the page.

To map a custom domain to the CDN endpoint, click Manage Domains at the bottom of the page, and follow the instructions.

EXAM TIP

It can take 60 minutes before the CDN is ready for use on the storage account.

If you are using HTTPS and a custom domain, address your blobs as follows:

https://<your domain>/<your container name>/<your blob path>

Configuring the CDN (Preview portal)

You currently cannot configure the CDN using the Preview portal.

Designing blob hierarchies

Blob storage has a hierarchy that involves the following aspects:

The storage account name, which is part of the base URI

The container within which you store blobs, which is also used for partitioning

The blob name, which can include path elements separated by a forward slash (/) to create a sense of folder structure

Using a blob naming convention that resembles a directory structure provides you with additional ways to filter your blob data directly from the name. For example, to group images by their locale to support a localization effort, complete the following steps:

Create a container called images.

Add English bitmaps using the convention en/bmp/*, where * is the file name.

Add English JPEG files using the convention en/jpg/*, where * is the file name.

Add Spanish bitmaps using the convention sp/bmp/*, where * is the file name.

Add Spanish JPEG files using the convention sp/jpg/*, where * is the file name.
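With that convention in place, you can filter listings by the virtual "path" prefix. The following sketch lists only the English JPEG files; the container name matches the steps above, and the connection string is a placeholder:

```csharp
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

CloudStorageAccount account = CloudStorageAccount.Parse(
    "DefaultEndpointsProtocol=https;AccountName=<name>;AccountKey=<key>");
CloudBlobContainer container = account.CreateCloudBlobClient()
    .GetContainerReference("images");

// Pass the naming-convention prefix to return only matching blobs;
// a flat listing returns blobs rather than virtual directories.
foreach (IListBlobItem item in container.ListBlobs("en/jpg/", true))
{
    Console.WriteLine(item.Uri);
}
```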

Configuring custom domains

By default, the URL for accessing the Blob service in a storage account is https://<your account name>.blob.core.windows.net. You can map your own domain or subdomain to the Blob service for your storage account so that users can reach it using the custom domain or subdomain.

Scaling Blob storage

Blobs are partitioned by container name and blob name, which means each blob can have its own partition. Blobs, therefore, can be distributed across many servers to scale access even though they are logically grouped within a container.

Working with Azure File storage

Azure File storage provides a way for applications to share storage accessible via the SMB 2.1 protocol. It is particularly useful as a mounted share for VMs and cloud services, and applications can also use the File Storage API to access File storage.

In this thought experiment, apply what you’ve learned about this objective. You can find answers to these questions in the “Answers” section at the end of this chapter.

You are localizing a mobile application for multiple languages. Some of your efforts revolve around having separate images for the regions you are targeting.

How will you structure the files in Blob storage so that you can retrieve them easily?

What can you do to make access to these images quick for users around the world?

Objective summary

A blob container has several options for access permissions. When set to Private, all access requires credentials. When set to Public Container, no credentials are required to access the container and its blobs. When set to Public Blob, only blobs can be accessed without credentials if the full URL is known.

To access secure containers and blobs, you can use the storage account key or a shared access signature.

AzCopy is a useful utility for activities such as uploading blobs, transferring blobs from one container or storage account to another, and performing these and other activities related to blob management in scripted batch operations.

Block blobs allow you to upload, store, and download large blobs in blocks up to 4 MB each. The size of the blob can be up to 200 GB.

You can use a blob naming convention akin to folder paths to create a logical hierarchy for blobs, which is useful for query operations.

Objective review

Answer the following questions to test your knowledge of the information in this objective. You can find the answers to these questions and explanations of why each answer choice is correct or incorrect in the “Answers” section at the end of this chapter.

Which of the following is not true about metadata? (Choose all that apply.)

Both containers and blobs have writable system properties.

Blob user-defined metadata is accessed as a key value pair.

System metadata can influence how the blob is stored and accessed in Azure Storage.

Only blobs have metadata; containers do not.

Which of the following are valid differences between page blobs and block blobs? (Choose all that apply.)

Page blobs are much faster for all operations.

Block blobs allow files to be uploaded and assembled later. Blocks can be resubmitted individually.

Page blobs are good for all sorts of files, like video and images.

Block blobs have a max size of 200 GB. Page blobs can be 1 terabyte.

What are good recommendations for securing files in Blob storage? (Choose all that apply.)

Always use SSL.

Keep your primary and secondary keys hidden and don’t give them out.

In your application, store keys somewhere that isn’t embedded in client-side code where users could see them.