For the September edition of the Windows Azure Insider MSDN magazine column, Bruno and I write about Big Data, the benefits of the MapReduce model, and HDInsight, the Windows Azure component that offers Hadoop-as-a-Service in the public cloud. We also show how to perform simple analytics against a public dataset using Java code and Hive.

New approach to auto-replicate and auto-ignore assets based on metadata in the IAsset.AlternateId property using JSON format

As you can see, the most important changes are the last two items which I will go into more detail about below.

FragBlob support

FragBlob is a new storage format that will be used in an upcoming Windows Azure Media Services feature that is not yet available. In this new format, each Smooth Streaming fragment is written to storage as a separate blob in the asset’s container instead of grouping them together into Smooth Streaming PIFF files (ISMV’s/ISMA’s). Therefore, the Replicator has been updated to identify, compare, copy and verify this new FragBlob asset type.

Replicator metadata in IAsset.AlternateId property using JSON format

To decide whether or not an asset should be automatically replicated or ignored, the Replicator Tool needs to get some metadata from your assets. Currently the IAsset interface does not have a property to store custom metadata, so as a workaround the Replicator now uses the IAsset.AlternateId string property to store this metadata with a specific JSON format described below:

{
    "alternateId": "my-custom-alternate-id",
    "replicate": "No",
    "data": "optional custom metadata"
}

The following are the expected fields in the JSON format:

alternateId: this is the actual Alternate Id value for the asset that is used to identify and track assets in both data centers.

replicate: this is a three-state flag that the replicator will use to determine whether or not it should take automatic action for the asset. The possible values are:

No: the asset will be automatically ignored

Auto: the asset will be automatically replicated

Manual: no automatic action will be taken for this asset

Important: If the replicate field is not included in the IAsset.AlternateId (or if this property is not set at all – null value), the default value is No (asset automatically ignored).

data: this is an optional field that you can use to store additional custom metadata for the asset.
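To make the convention concrete, here is a minimal JavaScript sketch of how this metadata could be read and written. The helper names are illustrative only; the Replicator itself handles this through C# extension methods.

```javascript
// Sketch of the AlternateId metadata convention described above.
// Helper names are illustrative, not part of the Replicator source.
function parseReplicatorMetadata(alternateIdJson) {
    var metadata = alternateIdJson ? JSON.parse(alternateIdJson) : {};
    return {
        alternateId: metadata.alternateId || null,
        // Per the rules above, a missing replicate field (or a null
        // AlternateId property) defaults to 'No': asset is ignored.
        replicate: metadata.replicate || 'No',
        data: metadata.data || null
    };
}

function buildReplicatorMetadata(alternateId, replicate, data) {
    return JSON.stringify({ alternateId: alternateId, replicate: replicate, data: data });
}
```

Note how a missing or null value yields replicate = 'No', matching the default behavior called out above.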

The Replicator uses some extension methods for the IAsset interface to easily retrieve and set these values without having to deal with the JSON format. These extensions can be found in the Replicator.Core\Extensions\AssetUtilities.cs source code file.

Using these IAsset extension methods for the IAsset.AlternateId property, you can easily set the Replicator metadata in your media workflows as explained below:

Once your asset is ready (for instance, after ingestion or a transcoding job) and you have successfully created an Origin locator for it, you need to set the alternateId and replicate fields as follows:

// Set the alternateId to track the asset in both WAMS accounts.
string alternateId = "my-custom-id";
asset.SetAlternateId(alternateId);

// Set the replicate flag to 'Auto' for automatic replication.
asset.SetReplicateFlag(ReplicateFlag.Auto);

// Update the asset to save the changes in the IAsset.AlternateId property.
asset.Update();

By setting the replicate field to 'Auto', the Replicator will un-ignore the asset and automatically start copying it to the other WAMS account. When the copy operation is complete, both assets will be marked as verified if everything checks out OK; otherwise, the Replicator will report the differences/errors and the user will have to take manual action from the Replicator Dashboard (like manually forcing the copy again).

How to set the IAsset.AlternateId metadata for manual replication

Once your asset is ready and you have successfully created an Origin locator for it, you need to set the alternateId and replicate fields as follows:

// Set the alternateId to track the asset in both WAMS accounts.
// Make sure to use the same alternateId for the asset in the other WAMS account that you want to compare.
string alternateId = "my-custom-id";
asset.SetAlternateId(alternateId);

// Set the replicate flag to 'Manual' for manual replication.
asset.SetReplicateFlag(ReplicateFlag.Manual);

// Update the asset to save the changes in the IAsset.AlternateId property.
asset.Update();

By setting the replicate field to 'Manual', the Replicator will un-ignore the asset and check whether there is an asset in the other WAMS account with the same alternateId field. If the Replicator finds one, it will compare both assets and mark them as verified if everything checks out OK; otherwise, it will report the differences and the user will have to take manual action from the Replicator Dashboard (like deleting one and forcing a copy of the other). This scenario is useful when comparing assets living in different WAMS accounts and generated from the same source.

The Windows Azure pricing calculator page should open immediately after the script executes. From there you can adjust the slider to the desired storage size, and view the standard price. The current price is $0.095 per GB for geo-redundant storage. So this one storage account is costing me only $0.0027 per month. I can handle that.
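The arithmetic behind that monthly figure is easy to check; the numbers below come straight from the quoted price, and the implied storage size is derived, not stated in the post:

```javascript
// Quoted geo-redundant storage price: $0.095 per GB per month.
var pricePerGB = 0.095;

// A $0.0027 monthly charge therefore corresponds to roughly 28 MB stored:
var gbStored = 0.0027 / pricePerGB; // ~0.0284 GB
```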

In this series of posts, we're looking at how to get started as a mobile developer. In Parts 1-9, we examined a variety of mobile platforms and client app development approaches (native, hybrid, web). We've seen a lot so far, but that's only the front-end; typically, a mobile app also needs a back-end. We'll now start looking at various approaches to providing a back-end for your mobile app. Here in Part 10 we'll look at Microsoft's cloud-based Mobile Backend As A Service (MBAAS) offering, Windows Azure Mobile Services.
About Mobile Back-end as a Service (MBaaS) Offerings

To create a back-end for your mobile app(s), you're typically going to care about the following:

A service layer, where you can put server-side logic

Persistent data storage

Authentication

Push notifications

You could create a back-end for the above using many different platforms and technologies, and you could do so in a traditional data center or in a public cloud. You'd need to write a set of web services, create a database or data store of some kind, provide a security mechanism, and so on.
What's interesting is that today you can make a "build or buy" decision about your mobile back-end: several vendors and open source groups have decided to offer all of the above as a ready-to-use, out-of-box service. Microsoft's Windows Azure Mobile Services is an example of this. Of course, it doesn't do all of your work for you--you're still going to be responsible for supplying a data model and server-side logic. Nevertheless, MBaaS gives you a huge head start. MBaaS is especially valuable if you are mostly a mobile developer who wants to focus their time on the app and not a back-end implementation.

The service will also generate for you starter mobile clients for iOS, Android, Windows Phone, Windows 8, or HTML5. You can use these apps as your starting point, or as references for seeing how to hook up your own apps to connect to the service.

Pricing

So what does all this back-end goodness cost? At the time of this writing, there are Free, Standard ($25/month), and Premium ($199/month) tiers of pricing. You can read the pricing details here.

Training

We reference training resources throughout this post. A good place to start, though, is here:

Provisioning a Mobile Service

The first thing you'll notice about WAMS is the care that's been given to the developer experience, especially your first-time experience. Once you have a Windows Azure account, you'll go to azure.com, sign in to the management portal, and navigate to the Mobile Services tab. From there, you're only a handful of clicks away from rapid provisioning of a mobile back-end.

2 Define a Unique Name and Select a Database

On the first provisioning screen, you'll choose an endpoint name for your service, and either create a database or attach to one you've previously created in the cloud. The service offers a free 20MB SQL database. You'll also indicate which data center to allocate the service in (there are 8 worldwide, 4 in the U.S.).

Provisioning a Mobile Service - Screen 1

Provisioning a Mobile Service - Screen 2

3 Wait for Provisioning to Complete

Click the Checkmark button, and provisioning will commence. It's fast! In less than a minute your service will have been created.

Newly-provisioned Mobile Service Listed in Portal

4 Use the New Mobile Service Wizard

Click on your service to set it up. You'll be greeted with a wizard that walks you through setup. This is especially helpful if this is your first time using Windows Azure Mobile Services. On the first screen, you'll indicate which mobile platform you are targeting: Windows Store (Windows 8), Windows Phone 8, iOS, Android, or HTML5 (don't worry, you're not restricted to a single mobile platform and can come back and change this setting as often as you wish).

Setup Wizard, Screen 1

5 Generate a Mobile App that Uses your Service

Next, you can download an automatically generated app for the platform you've selected, pre-wired up to talk to the service you just provisioned. To do so, click the Create a New App link. This will walk you through 1) installing the SDK you need for your mobile project, 2) creating a database table, and 3) downloading and running your app. The app and database will initially be for a ToDo database, but you can amend the database and app to your liking once you're done with the wizard.

Generating a Mobile Client App for Android

In the next section, we'll review how to build and run the app that you generated, and how to view what's happening on the back end.

Building and Running a Generated Mobile App

Let's walk through building and running the To Do app the portal auto-generates for you. The mobile client download is a zip file, which you should save locally, Unblock, and extract to a local folder. Next, you can open the project and run it--it's that simple to get started.

Running the App

When you run the app, you'll see a simple ToDo app--one that is live, and uses your back-end in the cloud for data storage. Run the app and kick the tires by adding some tasks. Enter a task by entering its name and clicking Add. Delete an item by touching its checkbox.

To Do app running on Android phone

Viewing the Data

Now, back in the Windows Azure portal we can inspect the data that has been stored in the database in the cloud. Click on the Data link at the top of the Windows Azure portal for your mobile service, and you'll see what's in the ToDo table. It should match what you just entered using the mobile app.

Database Data in the Cloud

Dynamic Data

One of the great features of Windows Azure Mobile Services is its ability to dynamically adjust its data model. This allows you to change your mobile app's data structure in code, and the back-end database will automatically add new columns if it needs to--all by itself.

Dynamic data is a great feature, but you may not want it enabled once you're ready for production use. You can enable or disable the feature on the Configure page of the portal.

Server-side Logic

Windows Azure Mobile Services happens to use node.js, which means server-side logic is something you write in JavaScript.

Mobile Services Server Script Reference

You can have scripts associated with your database table(s), where operations like Insert, Update, Delete, or Read execute script code. You set these up on the Data page of the portal for your mobile service.

Database Action Scripts
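As a sketch, a table's insert script follows the Mobile Services pattern of a function receiving the item, the user, and a request object; the validation logic here is illustrative, not part of the generated sample:

```javascript
// Minimal sketch of a Mobile Services insert script for a todo table.
// The runtime supplies item, user, and request; the validation rule
// below is an illustrative assumption, not from the generated app.
function insert(item, user, request) {
    if (!item.text || item.text.length === 0) {
        // Reject the insert with a custom status code and message.
        request.respond(400, 'A todo item must have text.');
        return;
    }
    // Dynamic schema means this new column can appear automatically.
    item.createdAt = new Date();
    // Perform the insert as requested.
    request.execute();
}
```

Calling request.execute() carries out the operation; request.respond() short-circuits it with a custom status code.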

You can also set up scheduled scripts, which run on a schedule. On the Schedule page of the portal, click Create a Scheduled Job to define a scheduled job.

Creating a Scheduled Job

Once you've define a scheduled job, you can access it in the portal to enter script code and enable or disable the job.

Summary

Windows Azure Mobile Services provides a fast and easy mobile back-end in the cloud. It offers the essential capabilities you need in a back end and supports the common mobile platforms. If you're comfortable expressing your server-side logic in node.js JavaScript, this is a compelling MBaaS to consider.

Ever wanted to be able to access your Splunk data from Excel or Tableau? This app provides an OData (http://www.odata.org) interface to your saved searches, which you can easily connect to with Excel, Tableau and a myriad of other programs.

This application is currently under private access, and works with Splunk 4.2x and above. If you would like access, please contact us at devinfo@splunk.com, or use the contact links on the site.

The Open Data Protocol (OData) uses REST-based data services to access and manipulate resources defined according to an Entity Data Model (EDM).

The Committee Specification is published in three parts; Part 1: Protocol defines the core semantics and facilities of the protocol. Part 2: URL Conventions defines a set of rules for constructing URLs to identify the data and metadata exposed by an OData service as well as a set of reserved URL query string operators. Part 3: Common Schema Definition Language (CSDL) defines an XML representation of the entity data model exposed by an OData service.

Our congratulations to the OASIS OData Technical Committee on achieving this milestone! As always, we’re looking forward to continued collaboration with the community to develop OData into a formal standard through OASIS.

Uses the access token issued by the identity provider as a key to query the authentication provider via REST and retrieve the user name. For more information on this topic, see Getting user information on Azure Mobile Services by Carlos Figueira.

The mobile service redirects the user to the page of the selected authentication provider, which validates the credentials (username and password) provided by the user and issues a security token.

The mobile service returns its access token to the client application. The user sends a new todo item to the mobile service.

The insert script for the TodoItem table handles the incoming call. The script validates the inbound data then invokes the authentication provider via REST using the request module (getUserName function).

The script sends a request to the Access Control Service to acquire a security token necessary to be authenticated by the Service Bus Relay Service exposed by BizTalk Server via a WCF-BasicHttpRelay Receive Location. The mobile service uses the OAuth WRAP Protocol to acquire a security token from ACS (getAcsToken function). In particular, the server script sends a request to ACS using the https module. The request contains the following information:

wrap_name: the name of a service identity within the Access Control namespace of the Service Bus Relay Service (e.g. owner)

wrap_password: the password of the service identity specified by the wrap_name parameter.

Creates a SOAP envelope to invoke the Service Bus Relay Service. In particular, the Header contains a RelayAccessToken element which in turn contains the wrap_access_token returned by ACS in base64 format. The Body contains the payload for the call.

Uses the request module to send the SOAP envelope to the Service Bus Relay Service. The Service Bus Relay Service validates and removes the security token, then forwards the request to BizTalk Server, which processes the request and returns a response message containing the user address. See below for more details on this use case.

The insert script calls the insertItem function that inserts the new item in the TodoItem table.

The insert script retrieves from the Channel table the channel URI of the Windows Phone 8 and Windows Store apps to which to send a push notification (sendPushNotification function).

The sendPushNotification function sends push notifications.

The insert script calls the sendMessageToServiceBus function that uses the azure module to send a notification to BizTalk Server via a Windows Azure Service Bus queue.

Call BizTalk Server via Service Bus Relayed Messaging

The following diagram shows how BizTalk Server is configured to receive and process request messages sent by a mobile service via Service Bus Relay Service using a two-way request-reply message exchange pattern.

Message Flow

The client application sends a new item to the mobile service.

The insert script sends a request to the Access Control Service to acquire a security token necessary to be authenticated by the Service Bus Relay Service exposed by BizTalk Server via a WCF-BasicHttpRelay Receive Location. The mobile service uses the OAuth WRAP Protocol to acquire a security token from ACS (getAcsToken function). In particular, the server script sends a request to ACS using the https module. The request contains the following information:

wrap_name: the name of a service identity within the Access Control namespace of the Service Bus Relay Service (e.g. owner)

wrap_password: the password of the service identity specified by the wrap_name parameter.

The insert script calls the getUserAddress function that performs the following actions:

Extracts the wrap_access_token from the security token issued by ACS.

Creates a SOAP envelope to invoke the Service Bus Relay Service. In particular, the Header contains a RelayAccessToken element which in turn contains the wrap_access_token returned by ACS in base64 format. The Body contains the payload for the call.

Uses the request module to send the SOAP envelope to the Service Bus Relay Service. The Service Bus Relay Service validates and removes the security token, then forwards the request to BizTalk Server, which processes the request and returns a response message containing the user address. See below for more details on this use case.

The Service Bus Relay Service validates and removes the security token, then forwards the request to one of the WCF-BasicHttpRelay Receive Locations exposed by BizTalk Server.

The WCF-BasicHttpRelay Receive Location publishes the request message to the BizTalkServerMsgBoxDb.

The message triggers the execution of a new instance of the GetUserAddress orchestration.

The orchestration uses the user id contained in the request message to retrieve the user's address. For demo purposes, the orchestration generates a random address. The orchestration writes the response message to the BizTalkServerMsgBoxDb.

The WCF-BasicHttpRelay Receive Location retrieves the message from the BizTalkServerMsgBoxDb.

The receive location sends the response message back to the Service Bus Relay Service.

The Service Bus Relay Service forwards the message to the mobile service.

The mobile service saves the new item in the TodoItem table and sends the enriched item back to the client application.
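A much-simplified sketch of the envelope construction described in the flow above; the real RelayAccessToken header lives in the Service Bus connect namespace and wraps the token in a BinarySecurityToken element, both of which are omitted here for brevity:

```javascript
// Simplified sketch: SOAP envelope carrying the ACS-issued token for the
// Service Bus Relay. Element names are abbreviated versus the real schema.
function buildRelayEnvelope(wrapAccessToken, bodyXml) {
    // The relay expects the wrap_access_token base64-encoded in the header.
    var encodedToken = Buffer.from(wrapAccessToken).toString('base64');
    return '<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/">' +
           '<s:Header><RelayAccessToken>' + encodedToken + '</RelayAccessToken></s:Header>' +
           '<s:Body>' + bodyXml + '</s:Body>' +
           '</s:Envelope>';
}
```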

Call BizTalk Server via Service Bus Brokered Messaging

The following diagram shows how BizTalk Server is configured to receive and process request messages sent by a mobile service via a Service Bus queue using a one-way message exchange pattern.

Message Flow

The client application sends a new item to the mobile service.

The insert script calls the sendMessageToServiceBus function that performs the following actions:

In the Create a mobile service page, type a subdomain name for the new mobile service in the URL textbox and wait for name verification. Once name verification completes, click the right arrow button to go to the next page.

This displays the Specify database settings page.

Note: As part of this tutorial, you create a new SQL Database instance and server. You can reuse this new database and administer it as you would any other SQL Database instance. If you already have a database in the same region as the new mobile service, you can instead choose Use existing Database and then select that database. The use of a database in a different region is not recommended because of additional bandwidth costs and higher latencies.

In Name, type the name of the new database; in Login name, type the administrator login name for the new SQL Database server; then type and confirm the password, and click the check button to complete the process.

Configure the application to authenticate users

This solution requires the user to be authenticated by an identity provider. Follow the instructions contained in the links below to configure the Mobile Service to authenticate users against one or more identity providers and follow the steps to register your app with that provider:

Navigate to the My Applications page in the Live Connect Developer Center, and log on with your Microsoft account, if required.

Click Create application, then type an Application name and click I accept.

This registers the application with Live Connect.

Click the Application settings page, then API Settings, and make a note of the values of the Client ID and Client secret.

Security Note

The client secret is an important security credential. Do not share the client secret with anyone or distribute it with your app.

In Redirect domain, enter the URL of your mobile service, and then click Save.

Back in the Management Portal, click the Identity tab, enter the Client ID and Client secret obtained in the previous step in the Microsoft account settings, and click Save.

Restrict permissions to authenticated users

In the Management Portal, click the Data tab, and then click the TodoItem table.

Click the Permissions tab, set all permissions to Only authenticated users, and then click Save. This will ensure that all operations against the TodoItem table require an authenticated user. This also simplifies the scripts in the next tutorial because they will not have to allow for the possibility of anonymous users.

Define server side scripts

Server scripts are registered in a mobile service and can be used to perform a wide range of operations on data being inserted and updated, including validation and data modification. In this sample, they are used to validate data, retrieve data from identity providers, send push notifications, and communicate with BizTalk Server via Windows Azure Service Bus. For more information on server scripts, see the following resources:

To use Windows Azure Service Bus, you need to use the Node.js azure package in server scripts. This package includes a set of convenience libraries that communicate with the storage REST services. For more information on the Node.js azure package, see the following resources:
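As a sketch of that pattern, a sendMessageToServiceBus-style helper might shape the brokered message and hand it to the azure package's Service Bus client; the queue path and the custom property are placeholders, and in a real script the service object would come from azure.createServiceBusService():

```javascript
// Sketch in the style of the sample's sendMessageToServiceBus function.
// The queue path and custom property below are illustrative placeholders.
function sendMessageToServiceBus(serviceBusService, queuePath, item, done) {
    var message = {
        // The brokered message body carries the serialized item.
        body: JSON.stringify(item),
        // Custom properties can carry routing metadata for BizTalk.
        customProperties: { operation: 'insert' }
    };
    serviceBusService.sendQueueMessage(queuePath, message, done);
}

// In a real server script the service would be created with the azure module:
//   var azure = require('azure');
//   var serviceBusService = azure.createServiceBusService();
```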

In the Management Portal, click the Data tab, and then click the TodoItem table.

Click the scripts tab and select the insert, update, read or del script from the drop-down list.

Modify the code of the selected script to add your business logic to the function. …

Paolo continues with source code for the server scripts.

Configure Git Source Control

The source control support provides a Git repository as part of your mobile service, and it includes all of your existing Mobile Service scripts and permissions. You can clone that Git repository on your local machine, make changes to any of your scripts, and then easily deploy the mobile service to production using Git. This enables a really great developer workflow that works on any developer machine (Windows, Mac and Linux). To configure the Git source control, proceed as follows:

Navigate to the dashboard for your mobile service and select the Set up source control link:

If this is your first time enabling Git within Windows Azure, you will be prompted to enter the credentials you want to use to access the repository:

Once you configure this, you can switch to the CONFIGURE tab of your Mobile Service and you will see a Git URL you can use to access your repository:

You can use the Git URL to clone the repository locally using Git from the command line:

Schemas: contains XML schemas for the messages exchanged by BizTalk Server with the Mobile Service via Windows Azure Service Bus.

Maps: contains the maps used by the BizTalk Server application.

HTML5: contains the HTML5/JavaScript client for the mobile service.

WindowsPhone8: contains the Windows Phone 8 app that can be used to test the mobile service.

WindowsStoreApp: contains the Windows Store app that can be used to test the mobile service.

NOTE: the WindowsStoreApp project uses the Windows Azure Mobile Services NuGet package. To reduce the size of the zip file, I deleted some of the assemblies from the packages folder. To repair the solution, make sure to right-click the solution and select Enable NuGet Package Restore as shown in the picture below. For more information on this topic, see the following post.

BizTalk Server Application

Proceed as follows to create the TodoItem BizTalk Server application:

Open the solution in Visual Studio 2012 and deploy the Schemas, Maps and Orchestration to create the TodoItem application.

Open the Binding.xml file in the Setup folder and replace the [YOUR-SERVICE-BUS-NAMESPACE] placeholder with the name of your Windows Azure Service Bus namespace.

Open the BizTalk Server Administration Console and import the binding file to create Receive Ports, Receive Locations and Send Ports.

Open the WCF-BasicHttpRelay Receive Location, click the Configure button:

Click the Edit button in the Access Control Service section under the Security tab.

Define the ACS STS URI, Issuer Name and Issuer Secret:

Open the SB-Messaging Receive Location, click the Configure button:

Define the ACS STS URI, Issuer Name and Issuer Secret under the Authentication tab:

Open the FILE Send Port and click the Configure button:

Enter the path of the Destination folder where notification messages sent by the Mobile Service via Service Bus are stored:

HTML5/JavaScript Client

The following figure shows the HTML5/JavaScript application that you can use to test the mobile service.

Service Bus and many other cloud services are multitenant systems that are shared across a range of customers. The IP addresses we assign come from a pool, and that pool shifts as we optimize traffic from and to datacenters. We may also move clusters between datacenters within one region for disaster recovery, should that be necessary. Another reason we cannot give every feature slice its own IP address is that the world has none left. We're out of IPv4 address space, which means we must pool workloads.

The last points are important ones and also show how antiquated the IP-address lockdown model is relative to current practices for datacenter operations. Because of the IPv4 shortage, pools get acquired and traded and change. Because of automated and semi-automated disaster recovery mechanisms, we can provide service continuity even if clusters or datacenter segments or even datacenters fail, but a client system that's locked to a single IP address will not be able to benefit from that. As the cloud system packs up and moves to a different place, the client stands in the dark due to its firewall rules. The same applies to rolling updates, which we perform using DNS switches.

The state of the art of no-downtime datacenter operations is that workloads are agile and will move as required. The place where you have stability is DNS.

Outbound Internet IP lockdowns add nothing in terms of security because workloads increasingly move into multitenant systems or systems that are dynamically managed, as I've illustrated above. Since changes happen without warning, a rule may be correct right now and point to a foreign system the next moment. The firewall will not be able to tell. The only proper way to ensure security is by making the remote system prove that it is the system you want to talk to, and that happens at the transport security layer. If the system can present the expected certificate during the handshake, the traffic is legitimate. The IP address per se proves nothing. Also, IP addresses can be spoofed and malicious routers can redirect the traffic. The firewall won't be able to tell.

With most cloud-based services, traffic runs via TLS. You can verify the thumbprint of the certificate against the cert you can either set yourself, obtain from the vendor out-of-band, or acquire by hitting a documented endpoint (in Windows Azure Service Bus, it's the root of each namespace). With our messaging system in Service Bus, you are furthermore encouraged to use any kind of cryptographic mechanism to protect payloads (message bodies). We do not evaluate those for any purpose. We evaluate headers and message properties for routing. Neither of those are logged beyond having them in the system for temporary storage in the broker.
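A minimal sketch of such a thumbprint check: the expected value is obtained out-of-band, and in Node.js the live fingerprint can be read from a TLS socket's getPeerCertificate() result. The helper below only handles the string comparison; the handshake itself is left out:

```javascript
// Compare an expected certificate thumbprint against the one presented
// during the TLS handshake (e.g. socket.getPeerCertificate().fingerprint).
// Normalization handles the common colon-separated vs. bare-hex formats.
function thumbprintsMatch(expected, actual) {
    var normalize = function (t) {
        return String(t).replace(/[:\s]/g, '').toUpperCase();
    };
    return normalize(expected) === normalize(actual);
}
```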

The server having access to Service Bus should have outbound Internet access based on the server's identity or the running process's identity. This can be achieved using IPSec between the edge and the internal system. Constraining it to the Microsoft DC ranges is possible, but those ranges shift and expand without warning.

The bottom line here is that there is no way to make outbound IP address constraints work with cloud systems or high availability systems in general.

Identity and access management is an anchor for security and top of mind for enterprise IT departments. It is key to extending anytime, anywhere access to employees, partners, and customers. Today, we are pleased to announce the General Availability of Windows Azure Multi-Factor Authentication - delivering increased access security and convenience for IT and end users.

Multi-Factor Authentication quickly enables an additional layer of security for users signing in from around the globe. In addition to a username and password, users may authenticate via:

An application on their mobile device.

Automated voice call.

Text message with a passcode.

It’s easy and meets user demand for a simple sign-in experience.

Windows Azure Multi-Factor Authentication can be configured in minutes for the many applications that require additional security, including:

On-Premises VPNs, Web Applications, and More -- Run the Multi-Factor Authentication Server on your existing hardware or in a Windows Azure Virtual Machine. Synchronize with your Windows Server Active Directory for automated user set up.

Cloud Applications like Windows Azure, Office 365, and Dynamics CRM -- Enable Multi-Factor Authentication for Windows Azure AD identities with the flip of a switch, and users will be prompted to set up multi-factor authentication the next time they sign in.

Active Directory: Richer Directory Management and General Availability of Multi-Factor Authentication Support

Spending Limit: Reset your Spending Limit; Virtual Machines are no longer deleted if it is hit

Storage: New Storage Client Library 2.1 Released

Web Sites: IP and Domain Restriction Now Supported

All of these improvements are now available to use immediately. Below are more details about them.

Compute: New 2-CPU Core 14 GB RAM instance

This week we released a new memory-intensive instance for Windows Azure. This new instance, called A5, has two CPU cores and 14 gigabytes (GB) of RAM and can be used with Virtual Machines (both Windows and Linux) and Cloud Services:

Virtual Machines: Support for Oracle Software Images

Earlier this summer we announced a strategic partnership between Microsoft and Oracle, and that we would enable support for running Oracle software in Windows Azure Virtual Machines.

Starting today, you can now deploy pre-configured virtual machine images running various combinations of Oracle Database, Oracle WebLogic Server, and Java Platform SE on Windows, with licenses for the Oracle software included. These ready-to-deploy Oracle software images enable rapid provisioning of cost-effective cloud environments for development, testing, deployment, and easy scaling of enterprise applications. The images can now be easily selected in the standard “Create Virtual Machine” wizard within the Windows Azure Management Portal:

During preview, these images are offered for no additional charge on top of the standard Windows Server VM rate. After the preview period ends, these Oracle images will be billed based on the total number of minutes the VMs run in a month. With Oracle license mobility, existing Oracle customers that are already licensed on Oracle software also have the flexibility to deploy them on Windows Azure.

Virtual Machines: Management Operations on Stopped VMs

Starting with this week’s release, it is now possible to perform management operations on stopped/de-allocated Virtual Machines. Previously a VM had to be running in order to do operations like change the VM size, attach and detach disks, configure endpoints and load balancer/availability settings. Now it is possible to do all of these on stopped VMs without having to boot them:

Active Directory: Create and Manage Multiple Active Directories

Starting with this week’s release it is now possible to create and manage multiple Windows Azure Active Directories in a single Windows Azure subscription (previously only one directory was supported and once created you couldn’t delete it). This is useful both for development/test scenarios as well as for cases where you want to have separate directory tenants or synchronize with different on-premises domains or forests.

Creating a New Active Directory

Creating a new Active Directory is now really easy. Simply select New->Application Services->Active Directory->Directory within the management portal:

When prompted, configure the directory name, default domain name (you can later change this to any custom domain you want – e.g. yourcompanyname.com), and the country or region to use:

In a few seconds you’ll have a new Active Directory hosted within Windows Azure that is ready to use for free:

You can run and manage your Windows Azure Active Directories entirely in the cloud, or alternatively sync them with an on-premises Active Directory deployment - which allows you to automatically synchronize all of your on-premises users into your Active Directory in the cloud. This latter option is very powerful, and ensures that any time you add or remove a user in your on-premises directory it is automatically reflected in the cloud as well.

You can use your Windows Azure Active Directory to manage identity access to custom applications you run and host in the cloud (and there is new support within ASP.NET in the VS 2013 release that makes building these SSO apps on Windows Azure really easy). You can also use Windows Azure Active Directory to securely manage the identity access of cloud based applications like Office 365, SalesForce.com, and other popular SaaS solutions.

Additional New Features

In addition to enabling the ability to create multiple directories in a single Windows Azure subscription, this week’s release also includes several additional usability enhancements to the Windows Azure Active Directory management experience:

With this week’s release, we have added the ability to change the name of a directory after it’s created (previously it was fixed at creation time).

As an administrator of a directory, you can now add users from another directory of which you’re a member. This is useful, for example, in the scenario where there are other members of your production directory who will need to collaborate on an application that is under development or testing in a non-production environment. A user can be a member of up to 20 directories.

If you use a Microsoft account to access Windows Azure, and you use a different organizational account to manage another directory, you may find it convenient to manage that second directory with your Microsoft account. With this release, we’ve made it easier to configure a Microsoft account to manage an existing Active Directory. Now you can configure this even if the Microsoft account already manages a directory, and even if the administrator account for the other directory doesn’t have a subscription to Windows Azure. This is a common scenario when the administrator account for the other directory was created during signup for Office 365 or another Microsoft service.

In this release, we’ve also added support to enable developers to delete single tenant applications that they’ve added to their Windows Azure AD. To delete an application, open the directory in which the application was added, click on the Applications tab, and click Delete on the command bar. An application can be deleted only when External Access is set to ‘Off’ on the configure tab.

As always, if there are aspects of these new Azure AD experiences that you think are great, or things that drive you crazy, let us know by posting in our forum on TechNet.

Active Directory: General Availability of Multi-Factor Authentication Service

With this week’s release we are excited to ship the general availability release of a great new service: the Windows Azure Multi-Factor Authentication (MFA) Service. Windows Azure Multi-Factor Authentication is a managed service that makes it easy to securely manage user access to Windows Azure, Office 365, Intune, Dynamics CRM and any third party cloud service that supports Windows Azure Active Directory. You can also use it to securely control access to your own custom applications that you develop and host within the cloud.

Windows Azure Multi-Factor Authentication can also be used with on-premises scenarios. You can optionally download our new Multi-Factor Authentication Server for Windows Server Active Directory and use it to protect on-premises applications as well.

Getting Started

To enable multi-factor authentication, sign-in to the Windows Azure Management Portal and select New->Application Services->Active Directory->Multi-Factor Auth Provider and choose the “Quick Create” option. When you create the service you can point it at your Windows Azure Active Directory and choose from one of two billing models (per user pricing, or per authentication pricing):

Once created, the Windows Azure Multi-Factor Authentication service will show up within the “Multi-Factor Auth Providers” section of the Active Directory extension:

You can then manage which users in your directory have multi-factor authentication enabled by drilling into the “Users” tab of your Active Directory and then clicking the “Manage Multi-Factor Auth” button:

Once multi-factor authentication is enabled for a user within your directory, they will be able to use a variety of secondary authentication techniques, including verification via a mobile app, phone call, or text message, to provide additional verification when they log in to an app or service. The management and tracking of this is handled automatically for you by the Windows Azure Multi-Factor Authentication Service.

Learn More

You can learn more about today’s release from this 6 minute video on Windows Azure Multi-Factor Authentication.

Start making your applications and systems more secure with multi-factor authentication today! And give us your feedback and feature requests via the MFA forum.

Billing: Reset your Spending Limit on MSDN subscriptions

When you sign up for Windows Azure as an MSDN customer you automatically get an MSDN subscription created for you that enables deeply discounted prices and free “MSDN credits” (up to $150 each month) that you can spend on any resources within Windows Azure. I blogged some details about this last week.

By default MSDN subscriptions in Windows Azure are created with what is called a “Spending Limit” which ensures that if you ever use up all of the MSDN credits you still don’t get billed – as the subscription will automatically suspend when all of the free credits are gone (ensuring your bill is never more than $0).

You can optionally remove the spending limit if you want to use more than the free credits and pay any overage on top of them. Prior to this week, though, once the spending limit was removed there was no way to re-instate it for the next billing cycle.

Starting with this week’s release you can now:

Remove the spending limit only for the current billing cycle (ideal if you know that it is a one-time spike)

Remove the spending limit indefinitely if you expect to continue to have higher usage in the future

Reset/Turn back on the spending limit from the next billing cycle forward in case you’ve already turned it off

To enable or reset your spending limit, click the “Subscription” button at the top of the Windows Azure Management Portal and then click the “Manage your subscriptions” link within it:

This will take you to the Windows Azure subscription management page (which lists all of the Windows Azure subscriptions you have active). Click your MSDN subscription to see details on your account – including usage data on how much of each service you’ve used on it:

Above you can see usage data on my personal MSDN subscription. I’ve done a lot of talks recently and have used up my free $150 credits for the month and have $23.64 in overages. I was able to go above $0 on the subscription because I’ve turned off my spending limit (this is indicated in the text I’ve highlighted in red above).

If I want to reapply the spending limit for the next billing cycle (which starts on October 3rd) I can now do so by clicking the “Click here to change the spending limit option” link. This will bring up a dialog that makes it really easy for me to re-activate the spending limit starting the next billing cycle:

We hope this new flexibility to turn the spending limit on and off enables you to use your MSDN benefits even more, and provides you with confidence that you won’t inadvertently do something that causes you to have to pay for something you weren’t expecting to.

Billing: Subscription suspension no longer deletes Virtual Machines

In addition to supporting the re-enablement of the spending limit, we also made an improvement this week so that if your MSDN (or BizSpark or Free trial) subscription does trigger the spending limit we no longer delete the Virtual Machines you have running.

Previously, Virtual Machines deployed in suspended subscriptions would be deleted when the spending limit was passed (the data drives would be preserved – but the VM instances themselves would be deleted). Now when a subscription is disabled, VMs deployed inside it will simply move into the stopped (de-allocated) state we recently introduced (which allows a VM to stop without incurring any billing).

This allows the Virtual Machines to be quickly restarted with all the previously attached disks and endpoints when a fresh monetary credit is applied or the subscription is converted into a paid subscription. As a result, customers don’t have to worry about losing their Virtual Machines when spending limits are reached, and they can quickly return back to business by re-starting their VMs immediately.

Storage: New .NET Storage Client Library 2.1 Release

Earlier this month we released a major update of our Windows Azure Storage Client Library for .NET. The new 2.1 release includes a ton of awesome new features and capabilities:

Improved Performance

Async Task<T> support

IQueryable<T> Support for Tables

Buffer Pooling Support

.NET Tracing Integration

Blob Stream Improvements

And a lot more…

Read this detailed blog post about the Storage Client Library 2.1 Release from the Windows Azure Storage Team to learn more. You can install the Storage Client Library 2.1 release and start using it immediately using NuGet.

Web Sites: IP and Domain Restrictions Support

Developers can use IP and Domain Restrictions to control the set of IP addresses, and address ranges, that are either allowed or denied access to their websites. With Windows Azure Web Sites developers can enable/disable the feature, as well as customize its behavior, using web.config files located in their website.

The example configuration snippet below shows an ipSecurity configuration that only allows access to addresses originating from the range specified by the combination of the ipAddress and subnetMask attributes. Setting allowUnlisted to false means that only those individual addresses, or address ranges, explicitly specified by a developer will be allowed to make HTTP requests to the website. Setting the allowed attribute to true in the child add element indicates that the address and subnet together define an address range that is allowed to access the website.

If a request is made to a website from an address outside of the allowed IP address range, then an HTTP 404 not found error is returned as defined in the denyAction attribute.
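As an illustration, a web.config fragment along the lines described above might look like the following (the address range here is a placeholder for illustration, not taken from the original post):

```xml
<configuration>
  <system.webServer>
    <security>
      <!-- allowUnlisted="false": deny every request not explicitly allowed;
           denyAction="NotFound" returns HTTP 404 to denied clients -->
      <ipSecurity allowUnlisted="false" denyAction="NotFound">
        <!-- allowed="true": the ipAddress + subnetMask pair defines
             an address range permitted to access the site -->
        <add allowed="true" ipAddress="192.168.100.0" subnetMask="255.255.255.0" />
      </ipSecurity>
    </security>
  </system.webServer>
</configuration>
```

Any client outside 192.168.100.0/24 would then receive the 404 response described above.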

One final note: just like the companion Dynamic IP Restrictions (DIPR) feature, Windows Azure Web Sites ensures that the client IP addresses “seen” by the IP and Domain Restrictions module are the actual IP addresses of Internet clients making HTTP requests.

Summary

Today’s release includes a bunch of great features that enable you to build even better cloud solutions. If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using all of the above features today. Then visit the Windows Azure Developer Center to learn more about how to build apps with it.

Building on the recent announcement of the strategic partnership between Microsoft and Oracle, today we are making a number of popular Oracle software configurations available through the Windows Azure image gallery. Effective immediately, customers can deploy pre-configured virtual machine images running various combinations of Oracle Database, Oracle WebLogic Server, and Java Platform SE on Windows Server, with licenses for the Oracle software included. During the preview, these images are offered for no additional charge beyond the regular compute costs. After the preview period ends, Oracle images will be billed based on the total number of minutes VMs run in a month; details on VM pricing will be announced at a later date.

These ready-to-deploy images enable rapid provisioning of cost-effective cloud environments for development and testing as well as easy scaling of enterprise Oracle applications. With Oracle license mobility, existing customers who are licensed on Oracle software can now deploy on Windows Azure and take advantage of powerful management features, cross-platform tools and automation capabilities.

Additionally, Oracle now offers Oracle Linux, Oracle Linux with Oracle Database, and Oracle Linux with WebLogic Server in the Windows Azure image gallery for customers who are licensed to use these products.

An unexpected restart of an Azure VM is an issue that commonly results in a customer opening a support incident to determine the cause of the restart. Hopefully the explanation below provides the details needed to understand why an Azure VM could have been restarted.

Windows Azure updates the host environment approximately once every 2-3 months to keep the environment secure for all applications and virtual machines running on the platform. This update process may result in your VM restarting, causing downtime to your applications/services hosted by the Virtual Machines feature. There is no option or configuration to avoid these host updates. In addition to platform updates, Windows Azure service healing occurs automatically when a problem with a host server is detected and the VMs running on that server are moved to a different host. When this occurs, you lose connectivity to the VM during the service healing process. After the service healing process is completed, when you connect to the VM, you will likely find an event log entry indicating a VM restart (either graceful or unexpected). Because of this, it is important to configure your VMs to handle these situations in order to avoid downtime for your applications/services.

To ensure high availability of your applications/services hosted in Windows Azure Virtual Machines, we recommend using multiple VMs with availability sets. VMs in the same availability set are placed in different fault domains and update domains so that planned updates, or unexpected failures, will not impact all the VMs in that availability set. For example, if you have two VMs and configure them to be part of an availability set, when a host is being updated, only one VM is brought down at a time. This provides high availability, since you have one VM available to serve user requests during the host update process. Mark Russinovich has posted a great blog post which explains Windows Azure host updates in detail. Managing high availability is detailed here.
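To make the update-domain idea concrete, here is a tiny illustrative sketch (the VM names and domain assignments are invented; this only models the one-domain-at-a-time scheduling rule, not the platform itself):

```python
# Toy model: VMs in one availability set are spread across update domains,
# and a planned host update walks the update domains one at a time.
vms = {"web-1": 0, "web-2": 1}  # VM name -> update domain (assigned by the platform)

for ud in sorted(set(vms.values())):
    down = [vm for vm, d in vms.items() if d == ud]
    up = [vm for vm, d in vms.items() if d != ud]
    # Because the two VMs are in different update domains,
    # at least one of them keeps serving requests during each step.
    assert up, "an availability set never has all its VMs down for a planned update"
    print(f"Updating domain {ud}: {down} restarting, {up} still serving")
```

With a single VM (no availability set), the loop has nothing left in `up`, which is exactly the single-instance downtime scenario described above.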

While availability sets help provide high availability for your VMs, we recognize that proactive notification of planned maintenance is a much-requested feature, particularly to help prepare in a situation where you have a workload that is running on a single VM and is not configured for high availability. While this type of proactive notification of planned maintenance is not currently provided, we encourage you to provide comments on this topic so we can take the feedback to the product teams.

In upcoming weeks Windows Azure Web Sites will update the default PHP version from PHP 5.3 to PHP 5.4. PHP 5.3 will continue to be available as a non-default option. Customers who have not explicitly selected a PHP version for their site and wish the site to continue to run using PHP 5.3 can select this version at any time from the Windows Azure Management Portal, Windows Azure Cross Platform Command Line Tools, or Windows Azure PowerShell Cmdlets. The Windows Azure Web Sites team will also start onboarding PHP 5.5 as an option in the near future.

Explicitly Selecting a PHP version in Windows Azure Web Sites

If you wish to continue to run PHP 5.3 in your Windows Azure Web Site, follow one of the options below to explicitly set the PHP runtime of your site.

Yesterday I experienced an issue when trying to integrate a Windows Azure Website with GitHub. Specifically, my code would deploy from the master branch, but if I chose another branch, called ‘prototype’, I received a fetch error in the Windows Azure Management Portal:

This error has been reported to the team and I’m sure it will be rectified so nobody else runs into it, but at Cory Fowler’s (@syntaxC4) prompting I wanted to document the steps I took to debug this, as they may be useful to anyone struggling to debug a Windows Azure Website integration.

Scenario

In my scenario I had a project with a series of subfolders in my GitHub repo. The project has progressed from a prototype to a full build, but we were required to persist the prototype for design reference. We could have created ‘prototype’ without changing the solution structure, but as in all real-world scenarios, the requirement to leave the prototype available emerged only after we had removed it and changed the URL structure. We were only happy to continue working on new code if we could label the prototype or somehow leave it in a static state while the codebase moved on. This requirement is easily tackled by Windows Azure Websites and its GitHub integration; we changed the solution structure to have subfolders, created a new branch ‘prototype’, and continued our main work in ‘master’. Our ‘master’ branch has the additional benefit of having the prototype available for reference and quick application of code changes if we want to pivot our approach.

We then created two Windows Azure Websites (for free, wow!). In order to allow Windows Azure Websites to deploy the correct code for each, we created a .deployment file in each branch. In this .deployment file we inform Windows Azure Websites (through its Kudu deployment mechanism) that it should perform a custom deployment.

For the ‘master’ branch we want to deploy the /client folder, which involves a simple .deployment file containing

[config]
project = client

For the ‘prototype’ branch we want to deploy the /prototype folder, which involves a simple .deployment file containing

[config]
project = prototype

As you can see, these two branches then can evolve independently (although the prototype should be static).

Problems Start

The problems began when I tried to create a Windows Azure Website and integrate it with GitHub for the ‘prototype’ branch. No matter what I did, I couldn’t get the GitHub fetch to work:

At this point I shot an email to some folks, and David Ebbo (@davidebbo) prompted me to stop being lazy and look for some deployment logs. PowerShell is your friend when it comes to debugging Windows Azure Websites, so I started there.

The first thing to do is to get the logs using ‘Save-AzureWebsiteLog’:

I missed this at first amongst all the noise in this file. What I did instead was give up on Notepad and XML and run a different PowerShell command: Get-AzureWebsiteLog -Name myWebsite -Tail, which connects PowerShell to a real-time stream of the Website log. Really, really neat.

Fantastic! There’s our error, and with less noise than the XML file that I was earlier confused by.

So that’s my problem:

ambiguous argument 'prototype': both revision and filename
Use '--' to separate filenames from revisions

This means my branch in GitHub is called ‘prototype’ and I have a file (a folder, technically) called ‘prototype’ in the repository, and this is ambiguous to the deployment system.
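The underlying Git behaviour is easy to reproduce locally; this sketch (repository layout and names invented for illustration) creates the same branch/folder name clash:

```shell
# Reproduce the clash: a branch and a tracked folder both named 'prototype'.
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "init"
git branch prototype                    # branch named 'prototype'
mkdir prototype
echo "<html></html>" > prototype/index.html
git add prototype
git -c user.name=demo -c user.email=demo@example.com commit -q -m "add prototype folder"

# Now 'prototype' is both a revision and a path, so Git refuses to guess:
git log prototype 2> err || true
cat err

# '--' resolves the ambiguity: after it means a path, before it a revision:
git log --oneline prototype --          # the branch's history
git log --oneline -- prototype          # the folder's history
```

Running this prints Git's complaint that 'prototype' is "both revision and filename", which is exactly the error surfaced in the deployment log.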

Now I can’t use '--' to separate filenames from revisions, as I don’t have that level of control over the deployment process. But what I do have control over is the branch name and the folder name. I chose to rename the prototype folder:

It’s been six months since I started Cynapta, so I thought I’d share what I have been up to for the last six months, along with some more details about Cynapta and what we have been doing there.

Cynapta – What’s that?

Well, Cynapta is the name of the new company I founded after Cerebrata. A lot of folks asked me how I came up with such a quirky name. Well, the answer is rather simple: all the good names have already been taken, so I am left with names like this. I wanted a short name that is somewhat catchy, and I could not come up with a better one than this. Plus, the domain name was also available.

What we’ll be doing @ Cynapta?

That’s a million-dollar question. I’m still not ready to spill all the beans, but there are a few things I can share. At Cynapta, we are building Software as a Service (SaaS) applications. They will be hosted in Windows Azure. We have some really interesting products in the pipeline (yes, we have over two years of product pipeline), and all of them will be for cloud platforms, primarily Windows Azure, though we’ll dabble in other cloud platforms as well. Windows Azure has come a long way since its inception in 2008 and is growing stronger day by day; however, there are lots of “partner opportunities” in the platform. To begin with, we will explore these “partner opportunities” and come up with solutions for them.

At Cerebrata, we focused mainly on building desktop tools, but at Cynapta the focus will be on building web-based applications. No particular reason other than the fact that, for the kind of applications we are building, it made sense for us to build them as web-based applications instead of desktop applications.

We are more or less done with a major part of our first application and hopefully we will do a beta very soon, in a month or so. I will come back here to seek your participation in beta testing. Here’s a screenshot of what we are building:

What else @ Cynapta?

Well, apart from making the applications ready for beta, we need to get a blog up and running. In the past few months, we as a team have learnt an immense amount about Windows Azure and want to share those learnings with you. We will blog about the architecture of our applications and try to walk you through the process of making those architecture decisions and building the products. Being a commercial entity, it may not always be possible for us to share source code, but wherever we can, we will.

Team @ Cynapta

We are now a strong team of 8 people. Unlike Cerebrata, where I started with all fresh graduates, this time we have a mix of experienced developers and freshers. Freshers still amaze me with their energy and thoughtfulness, while the experienced developers on my team have very open minds, so that’s a very good thing for me. The only guy on the team with rigid ideas is me, so I need to work on that.

Here’s a picture of the team @ Cynapta.

What Else?

Oh, we have a Twitter account for ourselves. The handle for that account is @cynapta. I would appreciate it if you could start following it. Currently it does not do anything, but I promise it will be active very soon, as it will become the official medium for all news about Cynapta.

Personal Stuff

On the personal front, things have started to become crazy (in a good way). In the past six months, I did some consulting work; worked with a bunch of super smart students from my alma mater to help them create a WordPress backup to Windows Azure plugin; and hung out at Stack Overflow trying to grab as many +10s and +15s as I could. I was hanging out there so much that a fellow Windows Azure MVP wondered if I was planning on building a business out of it. Jokes apart, it’s fun to hang out there. You get to learn so many things, and it feels great when you end up helping somebody out. Another reason for hanging out at Stack Overflow is that it gives you great ideas about what kind of products one should build. Folks come there with problems for which no solution exists today. For us, that place is like a gold mine of ideas.

Apart from that, I learnt a lot of new stuff: cloud architecture, ASP.NET MVC 4, jQuery, Knockout.js, and more. Fun stuff! I wrote a lot of code (I mean really a lot of code) as well.

Closing Thoughts

All in all, the last six months have been pretty exciting. There are still a lot of uncertainties, but what’s life without some unpredictability? I’m really looking forward to the challenges that lie ahead of us.

In this episode Nick Harris and Chris Risner are joined by Haishi Bai, Sr. Technical Evangelist on Windows Azure. During this episode Haishi demonstrates the Windows Azure Cache Service preview.

Today I’m happy to report the news that our Microsoft Open Technologies, Inc., (MS Open Tech) partner Azul Systems has released the technology preview for Zulu, an OpenJDK build for Windows Servers on the Windows Azure platform. Azul’s new OpenJDK-based offering has passed all Java certification tests and is free and open source.

Azul’s new build of the community-driven open source Java implementation, known as OpenJDK, is available immediately for free download and use under the terms of the GPLv2 open source license.

Built and distributed by Azul Systems, Zulu is a JDK (Java Development Kit), and a compliant implementation of the Java Standard Edition (SE) 7 specification. Zulu has been verified by passing all tests in the Java SE 7 version of the OpenJDK Community TCK (Technology Compatibility Kit).

Azul has a lot of information about this exciting news on their website, including this press release that we would like to share.

With the support of Azul Systems and MS Open Tech, customers will be assured of a high-quality foundation for their Java implementations while leveraging the latest advancements from the community in OpenJDK. The OpenJDK project is supported by a vibrant open source community, and Azul Systems is committed to updating and maintaining its OpenJDK-based offering for Windows Azure, supporting current and future versions of both Java and Windows Server. Deploying Java applications on Windows Azure will be further simplified through the existing open source MS Open Tech Windows Azure Plugin for Eclipse with Java.

Integrated with MS Open Tech’s Windows Azure Plugin for Eclipse with Java tooling

Patches and bug fixes contributed back to the OpenJDK community by Azul

ISV-friendly binary licensing for easy embedding with 3rd party applications

Availability for download and immediate use

Executives of both companies highlighted the benefits of this new effort:

Jean Paoli, president of MS Open Tech said, “Java developers have many development and deployment choices for their applications, and today MS Open Tech and Azul made it easier for Java developers to build and run modern applications in Microsoft’s open cloud platform.”

Scott Sellers, president and CEO of Azul Systems said, “Azul is delighted to announce that Zulu is fully tested, free, open source, and ready for the Java community to download and preview – today. We are looking forward to serving the global Java community with this important new offering for the Azure cloud.”

Customers and partners of Microsoft and Azul interested in participating in future Zulu tech previews are also invited to contact Azul at AzureInfo@azulsystems.com for additional information. And of course, please send questions and feedback to our MS Open Tech team directly through our blog.

Microsoft is looking to invest in the planning and construction of a new data centre at an industry site in Noord-Holland, Netherlands. The price tag on the project? 2 billion euros ($2.7 billion), which would see a new "green" data centre built on a site that will take up 40 acres worth of ground near the A7 highway.

Numerous countries were reportedly in talks with Microsoft to secure a contract, but the decision was awarded to the Netherlands. How would the data centre be power efficient and supplied by green energy? Local greenhouses use ground-coupled heat exchangers and produce more electricity than required, which will open up new doors for Microsoft's new power-hungry project. Energy supplier Tennet is said to be on the plan as backup.

Heat would also be transferred from the datacentre to the greenhouses, making it a rather lucrative deal for both parties. So what will this mean for consumers? We could well be looking at infrastructure being deployed for Xbox One and other services provided by the company. Adding a data centre to Europe will provide yet more scale to Microsoft's operations in the region. Microsoft already has a local data centre in Amsterdam for its Azure web services. [Emphasis added.]

It's still early days for this new project, so it's worth noting this deal could fall through completely.

Source: Tweakers (Dutch); thanks, MartinSpire, for the heads up and translation!

Microsoft has released a free ebook titled "Microsoft System Center: Designing Orchestrator Runbooks". The book, written by David Ziembicki, Aaron Cushner, Andreas Rynes, and Mitch Tulloch, contains 182 pages. It provides a framework for runbook design and IT process automation which will help you get the most out of System Center 2012 Orchestrator.

We will provide detailed guidance for creating what we call “modular automation” where small, focused pieces of automation are progressively built into larger and more complex solutions. We detail the concept of an automation library, where over time enterprises build a progressively larger library of interoperable runbooks and components. Finally, we will cover advanced scenarios and design patterns for topics like error handling and logging, state management, and parallelism. But before we dive into the details, we’ll begin by setting the stage with a quick overview of System Center 2012 Orchestrator and deployment scenarios.

Microsoft has released a paper titled "Implementing Hybrid Cloud at Microsoft". The six-page paper details the steps Microsoft IT has taken, adopting emerging technologies, upgrading existing ones, realigning organizational goals, and redefining processes based on lessons learned along the way, toward an organization-wide goal: "All of Microsoft runs in the cloud".

The paper covers the following topics:

Situation

Solution

Planning

Evaluating available technology

Realizing Hybrid Cloud

Determining Applicable Delivery Methods

Assessing the Current State of IT At Microsoft

Infrastructure Readiness

Challenges to Public Cloud Adoption

Calculating Financial Challenges and Opportunities

Determining organizational readiness

Implementing a Hybrid Cloud Strategy

Implementing Cloud Computing Management

Determining Application Placement and Accelerating Adoption

Benefits

Best Practices

Conclusion:

It is an exciting time to be in IT at Microsoft. The consumerization of IT is enabling agility and business benefits that were unimaginable even five years ago. As Microsoft IT continues to implement and mold its cloud computing strategy, they understand that many of the factors that will affect this strategy in the future are unknown. Furthermore, the technology surrounding cloud computing is constantly evolving and providing new ways to look at how IT is imagined. As such, Microsoft IT must remain flexible and adaptable as an IT organization, and leverage the growing capabilities of cloud computing at Microsoft.

As a follow-up to this series of posts introducing the Oracle Self Service Kit, here is a video with a quick overview of the kit, as well as a demonstration (deploying a new database on a new dedicated server).

You’re building apps that are hosted in the cloud. Your users can be anywhere across the globe. When you’re viewing a customer service call log and you see a customer called at 4:30 pm and was really upset their service was still down, how do you know how long ago they called? Was the time relative to your time zone, the customer’s, or that of the person who took the call? Was it 5 minutes ago, or 3 hours and 5 minutes ago? Or was it the time zone of the server where the app is hosted? Where is the app hosted? Is it in your office on the west coast, the London office, or is the app hosted in Azure? If it’s in Azure, in which data center is it located? Does it matter?

Some more questions, this time in the form of a riddle. Yes, time is a theme here.

What time never occurs?

What time happens twice a year?

What we’re dealing with here is a concept I call global time. At any given time, humans refer to time relevant to a given location. Although time is a constant, the way we refer to time is relative.

There is a concept called UTC, in which a given time refers to the same instant everywhere; it would be a constant if the whole world referred to 5:00 pm as the same exact point in time no matter where you are. However, as humans we don’t think that way. We like that 5:00 pm represents the end of a typical work day. We like to know we all generally eat at 12:00 pm, regardless of where we are on the planet. However, when we have to think about customer call logs being created from multiple locations, at any time, it’s almost impossible to read a string of 4:30 pm and know its true meaning in the global view of time.

What about time zones?

Time zones were created as chunks of consistent time as our planet rotates on its axis, so we can all wake around the same time to a beautiful sunrise, eat around noon, go out for drinks at sunset, or catch an 8:00 pm movie when it’s most likely dark.

This seems to make things relatively easy, right? Everyone can refer to 9-5 as common work hours.

What about Daylight Savings Time?

Time zones would have been fine if the earth’s rotation stayed consistently aligned with the sun. However, as we spin around our own axis, we also orbit the sun, tilted at enough of an angle that the sun doesn’t always rise at 6 am within a given time zone. In 1916, daylight savings time was introduced: each year we spring ahead an hour, then later fall back, to try to get the sun to rise and set at about the same time.

To make things a bit worse, the daylight savings rules changed in 2007, when it was felt the change would improve our energy consumption.

All of this was fine when we lived in little towns and didn’t instantly connect with others across the globe. Even in modern times, when apps were islands unto themselves on each of our “PCs”, possibly connected via floppynet, it wasn’t a problem. When the corporate office was located in Dallas, we all knew that time was relative to Dallas time. But in this new world, where there may no longer be a corporate office, or the data center may no longer be located in the basement of the corporate office, we need a better solution.

Problems, problems, but what to do?

In 2008, SQL Server 2008 and .NET Framework 3.5 SP1 introduced a new date/time data type called DateTimeOffset. This new type aims to balance the local time relevance humans seek with the global constant our applications need.
DateTimeOffset lets humans view a date and time as we think about it locally, but it also stores the offset relative to UTC. The combination identifies a single global instant, while still letting apps reason over time in their own time zone.
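The idea can be sketched in a few lines of plain JavaScript (the helper below is illustrative, not a LightSwitch or .NET API): keep the local wall-clock reading together with its UTC offset, and any reader can recover the absolute instant.

```javascript
// Illustrative sketch of the DateTimeOffset idea: a local time plus its UTC
// offset identifies one absolute instant, regardless of where it was recorded.
function toUtcMillis(year, month, day, hour, minute, offsetMinutes) {
  // Date.UTC treats the components as UTC; subtracting the offset converts
  // the local wall-clock reading into the absolute UTC instant.
  return Date.UTC(year, month - 1, day, hour, minute) - offsetMinutes * 60000;
}

// 4:30 pm at UTC -5 (Eastern) and 1:30 pm at UTC -8 (Pacific) are the same moment:
const eastern = toUtcMillis(2013, 9, 19, 16, 30, -5 * 60);
const pacific = toUtcMillis(2013, 9, 19, 13, 30, -8 * 60);
console.log(eastern === pacific); // true
```

Without the stored offset, those two readings would be indistinguishable from two different moments.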

An Example

Assume we’re located in the Redmond WA office. It’s 4:35 pm. According to our call log, our upset customer called at 4:30 pm. If we store this without the offset, we have no idea how this relates to where we are right now. Did the NY, Redmond or London office take the call? If the user that saved the value was on the east coast, and it used their local time, it would store 4:30 pm and -5 as the offset.
Using this combination of time and offset, we can now convert to west coast time. The app pulls the time from the database and converts it to Pacific Time, which is UTC -8, subtracting 3 hours (PT at UTC -8 vs. ET at UTC -5). Our customer called at 1:30 pm Pacific time. That’s 3 hours ago, and there’s no log of activity; our customer is likely very upset. However, if it were stored as 4:30 pm -8 (Pacific Time), the customer called just 5 minutes ago, and we can make sure someone from service is tending to their outage.
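The arithmetic above can be checked with a few lines of JavaScript, using the example’s hypothetical values (this is offset math in the spirit of DateTimeOffset, not a LightSwitch API):

```javascript
// The call was stored as 4:30 pm with offset -5 (east coast);
// "now" is 4:35 pm at offset -8 (Redmond).
const offsetMs = (minutes) => minutes * 60000;

// Stored value converted to an absolute UTC instant:
const callUtc = Date.UTC(2013, 8, 19, 16, 30) - offsetMs(-5 * 60);

// Current west-coast time converted to an absolute UTC instant:
const nowUtc = Date.UTC(2013, 8, 19, 16, 35) - offsetMs(-8 * 60);

// How long ago did the customer call?
const elapsedMinutes = (nowUtc - callUtc) / 60000;
console.log(elapsedMinutes); // 185 → 3 hours and 5 minutes ago
```

Had the value been stored with offset -8 instead, the same math would report 5 minutes.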

LightSwitch, Cloud Business Apps and Global Time

With a focus on cloud apps, Visual Studio 2013 LightSwitch and Cloud Business Apps now support the DateTimeOffset data type.

Apps can now attach to databases and OData sources that use DateTimeOffset, and you can now define new intrinsic databases with DateTimeOffset.

Location, Location, Location

When building apps, there are 3 categories of use for the DateTimeOffset data type:

Client Values
Values set from the client, user entered, or set in JavaScript

screen.GlobalTime.ClientValue = new Date();

Mid-Tier Values
Set through application logic, within the LightSwitch server pipeline

Created/Modified Properties
LightSwitch and Cloud Business Apps in Visual Studio 2013 now support stamping entity rows with Created/Modified properties. These values are set in the mid-tier; a future blog post will explain these features in more detail.

Because the tiers of the application may well be in different time zones, LightSwitch has slightly different behavior for how each category of values is set.

Client Values
No surprise: the client values use the time zone, or more specifically the UTC offset, of the client. This obviously depends on the device’s time, which can be changed, and does change on cell-tethered devices as you travel across time zones.

Mid-Tier Values
Code written against the mid-tier, such as the entity pipeline, uses the UTC offset of the server’s clock. This is where it gets a little interesting, as the clock varies. In your development environment, using your local machine, it’s going to be your local time zone; for me, that’s Redmond WA, UTC -8. If you’re publishing to an on-premises server, your datacenter will likely use the local time zone as well; in our Dallas TX example, that would be UTC -6. However, if you’re publishing to Azure, the servers are always set to UTC -0 regardless of the data center. This way your apps will always behave consistently, regardless of where the servers are located.
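You can observe this variation directly: any JavaScript environment (a browser client, or Node.js on a server) reports its clock’s UTC offset, so a quick check like the one below shows which offset your tier is running under.

```javascript
// getTimezoneOffset() returns minutes *behind* UTC (positive west of UTC).
// A machine on Pacific Standard Time returns 480 (UTC -8);
// an Azure server pinned to UTC returns 0.
const offsetMinutes = new Date().getTimezoneOffset();
const utcOffsetHours = -offsetMinutes / 60;
console.log(`This tier's clock offset: UTC ${utcOffsetHours >= 0 ? "+" : ""}${utcOffsetHours}`);
```

Run it in your dev environment and in the cloud and you’ll see different answers, which is exactly why stamping values with their offset matters.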

Created/Modified Properties
We considered making these use the server time zone, but felt it would be better to be consistent regardless of where the server was located. This solves the problem for data that may span on-prem and the cloud. Created/Modified values are always UTC -0.

A Quick Walkthrough

Using the above example, let’s look at how these values would be stored in the database and viewed in the app from New York and Redmond WA.

We’ll create an Entity/Table with the following schema:

Under the covers, LightSwitch will create a SQL Table with the following schema.

Just as noted above, we’ll create a screen and enter values in the ClientValue on the client and the MidTierValue on the mid-tier, and LightSwitch will automatically set the Created property.

For the sake of simplicity, I normalized the values and removed the variable of how fast someone could type and press Save on the client versus the exact time recorded on the server. Reality often confuses the message.

Let’s assume it’s 1:30:30 PM on 9/19/2013, which is daylight savings time in the Pacific Northwest. What values would be set for our 3 properties?

Notice the Created value is always in UTC -0. The mid-tier uses the time of the server. And the browser client displays all values consistently, as the UTC offset normalizes the times regardless of the time zone in which they were captured.

What date is it?

That wasn’t so bad, just a little math on the hour. However, let’s assume we’re working late that same night, and it’s now 10:30 PM on 9/19/2013 in Redmond WA. We’re still in daylight savings time, but what looks different here?

Although it’s 9/19 in Redmond, New York is 3 hours ahead; it’s now 1:30 AM on 9/20. Also notice that our Azure servers are working on 9/20 as well.

Standing on the edge

It’s now the 2nd Sunday in March 2014, March 9 to be specific, in the early hours before daylight savings time begins on the west coast. Let’s see what happens.

In this case, not only are the times split across dates between New York and Redmond WA, but the east coast has already crossed into daylight savings time. Instead of New York being 3 hours ahead, it’s actually 4 hours ahead for the same point in time. Ouch…

My Head Hurts

Attempting to calculate across all these different time zones, and daylight savings time (if it even applies to your time zone), can certainly be confusing. So, what to do? Well, that of course depends. However, just as we learned in math, we need to find the least common denominator. In most cases, if you convert to UTC, you can avoid the variables. You can use TimeSpan to calculate the difference between two DateTime values, then re-apply the UTC offset. None of this would be possible if the values you’re attempting to calculate didn’t include the UTC offset.
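The normalize-then-calculate advice can be sketched in JavaScript (the helper and the sample timestamps are illustrative): convert both offset-stamped values to UTC, take the difference there, and only re-apply an offset for display.

```javascript
// Store each reading as an absolute UTC instant plus its original offset.
function makeStamp(y, mo, d, h, mi, offsetMinutes) {
  return {
    utcMillis: Date.UTC(y, mo - 1, d, h, mi) - offsetMinutes * 60000,
    offsetMinutes, // kept so the value can still be displayed as entered
  };
}

const call = makeStamp(2013, 9, 19, 16, 30, -5 * 60);     // 4:30 pm, logged at UTC -5
const followUp = makeStamp(2013, 9, 19, 14, 0, -8 * 60);  // 2:00 pm, logged at UTC -8

// The difference is safe because both are normalized to UTC:
// no time zones, no daylight savings variables.
const diffHours = (followUp.utcMillis - call.utcMillis) / 3600000;
console.log(diffHours); // 0.5 → the follow-up came half an hour after the call
```

Without the stored offsets, 2:00 pm would appear to come two and a half hours *before* 4:30 pm.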

Thus, LightSwitch and Cloud Business Apps now support this important element, letting you calculate dates and times across your globally deployed application.

What about the riddles?

Ahh, you’re still here?

What time never occurs?
2:00 am on the 2nd Sunday of March
As of 2007, daylight savings time begins on the 2nd Sunday in March. At 2:00 am, the clocks “leap” forward to 3:00 am, skipping the 2:00 am hour altogether.

What time happens twice a year?
1:00 am, the first Sunday of November
As of 2007, daylight savings time ends on the first Sunday in November. At 2:00 am, the clocks roll back to 1:00 am, so the 1:00 am hour happens twice.

Upload a file via the table mechanism

Since I’m a plumber, I leave fancy UI stuff to more UI-talented people. My upload screen simply has a label and a button: the label displays the file name and the button does the file selection.

The button is a custom control, but a very simple one; no XAML here.

The reason we add a button as a custom control is to accommodate an annoying Silverlight security restriction, which I will not try to fully explain here.

Microsoft checked off another checklist item for Windows Azure when it turned on multifactor authentication on Thursday.

Multifactor authentication requires that a user put in her password or code as the first step, but then adds another step to the process. One of my credit card companies, for example, requires me to get an additional passcode via voicemail or text, which I must also key in to access my account information.

It’s that extra layer of security that Microsoft is adding here. Per a blog post by Steve Martin, GM for Windows Azure:

Multi-Factor Authentication quickly enables an additional layer of security for users signing in from around the globe. In addition to a username and password, users may authenticate via: 1) An application on their mobile device. 2) Automated voice call. 3) Text message with a passcode. It’s easy and meets user demand for a simple sign-in experience.

Microsoft charges either $2 per user per month or $2 per 10 authentications for this service.

Lessons Learned Building Secure and Compliant solutions in Windows Azure

In this post I’ll be focusing on the last part, which are the lessons learned.

Quick Concepts

When we think about compliance and security there are two concepts we need to consider and master. Those concepts are Data in Transit and Data at Rest. But what is this all about?

Data at Rest

This refers to inactive data which is stored physically in any digital form (e.g. databases, data warehouses, spreadsheets, archives, tapes, off-site backups, mobile devices, etc.). In addition, subsets of data can often be found in log files, application files, configuration files, and many other places.

Basically, you can think of this as data stored in a place from which it can still be retrieved after a restart.

Data in Transit

This is commonly delineated into two primary categories:

Data that is moving across public or “untrusted” networks such as the Internet,

Data that is moving within the confines of private networks such as corporate Local Area Networks (LANs)

When working with compliant solutions you always need to keep these two in mind, because they are the two areas that compliance standards will focus on.

Lessons Learned

In order to make sure that the solution is “acceptable” from a Data Privacy & Compliance perspective, I normally use the following process, which I would like to share with you.

Perform an assessment of the organizational structure in order to understand where the business is being conducted and which laws and compliance standards apply.

This is extremely important because if we work in the Betting & Gaming industry, we might find that a company is located in one place but has its gateways in another, like Malta or Gibraltar. By understanding this, we will know exactly which compliance standards should be followed and which ones we can ignore.

The same thing applies to the Healthcare industry, where you have HIPAA compliance; it is important to understand where the company that builds the product is located, as well as where its customers are, since different countries have different compliance requirements.

Understand in which countries both the customer and the software vendor are located. This will help you understand which rules apply to that specific organization and plan for them.

Identify the specific data you need to encrypt or you need to avoid moving into the cloud because of compliance issues.

This is an extremely complex exercise because you can’t say at a high level that all the data can or can’t go to the cloud; you need to dig into the compliance requirements and understand exactly which fields can’t be moved.

For example, in the Healthcare industry you have HIPAA compliance, and you also have to work with both PII (Personally Identifiable Information) and PHI (Protected Health Information), which can’t be in the Cloud at this stage. So normally you hear people saying immediately that such an application cannot move into the cloud. That isn’t actually true. If you analyze the PHI and PII details, you will see that the health information can be anywhere as long as it is not possible to match it to the person it relates to. And this isn’t actually that hard to do: you can anonymize the data, place “Patient A” and the full health history in the cloud, do the necessary processing, and then just send the information back to on-premises, where a small table correlates “Patient A” with the real patient information so doctors can work with it.
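The anonymization pattern described above can be sketched in a few lines of JavaScript. All names here are illustrative (the record fields and the "Patient A" pseudonym scheme are made up for the example): strip identifying fields before anything leaves for the cloud, and keep a small on-premises lookup table that maps pseudonyms back to real identities.

```javascript
// On-premises lookup table: pseudonym -> real identity. This never leaves the
// local data center; only anonymized records go to the cloud.
const onPremLookup = new Map();
let counter = 0;

function anonymize(record) {
  // Assign the next pseudonym: "Patient A", "Patient B", ...
  const pseudonym = `Patient ${String.fromCharCode(65 + counter++)}`;
  onPremLookup.set(pseudonym, { name: record.name, ssn: record.ssn });
  // Only the pseudonym and the health history are sent to the cloud:
  return { patient: pseudonym, history: record.history };
}

const cloudRecord = anonymize({
  name: "Jane Doe",
  ssn: "000-00-0000",
  history: ["flu", "x-ray"],
});
console.log(cloudRecord.patient);                 // "Patient A"
console.log(onPremLookup.get("Patient A").name);  // "Jane Doe" (on-prem only)
```

When results come back from the cloud, the on-premises table re-associates “Patient A” with the real patient for the doctors.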

After understanding all the requirements and the compliance standards applicable to the solution, you need to look at where your Data at Rest is currently being stored inside your customer’s data center.

Databases

File Servers

Email Systems

Backup Media

NAS

…

Now you should locate your Data in Transit across the network channels both internal and external. You should:

Assess the data trajectory

Assess how data is being transferred between the different elements of the network

Decide how to handle Sensitive Data. There are several options you might take to handle this data.

Eradication

Obfuscation/Anonymization

Encryption

Note: Normally we lean toward the Encryption option, but anonymization is also really important and in some cases the only way to go. For example, look at PII and PHI: anonymization would be the way to go there.

If you follow this simple process, you will definitely be successful in identifying what needs to be handled and how, making your compliant solutions ready to move to Windows Azure.

In this post I’ll be focusing on the Windows Azure compliance part.

Introduction to Windows Azure Compliance

Compliance is extremely important when moving or building solutions in the cloud for two main reasons. First, because it provides us with an understanding of the type of infrastructure that is underneath the cloud offering. Second, because several solutions and companies require specific certifications in order to approve a deployment.

In order to achieve this, the Windows Azure infrastructure provides the following compliance certifications:

ISO/IEC 27001:2005

“Specifies a management system that is intended to bring information security under explicit management control” by Wikipedia. More information here.

This is extremely important because it provides us with clear information about how secure our data will be inside Windows Azure.

SSAE 16/ISAE 3402 SOC 1, 2 and 3

“Enhancement to the current standard for Reporting on Controls at a Service Organization, the SAS70. The changes made to the standard will bring your company, and the rest of the companies in the US, up to date with new international service organization reporting standards, the ISAE 3402” by SSAE-16.com. More information here.

It is extremely important to understand that Windows Azure is audited and has to follow strict rules in terms of reporting in order to stay compliant. This gives us confidence that everything follows a specific, defined process.

HIPAA/HITECH

“The Health Information Technology for Economic and Clinical Health (HITECH) Act, enacted as part of the American Recovery and Reinvestment Act of 2009, was signed into law on February 17, 2009, to promote the adoption and meaningful use of health information technology.” by hhs.gov. More information here.

Having this HIPAA compliance means that solutions for the healthcare industry can be delivered in Windows Azure, because the underlying infrastructure is already HIPAA compliant. This doesn’t mean that anything we build on it is automatically HIPAA compliant; it just means that Windows Azure can be used to deploy the solution, but the solution still needs to comply with the rest of the HIPAA requirements, mainly the software compliance part.

PCI Data Security Standard Certification

“Created to increase controls around cardholder data to reduce credit card fraud via its exposure. Validation of compliance is done annually — by an external Qualified Security Assessor (QSA) that creates a Report on Compliance (ROC)[1] for organizations handling large volumes of transactions, or by Self-Assessment Questionnaire (SAQ) for companies handling smaller volumes.[2]” by Wikipedia. More information here.

This doesn’t mean that we can deploy PCI-compliant solutions in Windows Azure; this certification only covers the way Windows Azure itself accepts payments, not 3rd-party applications.

FISMA

“According to FISMA, the term information security means protecting information and information systems from unauthorized access, use, disclosure, disruption, modification, or destruction in order to provide integrity, confidentiality and availability.” by Wikipedia. More information here.

Windows Azure Compliance Roadmap

Windows Azure has other compliance certifications as well, so here is the complete roadmap.

Summary

What all this means is that Windows Azure is a secure and highly compliant option, which allows us to leverage the Cloud on many different occasions.

In the new scenarios where the cloud is being used, integration becomes very important. Luckily, the Windows Azure platform provides a lot of different capabilities and services to make a secure link between your local systems and the Windows Azure services or machines. In this session, an overview will be given of the different technologies and the scenarios to which these technologies are best applicable. The following technologies will be demonstrated and discussed:

* Connectivity on messaging level: Service Bus Messaging
* Connectivity on service level: Service Bus Relay
* Connectivity on data level: SQL Data Sync
* Connectivity on network level: Windows Azure Virtual Networking
* Connectivity on security level: Active Directory integration

I’ll be speaking again this year at the biggest code camp in the world – Silicon Valley Code Camp – on October 5th & 6th. I’ve been speaking there for years and it’s always a well run and very well attended code camp.
If you’ve never attended and you’re in the San Francisco Bay Area I *highly* encourage you to attend. There are over 200 sessions and 3000 people registered! What are you waiting for?

This year I have two sessions on building HTML5-based mobile business apps. One will focus on how to build a data-centric HTML client and deploy it to the cloud (Azure) using LightSwitch, and the other will focus on the new Office 365 Cloud Business App project in Visual Studio 2013, which streamlines the way you build custom SharePoint 2013 business apps.

Please click on the sessions that interest you and let the organizers know if you’re interested in attending so they can help plan room sizes.

Visual Studio LightSwitch is the easiest way to create modern, data-centric, line of business applications for the enterprise. In this demo-heavy session, we will build and deploy end-to-end, a full-featured business app that runs in Azure and provides rich user experiences tailored for modern devices. We’ll cover how LightSwitch helps you focus your time on what makes your application unique, allowing you to easily implement common business application scenarios—such as integrating multiple data sources, data validation, authentication, and access control. We’ll cover complex business rules and advanced data services for facilitating custom mobile reporting dashboards. You will also see how developers can use their knowledge of HTML5 and JavaScript to customize their apps with custom controls, client-side logic

Office 365 is an ideal business app platform providing a core set of services expected in today’s business apps like presence, social, integrated workflow and a central location for installing, discovering and managing the apps. Office 365 makes these business apps available where users already spend their time – in SharePoint & Office. Visual Studio 2013 streamlines the way developers build modern business applications for Office 365 and SharePoint 2013 with the Office 365 Cloud Business App project. In this demo-heavy session, you’ll see how developers can build social, touch-centric, cross-platform Office 365 business applications that run well on all modern devices.

Do you organize a special interest group on Meetup.com? If your meetup sponsors code camp (at no cost to anyone of course), then people at your meetup will know about code camp, and your meetup logo and link will be shown on practically every code camp page (in the sponsor area). Last year the SVCC site had 200,000+ page views over the month!

The MS Open Tech team has been working with the Node.js community for more than two years to deliver a great experience on Windows and Windows Azure for Node developers. It’s been an exciting and rewarding experience, and we’re looking forward to taking it to the next level as we continue the journey together.

To that end, we’re happy to announce the first Node/Windows Hackathon, sponsored by Microsoft Open Technologies, Inc. This event will take place in Redmond on November 7-8, 2013, at the new “Garage” facility currently under construction in building 27 of the Microsoft campus. The event is open to everyone. We’ll be sharing more details in the next few days, but we’re announcing the dates now so that you can reserve the date and make plans to participate.

This will be a great opportunity for the Node community to get to know the many Microsoft developers who love to work with Node.js as much as they do, and we’ll work together to test new scenarios, explore new features, and make the Node experience even better for Windows and Windows Azure developers. There will be plenty of pizza and beverages, lots of time for hacking as well as socializing, and we’re planning a surprise announcement at the event that we think will make Node developers on Windows very happy.

Please sign up at the EventBrite registration page and get involved if you’d like to participate, or have suggestions for projects and scenarios to explore. We’d love to see you in Redmond for the event, but if you can’t be there in person we’ll also have opportunities for online attendance. (Details for online participation will be posted soon.)

Oracle OpenWorld 2013 kicks off today and it might surprise you to learn how many great Microsoft sessions will be delivered. Oracle products run on the platforms that make up the Microsoft Cloud OS vision, so it’s important for us to give you great information on our latest innovations and how partner products integrate and execute well.

If you’ll be attending OpenWorld in San Francisco, find your way to our booth for an opportunity to see the Cloud OS in action - through both interactive and guided demos. Product experts will be on hand to answer your public and private cloud puzzlers.

Then, make your way to one of our sessions to get first-hand vision and instruction. Here are some sessions we think you’ll find valuable:

Microsoft and Oracle: Partners in the Enterprise Cloud - The Cloud OS is Microsoft’s comprehensive cloud computing vision that leverages Microsoft’s unmatched legacy of running the world’s highest-scale online services and most advanced datacenters to deliver a modern platform for the world’s applications and devices. Join Brad Anderson, Corporate Vice President of Windows Server and System Center Program Management, as he showcases how Microsoft and Oracle are working together to help customers embrace cloud computing by improving flexibility and choice while also preserving first-class support for mission-critical workloads. Presented by Microsoft VP Brad Anderson on 9/24 at 1:30pm in Moscone North - Hall D.

Traversing the Public and Private Cloud with Windows Azure and Windows Server Hyper-V - Attend this session to learn how you can have your cake and eat it too—moving virtual machines from your own data center to the cloud and back. The presentation discusses some of the factors that go into deciding whether to use the cloud for an Oracle Database deployment and what scenarios benefit from a combination deployment across public and private cloud environments. Presented by Steven Martin and Mike Schutz on 9/24 at 3:45pm in Moscone West – 2010.

Windows Azure: What’s New in the Microsoft Public Cloud - Get familiar with how you can use Windows Azure like an extension of your own data center by running Oracle software in this public cloud environment. We’ll explore common scenarios where Windows Azure has proven value, and provide guidance for getting the most out of Windows Azure. Presented by Steven Martin on 9/25 at 10:15am in Moscone South – 250.

There’s even more to see. Search the sessions below in the Oracle OpenWorld 2013 content catalog for more details:

For those of you that want to try Windows Server 2012 R2, you can download the Preview and RTM bits. See the Microsoft.com Windows Server 2012 R2 area for more information. See the TechNet Evaluation Center for previews and downloads of other Microsoft products like System Center, SQL Server, and Microsoft Exchange.

Let’s be frank. The past five years at Oracle Open World have disappointed even the faithful. The overemphasis on hardware marketing and revisionist history on cloud adoption bored audiences. The $1M paid advertorial keynotes had people walking out on the presenters 15 minutes into the speech. Larry Ellison’s insistence on re-educating the crowd on his points subsumed the announcements on Fusion apps. Even the cab drivers found the audience tired, the show even more tiring.

Oracle went from hot, innovative, must-attend event to has-been, while most industry watchers, analysts, and media identified shows such as Box’s BoxWorks, Salesforce.com’s DreamForce, and ExactTarget’s Connections as the innovation conferences in the enterprise. These events, such as Constellation’s Connected Enterprise, capture not only the spirit of innovation but also provide customers a vision to work towards. Hence, most believe Open World could use some much-needed rejuvenation and a shot of innovation juju (see Figure 1).

Figure 1. Oracle Open World Lights Up San Francisco From September 22nd to September 27th

Join a Microsoft Developer Camp and learn critical skills that all modern developers need to know.

Roll up your sleeves at this free, one-day, instructor-led workshop where you can learn the critical skills that all modern developers need to know. We will start with basic Microsoft Modern Platform principles and build up to more advanced topics. Instructor-led, hands-on labs will focus on:

Updating an App to a Modern Architecture

Modern Dev & Test Practices

Configuring and Testing a Modern Application

Throughout the day, you’ll hear from local Windows Azure partner specialists and Microsoft product team members.
We’ll talk about the best way to take advantage of modern platforms and tools, as well as how to fix the worst sins of testing. Developers of all languages are welcome!

Be fully prepared for this hands-on day of coding by bringing your laptop and signing up for the free Windows Azure Trial.

Customers were referred to IBM SoftLayer — a Nirvanix partner. IBM had already told GigaOM it was working to transition customers over. Meanwhile, nearly every other cloud player in the universe has been circling to scoop up Nirvanix customers.

Earlier this week, Leo Leung, VP of marketing of Oxygen Cloud, a cloud broker service, said Oxygen had successfully migrated several joint customers to other cloud providers. One was a real estate company with several terabytes of data. (He wrote about the issue on his blog.)

Some said customers would be crazy to trust their data post-Nirvanix to anyone but the biggest cloud storage providers. To Andres Rodriguez, CEO of Nasuni, a company that manages enterprise cloud storage, that means Amazon S3 and Microsoft Windows Azure. [Emphasis added.]

What concerned Nirvanix customers — and spooked others — is that the company gave so little notice, initially just two weeks, to move their stuff. In Friday’s statement Nirvanix extended that another two weeks till October 15. Still, that’s not a lot of time to provision and move a lot of data storage.

It also put the scare into people that other cloud startups that appear to be well funded may not be all that solid after all. Nirvanix itself had raised about $70 million in venture funding including a $25 million round just six months ago.

It makes you wonder what other cloud companies are on the cusp.

Amazon CloudFront distributes dynamic and static web content produced by an origin server to viewers located anywhere in the world. If a user requests an object that doesn't exist (resulting in a 404 Not Found response), or an unauthorized user attempts to download an object (a 403 Forbidden response), CloudFront used to display a brief, sparsely formatted error message:

Today we are improving CloudFront, giving you the ability to control what's displayed when an error is generated in response to your viewer's request for content. You can have a distinct response for each of the supported HTTP status codes.

The CloudFront Management Console contains a new tab for Error Responses:

Click on the Create Custom Error Response button to get started, then create the error response using the following form:

You can create a separate custom error response for each of the ten HTTP status codes listed in the menu. The Response Page Path points to the page to be returned for the response. For best results, point it to an object in an Amazon S3 bucket; this is more reliable than storing the pages on the origin server, since the origin itself may be the source of the 5xx errors.

You can also choose the HTTP status code that will be returned along with the response page (in most cases you'll want to use 200):

Finally, you can set the Error Caching Time To Live (TTL) for the error response. By default, CloudFront will cache the response to 4xx and 5xx errors for five minutes. You can change this value as desired. Note that a small value will cause CloudFront to forward more requests to the origin server; this may increase the load on the server and cause further issues.

Your origin server can also control the TTL by returning Cache-Control or Expires headers as part of the error response.
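The console form described above maps onto the CustomErrorResponses element of the CloudFront distribution configuration. A hypothetical sketch of those settings as data (the response page path and TTL are illustrative; the small validator only encodes the rules stated in this post):

```python
# Sketch of a CloudFront custom-error-response configuration as data,
# mirroring the console form fields described above. Values are examples.
custom_error_responses = {
    "Quantity": 1,
    "Items": [
        {
            "ErrorCode": 404,                        # origin status to intercept
            "ResponsePagePath": "/errors/404.html",  # object in an S3 bucket origin
            "ResponseCode": "200",                   # status returned to the viewer
            "ErrorCachingMinTTL": 300,               # default: cache errors 5 minutes
        }
    ],
}

def validate(cfg: dict) -> bool:
    """Basic sanity checks for the rules in the post: only the ten supported
    status codes may be customized, and the caching TTL must be non-negative."""
    supported = {400, 403, 404, 405, 414, 500, 501, 502, 503, 504}
    return cfg["Quantity"] == len(cfg["Items"]) and all(
        item["ErrorCode"] in supported and item["ErrorCachingMinTTL"] >= 0
        for item in cfg["Items"]
    )

print(validate(custom_error_responses))  # True
```

In practice you would place a structure like this inside the full distribution configuration when calling the CloudFront API; the console does the equivalent behind the scenes.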

It has been a while since I have written anything about Google cloud computing. I started to look at Google Compute Engine over a year ago but was stopped because it was in limited preview and I could not access it. GCE has been generally available since May, so I thought I'd check back to see what has happened.

To use GCE you sign into Google’s Cloud Console using your Google account. From the Cloud Console you can also access the other Google cloud services: App Engine, Cloud Storage, Cloud SQL and BigQuery. From the Cloud Console you can create a Cloud Project which utilizes the various services.

Figure 1. Google Cloud Console

Unlike App Engine, which lets you create projects for free, GCE requires billing to be enabled up front. This, of course, will require you to create a billing profile and provide a credit card number. After that is done you can walk through a series of steps to launch a virtual machine instance. This is pretty standard stuff for anyone who has used other IaaS offerings.

Figure 2. Creating a new GCE instance

The choice of machine images is much more limited than with other IaaS vendors I've used. At this time there appear to be only four available, all Linux-based; Google and/or the user community will probably add more over time. It is nice to see per-minute charge granularity, which in practice means a minimum charge of 10 minutes, billed in 1-minute increments beyond that. The smallest instance type I saw, though, was priced at $0.115 per hour, which makes GCE considerably more expensive than EC2, Azure and Rackspace. When you click the Create button, it only takes a couple of minutes for your instance to become available.
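The billing granularity described above can be captured in a small sketch (the $0.115/hour figure is simply the rate quoted here and will change):

```python
import math

def billed_minutes(runtime_minutes: float) -> int:
    # GCE billing as described in the post: a minimum charge of 10 minutes,
    # then 1-minute increments beyond that.
    return max(10, math.ceil(runtime_minutes))

def cost(runtime_minutes: float, hourly_rate: float = 0.115) -> float:
    # hourly_rate defaults to the smallest instance price quoted above.
    return billed_minutes(runtime_minutes) * hourly_rate / 60

print(billed_minutes(3))     # 10 -- under the minimum, charged as 10 minutes
print(billed_minutes(42.5))  # 43
print(round(cost(60), 4))    # 0.115 -- one full hour at the quoted rate
```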

Connecting to the instance seemed a little more complicated than with other providers. I am used to using PuTTY as my SSH client since I work primarily on a Windows machine. I had expected to be able to create a key pair when I launched the instance, but I was not given that option. To access the newly created instance with PuTTY, you have to create a key pair with a third-party tool (such as PuTTYgen) and then upload the public key to GCE. You can do this through the Cloud Console by creating an entry in the instance Metadata with a key of sshKeys and a value in the format <username>:<public_key>, where <username> is the username you want to create and <public_key> is the actual value of the public key (not the filename), which can be copied from the PuTTYgen dialog. It's a bit of extra work, but arguably a better practice from a security perspective.

Figure 3. Creating Metadata for the public key

After that is done it is straightforward to connect to the instance using PuTTY.

Figure 4. Connected to GCE instance via PuTTY

At this point I do not believe that Google Compute Engine is a competitive threat to established IaaS providers such as Amazon EC2, Microsoft Azure or Rackspace. To me the most compelling reason to prefer GCE over other options would be the easy integration with other Google cloud services. No doubt GCE will continue to evolve. I will check back on it again soon. [Emphasis added.]

The dual Web role application has been running in Microsoft's South Central US (San Antonio) data center since September 2009. I believe it is the oldest continuously running Windows Azure application.

About Me

I'm a Windows Azure Insider, a retired Windows Azure MVP, the principal developer for OakLeaf Systems and the author of 30+ books on Microsoft software. The books have more than 1.25 million English copies in print and have been translated into 20+ languages.

Full disclosure: I make part of my livelihood by writing about Microsoft products in books and for magazines. I regularly receive free evaluation software from Microsoft and press credentials for Microsoft Tech•Ed and PDC. I'm also a member of the Microsoft Partner Network.