Sometimes when running the C# SDK for HDInsight, you can come across the following error:

The system cannot find the batch label specified – jar Error: Could not find or load main class c:\apps\dist\hadoop-1.1.0-SNAPSHOT\lib\hadoop-streaming.jar

To get around this, close the command shell that you are currently in and open up a new hadoop shell, and try your command again. It should work immediately.

This tends to occur after killing a hadoop job, so I am assuming that something this activity does changes the context of the command shell in such a way that it can no longer find the hadoop script files. I’ve yet to get to the bottom of it, so if anyone has any bright ideas, let me know in the comments.

If you worked with the Windows Azure SQL Database in the past you’ll know that there is no support for SQL Server Agent jobs. According to the official guidelines you should use a SQL Server Agent which runs on-premises and connect it to your Windows Azure SQL Database. But this only works if you have the required infrastructure available to you on-premises (or you could host it in a VM).

Besides that you also have the SQL Azure Agent project on CodePlex which is the result of a series of blog posts on the SQL Azure blog (part 1, part 2 and part 3). This project is just a proof of concept but it’s a good base to go and build your own SQL Azure Agent. The downside to this is that you need to run it in a Web/Worker Role which might be overkill in some cases.

Let’s look at how the (Mobile Services) Scheduler can be used to create an alternative to the SQL Server Agent. Before we get started I advise you to check my previous post which covers the basics of the Scheduler: Job scheduling in Windows Azure

The database

Take the following scenario: you have a customer who would like to move an application to Windows Azure. It was pretty easy to move their web application to Windows Azure Web Sites. The migration of the database also worked out pretty well, and here is the result:

Now the only thing that didn’t work was migrating a SQL Server Agent job. The customer has a job which runs once a day and deletes records in the Logs table which are older than 1 month (not very original, I know). The job is actually very simple: it calls the sp_ClearOldLogs stored procedure. If your jobs contain lots of code, I suggest you move it to a stored procedure first. If all the logic we want to execute resides in a stored procedure, then the only thing we need is a way to schedule when that stored procedure should run.

Scheduling the stored procedure

At the moment the Scheduler is only available in Windows Azure Mobile Services (WAMS). Before we can start configuring the scheduler we’ll need to set up a new WAMS application. When you do so, make sure you choose the database which contains the stored procedure you want to execute:

After you created the WAMS application you’ll see a new login and user appear in the database. The scheduler will use this new user to execute the stored procedures:

Keep in mind that the new user won’t have the required permissions to execute the stored procedure. That’s why you’ll need to grant the EXECUTE permission first:

GRANT EXECUTE ON [dbo].[sp_ClearOldLogs] TO [GcQuMKtYVILoginUser]
GO

You can now open the WAMS application and go to the Scheduler tab. This is where you’ll be able to create a new job and choose the schedule. In free mode you are limited to 1 job for each WAMS application (but you can create 10 free WAMS applications which means you can create up to 10 free jobs).

Open the newly created job and go to the script tab. This is where you’ll be able to write code which will execute the stored procedure. Here is an example which does some logging and executes the stored procedure:

function Call_sp_ClearOldLogs() {
    console.log("Executing sp_ClearOldLogs...");
    mssql.query('EXEC dbo.sp_ClearOldLogs', {
        success: function(results) {
            console.log("Finished executing sp_ClearOldLogs.");
        }
    });
}

Even if you’re not familiar with this JavaScript syntax, that shouldn’t be a problem. You can write all your logic in a stored procedure and just create a small script like I did.

One last thing, after you save the script make sure you also press the Enable button. If you don’t, the script will never run.

To test if everything works I just press the Run button and look at my Logs table. After a few seconds the stored procedure was executed and removed all records in the Logs table older than one month. And that’s all you need to run a job!

Alerts and Notifications

When you work with the SQL Server Agent you can configure alerts and notifications for your jobs, let’s see what we can do about that.

If you look back at the script you’ll see that I call console.log, which will write to the log of your WAMS application. If you open the application in the portal you can view your logs under the Logs tab:

This is a great way to keep track of when your job was executed and whether there were any issues. If you’re more of a command-line person, you can also use the Windows Azure CLI to fetch the logs: azure mobile log

At the moment the scheduler script supports 3 modules: “azure”, “request” and “sendgrid”. But the request and sendgrid modules allow you to do virtually anything. You can use the request module to send SMS messages with Twilio (this is something you might want to do in case of an issue):
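For instance, a call from a scheduler script to Twilio’s SMS REST API mostly comes down to building the right URL, credentials, and form fields. A rough sketch, where the helper function and all account values are illustrative rather than taken from the original post:

```javascript
// Build the options object for a Twilio SMS POST; everything here
// (names, numbers, endpoint shape) is a placeholder sketch.
function twilioSmsOptions(accountSid, authToken, from, to, body) {
    return {
        url: 'https://api.twilio.com/2010-04-01/Accounts/' + accountSid + '/SMS/Messages.json',
        auth: { user: accountSid, pass: authToken }, // HTTP basic auth
        form: { From: from, To: to, Body: body }
    };
}

// In a scheduler script you would then hand this to the request module:
// request.post(twilioSmsOptions(sid, token, from, to, msg), function (err, resp) { ... });
```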

    // If the email failed to send, log it as an error so we can investigate
    if (!success) {
        console.error(message);
    }
});

And there you go. You can now hook up a WAMS Scheduler to your database, schedule the execution of stored procedures, follow up on these jobs through the logs and even send out notifications. In most cases this should cover everything you need to replace the SQL Server Agent and make it easier to move your database to the cloud. And once you see that this doesn’t cover all your requirements you can always move to a full-blown Worker Role solution afterwards.

Cabaret notwithstanding, it’s data that makes the world go ‘round, and one of the incredibly awesome capabilities you have when using SQL Server and SQL Database in Windows Azure is the ability to move rather seamlessly between on-premises assets and the cloud.

This sample provides an end-to-end location scenario with a Windows Store app using Bing Maps and a Windows Azure Mobile Services backend. It shows how to add places to the map, store place coordinates in a Mobile Services table, and query for places near your location.

My Store - This sample demonstrates how you can enqueue and dequeue messages from your Windows Store apps into a Windows Azure Service Bus Queue via Windows Azure Mobile Services. This code sample builds out an ordering scenario with both a Sales app and a Storeroom app.

This demonstrates how to store files such as images, videos, documents, or any binary data off-device in the cloud using Windows Azure Blob Storage. This example focuses on capturing and uploading images, but the same approach lets you upload any binary data to Blob Storage.

The My Trivia sample demonstrates how you can easily add, update and view a leaderboard from your Windows Store applications using Windows Azure Mobile Services.

If you have just returned from vacation and have not yet had the opportunity to check out Windows Azure Mobile Services, I would encourage you to investigate the linked articles, which detail a wealth of up-to-date content made available to help you get started and to use at your local events.

Editor's Note: This post was written by Nick Harris, Windows Azure Technical Evangelist.

It’s been less than five months since we introduced the first public preview for Windows Azure Mobile Services and in this short time we have seen continual additions to the Mobile Service offering including:

So a couple of weeks ago I posted this blog post on how to upload files to blob storage through Mobile Services. In it, I described how one could do a Base64 encoded string upload of the file, and then let the mobile service endpoint convert it and send it to blob storage.

The upside to this is that the client doesn’t have to know anything about where the files are actually stored, and it doesn’t need any blob-storage-specific code. Instead, it can go on happily knowing nothing about Azure except Mobile Services. It also means that you don’t have to distribute the access keys to your storage together with the application.

I did however mention that there was another way, using shared access signatures (SAS). Unfortunately, these have to be generated by some form of service that has knowledge of the storage keys. Something like an Azure compute instance. However, paying for a compute instance just to generate SASes (plural of SAS…?) seems unnecessary, which is why I opted to go with the other solution.

However, Ryan CrawCour, a dear friend of mine, just had to say that he wasn’t convinced, which has now been nagging me for a while. So to solve that, I have devised another way to use SAS while using only Mobile Services. And even though he is likely to have some opinion about this as well, it at least made the nagging feeling go away for a while.

DISCLAIMER: This is somewhat of a hack. I assume that there will be better ways to do this in the future, but for now it works even if it might not be my finest solution to date. My biggest issue with it is a part of the JavaScript that I will point out later, but it works. But don’t blame me when it causes Azure to explode and tear down the internet when you use it…

Ok, let’s go! Like everything else in the current version of Mobile Services, we need a table to create an endpoint to play with. In this case, I have created a table called “sas”. The table itself will not be used; it is only there to enable me to execute my server-side scripts… Because of this, I have restricted access to everything but “read”, as that is the only thing that will be used…

The next part is to create an entity to be used to send and receive data from the service. I called it SAS and it looks like this

As you can see, it includes a Name and a FileName property as well as the mandatory Id property. These properties will be used to pass the required information to the endpoint. The Url property will be used for returning the signed URL.

(You could get away with removing the SAS entity and writing an OData query instead, but I prefer LINQ…)

The real functionality is obviously in the other end, at the server. Here, I have created a “read script” for the table. This script will take the query and use the information in it to create a signed URL.

As you can see, it populates the filename and containername variables using the query’s _parsed member. I know that JavaScript members starting with an underscore are supposed to be private, and using the _parsed member is really not a good practice, but it was the only way I could find to easily get hold of the data sent to the server. There might be better ways to solve this, and I will look into it, but for now, this works…

Next it uses a method called getSignedBlobUrl(), which I will talk about in just a minute. Once the signed url has been generated, it is returned to the client using request.respond() instead of actually executing the query.

Ok, so what does the getSignedBlobUrl() do? Well, it just creates a well-formed signed url to the specified blob. Like this

First it creates a timespan, within which the signature is valid, by using 2 Date objects. Azure limits this to 60 minutes or something, but that should be more than enough.

As you can see, it uses a method called generateSignature() to generate the actual signature: an HMAC-SHA256 signature created using the blob storage key and a predefined string presentation of the parameters used in the querystring that is passed to the blob storage.

The actual Uri is then created by combining the path to the blob and a very funky querystring. The querystring includes a bunch of parameters such as the start and end time for the access, what type of access (blob or container) it should have, what access rights it needs (read or write), and finally the newly generated signature.

It isn’t very complicated. It concatenates a string using a predefined format and then uses the crypto package to create the signature.

The “w” at the start of the stringToSign string defines the access, in this case write access, then it is the start time and end time of the SAS in the correct format, and finally it is the path to the blob to access.

Ok, that’s about it! The only thing that the very focused people will have noticed is that JavaScript does not include a toIsoString() method on the Date object. That is a separate method I have declared on the Date object’s prototype as follows

It is just a helper to get the date string in a format that works for the call…
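One way to declare such a helper on the Date prototype — a guess at the author’s implementation, producing second-precision UTC timestamps:

```javascript
// Format a Date as a second-precision UTC string, e.g. 2013-01-15T10:00:00Z
Date.prototype.toIsoString = function () {
    function pad(n) { return n < 10 ? '0' + n : '' + n; }
    return this.getUTCFullYear() + '-' +
           pad(this.getUTCMonth() + 1) + '-' +
           pad(this.getUTCDate()) + 'T' +
           pad(this.getUTCHours()) + ':' +
           pad(this.getUTCMinutes()) + ':' +
           pad(this.getUTCSeconds()) + 'Z';
};
```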

Ok, that’s it! For real this time!

Except for the somewhat annoying use of the _parsed member in the JavaScript and the slightly odd way to execute the query on the client, it is actually quite a neat solution. Being able to generate SAS urls without a compute instance is actually quite useful in some cases. And even though I prefer uploading files the other way, this could still be really useful. And cheaper… Incoming data is free in Azure, so uploading the file is free either way, but if your storage is not in the same datacenter as the Mobile Service instance, then doing it the other way would incur charges when passing the file from the Mobile Service to the blob storage, something that this solution avoids.

Well, I guess it is better that I end this post before I get into talking about all the pros and cons of the 2 different solutions. They both do the job, so it is up to you to decide…

And no…there is no code for download this time. I have already shown it all, and it wasn’t that much…

I was surprised, yet delighted, that Windows Azure Mobile Services uses a SQL database. Schema-less table storage has its place and is the right solution at times, but for most data driven applications, I’d argue otherwise.

In my last post, I wrote about sending notifications by writing the payload explicitly from a Windows Azure Mobile Service. In short, this allows us to include multiple tiles in the payload, accommodating users of both wide and square tiles.

In my application, I want to execute a query to find push notification channels that match some criteria. If we look at the Windows Azure Mobile Services script reference, the mssql object allows us to query the database using T-SQL and parameters, such as:
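For instance, a parameterized query might look like the following. The mssql stub at the top is a stand-in for the object WAMS injects into server scripts, added only so the snippet is self-contained; the table and column names are made up.

```javascript
// Stand-in for the WAMS-provided mssql global so this runs outside WAMS.
var mssql = {
    query: function (sql, params, options) {
        // A real implementation executes the T-SQL; here we echo fake rows.
        options.success([{ channelUri: 'https://example.com/channels/1' }]);
    }
};

// Each '?' in the T-SQL is bound, safely, to the matching params element.
function findChannels(userId, done) {
    mssql.query('SELECT channelUri FROM Channels WHERE userId = ?', [userId], {
        success: function (results) {
            done(results);
        }
    });
}
```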

In my case, the query is a bit more complicated. I want to join another table and use a function to do some geospatial calculations – while I could do this with inline SQL like in the above example, it’s not very maintainable or testable. Fortunately, calling a stored procedure is quite easy.

Consider the following example: every time the user logs in, the Channel URI is updated. What I’d like to do is find out how many new locations (called PointsOfInterest) have been modified since the last time the user has logged in. To do that, I have a stored procedure like so:

Writing something like that inline to the mssql object would be painful. As a stored procedure, it’s much easier to test and encapsulate. In my WAMS script, I’ll call that procedure and send down a badge update:
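The shape of that script is roughly as follows. The stored procedure name, the NewCount column, and the mssql/push stubs are all my guesses, included so the sketch runs on its own; in a real WAMS script, mssql and push are provided for you.

```javascript
// Stand-ins for the objects WAMS provides, so this runs outside WAMS.
var mssql = {
    query: function (sql, params, options) {
        options.success([{ NewCount: 3 }]); // pretend the proc found 3 new points
    }
};
var push = {
    wns: {
        sendBadge: function (channelUri, value) {
            push.wns.lastBadge = { uri: channelUri, value: value }; // record for inspection
        }
    }
};

// Call the stored procedure and push the count down as a badge update.
function sendBadgeUpdate(channelUri, userId) {
    mssql.query('EXEC dbo.GetNewPointsOfInterest ?', [userId], {
        success: function (results) {
            var count = results[0].NewCount;
            if (count > 0) {
                push.wns.sendBadge(channelUri, count);
            }
        }
    });
}
```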

This section of code only updates the badge of the Windows 8 Live Tile, but it works out nicely with tile queuing:

Note: this app is live in the Windows 8 Store, however, at the time of this writing, these features have not yet been released. In the next few posts, we’ll look at the notifications a bit more, including how to pull off some geospatial stuff in WAMS.

Those who know me know I am not a fan of JavaScript in pretty much all of its forms (including node.js); however, I’m really digging Windows Azure Mobile Services (WAMS). WAMS allows you to easily provide a back end to applications for storing data, authenticating users, and supporting notifications, not just on Windows and Windows Phone but also on iOS, with Android support planned.

Now, I mention JavaScript because WAMS provides a slick Node-powered data service that makes it really easy to store data in the cloud. The ToDoList example exercise illustrates how easy it is to store user data in the cloud and hook it up with authentication and notification support. The nice thing about the authentication is that it’s easily integrated into the backend:

But, more on this later. Right now, I want to deal with notifications in WAMS. In WAMS, you have the opportunity to write custom server-side JavaScript to do things like send notifications on insert/update/delete/read access:

In my case, I want to send a tile update if the new data meets some criteria. Let’s start all the way down the code and work our way out, starting with the notification piece. One page you MUST have bookmarked is the tile template catalog on MSDN. This page defines the XML syntax for all the tile templates your app can use, including both small/square tiles and large/wide tiles. All of these have a defined schema, such as this for TileSquarePeekImageAndText04:

Which produces a tile that “peeks”, such as this (which flips between the top half and bottom half):

Yes, it’s easy to laugh at the magic “04” in the template title. I like to joke that my personal favorite is TileWideSmallImageAndText03. But the variety is crucial to creating the ideal app experience; it depends on how you want to display the data, and that requires knowing the XML template.

Now, at first glance, this is very nice because WAMS will write the XML for you. However, you still must know what data the template requires. Does it need an image? One text line? Two? You get the point. Unsurprisingly, calling that method will generate XML like:

You can learn more about this in the WAMS script reference. Another must-have bookmark. However, I recommend you don’t use these at all, and instead write the XML payload directly. This is for a few reasons, but primarily, it’s for control – and, really, you have to know the fields required anyway and you’ll still have the tile catalog page open for reference.

In looking a bit closer at the mpns (Microsoft Push Notification Service) library up on GitHub (awesome job by the guys, by the way), it has this method:

var raw = new mpns.rawNotification('My Raw Payload', options);

When developing my app, I realized I had no idea what tile size the user has. Some may opt to use a wide tile, others a small tile. I needed different tiles to support both. I didn’t like sending two notifications (seems wasteful, doesn’t it?) and to do this efficiently, it’s easier to just create the payload explicitly that includes all tiles. For example, this includes two completely different tiles:
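A sketch of such a payload, built with plain string concatenation. The exact image and text fields each template defines are in the tile template catalog; the helper function and field layout here are illustrative, not lifted from the app.

```javascript
// Minimal XML escaping for values placed inside the tile payload.
function esc(s) {
    return String(s).replace(/&/g, '&amp;').replace(/</g, '&lt;')
                    .replace(/>/g, '&gt;').replace(/"/g, '&quot;');
}

// One payload carrying two bindings: a wide tile and a square tile.
function buildTilePayload(imageUrl, text) {
    return '<tile><visual>' +
        '<binding template="TileWideImageAndText02">' +
            '<image id="1" src="' + esc(imageUrl) + '" alt="image"/>' +
            '<text id="1">' + esc(text) + '</text>' +
        '</binding>' +
        '<binding template="TileSquareImage">' +
            '<image id="1" src="' + esc(imageUrl) + '" alt="image"/>' +
        '</binding>' +
    '</visual></tile>';
}
```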

Sure, it doesn’t look as clean and (gasp!) we have to do string concatenation. But it’s only a couple of minutes more work and more flexible. Like I said: either way, you need to know the template. In my case, I’m sending both notifications in one payload. The first is TileWideImageAndText02, which produces a nice image with text on the bottom describing the image. If the user has a small tile, it will use TileSquareImage, which forgoes the text and just displays the image. After trying a few, I settled on this combination as the best user experience. This is an easy way, with minimal effort, to support both wide and narrow tiles.

As an aside, I recommend setting the tag (X-WNS-Tag) header, particularly if your app cycles tiles and you want to replace a specific tile. Also, it’s a good idea to XML-escape all data, which I’m doing with the long image URLs … and this, I believe, is taken right from the mpns library:

BigML has made the process of creating a predictive model from a dataset one-click easy. For example, a store owner can use her sales data to predict the optimal inventory levels given the time of year. But what if you have an idea for a model, for example predicting the unemployment rate for London Boroughs based on demographic information, but you don’t have the data? While it is trivial to build this model with BigML once you have data, where do you get it and how do you know if the data is from a reliable source?

This is where data markets shine, providing an easy-to-search repository of curated datasets that can be combined with your own data to build models with more meaningful insights. A prominent example of a data market is the Windows Azure Marketplace DataMarket. It offers a wide spectrum of public and commercial datasets that are exposed via an OData API. While BigML can already import datasets from the Azure DataMarket via the OData API using our remote sources or directly from an Azure blob, we wanted to make it even easier. With the new BigML Data Marketplaces widget, you can now directly browse datasets from Azure DataMarket and import them into BigML with just one click.

The BigML Data Marketplaces widget can be enabled in the “Data Marketplaces” section of your account settings. You will need to grant BigML access to your dataset subscriptions by entering your Account Key and Customer ID from your Azure Marketplace Account.

Once enabled, you will see a new icon in the sources tab of your dashboard which can be used to activate the Azure DataMarket browser.

This will bring up a list of Azure DataMarket datasets. Clicking on a dataset will reveal a full description and a link that can be used to subscribe to the dataset, if you were not already subscribed.

Once you find a dataset that you want to analyze using BigML, you can select one of its Entity Sets and how many rows you want to use to create a new BigML Source.

So once you create a new BigML Source:

your BigML Dataset and predictive model are just a few clicks away.

The new BigML Data Marketplace widget makes it easier than ever to build insightful models by providing easy access to the well-structured and rich selection of datasets from the Azure DataMarket. This shows how fabulous the combination of cloud-based applications is becoming: without installing or configuring any software, or downloading or uploading any data, you can analyze hundreds of datasets from just a browser and derive many powerful insights.

We are very thankful to the folks at Azure DataMarket, and especially to Rene Bouw, for their help and support during the integration process.

Yesterday I sat down with my teammate Abhishek and talked about how we build Service Bus - not how we code it, but how we run the process inside the team and how we get from features sitting on the humongous backlog to working features in the service.

We also talk about the three different disciplines Program Management, Development, and Test/QA, and how the checks and balances between the disciplines help with getting things out on schedule and at great quality.

Over the holidays, the topic of transactions flared up on Twitter amongst a number of distributed systems .NET luminaries and it turned out that there isn't always clear agreement even about the basic notions around transaction technology as the overall technology stack has evolved and there are now databases that sit entirely in memory, for instance. Can those databases participate in a distributed transaction even if they're not "durable"?

What are the challenges around making two or more things work together? Do I even care?

To start that discussion, this episode is an introduction to what transactions are and what they’re for, and I explain the “traditional” transaction properties using a few low-tech, non-code examples and a little role play with the help of my teammates Will Perry (@willpe) and Abhishek Lal (@AbhishekRLal).

One of the common misconceptions about OAuth is that it provides identity federation by itself. Although supporting OAuth with federated identities is a valid pattern and is essential to many API providers, it does require the combination of OAuth with an additional federated authentication mechanism. Note that I’m not talking about leveraging OAuth for federation (that’s OpenID Connect), but rather, an OAuth handshake in which the OAuth Authorization Server (AS) federates the authentication of the user.

There are different ways to federate the authentication of an end user as part of an OAuth handshake. One approach is to simply incorporate it as part of the authorization server’s interaction with the end user (handshake within handshake). This is only possible with grant types where the user is redirected to the authorization server in the first place, such as implicit or authorization code. In that case, the user is redirected from the app, to the authorization server, to the idp, back to the authorization server and finally back to the application. The federated authentication is transparent to the client application participating in the OAuth handshake. The OAuth spec (which describes the interaction between the client application and the OAuth Authorization Server) does not get involved.

Another approach is for the client application to request the access token using an existing proof of authentication in the form of a signed claim (handshake after handshake). In this type of OAuth handshake, the redirection of the user (if any) is outside the scope of the OAuth handshake and is driven by the application. However, the exchange of the existing claim for an OAuth access token is the subject of a number of extension grant types.

One such extension grant type is defined in SAML 2.0 Bearer Assertion Profiles for OAuth 2.0 specification according to which a client application presents a SAML assertion to the OAuth authorization server in exchange for an OAuth access token. The Layer 7 OAuth Toolkit has implemented and provided samples for this extension grant type since its inception.

Because of the prevalence of SAML in many environments and its support by many identity providers, this grant type has the potential to be leveraged in lots of ways in the Enterprise and across partners. There is however an alternative to bloated, verbose SAML assertions emerging, one that is more ‘API-friendly’, based on JSON: JSON Web Token (JWT). JWT allows the representation of claims in a compact JSON format and the signing of such claims using JWS. For example, OpenID Connect’s ID Tokens are based on the JWT standard. The same way that a SAML assertion can be exchanged for an access token, a JWT can also be exchanged for an access token. The details of such a handshake are defined in another extension grant type, part of JSON Web Token (JWT) Bearer Token Profiles for OAuth 2.0.

Give me a JWT, I’ll give you an access token. Although I expect templates for this extension grant type to be featured as part of an upcoming revision of the OAuth Toolkit, the recent addition of JWT and JSON primitives enables me to extend the current OAuth authorization server template to support JWT Bearer Grants with the Layer 7 Gateway today.

The first thing I need for this exercise is to simulate an application getting a JWT claim issued on behalf of a user. For this, I create a simple endpoint on the Gateway that authenticates a user and issues a JWT returned as part of the response.

Pointing my browser to this endpoint produces the following output:

Then, I extend the Authorization Server token endpoint policy to accept and support the JWT bearer grant type. The similarities between the SAML bearer and the JWT bearer grant types are most obvious in this step. I was able to copy the policy branch and swap the SAML and XPath policy constructs for JWT and JSON path ones. I can also base trust on HMAC-type signatures that involve a shared secret instead of a PKI-based signature validation if desired.

I can test this new grant type using a REST client calling the OAuth Authorization Server’s token endpoint. I inject into this request the JWT issued by the JWT issuer endpoint and specify the correct grant type.

I can now authorize an API call based on this new access token as I would any other access token. The original JWT claim is saved as part of the OAuth session and is available throughout the lifespan of this access token. This JWT can later be consulted at runtime when API calls are authorized inside the API runtime policy.

Editor’s Note: Today’s post is brought to us by Eric Weidner, OpenLogic Co-founder and Director of Engineering, describing how the company provides support and services for CentOS customers, including details on how to get OpenLogic CentOS images running on Windows Azure Virtual Machines.

OpenLogic provides services and support for over 700 different open source packages, including commercial-grade support for CentOS, an enterprise-class Linux distribution derived from the publicly-available source code for Red Hat Enterprise Linux. Our goal with supporting CentOS is to enable Enterprises to take advantage of a fully open alternative to the popular enterprise Linux distributions that customers already know and use.

Since April or so of last year, we have had a close working relationship with the Windows Azure team, with the goal of making OpenLogic CentOS images available in the image gallery of the Windows Azure Preview Management Portal. Our counterparts, like Henry Jerez, have been very focused on delivering a great solution for our mutual customers, working with us to meet a series of goals in order to make our “go live” dates.

What’s great about CentOS and Windows Azure is that there’s really very little required to get the OpenLogic images running. For users, it’s as easy as picking the OpenLogic CentOS image in the Windows Azure portal, answering a few questions for the basic setup, and then a CentOS server can be launched in about five minutes. There are also tools available to give developers the ability to automate interactions with the platform.

Customers running OpenLogic CentOS images on Windows Azure can expect a fast, scalable, and truly predictable deployment process. Additionally, they get servers running a distribution they are already familiar with using in their traditional data centers.

To illustrate just how easy it is, below is the step-by-step detail for how to create a custom virtual machine running an OpenLogic CentOS image using the Windows Azure Management Portal:

Sign in to the Windows Azure Management Portal. On the command bar, click New.

The VM OS Selection dialog box opens. You can now select an image from the Image Gallery.

Click Platform Images, select the OpenLogic CentOS 6.2 image, and then click the arrow to continue.

In Virtual Machine Name, type the name that you want to use for the virtual machine. The name must be 15 characters or less. For this virtual machine, type MyTestVM1.

In New User Name, type the name of the account that you will use to administer the virtual machine. You cannot use root for the user name. For this virtual machine, type NewUser1.

In New Password, type the password that is used for the user account on the virtual machine. For this virtual machine, type MyPassword1. In Confirm Password, retype the password that you previously entered.

In Size, select the size that you want to use for the virtual machine. The size that you choose depends on the number of cores that are needed for your application. For this virtual machine, accept the default of Extra Small.

Click the arrow to continue.

You can connect virtual machines together under a cloud service to provide robust applications, but for this tutorial, you only create a single virtual machine. To do this, select Standalone Virtual Machine.

A virtual machine that you create is contained in a cloud service. In DNS Name, type a name for the cloud service that is created for the virtual machine. The entry can contain from 3-24 lowercase letters and numbers. This value becomes part of the URI that is used to contact the cloud service that the machine belongs to. For this virtual machine, type MyService1.

You can select a storage account where the VHD file is stored. For this tutorial, accept the default setting of Use Automatically Generated Storage Account.

In Region/Affinity Group/Virtual Network, select West US as the location of the virtual machine.

Click the arrow to continue.

The options on this page are only used if you are connecting this virtual machine to other machines or if you are adding the machine to a virtual network. For this virtual machine, you are not creating an availability set or connecting to a virtual network. Click the check mark to create the virtual machine.

The virtual machine is created and operating system settings are configured. When the virtual machine is created, you will see the new virtual machine listed as Running in the Windows Azure Management Portal.

Easy as that! As I said previously, about a five minute process in total.

It’s great to see the commitment to open source projects by Microsoft, and the moves they’ve been making to open up Windows Azure to Linux. Not only has Microsoft open sourced the drivers that allow people to run Linux on their hypervisors and platforms, including contributing them to the upstream kernel projects, but they’ve also created open source tools for developers to use to interact with the platform. You can also find the source code and instructions for building from source and running the drivers on GitHub and CodePlex.

For a summary of how this work is benefiting CentOS and Windows Azure customers, check out my interview alongside OpenLogic’s CEO Steven Grandchamp on the Microsoft Openness blog. To start running OpenLogic’s CentOS images as part of the current Virtual Machines Preview, go to the Windows Azure site.

Yesterday, Microsoft Open Technologies announced a complementary service to Windows Azure VMs - the VM Depot. The depot is a community-driven catalog of open source VM images. This lets you create and share VMs with custom configurations or specific software stacks installed.

Doug Mahugh also posted a getting started article that gives the basics of using the service.

I spent some time working with the depot last night, and here are the things I learned.

Requirements

You can probably guess that you'll need a Windows Azure subscription, but there are a few more things you'll need to do.

Make sure the VM preview feature is enabled for your service. You can do this by signing into your subscription and going to https://account.windowsazure.com/PreviewFeatures. Once here, sign up for the Virtual Machines & Virtual Networks option if it is not already active.

Make sure you have the latest version of the Windows Azure Command-line tools, as the depot produces a deployment script that uses a newish parameter (-o). You can update the command line tools by doing one of the following:

Using a community image

Find an image you want from the list. You can either scroll through the list or use the search bar at the top. The following image illustrates using the search field to find a VM that has Riak.

At this point you can either click the Deployment Script link to the far right of the VM entry you want to use or the Deployment Script icon at the top to retrieve a deployment script. You'll need to agree to the terms of use and select a region, and then you'll be given a command similar to the following:

After the VM status changes to running, you should be able to use SSH to connect to the VM and use it as you normally would.

Publishing a VM

Doug Mahugh's article provides information on publishing an image to the VM Depot. I didn't go through the entire process of publishing a VM because I didn't want to clutter up the VM Depot with "Larry's great generic Linux VM for testing purposes". The steps seem relatively straightforward though.

Set the storage container that contains the .VHD to public. You can do this in the Windows Azure Portal by:

Selecting the storage account.

Selecting Containers.

Selecting the container (vhds in this case) and clicking edit container. You'll get a dialog similar to the following:

Select Public Container for the access level of this container, and then click the checkbox.

Publishing the VHD to the VM Depot

You'll need to create an account on the VM Depot for this step. It allows you to use a Windows LiveID, Google ID, or Yahoo! ID. To create an account, just click on the Sign In link in the upper right to set this up.

After you've created an account and signed in, perform the following steps to share your VM with the community:

The URL of the VHD to publish is the full URL to the VHD in your public container. You can get this by performing the following steps:

Go to the Windows Azure Portal.

Go to the storage account that contains this VHD.

Select Containers, and then select the container.

A list of the objects in the container, along with the full URL to each item, will be displayed. Just note the URL and use it in the URL of the VHD to publish field in the VM Depot.

Once you've specified the VHD path and filled out all the information, agree to the terms and click the publish button.

Final Thoughts

The VM Depot is a great addition to the Windows Azure VM story. Previously you had to select a raw OS image and manually install your software stack on it. With the Depot, you can now pick an image that already has the stack you need, as well as share your custom stack with the community. And since it's based on the Windows Azure command-line tools, it allows you to create the command-line once in the portal and then use it in your automation scripts, or hand it out to co-workers who need to create their own VMs.

There's already a lot of VMs in the Depot for both specific OS releases (Debian Wheezy and Mageia) and specific software and software stacks (LAMP, Ruby, JRuby, WordPress, Joomla, Drupal). It will be interesting to see what new VMs show up now that this is open to the community.

Any thoughts on specific OS or software stacks that you'd like to see in the VM Depot?

We’ve seen the basic concept of Azure IaaS in my last article. This article will take a deeper look at how Images and Disks are used in Windows Azure Virtual Machine. Later in the article I’ll bring you another tutorial to give you a better understanding and hands-on experience.

There are two basic yet important concepts in Windows Azure Virtual Machine: Images and Disks. Although both of them are eventually in VHD format, there are significant differences between them.

Images

Images are virtual hard drives (VHDs) that have already been generalized (technically, sysprepped with the /generalize option). They are basically templates that will be used to clone the Virtual Machine. They come without any machine-specific settings such as computer name, user accounts, and network settings.

Predefined / Platform Images

Windows Azure provides a number of predefined images, including Windows and Linux. The following figure shows the predefined images on Windows Azure as of today.

Both techniques require us to sysprep the VHD properly. Eventually, the image should be created in the portal.

Figure 2 – Creating Image from VHD

Disk

Disks are the actual VHDs that are ready to be mounted by the Virtual Machine. There are two kinds of Disk: OS Disk and Data Disk.

OS Disk

The OS Disk is a VHD that is instantiated from an image and obviously contains the operating system files. When a VM is provisioned, the OS Disk is automatically created and mounted as the C:\ drive.

The default caching policy for the OS Disk is ReadWrite. This means that although the OS Disk is stored in Windows Azure Storage as a page blob, there is a caching disk sitting on the host OS. Whenever a read or write happens on the OS Disk, it reaches the caching disk first and is gradually flushed to Blob Storage. ReadWrite caching is enabled for the OS Disk because of the usage pattern the Azure team expects: the working sets of data being read and written are relatively small, so a local cache lets the disk perform efficiently.
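To make the write-back behavior described above concrete, here is a toy model of a ReadWrite cache in front of blob storage. This is purely an illustrative sketch (the class, names, and dict-based "blob" are invented for this example and have nothing to do with how Azure's host cache is actually implemented): writes land in the local cache first and only reach the backing store when flushed.

```python
class WriteBackCache:
    """Toy model of a ReadWrite host cache sitting in front of a page blob."""

    def __init__(self, backing):
        self.backing = backing  # dict standing in for the page blob
        self.cache = {}         # local caching disk on the host
        self.dirty = set()      # blocks written but not yet flushed

    def read(self, block):
        if block not in self.cache:          # cache miss: fetch from the blob
            self.cache[block] = self.backing.get(block)
        return self.cache[block]

    def write(self, block, data):
        self.cache[block] = data             # the write hits the cache first...
        self.dirty.add(block)                # ...and is flushed to the blob lazily

    def flush(self):
        for block in self.dirty:             # gradual flush to blob storage
            self.backing[block] = self.cache[block]
        self.dirty.clear()


blob = {"b0": "old"}
cache = WriteBackCache(blob)
cache.write("b0", "new")
print(blob["b0"])   # still "old": the write has only reached the cache
cache.flush()
print(blob["b0"])   # "new": the dirty block has been flushed
```

The small working set the Azure team expects is exactly the case where such a cache pays off: most reads and writes never leave the host.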

The maximum size of OS Disk is 127 GB as of today. The recommended approach is to let customers store larger data in the Data Disk.

Data Disk

A Data Disk is a VHD that allows us to store any data and can be mounted on the VM. As data disks are stored in Windows Azure Blob Storage as page blobs, they inherit the page blob's maximum size of 1 TB. However, there is a limit on how many disks can be mounted, which depends on the size of the Virtual Machine as presented below.

The default caching policy for Data Disk is “None” or No Cache. This means that when any reading or writing happens, it always goes directly to Blob Storage.

*Temporary Disk

Apart from the OS Disk and Data Disks, there is also a temporary disk stored on the VM itself. This is used for the OS paging file. Importantly, this disk is not persistent.

The following diagram illustrates how the disks are being stored in Windows Azure Storage.

Figure 4 – How disks are stored

A hands-on tutorial

We have talked about the concepts above. Now let’s jump into the demo to see them in action. I assume you have gone through the tutorial in my previous article; please do so if you have not.

Attaching Disks to Virtual Machines

1. Log in to the new Management Portal with your Live ID. After successfully logging in, navigate to the Virtual Machine section and you will see the Dashboard tab. At the bottom of the Dashboard, you will notice the “disk” section. By default, there is only one disk attached, of type OS Disk. If you look carefully, the OS Disk VHD refers to a Windows Azure Storage URL.

Figure 5 – Virtual Machine dashboard

2. Now, click on the “Attach” button and select the “Attach Empty Disk”.

Figure 6 – Attaching Disk to VM

When the dialog box shows up, define the File Name as “DataDisk1” and the Size as “1023”.

Figure 7 – Attaching an Empty Disk

It may take a while (2 to 3 minutes) to get the Data Disk ready.

3. Repeat Step 2 one more time. Define the File Name as “DataDisk2” and let the Size remain the same at “1023”.

4. After a while, you can see that there are two additional data disks being attached besides the original OS Disk.

Figure 8 – OS Disk and Data Disk on VM

5. Click “Connect” to remote desktop into the VM. When the RDP file is prompted, simply open it.

6. Once you have successfully remote desktopped into the VM, open up Server Manager and expand the Storage – Disk Management menu.

7. You might be prompted with the Initialize Disk dialog. This dialog appears because we have just attached two disks to the VM but haven’t initialized them yet. We are required to select a partition style: either MBR or GPT. In this demo, we select “MBR” and click “OK”.

Figure 9 – Initializing Data Disks

Striping Volume to Data Disks in VM

The earlier section of this article mentions that the maximum size of each blob is 1 TB. People often make the mistake of thinking that the maximum amount of data you can store on Azure disks is therefore 1 TB. This is not really true, as we can actually store up to 16 TB of data (on an Extra Large VM). The idea is to use a striped volume in Windows.
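The arithmetic behind that 16 TB figure is simple: each data disk is capped at 1 TB, and the number of attachable data disks grows with VM size. The per-size disk counts below are assumptions based on the limits commonly cited at the time of writing; verify them against the current Windows Azure documentation before relying on them.

```python
# Assumed data-disk limits per classic VM size (verify against current docs).
MAX_DATA_DISKS = {
    "ExtraSmall": 1,
    "Small": 2,
    "Medium": 4,
    "Large": 8,
    "ExtraLarge": 16,
}
MAX_DISK_SIZE_TB = 1  # each data disk is a page blob capped at 1 TB


def max_striped_capacity_tb(vm_size):
    """Capacity of a single striped volume spanning every attachable data disk."""
    return MAX_DATA_DISKS[vm_size] * MAX_DISK_SIZE_TB


print(max_striped_capacity_tb("ExtraLarge"))  # 16 (TB), as mentioned above
```

Striping in Windows presents those 16 one-terabyte disks to applications as one large volume, which is why the per-blob limit is not the practical ceiling.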

8. Right click on “Disk 2” which we have just initialized and click “New Striped Volume”.

Figure 10 – New Striped Volume

9. When the dialog comes up, select “Disk 3” in the Available list and click “Add”. Click “Next” to proceed.

There is a situation with Windows Azure Virtual Machines where you have deleted a Virtual Machine, either intentionally or for some other reason. You may already know that the OS disk VHD is still saved in your Azure Storage, because when a virtual machine is deleted, the OS disk and any other data disks remain at their respective Windows Azure Storage locations.

When you want to reuse the OS disk vhd you might encounter the following problems:

1. You can see that the OS Disk still shows as attached to a Virtual Machine, as shown below.

2. You cannot delete the OS VHD from storage, as while deleting the VHD you get the following error:

There is currently a lease on the blob and no lease ID was specified in the request.

3. If you decide to use the same OS VHD to create an OS image for creating your Virtual Machine, you will get an error like the one below:

The VHD http://portalvhds63*.blob.core.windows.net/vhds/avkashsql2012-avkashsql2012-2012-07-18.vhd is already registered with image repository as the resource with ID avkashsql2012-avkashsql2012-0-20120718183454.

The bottom line is that the OS VHD is there, but you cannot use it for any purpose.

Root cause:

- The root cause of this problem is that the VHD blob still has a lease on it and is locked in such a way that it is not reusable until you break the lease.

Solution:

You cannot remove the blob lease directly in the portal, so you need to use a PowerShell script as described below:

Happy New Year Windows Azure community! One of the first pieces of 2013 Windows Azure news is something that I am particularly excited about. Some of you might know that I was recruited to Microsoft in 2004 to help with the company’s approach to open source software. So each time we take a step to help the overall open source community, I cannot help but get excited.

Microsoft Open Technologies, Inc. has just announced VM Depot, a community-driven catalog of open source virtual machine images for Windows Azure. On VM Depot the community can build, deploy and share their favorite Linux configurations, create custom open source stacks, work with others and build new architectures for the cloud that benefit from the openness and flexibility of Windows Azure. You can find the announcement from Microsoft Open Technologies, Inc. at the Port 25 BLOG.

Here’s a quick look at VM Depot, where Azure users can bring their own custom Linux images to Azure Virtual Machines (currently in preview).

Here is where you get started: VM Depot. Remember to rate the images; our tech community thrives on your feedback. I’m looking forward to seeing this community develop, and special thanks to Bitnami, Alt Linux, Basho and Hupstream for helping us get this launched!

Do you need to deploy a popular OSS package on a Windows Azure virtual machine, but don’t know where to start? Or do you have a favorite OSS configuration that you’d like to make available for others to deploy easily? If so, the new VM Depot community portal from Microsoft Open Technologies is just what you need. VM Depot is a community-driven catalog of preconfigured operating systems, applications, and development stacks that can easily be deployed on Windows Azure.

You can learn more about VM Depot in the announcement from Gianugo Rabellino over on Port 25 today. In this post, we’re going to cover the basics of how to use VM Depot, so that you can get started right away.

Deploying an Image from VM Depot

Deploying an image from VM Depot is quick and simple. As covered in the online documentation, VM Depot will auto-generate a deployment script for use with the Windows Azure command-line tool for Mac and Linux that you can use to deploy virtual machine instances from a selected image. You can use the command line tool on any system that supports Node.js – just install the latest version of Node and then download the tool from this page on WindowsAzure.com. For more information about how to use the command line tool, see the documentation page.

Regardless of which approach you used to create your image, you’ll then need to save it to a public storage container in Windows Azure as a .VHD file. The easiest way to do this is to deploy your image to Azure as a virtual machine and then capture it to a .VHD file. Note that you’ll need to make the storage container for your .VHD file public (they’re private by default) in order to publish your image – you can do this through the Windows Azure management portal or by using a tool such as CloudXplorer.

Step 2: publish your image on VM Depot. Once your image is stored in a public storage container, the final step is to use the Publish option on the VM Depot portal to publish your image. If it’s your first time using VM Depot, you’ll need to use your Windows Live™ ID, Yahoo! ID, or Google ID to sign in and create a profile.

See the Learn More section for more detailed information about the steps involved in publishing and deploying images with VM Depot.

As you can see, VM Depot is a simple and powerful tool for efficiently deploying OSS-based virtual machines from images created by others, or for sharing your own creations with the developer community. Try it out, and let us know your thoughts on how we can make VM Depot even more useful!

If you want a great way to get kick started with Riak and you’re set up with Windows Azure, now there is an even easier way to get rolling.

Over on the Basho blog we’ve announced the MS Open Tech and Basho collaboration. I won’t repeat what was stated there, but I want to point out two important things:

Once you get a Riak image going, remember there’s the whole community and the Basho team itself there to help you get things rolling via the mailing list. If you’re looking for answers, you’ll be able to get them there. Even if you get everything running smoothly, join in anyway and at least just lurk.

The RTFM value factor is absolutely huge for Riak. Basho has a superb documentation site here. So definitely, when jumping into or researching Riak as software you may want to build on, use for your distributed systems, or use as your key-value database, check out the documentation. Super easy to find things, super easy to read, and really easy to get going with.

New Relic & The Rise of the New Kingmakers

In other news, my good friends at New Relic, in partnership with RedMonk analyst Stephen O’Grady, have released a book he’s written titled The New Kingmakers: How Developers Conquered the World. You may know New Relic as the huge developer advocates that they are, with the great analytics tools they provide. Either way, give it a look and read the book. It’s not a giant thousand-page tome, so it just takes a nice lunch break and you’ll get the pleasure of flipping the pages of the book Stephen has put together. You might have read the blog entry that started the whole “Kingmakers” statement; if you haven’t, give that a read first.

I personally love the statement, and have used it a few times myself. In relation to the saying and the book, I’ll have a short review and more to say in the very near future. Until then…

In this blog post, I will talk about some of the best practices for building cloud applications. I started working on it as a presentation for a conference; however, that didn’t work out, hence this blog post. Please note that these are some of the best practices I think one can follow while building cloud applications running in Windows Azure. There are many, many more available out there. This blog post will be focused on building stateless PaaS cloud services (you know, that Web/Worker role thingie) utilizing Windows Azure Storage (Blobs/Queues/Tables) and Windows Azure SQL Database (SQL Azure).

So let’s start!

Things To Consider

Before jumping into building cloud applications, there are certain things one must take into consideration:

Cloud infrastructure is shared.

Cloud infrastructure is built on commodity hardware to achieve best bang-for-buck and it is generally assumed that eventually it will fail.

A typical cloud application consists of many sub-systems where:

Each sub-system is a shared system on its own e.g. Windows Azure Storage.

Each sub-system has its limits and thresholds.

Sometimes individual nodes fail in a datacenter and, though very rarely, sometimes an entire datacenter fails.

You don’t get physical access to the datacenter.

Understanding latency is very important.

With these things in mind, let’s talk about some of the best practices.

Best Practices – Protection Against Hardware Issues

These are some of the best practices to protect your application against hardware issues:

Deploy multiple instances of your application.

Scale out instead of scaling up, or in other words, favor horizontal scaling over vertical scaling. It is generally recommended that you go with more, smaller-sized Virtual Machines (VMs) instead of a few larger-sized VMs unless you have a specific need for larger-sized VMs.

Don’t rely on VM’s local storage as it is transient and not fail-safe. Use persistent storage like Windows Azure Blob Storage instead.

Best Practices – Cloud Services Development

Now let’s talk about some of the best practices for building cloud services:

It is important to understand what web role and worker role are and what benefit they offer. Choose wisely to distribute functionality between a web role and worker role.

Decouple your application logic between web role and worker role.

Build stateless applications. For state management, it is recommended that you make use of a distributed cache.

Identify static assets in your application (e.g. images, CSS, and JavaScript files) and use blob storage for that instead of including them with your application package file.

Make proper use of service configuration / app.config / web.config files. While you can dynamically change the values in a service configuration file without redeploying, the same is not true for app.config or web.config files.

To achieve best value for money, ensure that your application is making proper use of all VM instances in which it is deployed.

Best Practices – Windows Azure Storage/SQL Database

Now let’s talk about some of the best practices for using Windows Azure Storage (Blobs, Tables and Queues) and SQL Database.

Some General Recommendations

Here’re some recommendations I could think of:

Blob/Table/SQL Database – Understand what they can do for you. For example, one might be tempted to save images in a SQL database whereas blob storage is the most ideal place for it. Likewise one could consider Table storage over SQL database if transaction/relational features are not required.

It is important to understand that these are shared resources with limits and thresholds which are not in your control, i.e. you don’t get to set these limits and thresholds.

It is important to understand the scalability targets of each storage component and design your application to stay within those scalability targets.

It is recommended that your application uses retry logic to recover from transient errors such as throttling and timeouts.

You can use TOPAZ or Storage Client Library’s built-in retry mechanism to handle transient errors. If you don’t know, TOPAZ is Microsoft’s Transient Fault Handling Application Block which is part of Enterprise Library 5.0 for Windows Azure. You can read more about TOPAZ here: http://entlib.codeplex.com/wikipage?title=EntLib5Azure.
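For readers unfamiliar with what a retry policy actually does, here is a minimal sketch of the underlying pattern: retry on transient failures with exponential backoff and jitter. This is a generic illustration only, not TOPAZ's or the Storage Client Library's actual API; the exception type and function names are invented for the example.

```python
import random
import time


class TransientError(Exception):
    """Stands in for a throttling or timeout error from a shared service."""


def with_retries(operation, max_attempts=5, base_delay=0.5):
    """Run operation, retrying transient errors with exponential backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except TransientError:
            if attempt == max_attempts:
                raise  # out of attempts: surface the error to the caller
            # Back off exponentially, with jitter to avoid synchronized retries.
            delay = base_delay * (2 ** (attempt - 1)) * random.uniform(0.8, 1.2)
            time.sleep(delay)


# Demo: an operation that fails twice with a transient error, then succeeds.
calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TransientError("throttled")
    return "ok"

result = with_retries(flaky, base_delay=0.01)
print(result)  # "ok", succeeding on the third attempt
```

Libraries like TOPAZ add important refinements on top of this skeleton, notably detection strategies that distinguish transient errors (retry) from permanent ones (fail fast).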

For best performance, co-locate your application and storage. With storage accounts, the cloud service should be in the same affinity group while with WASD, the cloud service should be in the same datacenter for best performance.

From a disaster recovery point of view, please enable geo-replication on your storage accounts.

Best Practices – Windows Azure SQL Database (WASD)

Here’re some recommendations I could think of as far as working with WASD:

It is important to understand (as mentioned above, and it will be mentioned many more times in this post) that it’s a shared resource. So expect your requests to get throttled or timed out.

It is important to understand that WASD != On Premise SQL Server. You may have to make some changes in your data access layer.

It is important to understand that you don’t get access to data/log files. You will have to rely on alternate mechanisms like “Copy Database” or “BACPAC” functionality for backup purposes.

Co-locate your application and storage account in the same affinity group (best option) or the same data center (next best option) for best performance.

Table Storage does not support relationships so you may need to de-normalize the data.

Table Storage does not support secondary indexes so pay special attention to querying data as it may result in full table scan. Always ensure that you’re using PartitionKey or PartitionKey/RowKey in your query for best performance.

With Table Storage, pay very special attention to “PartitionKey” as this is how data in a table is organized and managed.
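The difference between a keyed lookup and a full table scan can be made concrete with a toy model of how Table Storage indexes entities by (PartitionKey, RowKey). This is a conceptual sketch only (the table contents and function names are invented, and the real service distributes partitions across servers), but it shows why a query on a non-key property is so much more expensive:

```python
# Toy model: entities indexed by (PartitionKey, RowKey), the only index
# Table Storage maintains. All names and data here are made up.
table = {
    ("US", "cust-001"): {"name": "Alice"},
    ("US", "cust-002"): {"name": "Bob"},
    ("EU", "cust-003"): {"name": "Carol"},
}


def point_query(pk, rk):
    """PartitionKey + RowKey: a direct lookup, the fastest kind of query."""
    return table.get((pk, rk))


def property_scan(prop, value):
    """Filtering on a non-key property forces a scan over every entity."""
    return [e for e in table.values() if e.get(prop) == value]


print(point_query("US", "cust-002"))   # a single keyed lookup
print(property_scan("name", "Carol"))  # touches every row in the table
```

With millions of entities, the scan version reads the whole table; the keyed version stays O(1) in this model, which is why queries should always include PartitionKey (and ideally RowKey).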

Best Practices – Managing Latency

Here’re some recommendations I could think of as far as managing latency is concerned:

Co-locate your application and data stores. For best performance, co-locate your cloud services and storage accounts in the same affinity group and co-locate your cloud services and SQL database in the same data center.

Make appropriate use of Windows Azure CDN.

Load balance your application using Windows Azure Traffic Manager when deploying a single application in different data centers.

Some Recommended Reading

Though you’ll find a lot of material online, a few books/blogs/sites I can recommend are:

Summary

What I presented above are only a few of the best practices one could follow while building cloud services. I kept this blog post rather short on purpose; in fact, one could write a blog post for each item. I hope you’ve found this information useful. I’m pretty sure there are more; please do share them by providing comments. If I have made some mistakes in this post, please let me know and I will fix them ASAP. If you have any questions, feel free to ask them by providing comments.

In this episode I talk with Craig Blessing, Vice President at Datacastle. We discuss how his company uses Windows Azure to protect business data. Tune in as he outlines for us Datacastle’s innovative cloud solutions which help organizations have secure, anytime, anywhere access to their data.

One of the things that may have slipped past you in the holiday madness was this: PHP 5.4 is available in Windows Azure Web and Worker Roles. This is just a quick post to bring you up to speed on this feature. (If you are wondering about PHP versions in Windows Azure Web Sites, see PHP 5.4 available in Windows Azure Web Sites.)

With the latest Windows Azure SDK for PHP, you can use PowerShell cmdlets to create a Windows Azure project, add PHP web and worker roles, and now, specify the version of PHP to be used in the roles…including PHP 5.4. Here are the steps to creating a project and selecting a specific version of PHP (more details are in this article: How to create PHP web and worker roles):

1. Create an Azure project:

New-AzureServiceProject projectName

2. Add a PHP Web or Worker role:

Add-AzurePHPWebRole roleName

-OR-

Add-AzurePHPWorkerRole roleName

3. Specify the PHP version to be used in the role (the currently available versions are 5.3.17 or 5.4.0):

Set-AzureServiceProjectRole roleName php 5.3.17

-OR-

Set-AzureServiceProjectRole roleName php 5.4.0

That’s it. However, note that currently you can only choose from two versions of PHP: 5.3.17 and 5.4.0. The team in charge of supporting PHP in web and worker roles is working hard to make several versions of PHP available in the near future. To see what versions are available, use the Get-AzureServiceProjectRoleRuntime cmdlet:

PS C:\MyProject> Get-AzureServiceProjectRoleRuntime

At the time of this writing, here’s what you will see (note the IsDefault flag is set to true for PHP 5.3.17, indicating that it will be the default PHP version installed):

After you have selected your PHP version, you can customize it (change configuration settings and enable/disable extensions). Or, you can provide your own PHP runtime. These articles will show you how:

Sitecore’s new CMS Azure Edition delivers “access to the cloud without all the drama”. This Platform as a Service offering on Microsoft Windows Azure leverages the considerable advantages of the Azure platform to enable scalable, enterprise-class deployment of Sitecore-powered websites. With the Sitecore PaaS solution, organizations can:

Currently, the SignalR libraries are moving toward a first major release. As I write, SignalR has a Release Candidate (RC1) version. Unfortunately, this means that the Silverlight NuGet package is a bit out of sync. This will probably be fixed when SignalR has a final release.

I’m getting quite some questions from LightSwitch developers about SignalR/LightSwitch. Unfortunately, they run into the above problem.

Therefore, I created a small sample application which has all binary references in place. Furthermore, the sample also contains a WinForms client making a SignalR connection to the LightSwitch server.

The purpose of the sample is not to show you SignalR (the above links are more useful for that), but just to make sure you have the correct binaries at your disposal.

Update: The WinForms project might miss the SignalR assemblies. Simply take the latest NuGet bits for the WinForms project (or, if you only want to focus on the LightSwitch projects, simply unload the WinForms project):

The Office Store, which is similar to the Windows Store for Windows 8 apps, enables ISVs and individual developers to distribute free or paid SharePoint apps to Office 365 subscribers and SharePoint 2013 clients. Vivek Narasimhan announced The Office Store is now open! in an 8/26/2012 post to the Apps for Office and SharePoint blog.

◊ On 1/8/2012, I learned from a member of the Commerce UX team that even if I were able to upload the SharePoint.app file, the Office Store team would reject it because it's autohosted in SharePoint.

The infrastructure for autohosted apps will remain in preview status for a period of time after SharePoint 2013 releases. Autohosted apps (which includes all apps that depend on Microsoft Access) will not be accepted by the Office Store during this preview phase. [Emphasis added.]

I assume that the author of the foregoing forgot to include Visual Studio LightSwitch HTML 5 Client Preview 2 and Windows Azure Mobile Services with the Microsoft Access reference.

This issue kills any chance devs have to work through the issues of developing and certifying SharePoint apps that use LightSwitch HTML 5 Client Preview 2 and Windows Azure Mobile Services in the configuration demonstrated by Scott Guthrie at the SharePoint Conference 2012.

Hi, I'm Ricky Kirkham from the developer documentation team for SharePoint 2013. I'd like to let you all know how you can use the app manifest to specify which locales are supported by your app for SharePoint. You are required to specify supported locales, or your app will not be accepted by the SharePoint Store.

The <Properties> element of the app manifest must always contain a child element that identifies the locales that the app supports. For the final release version of SharePoint 2013, the element is <SupportedLocales> and it must have a <SupportedLocale> child for every locale that the app supports, even if there is just one locale. Note that you identify the locale with the CultureName attribute. The value of this attribute is a locale identifier in the Internet Engineering Task Force (IETF)-compliant format LL-CC. The following is an example.

There are a couple of small, and temporary, extra points to know. First, for an undetermined period of time after the release of SharePoint 2013, the SharePoint Store will not have any UI to tell potential app purchasers which locales are supported by your app, meaning that all users will assume that all apps support the en-us locale. But if you have localized your app for other locales, you should include them in your <SupportedLocales> element so that you will not have to upload a new version of the app when the store's UI is expanded, and so that users will automatically start seeing more locale options for your app when the UI supports this.

Second, if you have signed up for a SharePoint Online developer site, please note that it might not be converted to the release version of SharePoint 2013 for a few weeks after release. While your developer site is still based on SharePoint 2013 Preview, you have to use the <SupportedLanguages> element instead of the <SupportedLocales> element. The SharePoint Store will accept either element for now, but will switch in the future to allow only the <SupportedLocales> element.

The <SupportedLanguages> element has no child elements or attributes. Its value is a simple semicolon-delimited list of locales. The following is an example.
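A sketch of that markup, with the same illustrative locales:

```xml
<SupportedLanguages>en-US;ja-JP</SupportedLanguages>
```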

The <SupportedLanguages> element will continue to work on the release version for some time, but it is deprecated in favor of <SupportedLocales>, so you should always use <SupportedLocales> in new apps. And if you have a reason to update an app that uses <SupportedLanguages>, we recommend that you switch to <SupportedLocales> as part of the update.

For more information about app manifest markup, see the documentation under App Manifest.

I realize I’m a week late for this one but like most folks I’ve been on vacation for the holidays. HAPPY NEW YEAR from all of us on the LightSwitch team! It’s great to be back to work and I’m looking forward to a happy, healthy, and geeky 2013. Although December is normally a very quiet month, a lot of goodness around Visual Studio LightSwitch happened. Check it out!

LightSwitch Cosmopolitan Shell Source Code Released!

I know it took a while (we were tied up in some mumbo jumbo with our legal department) but we finally released the source code to the LightSwitch Cosmo Shell! This is the default shell used in new LightSwitch projects created with Visual Studio 2012. If you want to tweak the current theme & shell to suit your specific needs, this easily customizable sample gives you a great starting point. The code and XAML are structured to facilitate making incremental changes to the default shell.

A lot of developers had misconceptions about LightSwitch and had never tried it themselves, but they were immediately impressed with what it could do once I showed them.

Adding mobile HTML as an alternate client can really fill a gap in the development community. Most developers I spoke with are being “forced” to learn JavaScript & HTML to keep up with business demands and the plethora of mobile devices being used in the enterprise.

Being able to use LightSwitch as a way to build and deploy data services to Azure is very compelling for native (Win8, iOS, Android, etc.) developers. They can quickly create the shared backend services and concentrate on the clients.

More Notable Content this Month

In December, the team continued to write articles on how to write JavaScript with the HTML Client Preview. (We promise we have a lot more on the way!) A couple of our devs also wrote up some tips & tricks posts…

I was on a pretty long vacation this month so I may have missed some articles of note. If so, feel free to add a comment below. Many thanks to all our rock star bloggers for contributing in December, particularly Michael Washington!

LightSwitch Team Community Sites

Become a fan of Visual Studio LightSwitch on Facebook. Have fun and interact with us on our wall. Check out the cool stories and resources. Here are some other places you can find the LightSwitch team:

In this post I’ll walk through the options for upgrading an existing Cloud Service using OS Family (1 or 2) to use Windows Server 2012 (OS Family 3) and .NET 4.5.

Traditionally, to upgrade a Cloud Service all that is required is to click the update button in the Windows Azure portal after uploading your updated package or publishing with Visual Studio. However, when changing OS Families from (1 or 2) to 3 you will receive an error saying “Upgrade from OS family 1 to OS family 3 is not allowed”. This is a temporary restriction on the update policy that we are working to remove in an upcoming release.

In the meantime there are two workarounds for updating your existing Windows Azure Cloud Service to run with .NET 4.5 and OS family 3 (Server 2012):

VIP Swap (recommended approach)

Delete and Re-deploy

Both have different pros and cons that are outlined in the table below. Detailed walkthroughs of both options are also provided.

Delete and Re-deploy: you can make any change to the updated application, and VIP swap restrictions do not apply; however, there is a loss of availability while the application is deleted and then redeployed, and a potential change in the public IP address after the redeployment.

Configuring Your Project for Upgrade

Step 1: Open the solution in Visual Studio.

In the example below the solution is named Sdk1dot7 and there are two projects. The first is an MVC Web Role project (MvcWebRole1) and the second is the Windows Azure Cloud Service project (Sdk1dot7).

Step 2: Upgrade the Project

Right click on the Cloud Service project (Sdk1dot7) and select properties. Note in the picture below, the properties page shows that this project was built with the June 2012 SP1 Windows Azure Tools version. Click the “Upgrade” button. After the upgrade, if you check this properties sheet again, it should show that the current Windows Azure Tools version is October 2012.

Step 3: Open ServiceConfiguration.Cloud.cscfg:

Change OSFamily from (1 or 2) to 3.

New value: osFamily="3"
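As a sketch, the attribute lives on the root <ServiceConfiguration> element of the .cscfg (the service name here matches this post's sample project; the rest of the file stays as it was):

```xml
<ServiceConfiguration serviceName="Sdk1dot7" osFamily="3" osVersion="*"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <!-- Role and ConfigurationSettings elements unchanged -->
</ServiceConfiguration>
```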

Step 4: Modify the Web Role to use .NET 4.5

Open the properties sheet of the WebRole project by right-clicking the WebRole project and clicking on “Properties”.

In the “Application” tab look for the “Target framework” dropdown. It should show .NET 4.0.

Open the dropdown and select .NET 4.5. You’ll get a “Target Framework Change” dialog box, click “Yes” to proceed. The Target Framework should now read .NET 4.5.

Rebuild by hitting ‘F6’. You might get some build errors due to namespace clashes between some new libraries introduced in .NET 4.5. These are easy enough to fix; if you cannot, feel free to add a comment and I’ll respond.

Deploying using VIP Swap

You can deploy from within Visual Studio or from the Windows Azure Portal. In this post, I’ll show the steps to deploy through the portal.

Step 1: Generate the .cspkg and .cscfg files for upload.

Right click on your Cloud Service project (Sdk1Dot7) and select Package:

After the packaging is complete, a file explorer window will open with the newly created .cspkg and .cscfg files for your Cloud Service.

Step 2: Uploading the Files to the Staging Slot using the Windows Azure Portal

Open the Windows Azure portal at https://manage.windowsazure.com and select your cloud service. Click on the “Staging” tab (circled in red in the accompanying picture below).

Once on the staging tab, click on the “Update” button on the bottom panel (circled in green in the accompanying picture below).

From there a dialog will open requesting the newly created files packaged from Visual Studio.

Select “From Local” button for both and upload the files that were generated during packaging. Remember to check the “Update even if one or more roles contain single instance” if you have a single instance role. These options are circled in red in the picture below. Click on the check marked circle to proceed.

Step 3: Test the new deployment

At this point you will have your application running on OS family 3 and using .NET 4.5 in the staging slot and your original application using OS family 1/2 and .NET 4.0 in the production slot. Browse to the application by clicking the Site URL on the dashboard under the staging slot.

Step 4: Perform the VIP Swap to Production

On either the production or the staging tab, click on the “swap” button located in the bottom panel next to the update button (circled in green in the accompanying picture).

After this operation completes, you will have your application running on OS family 3 and using .NET 4.5 in place of the original application.

Deleting Your Deployment

The second option is to delete your deployment. This is not the recommended approach for a production application because you will have downtime and there is a possibility of losing the current IP address assigned to your VIP. This option is really only useful for dev/test, where you do not want to go through the VIP swap life cycle or you are making changes to the cloud service that are restricted during an in-place upgrade or VIP swap.

To delete your deployment open the Windows Azure portal at https://manage.windowsazure.com and select your cloud service. Click the “DELETE” button in the bottom panel (circled in green in the accompanying picture). Click “yes” on the confirmation dialog box that pops up.

Once the service is deleted you can simply republish from Visual Studio or package and upload using Visual Studio + the management portal.

If you are one of the many users of our Windows Azure Powershell cmdlets or our Windows Azure command line tool, then you know that the CLI makes it really easy to manage and deploy Websites, Mobile Services, VMs, Service Bus and much more in Windows Azure from your favorite shell prompt on Windows, Mac and Linux.

That’s not all you can do though, you can do much more! You can take our tools and use them in your favorite scripts as part of your automation infrastructure. Or you can use them right from within your favorite development environments.

Below is a bunch of posts from both the community and our team talking about this Azure automation goodness.

General scripting

These posts cover the basics of scripting from different shell environments.

Using the Azure CLI from within Cloud 9. In this post, the C9 folks show how you can install the “azure” CLI right from the terminal within Cloud 9. Once you have it, the full world of Azure is open to you right from the browser.
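As a minimal sketch of that scripting style (the site name is hypothetical, and the live calls are skipped when the "azure" CLI is not installed):

```shell
# Provision a Windows Azure Web Site from a script using the "azure" CLI.
SITE_NAME="my-demo-site"   # hypothetical site name

if command -v azure >/dev/null 2>&1; then
  azure site create "$SITE_NAME" --location "West US"  # create the site
  azure site list                                      # confirm it appears
else
  echo "azure CLI not installed; skipping live calls"
fi

echo "target site: $SITE_NAME"
```

The same pattern works in CI scripts or cron jobs: every portal operation has a CLI counterpart, so provisioning can live in version control alongside the code.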

One of the simplest ways to get a server full of Microsoft Windows today is to click through a few forms on the Windows Azure platform. Voilà, you have a machine running Windows in the cloud. Azure offers full-featured Windows machines at rates that rival those for Linux instances on other clouds. If you want Linux, Java, Python, Node.js, MySQL, or NoSQL, they're available too.

Microsoft is integrating Azure with its other products at a level you don't see in cloud-only companies. Clearly Microsoft views Azure as a crucial vector for delivering its platform in all its various combinations.

Peter Wayner asserted “Microsoft's cloud wows with great price-performance, Windows toolchain integration, and plenty of open source options” in a deck for his Review: Windows Azure shoots the moon article of 12/19/2012 for InfoWorld’s Cloud Computing blog (missed when published):

A long time ago in a century slipping further and further away, Bill Gates compared MSN with the exploding World Wide Web, saw the future, and pivoted nicely to embrace the Internet. A few decades later, someone at Microsoft looked at the cloud and recognized that the old days of selling Windows Server OS licenses were fading. Today we have Windows Azure, Microsoft's offering for the cloud.

Azure is a cloud filled with racks and racks of machines like other clouds, but it also offers a wider collection of the building blocks enterprise managers need to assemble modern, flexible websites. There are common offerings such as virtual machines, databases, and storage blocks, along with not-so-common additions such as service buses, networks, and connections to data farms: address verifiers, location data, and Microsoft's own Bing search engine. There are also tools for debugging your code, sending emails, and installing databases like MongoDB and ClearDB's version of MySQL.

All of these show that Microsoft is actively trying to build a system that lets developers easily produce a working website using the tools of their choice. Azure is not just delivering commodity Microsoft machines and leaving the rest up to you -- it's starting to make it simpler to bolt together all of the parts. The process still isn't simple, but it's dramatically more convenient than the old paradigm.

Not-only-Windows Azure

The Azure service is a godsend for those who are heavily invested in Microsoft's operating systems. Many of the big clouds offer only Linux or BSD machines. Rackspace charges 33 percent more to build out a Microsoft Windows server, but Azure rents a Windows machine at the same bargain rate as Linux.

Did I say the same as Linux? Yes, because Microsoft is fully embracing many open source technologies with Azure. You can boot up a virtual machine and install a few of the popular Linux distros like Ubuntu Server 12.04 or OpenSuse 12.1. There aren't many choices of open source distros, but Microsoft has chosen a few of the more popular ones. They cost the same as the standard Windows Server 2008 R2 and Windows Server 2012 offerings.

Microsoft's embrace of open source is on full display with Azure. The company is pushing PHP, Node.js, Python, Java (if you consider Java open source), and even MySQL. Well, that's not exactly true. You can create running versions of Drupal or WordPress, and Azure will set up MySQL back ends for you. If you go to the SQL tab to start up your own SQL database, you can provision an instance of Microsoft SQL Server, but there's no mention of MySQL. That's because Microsoft is letting a third party, ClearDB, deliver MySQL. It's one of a dozen or so extras you can buy.

The websites with Drupal or WordPress are among many options available. Microsoft will let you have up to 10 free ones with your account. Then you push your HTML or PHP to them with Git, and the server does the rest. (Notice the embrace of Git too.)

These free options are come-ons. If your website takes off and you start getting traffic, you can upgrade to shared services or full, managed machines that can be load balanced. The documentation is a bit cagey about what happens as you start fiddling with the Scale control panel, but you get better guarantees of service and less throttling. If you move over to the Reserved setting, you get dedicated virtual machines with resource guarantees. This is a pretty simple way to build and test a website before deploying it for production.

I'm consistently taken aback by many businesses' disregard for customer service. As long as customers push back on companies that treat them shabbily, enterprises willing to cut service will find themselves out of business or forced to merge with establishments that treat their customers better.

Giving short shrift to customer service remains an issue in the cloud, which is based on the notion of automation and self-provisioning at scale. Dealing with people individually seems contrary to the idea of the cloud. Many public cloud providers assumed they could just put a layer of Web pages between them and their customers, and all would be right -- no phones to answer, no planes to board.

The truth of the matter is that small businesses drove the initial growth of cloud computing. Because typical small businesses can't pay much for cloud services, they weren't surprised when they couldn't get a person on the phone. The cloud providers that courted small businesses continued to grow without much of an investment in customer service.

These days, larger enterprises are investing in public clouds, and they're accustomed to real people talking to them on the phone, account managers in their offices, and cell numbers for support engineers on call around the clock. In other words, they want public cloud providers to offer the same level of customer service as the larger enterprise software providers.

The problem is that many of the public cloud providers are not set up to meet this level of customer service. They simply don't have the people or the systems in place. To establish such systems and personnel, they'll have to raise their prices -- and no one is doing that these days.

But as public clouds push into larger enterprises, they will have no choice but to provide a richer customer service experience. Large enterprise IT demands that level of service, and public clouds won't be able to penetrate the large enterprise market without it.

As I continue to think about the opportunities that Software Defined Networking (SDN) and Network Function Virtualization (NFV) bring into focus, the capability to deliver security as a service layer is indeed exciting.

Recent activity in the space has done nothing but reinforce this opinion. My day job isn’t exactly lacking in excitement, either.

As many networking vendors begin to bring their SDN solutions to market — whether in the form of networking equipment or controllers designed to interact with them — one of the missing strategic components is security. This isn’t a new phenomenon, unfortunately, and as such, predictably there are also now startups entering this space and/or retooling from the virtualization space and stealthily advertising themselves as “SDN Security” companies.

Like we’ve seen many times before, security is often described (confused?) as a “simple” or “atomic” service and so SDN networking solutions are designed with the thought that security will simply be “bolted on” after the fact and deployed not unlike a network service such as “load balancing.” The old “we’ll just fire up some VMs and TAMO (Then a Miracle Occurs) we’ve got security!” scenario. Or worse yet, we’ll develop some proprietary protocol or insertion architecture that will magically get traffic to and from physical security controls (witness the “U-TURN” or “horseshoe” L2/L3 solutions of yesteryear.)

The challenge is that much of Security today is still very topologically sensitive and depends upon classical networking constructs to be either physically or logically plumbed between the “outside” and the asset under protection, or it’s very platform dependent and lacks the ability to truly define a policy that travels with the workload regardless of the virtualization, underlay OR overlay solutions.

Depending upon the type of control, security is often operationalized across multiple layers using wildly different constructs, APIs, and context in terms of policy and disposition, depending upon its desired effect.

Virtualization has certainly evolved our thinking about how we should think differently about security mostly due to the dynamism and mobility that virtualization has introduced, but it’s still incredibly nascent in terms of exposed security capabilities in the platforms themselves. It’s been almost 5 years since I started raging about how we need(ed) platform providers to give us capabilities that function across stacks so we’d have a fighting chance. To date, not only do we have perhaps ONE vendor doing some of this, but we’ve seen the emergence of others who are maniacally focused on providing as little of it as possible.

If you think about what virtualization offers us today from a security perspective, we have the following general solution options:

Hypervisor-based security solutions which may apply policy as a function of the virtual-NIC card of the workloads it protects.

Extensions of virtual-networking (i.e. switching) solutions that enable traffic steering and some policy enforcement that often depend upon…

Virtual Appliance-based security solutions that require manual or automated provisioning, orchestration and policy application in user space that may or may not utilize APIs exposed by the virtual networking layer or hypervisor

There are tradeoffs across each of these solutions; scale, performance, manageability, statefulness, platform dependencies, etc. There simply aren’t many platforms that natively offer security capabilities as a function of service delivery that allows arbitrary service definition with consistent and uniform ways of describing the outcome of the policies at these various layers. I covered this back in 2008 (it’s a shame nothing has really changed) in my Four Horsemen Of the Virtual Security Apocalypse presentation.

As I’ve complained for years, we still have 20 different ways of defining how to instantiate a five-tuple ACL as a basic firewall function.

Out of the Darkness…

The promise of SDN truly realized — the ability to separate the control, forwarding, management and services planes — and deploy security as a function of available service components across overlays and underlays, means we will be able to take advantage of any of these models so long as we have a way to programmatically interface with the various strata regardless of whether we provision at the physical, virtual or overlay virtual layer.

Delivering security as a service via SDN holds enormous promise for reasons I’ve already articulated and gives us an amazing foundation upon which to start building solutions we can’t imagine today given the lack of dynamism in our security architecture and design patterns.

Finally, the first two elements allow us to do things we can’t even imagine with today’s traditional physical and even virtual solutions.

I’ll be starting to highlight really interesting solutions I find (and am able to talk about) over the next few months.

• Larry Dignan (@ldignan) asserted “Amazon's infrastructure as a service unit may be underestimated by Wall Street. Bottom line: AWS may change Amazon's profit profile completely in the years to come, argues Macquarie Capital” in his Amazon's AWS: $3.8 billion revenue in 2013, says analyst post of 1/7/2013 to ZDNet’s Cloud blog:

Amazon Web Services is expected to have revenue of $3.8 billion in 2013 and could be worth $19 billion to $30 billion if it were a standalone company, argued Macquarie Capital analysts in a research note.

Macquarie's argument, led by analyst Ben Schachter, relies on the addressable market for cloud computing and the assumption that AWS accounts for all of Amazon's growth in the "other" revenue category.

The Macquarie research note landed at the same time as a Morgan Stanley upgrade. Scott Devitt upgraded Amazon based on international growth and global fulfillment services. Devitt called AWS a strategic asset.

Schachter's report on AWS was far more interesting. Schachter said that AWS is likely to land more large enterprises, a reality that is likely to boost growth. To date, AWS has relied on startups and small companies. The large company argument adds up. At AWS' customer and partner powwow last year, companies like Pfizer were readily available.

Independently, I've confirmed the profile of one U.S. sales region for AWS. In a nutshell, this region features top 200 accounts that range from $5,000 a month to about $200,000 a month. If you extrapolate those numbers throughout the U.S., combine international markets and smaller accounts paying about $1,000 a month it's clear that AWS has some serious growth ahead. That growth can come from additional partnerships and better channel efforts alone.

Macquarie's Schachter is estimating that AWS' current addressable market was $11 billion in 2012 and the unit delivered actual revenue of about $2 billion. In 2013, Schachter estimates that AWS will have revenue of $3.8 billion.

Among the key points from the Macquarie report:

AWS is a 100 percent gross-margin business for Amazon. Amazon's AWS costs run through its technology and content expense line. As AWS grows faster than Amazon's retail business, the gross margin profile for the entire company changes.

Storage growth for AWS' S3 services is exponential and can carry growth for years.

AWS is expected to have revenue of $3.8 billion in 2013, $6.2 billion in 2014 and $8.8 billion in 2015. In 2015, AWS will be 7 percent of Amazon's revenue: significant, but not large enough for the retailer to be required to break out numbers in its financial reports.

Comparisons to AWS are tricky since Rackspace is among the only standalone direct competitors. Savvis and Terremark compete with AWS, but those outfits are subsidiaries of CenturyLink and Verizon, respectively.

Schachter said:

Using our estimate of $3.8bn for 2013 AWS revenues, and applying a ~5x multiple based on the comps noted above, we arrive at a valuation of ~$19bn for the business on an EV/Sales basis (equating to ~$41/share of AMZN stock). Importantly, we believe this to be a conservative valuation multiple, as AWS revenues are growing much faster than any of the comps incorporated above. At an 8x valuation multiple, we estimate the AWS business could be worth $30bn as a stand-alone company, or ~$66/share.

As you probably know, Amazon CloudWatch provides monitoring services for your cloud resources and your applications. You can track cloud, system, and application metrics, see them visually, and arrange to be notified (via a CloudWatch alarm) if they go beyond a value that you specify. For example, you can track the CPU load of your EC2 instances and receive a notification (via email and/or Amazon SNS) if it exceeds 90% for a period of 5 minutes.

Today we are giving you the ability to stop or terminate your EC2 instances when a CloudWatch alarm is triggered. You can use this as a failsafe (detect an abnormal condition and then act) or as part of your application's processing logic (await an expected condition and then act).

Before we dig in, I should remind you of one thing. If you are using EBS-backed EC2 instances, you can stop them at any point, with the option to restart them later, while retaining the same instance ID and root volume (this is, of course, distinct from the associated termination option).

Failsafe Ideas

If you (or your developers) are forgetful, you can detect unused EC2 instances and shut them down. You could do this by detecting a very low load average for an extended period of time. This type of failsafe could be used to reduce your AWS bill by making sure that you are not paying for resources you're not actually using.

You could also implement a failsafe that would detect runaway instances (for example, CPU pegged at 100% for an extended period of time). Perhaps your application gets stuck in a loop from time to time (only when you are not looking, of course). You could also use our CloudWatch monitoring scripts to detect and act on other situations, such as excessive memory utilization.
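The same alarm can be created programmatically. A hedged sketch using the AWS CLI (the instance ID and thresholds are illustrative, and the live call is skipped when the CLI is not installed): stop an instance whose average CPU stays below 5% for an hour.

```shell
# Create a CloudWatch alarm whose action stops an idle EC2 instance.
REGION="us-east-1"
INSTANCE_ID="i-0123456789abcdef0"                 # hypothetical instance
STOP_ACTION="arn:aws:automate:${REGION}:ec2:stop" # built-in stop action

if command -v aws >/dev/null 2>&1; then
  aws cloudwatch put-metric-alarm \
    --alarm-name "stop-idle-instance" \
    --namespace "AWS/EC2" \
    --metric-name CPUUtilization \
    --dimensions Name=InstanceId,Value="$INSTANCE_ID" \
    --statistic Average \
    --comparison-operator LessThanThreshold \
    --threshold 5 \
    --period 300 \
    --evaluation-periods 12 \
    --alarm-actions "$STOP_ACTION" \
    --region "$REGION"
else
  echo "aws CLI not installed; skipping live call"
fi
```

Twelve five-minute evaluation periods give the one-hour window; swapping `ec2:stop` for `ec2:terminate` in the action ARN gives the terminate variant.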

Processing Logic

Many AWS applications will pull work from an Amazon SQS queue, do the work, and then pass the work along to the next stage of a processing pipeline. You can detect and terminate worker instances that have been idle for a certain period of time.

You can use a similar strategy to get rid of instances that are tasked with handling compute-intensive batch processes. Once the CPU goes idle and the work is done, terminate the instance and save some money!

Application Integration

You can also create CloudWatch alarms based on Custom Metrics that you observe on an instance-by-instance basis. You could, for example, measure calls to your own web service APIs, page requests, or message postings per minute, and respond as desired.
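Publishing such a custom metric is one CLI call. A sketch (the namespace and metric name are illustrative, and the live call is skipped when the AWS CLI is not installed):

```shell
# Publish one data point of a custom metric that an alarm could watch.
METRIC_VALUE=42   # illustrative per-minute page-request count

if command -v aws >/dev/null 2>&1; then
  aws cloudwatch put-metric-data \
    --namespace "MyApp" \
    --metric-name PageRequestsPerMinute \
    --value "$METRIC_VALUE" \
    --region us-east-1
else
  echo "aws CLI not installed; skipping live call"
fi
```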

Setting Up Alarm Actions

You can set up alarm actions from the EC2 or CloudWatch tabs of the AWS Management Console. Let's say you want to start from the EC2 tab. Right-click on the instance of interest and choose Add/Edit Alarms:

Choose your metrics, set up your notification (SNS topic and optional email) and check Take the action, and choose either Stop or Terminate this instance:

The console will confirm the creation of the alarm, and you're all set (if you asked for an email notification, you need to confirm the subscription within three days):

Your Turn

I can speak for the entire CloudWatch team when I say that we are interested in hearing more about how you will put this feature to use. Feel free to leave a comment and I'll pass it along to them ASAP.

The dual Web role application has been running in Microsoft's South Central US (San Antonio) data center since September 2009. I believe it is the oldest continuously running Windows Azure application.

About Me

I'm a Windows Azure Insider, a retired Windows Azure MVP, the principal developer for OakLeaf Systems and the author of 30+ books on Microsoft software. The books have more than 1.25 million English copies in print and have been translated into 20+ languages.

Full disclosure: I make part of my livelihood by writing about Microsoft products in books and for magazines. I regularly receive free evaluation software from Microsoft and press credentials for Microsoft Tech•Ed and PDC. I'm also a member of the Microsoft Partner Network.