Microsoft delivered a beta of its Windows Azure Online Backup service back in March 2012. Late last week, officials shared more on the updated version of that service, which the company is calling a "preview."

Windows Azure Online Backup is a cloud-based backup service that allows server data to be backed up to and recovered from the cloud. Microsoft is pitching it as an alternative to on-premises backup solutions. It offers block-level incremental backups (only changed blocks of information are backed up, to reduce storage and bandwidth utilization); data compression, encryption and throttling; and verification of data integrity in the cloud, among other features.

The Azure backup service also supports the Data Protection Manager (DPM) component of System Center 2012 Service Pack 1. Microsoft made the beta of System Center 2012 Service Pack 1 available on September 10, and has said the final version of that service pack will be out in early 2013.

Microsoft officials had no comment on when the company plans to move Windows Azure Online Backup from preview to final release.

Speaking of System Center 2012 Service Pack 1, there are a number of new capabilities and updates coming in this release. SP1 enables all System Center components to run on and manage Windows Server 2012. SP1 adds support for Windows Azure virtual machine management and is key to Microsoft's "software defined networking" support. On the client-management side, SP1 provides the ability to deploy and manage Windows 8 and Windows Azure-based distribution points.

That’s it at a high level… but you need to see the flexibility around these features. For example, you can select a specific backup strategy for Azure Tables, allowing faster backup operations when partition keys use GUIDs. You can also call custom stored procedures during the restore operation of Azure Tables, allowing you to transform the data along the way. And you can set a performance threshold during Azure Table backup operations to help you control possible throttling conditions in your storage account.

Regarding database scripts, you can now define T-SQL scripts and schedule them for execution in a specific order. You can also tell Enzo to execute pre- and post-scripts during Azure Table restore operations against a SQL environment.

The backup operation now supports backing up to multiple devices at the same time. So you can execute a backup request to both a local file and a blob at the same time, guaranteeing that both will contain exactly the same data. And given the number of options available, you can save backup definitions for later reuse. The screenshot below backs up Azure Tables to two devices (a blob and a SQL Database).

You can also manage your database schemas for SaaS environments that use schema containers to separate customer data. This new edition allows you to see how many objects you have in each schema, back up specific schemas, and even drop all objects in a given schema. For example, the screenshot below shows that the EnzoLog database has 4 user-defined schemas, and the AFA schema has 5 tables and 1 module (stored proc, function, view…). Selecting the AFA schema and trying to delete it brings up another screen showing which objects will be deleted.

As you can see, Enzo Cloud Backup provides amazing capabilities that can help you safeguard your data in SQL Database and Azure Tables, and give you advanced management functions for your Azure environment. Download a free trial today at http://www.bluesyntax.net.

Screen captures for each step are added to the two tutorials and code modifications are made to accommodate user authentication.

Prerequisites: Completion of the oakleaf-todo C# application in Part 1, completion of the user authentication addition in Part 2, and downloading/installing the Live SDK for Windows and Windows Phone, which provides a set of controls and APIs that enable applications to integrate single sign-on (SSO) with Microsoft accounts and access information from SkyDrive, Hotmail, and Windows Live Messenger on Windows Phone and Windows 8.

1 – Add Push Notifications to the App

1-1. Launch your WAMoS app in Visual Studio 2012 for Windows 8 or higher, open the App.xaml.cs file and add the following using statement:

using Windows.Networking.PushNotifications;

1-2. Add the following code to App.xaml.cs after the OnSuspending() event handler:

A future walkthrough will cover task 4, pushing notifications to Windows 8 users. I’ll provide similar walkthroughs for Windows Phone, Surface and Android devices when the corresponding APIs become available.

Prerequisites: Completion of the oakleaf-todo C# application in Part 1 and downloading/installing the Live SDK for Windows and Windows Phone, which provides a set of controls and APIs that enable applications to integrate single sign-on (SSO) with Microsoft accounts and access information from SkyDrive, Hotmail, and Windows Live Messenger on Windows Phone and Windows 8.

1-2. Make a note of the Package Display Name and Publisher values for registering your app, and navigate to the Windows Push Notifications & Live Connect page, log on with the Windows Live ID you used to create the app, and type the Package Display Name and Publisher values in the text boxes:

Note: The preceding screen capture has been edited to reduce white space, but the obsolete capture from Visual Studio has not been updated.

1-4. Copy the Package Name value to the clipboard, return to Visual Studio, and paste the Package Name value into the eponymous text box:

1-5. Press Ctrl+S to save the value, log on to the Windows Azure Management Portal, click the Mobile Services icon in the navigation frame, click your app to open its management page, and click the Dashboard button:

This walkthrough, which is simpler than the Get Started with Data walkthrough, explains how to obtain a Windows Azure 90-day free trial, create a C#/XAML WASDB instance for a todo application, add a table to persist todo items, and generate and use a sample oakleaf-todo Windows 8 front-end application. During the preview period, you can publish up to six free Windows Mobile applications.

7. Open the Management Portal’s Account tab and click the Preview Features button:

8. Click the Mobile Services’ Try It Now button to open the Add Preview Feature form, accept the default or select a subscription, and click the Submit button to request admission to the preview:

9. Follow the instructions contained in the e-mail sent to your Live ID e-mail account, which will enable the Mobile Services item in the Management Portal’s navigation pane:

Note: My rogerj@sbcglobal.net Live ID is used for this example because that account doesn’t have WAMoS enabled. The remainder of this walkthrough uses the subscription(s) associated with my roger_jennings@compuserve.com account.

10. Click the Create a New App button to open the Create a Mobile Service form, type a DNS prefix for the ToDo back end in the URL text box (oakleaf-todo for this example), select Create a new SQL Database in the Database list, and accept the default East US region.

Note: Only Microsoft’s East US data center supported WAMoS when this walkthrough was published.

11. Click the next button to open the Specify Database Settings form, accept the default Name and Server settings, and type a database user name and complex password:

Note: You don’t need to configure advanced database settings, such as database size, for most Mobile Services.

12. Click the Submit button to create the Mobile Service’s database and enable the Mobile Services item in the Management Portal’s navigation pane. Ready status usually appears in about 30 seconds:

Yesterday, Microsoft revealed that the Windows Store is now open to all developers in a wide range of countries and locations. For the people who think "wtf is the 'Windows Store'?", it's the central place where Windows 8 users will be able to find, download and purchase applications (or as we now have to say to not look like a computer illiterate: <accent style="Kentucky">aaaaappss</accent>) for Windows 8.

As this is the store which is integrated into Windows 8, it's an interesting place for ISVs, as potential customers might very well look there first. This of course isn't true for all kinds of software, and developer tools in general aren't the kind of applications most users will download from the Windows store, but a presence there can't hurt.

Now, this Windows Store hosts two kinds of applications: 'Metro-style' applications and 'Desktop' applications. The 'Metro-style' applications are applications created for the new 'Metro' UI which is present on the Windows 8 desktop and Windows RT (the single color/big font fingerpaint-oriented UI). 'Desktop' applications are the applications we all run and use on Windows today. Our software products are desktop applications. The Windows Store hosts all Metro-style applications locally in the store and handles the payment for these applications. This means you upload your application (sorry, 'app') to the store, jump through a lot of hoops, Microsoft verifies that your application is not violating a tremendously long list of rules and after everything is OK, it's published and hopefully you get customers and thus earn money. Money which Microsoft will pay you on a regular basis after customers buy your application.

Desktop applications don't follow this path, however. Desktop applications aren't hosted by the Windows Store. Instead, the Windows Store more or less hosts a page with the application's information and where to get the goods. In other words: it's nothing more than a product's Facebook page. Microsoft will simply redirect a visitor of the Windows Store to your website, and the visitor will then use your site's system to purchase and download the application. This last bit of information is very important.

So, this morning I started with fresh energy to register our company 'Solutions Design bv' at the Windows Store, along with our two applications, LLBLGen Pro and ORM Profiler. First I went to the Windows Store dashboard page. You have to log in with a Live account, or sign up if you don't have one. I signed in with my Live account. After that, it greeted me with a page where I had to fill in a code which was mailed to me. My local mail server polls for email every several minutes, so I had to kick it to get the message immediately.

I grabbed the code from the email and I was presented with a multi-step process to register myself as a company or as an individual. In red I was warned that this choice was permanent and not changeable. I chuckled: Microsoft apparently stores its data on paper, not in digital form. I chose 'company' and was presented with a lengthy form to fill out. On the form there were two strange remarks:

Per company there can just be 1 (one, uno, not zero, not two or more) registered developer, and only that developer is able to upload stuff to the store. I have no idea how this works with large companies, oh the overhead nightmares... "Sorry, but John, our registered developer with the Windows Store is on holiday for 3 months, backpacking through Australia, no, he's not reachable at this point. M'yeah, sorry bud. Hey, did you fill in those TPS reports yesterday?"

A separate Approver has to be specified, which has to be a different person than the registered developer. Apparently to Microsoft a company with just 1 person is not a company. Luckily we're with two people! *pfew*, dodged that one, otherwise I would be stuck forever: the choice I already made was not reversible!

After I had filled out the form and it was all well and good and accepted by the Microsoft lackey who had to write it all down in some paper notebook ("Hey, be warned! It's a permanent choice! Written down in ink, can't be changed!"), I was presented with the question how I wanted to pay for all this. "Pay for what?" I wondered. Must be the paper they were scribbling the information on, I concluded. After all, there's a financial crisis going on! How could I forget! Silly me.

"Ok fair enough".

The price was 75 Euros, not the end of the world. I could only pay by credit card, so it was accepted quickly. Or so I thought. You see, Microsoft has a different idea about CC payments. In the normal world, you type in your CC number, some date, a name and a security code and that's it. But Microsoft wants to verify this even more. They want to make a verification purchase of a very small amount and are doing that with a special code in the description. You then have to type in that code in a special form in the Windows Store dashboard and after that you're verified. Of course they'll refund the small amount they pull from your card.

Sounds simple, right? Well... no. The problem starts with the fact that I can't see the CC activity on some website: I have a bank-issued CC card. I get the CC activity once a month on a piece of paper sent to me. The bank's online website doesn't show it. So it's possible I have to wait for this code till October 12th. One month.

"So what, I'm not going to use it anyway, Desktop applications don't use the payment system", I thought. "Haha, you're so naive, dear developer!" Microsoft won't allow you to publish any applications till this verification is done. So no application publishing for a month. Wouldn't it be nice if things were, you know, digital, so things got done instantly? But of course, that lackey who scribbled everything in the Big Windows Store Registration Book isn't that quick. Can't blame him though. He's just doing his job.

Now, after the payment was done, I was presented with a page which tells me Microsoft is going to use a third party company called 'Symantec', which will verify my identity again. The page explains to me that this could be done through email or phone and that they'll contact the Approver to verify my identity. "Phone?", I thought... that's a little drastic for a developer account to publish a single page of information about an external hosted software product, isn't it? On Facebook I just added a page, done. And paying you, Microsoft, took less information: you were happy to take my money before my identity was even 'verified' by this 3rd party's minions! "Double standards!", I roared. No-one cared. But it's the thought of getting it off your chest, you know.

Luckily for me, everyone at Symantec was asleep when I was registering, so they went for the fallback option in case phone calls were not possible: my Approver received an email. Imagine you have to explain the idiot web of security theater I was caught in to someone else, who then has to reply to a random person over the internet that I indeed was who I said I was. As she's a true sweetheart, she gave me the benefit of the doubt and assured them that, for now, I was who I said I was.

Remember, this is for a desktop application, which is only a link to a website, some pictures and a piece of text. No file hosting, no payment processing, nothing, just a single page. Yeah, I also thought I was crazy. But we're not at the end of this quest yet.

I clicked around in the confusing menus of the Windows Store dashboard and found the 'Desktop' section. I get a helpful screen with a warning in red that it can't find any certified 'apps'. True, I'm just getting started, buddy. I see a link: "Check the Windows apps you submitted for certification". Well, I haven't submitted anything, but let's see where it brings me. Oh the thrill of adventure!

I click the link and I end up on this site: the hardware/desktop dashboard account registration. "Erm... but I just registered...", I mumbled to no-one in particular. Apparently for desktop registration / verification I have to register again, it tells me. But not only that, the desktop application has to be signed with a certificate. And not just some random el-cheapo certificate you can get at any mall's discount store. No, this certificate is special. It's precious. This certificate, the 'Microsoft Authenticode' Digital Certificate, is the only certificate that's acceptable, and jolly, it can be purchased from VeriSign for the price of only ... $99.-, but be quick, because this is a limited time offer! After that it's, I kid you not, $499.-. 500 dollars for a certificate to sign an executable. But, I do feel special, I got a special price. Only for me! I'm glowing. Not for long though.

Here I started to wonder what the benefit of it all was. I now again had to pay money for a shiny certificate which will add 'Solutions Design bv' to our installer as the publisher instead of 'unknown', while our customers download the file from our website. Not only that, but this was all for a Desktop application, which isn't hosted by Microsoft. They only link to it. And make no mistake: these prices aren't single payments. They have to be renewed every year. Like a membership of an exclusive club: you're special and privileged, but only if you cough up the dough.

To give you an example how silly this all is: I added LLBLGen Pro and ORM Profiler to the Visual Studio Gallery some time ago. It's the same thing: it's a central place where one can find software which adds to / extends / works with Visual Studio. I could simply create the pages, add the information and they show up inside Visual Studio. No files are hosted at Microsoft, they're downloaded from our website. Exactly the same system.

As I have to wait for the CC transcripts to arrive anyway, I can't proceed with publishing in this new shiny store. After the verification is complete I have to wait for verification of my software by Microsoft. Even Desktop applications need to be verified using a long list of rules which are mainly focused on Metro-style applications. Even while they're not hosted by Microsoft. I wonder what they'll find. "Your application wasn't approved. It violates rule 14 X sub D: it provides more value than our own competing framework".

While I was writing this post, I tried to check something in the Windows Store Dashboard, to see whether I remembered it correctly. I was presented again with the question, after logging in with my live account, to enter the code that was just mailed to me. Not the previous code, a brand new one. Again I had to kick my mail server to pull the email to proceed. This was it. This 'experience' is so beyond miserable, I'm afraid I have to say goodbye for now to the 'Windows Store'. It's simply not worth my time.

Now, about Live accounts. You might know this: Live accounts are tied to everything you do with Microsoft. So if you have an MSDN subscription, e.g. the one which costs over $5000.-, it's tied to this same Live account. But the fun thing is, you can log in to the MSDN subscriptions site with just the account id and password. No additional code is mailed to you, even though it gives you access to all Microsoft software available, including your licenses.

Why the draconian security theater with this Windows Store, while all I want is to publish some desktop applications while on other Microsoft sites it's OK to simply sign in with your live account: no codes needed, no verification and no certificates? Microsoft, one thing you need with this store and that's: apps. Apps, apps, apps, apps, aaaaaaaaapps. Sorry, my bad, got carried away. I just can't stand the word 'app'. This store's shelves have to be filled to the brim with goods. But instead of being welcomed into the store with open arms, I have to fight an uphill battle with an endless list of rules and bullshit to earn the privilege to publish in this shiny store. As if I have to be thrilled to be one of the exclusive club called 'Windows Store Publishers'. As if Microsoft doesn't want it to succeed.

I hope this helps Microsoft make things clearer and smoother, and also helps ISVs with their decision whether to go with the Windows Store scheme or ignore it. For now, I don't see the advantage of publishing there, especially not with the nonsense rules Microsoft cooked up. Perhaps that changes in the future, who knows.

• David Ramel (@dramel) reported Microsoft Eases Mobile Data Access in the Cloud in an 8/29/2012 post to his Data Driver blog (missed when published):

As explained by Scott Guthrie, when Windows Azure subscribers create a new mobile service, it automatically is associated with a Windows Azure SQL Database. That provides ready-made support for secure database access. It uses the OData protocol, JSON and RESTful endpoints. The Windows Azure management portal can be used for common tasks such as handling tables, access control and more.
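For a flavor of what those RESTful endpoints look like from the client side, here is a hedged sketch. The app name, table name and application key are placeholders, and the `$filter`/`$top` query options follow the OData conventions the service exposes; the `X-ZUMO-APPLICATION` header carries the Mobile Services application key.

```javascript
// Hedged sketch of a Mobile Services REST query. 'yourapp', the table name
// and the key are placeholders for illustration only.
const table = 'todoitem';
const base = 'https://yourapp.azure-mobile.net/tables/' + table;
const query = '?$filter=' + encodeURIComponent('complete eq false') + '&$top=10';
const headers = {
  'X-ZUMO-APPLICATION': '<your application key>', // app key header
  'Accept': 'application/json'                    // ask for JSON back
};
console.log(base + query);
```

Issuing a GET against that URL with those headers returns the matching rows as a JSON array, which is what makes a no-server-code client possible.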

The key point about all this is that it enables data access to the cloud from mobile or Windows Store (or desktop) apps without having to create your own server-side code, a somewhat difficult task for many developers. Instead, developers can concentrate on the client and the user experience. That greatly appeals to me.

In response to a reader query about what exactly is "mobile" about Mobile Services, Guthrie explained:

The reason we are introducing Windows Azure Mobile Services is because a lot of developers don't have the time/skillset/inclination to have to build a custom mobile backend themselves. Instead they'd like to be able to leverage an existing solution to get started and then customize/extend further only as needed when their business grows.

Looks to me like another step forward in the continuing process to ease app development so just about anybody can do it. I'm all for it!

When asked by another reader why this new service only targets SQL Azure (the old name), instead of also supporting BLOBs or table storage, Guthrie replied that it was in response to developers who wanted "richer querying capabilities and indexing over large amounts of data--which SQL is very good at." However, he noted that support for unstructured storage will be added later for those developers who don't require such rich query capabilities.

This initial Preview Release only supports Windows 8 apps to begin with, but support is expected to be added for iOS, Android and Windows Phone apps, according to this announcement. Guthrie explains more about the new product in a Channel9 video, and more information, including tutorials and other resources, can be found at the Windows Azure Mobile Services Dev Center.

Well, if you’re not fortunate enough to be early registered for the store (if you want to get registered, then contact me), you’ll have to go a different route for your development for now.

Instead, head to the Live Connect app management site for Metro style apps at https://manage.dev.live.com/build?wa=wsignin1.0. Sign in if necessary and register your application there. You’ll be registering your application for “Windows Push Notifications and Live Connect”, which will allow you to use either of these two services from your application. Just follow the steps below:

Open your Package.appxmanifest file in Visual Studio 2012.

Switch to the Packaging tab.

Fill out reasonable values for “Package display name” and “Publisher”. These might already have reasonable defaults, but check to be sure they’re what you want. Once you have these values, you’ll copy and paste these into the form on the Live Connect app management site. This is what binds your application together with the Live ID service.

Press the I accept button. If all goes well, you’ll be redirected to a page that has new values for parts of your manifest. Copy and paste out the value that Live sends back to you for “Package name”. If you’re only using the Live Authentication service, you don’t need “Client secret” and “Package Security Identifier (SID)” at this point. They are used for the Push Notifications Service though, so check out http://msdn.microsoft.com/en-us/library/windows/apps/hh465407.aspx if that’s what you’re trying to accomplish.

Save and rebuild your application, and you should now be able to use the Live services from your Windows 8 application.

In my last post, Going deep with Mobile Services data, we looked at how we could use server scripts to augment the results of a query, even returning a hierarchy of objects. This time, let us explore some trickery to do the same, but in reverse.

Imagine we want to post up a series of objects, maybe comments, all at once and process them in a single script… here’s a (somewhat manufactured) scenario demonstrating how you can do this. But first, we’ll need to understand a little more about the way data is handled in Mobile Services.

If you think of the Mobile Service data API as a pipeline, there are two key stages: scripting and storage. You write scripts that can intercept writes and reads to storage. There are two additional layers to consider:

Pre-scripting
This is where Mobile Services performs the authentication checks and validates that the payload makes sense – we’ll talk about this in more detail below.

Pre-storage
At this point, we have to make sure that anything you’re about to do makes sense – this validation layer is much stricter and won’t allow complex objects through. This stage is also where we handle and log any nasty errors (to help you diagnose issues) and perform dynamic schematization.

The pre-scripting layer (1) expects a single JSON object (no arrays), and if the operation is an update (PATCH) with an id specified in the JSON body, that id must match the id specified in the URL, e.g. http://yourapp.azure-mobile.net/tables/foo/63
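That rule can be sketched as a simple check. This is a sketch of the described behavior, not Microsoft's actual validation code:

```javascript
// Sketch of the pre-scripting checks described above: the payload must be a
// single JSON object, and on an update any id in the body must match the id
// in the URL. Mirrors the rule, not the real Mobile Services source.
function validatePayload(method, urlId, body) {
  if (Array.isArray(body)) return false;                 // no arrays allowed
  if (method === 'PATCH' && body.id !== undefined && body.id !== urlId) {
    return false;                                        // body id must match URL id
  }
  return true;
}

console.log(validatePayload('PATCH', 63, { id: 63, rating: 5 })); // true
console.log(validatePayload('PATCH', 63, { id: 64, rating: 5 })); // false
console.log(validatePayload('POST', null, [{ rating: 5 }]));      // false
```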

But with that knowledge in mind, you can still use the scripting layer to perform some trickery, if you so desire, and go to work on that JSON payload however you see fit. Take the following example server script; it expects a JSON body like this:
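The original payload and script aren't reproduced here, so below is a hedged reconstruction inferred from the `{"ratingIds":[7,8]}` response: an insert script for an assumed `ratings` table that unfolds a `ratings` array, inserts each element, and responds with the new ids. In Mobile Services the `tables`, `statusCodes` and `request` objects are provided by the runtime; tiny mocks stand in for them so the sketch runs standalone.

```javascript
// Hedged reconstruction (the post's actual script is not shown here): a
// Mobile Services insert script for a 'ratings' table accepting a payload
// like { ratings: [ { movieId: 1, rating: 4 }, ... ] }.
// Minimal mocks for the runtime-provided globals come first.
var statusCodes = { CREATED: 201 };
var nextId = 7;
var tables = {
  getTable: function (name) {
    return {
      insert: function (item, options) {
        item.id = nextId++;          // storage assigns an id on insert
        options.success();
      }
    };
  }
};

// The server script itself: unfold the array, insert each rating, and
// respond with the collected ids once every insert has completed.
function insert(item, user, request) {
  var ratingsTable = tables.getTable('ratings');
  var ids = [];
  item.ratings.forEach(function (rating) {
    ratingsTable.insert(rating, {
      success: function () {
        ids.push(rating.id);
        if (ids.length === item.ratings.length) {
          request.respond(statusCodes.CREATED, { ratingIds: ids });
        }
      }
    });
  });
}

// Simulate the POST body:
var mockRequest = {
  respond: function (status, body) {
    mockRequest.status = status;
    mockRequest.body = body;
    console.log(status, JSON.stringify(body));
  }
};
insert({ ratings: [{ movieId: 1, rating: 4 }, { movieId: 2, rating: 5 }] },
       null, mockRequest);
// → 201 {"ratingIds":[7,8]}
```

The success callbacks collect the ids as they come back, and the response fires only once the last insert completes.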

Sending the correct payload from Fiddler is easy: just POST the above JSON body to http://yourapp.azure-mobile.net/tables/ratings and watch the script unfold the results for you. You should get a response as follows:

HTTP/1.1 201 Created
Content-Type: application/json
Content-Length: 19

{"ratingIds":[7,8]}

Which is pretty cool – Mobile Services really does present a great way of building data-focused JSON APIs. But what about uploading data like this using the MobileServiceClient in both C# and JS?

C# Client

The C# client is a little trickier. You’re probably using types and have a Rating class with MovieId and Rating properties. If you have an IMobileServiceTable&lt;Rating&gt;, you can’t ‘insert’ a List&lt;Rating&gt; – it simply won’t compile. In this case, you’ll want to drop down to the JSON version of the client – it’s still really easy to use.

If you were doing this a lot, you’d probably want to create a few helper methods for building the JSON, but otherwise it’s pretty straightforward. Naturally, JavaScript has a slightly easier time with JSON:
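A sketch of what that might look like. The table and field names are assumptions carried over from the example above, and the `WindowsAzure.MobileServiceClient` call is shown in a comment because it needs the client library at runtime:

```javascript
// In JavaScript the payload is just an object literal, so there is no type
// system to work around. Table and field names are assumptions.
var payload = {
  ratings: [
    { movieId: 1, rating: 4 },
    { movieId: 2, rating: 5 }
  ]
};

// With the Mobile Services JavaScript client you would then write
// something along these lines (requires the client library):
//   var client = new WindowsAzure.MobileServiceClient(appUrl, appKey);
//   client.getTable('ratings').insert(payload).done(function (result) {
//     // result.ratingIds holds the ids the server script returned
//   });

console.log(JSON.stringify(payload));
```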

As more companies move their applications to Microsoft's Azure cloud service, they are looking for a fast, reliable backup solution to ensure that data stored in Azure SQL Databases can be recovered in the case of accidental deletion or damage. Built on technology acquired from Blue Syntax, a leading Azure consultancy and services provider led by SQL Azure Database MVP Herve Roggero, Idera's new Azure SQL Database Backup product provides fast, reliable backup and restore capabilities, including:

Save time and storage space with compression up to 95%

Back up on-premises or to Azure BLOB storage

View historical backup and restore operations

Restore databases to and from the cloud with transaction consistency

"Our new Azure SQL Database Backup product builds on our strong and successful portfolio of SQL Server management solutions," said Heather Sullivan, Director of SQL Products at Idera. "Moving SQL Server data to the cloud can be an intimidating exercise. Our Azure SQL Database Backup tool helps DBAs sleep better at night knowing they are reaping the benefits of cloud services while ensuring the integrity of their data."

Idera and Azure SQL Database Backup are registered trademarks of Idera, Inc. in the United States and trademarks in other jurisdictions. All other company and product names may be trademarks of their respective companies.

SharePoint 2013 has improved OData support. In fact, it offers a full-blown, OData-compliant REST-based interface to program against. For those who aren’t familiar with it, OData is an open standard for querying and updating data that relies on the common web standard HTTP. Read this OData primer for more details.
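For a quick flavor of the protocol, an OData query is just a URL built from an entity set plus composable query options. The target here is the public Northwind sample service, which this article also uses later:

```javascript
// A minimal OData query: entity set plus composable query options,
// aimed at the public Northwind sample service for illustration.
var serviceRoot = 'http://services.odata.org/Northwind/Northwind.svc';
var query = serviceRoot + '/Customers' +
            '?$filter=' + encodeURIComponent("Country eq 'Germany'") +
            '&$top=5' +       // first five matches only
            '&$format=json';  // ask for JSON instead of the Atom default
console.log(query);
```

A plain HTTP GET of that URL returns the matching customers; no SOAP envelope or client proxy is required.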

The obvious benefits are:

SharePoint data can be made available on non-Microsoft platforms and to mobile devices

SharePoint can connect and bring in data from any OData sources

Client Programming Options: In SharePoint 2010, there were primarily three ways to access SharePoint data from the client or external environment.

1. Client-side object model (CSOM) – SharePoint offers three different sets of APIs, each intended to be used in a certain type of client application.

Each object model uses its own proxy to communicate with the SharePoint server object model through the WCF service Client.svc. This service is responsible for all communication between the client models and the server object model.

2. ListData.svc – A REST-based interface to add and update lists.

3. Classic ASMX web services – These services were used when parts of the server object model weren’t available through CSOM or the ListData service, such as profiles, publishing and taxonomy. They also provided backward compatibility for code written for SharePoint 2007.

4. Custom WCF services – When a part of the server object model isn’t accessible through any of the above three options, custom-written WCF services can expose SharePoint functionality.

Architecture: In SharePoint 2010, Client.svc wasn’t accessible directly. SharePoint 2013 extends Client.svc with REST capabilities, and it now accepts HTTP GET, POST, PUT, MERGE and DELETE requests. Firewalls usually block HTTP verbs other than GET and POST. Fortunately, OData supports verb tunneling, where PUT, MERGE and DELETE are submitted as POST requests and the X-HTTP-Method header carries the actual verb. The path /_vti_bin/client.svc is abstracted as _api in SharePoint 2013.
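Here is a sketch of what a tunneled update looks like on the wire; the list name, item URL and field are placeholders:

```javascript
// Sketch of OData verb tunneling against the SharePoint 2013 _api endpoint:
// the wire verb is POST, and the real operation rides in the X-HTTP-Method
// header so firewalls that only pass GET/POST are satisfied.
var tunneledUpdate = {
  method: 'POST', // the verb that actually crosses the wire
  url: "http://sp2013/_api/web/lists/getbytitle('Customers')/items(1)",
  headers: {
    'X-HTTP-Method': 'MERGE',               // the tunneled verb
    'IF-MATCH': '*',                        // update regardless of version
    'Accept': 'application/json;odata=verbose',
    'Content-Type': 'application/json;odata=verbose'
  },
  body: JSON.stringify({ Title: 'Updated customer' })
};
console.log(tunneledUpdate.method, tunneledUpdate.headers['X-HTTP-Method']);
```

An intermediary sees an ordinary POST; SharePoint reads the header and performs the MERGE.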

CSOM additions: User profiles, publishing, taxonomy, workflow, analytics, eDiscovery and many other APIs are now available in the client object model. Previously, these APIs were available only in the server object model.

ListData.svc is still available mainly for backward compatibility.

Atom or JSON response – The response to an OData request can be in Atom XML or JSON format. Atom is usually used with managed clients, while JSON is the preferred format for JavaScript clients: the response arrives as a nested object, so no extra parsing is required. The HTTP Accept header must specify the desired response type; otherwise an Atom response, the default type, is returned.

SharePoint 2013 has out-of-the-box support for connecting to OData sources using BCS. You are no longer required to write a .NET assembly connector to talk to external OData sources.

1. Open Visual Studio 2012 and create a new SharePoint 2013 App project. This project template is available after installing the Office developer tools for Visual Studio 2012.

Create App

Select App Settings

2. Choose a name for the app and type in the target URL where you’d like to deploy the app. Also choose the appropriate hosting option from the drop-down list. For on-premises deployment, select “SharePoint-Hosted”. You can change the URL later by changing the “Site URL” property of the app in Visual Studio.

3. Add an external content type as shown in the snapshot below. It will launch a configuration wizard. Type in the OData source URL you wish to connect to and assign it an appropriate name. For demo purposes, I am using the publicly available NorthWind OData source.

4. Select the entities you would like to import. Select only the entities that you need for the app; otherwise, not only will the app end up with an inflated model, but all the data associated with each entity will be brought into the external lists. Hit Finish. At this point, Visual Studio has created a model under “External Content Types” for you. The feature is also updated with the new modules.

5. Expand “NorthWind” and you should see customer.ect. This is the BCS model. It doesn’t have a “.bdcm” extension like its predecessor, but that doesn’t alter its behavior: the model is still defined in XML. If you open the .ect file with an ordinary XML editor, you can see the similarity in schema.

Customer BDC model representing OData entity

6. Deploy the app and browse to http://sp2013/ListCustomers/Lists/Customer (the pattern is {BaseTargetUrl}/{AppName}/Lists/{EntityName}). You should see the imported customer data in the external list.

Management OData uses the Open Data Protocol (OData) to expose and consume data over the Web or an intranet.

It is primarily designed to expose resources manipulated by PowerShell cmdlets and scripts as schematized OData entities, using the semantics of representational state transfer (REST).

The REST philosophy limits the verbs that can be supported on resources to the basic operations: Create, Read, Update, and Delete.
In this topic I will talk about Management OData's ability to expose resources that model PowerShell pipelines returning unstructured data. This is an optional feature called “PowerShell pipeline invocation”, or “Invoke”. A single Management OData endpoint can expose schematized resources, arbitrary cmdlet resources, or both.

In this blog I will show how to write a Windows client, built on the WCF Data Services client, that creates a PowerShell pipeline invocation.

Any client that supports OData can be used. WCF Data Services includes a set of client libraries for general .NET Framework client applications, which is what this example uses.

If you are building a WCF client, the only requirement is to use the WCF Data Services 5.0 libraries to be compatible. In this topic I will assume you already have a Management OData (MODATA) endpoint configured and running. For more information on MODATA in general, and on how to create an endpoint, please refer to the MSDN documentation at http://msdn.microsoft.com/en-us/library/windows/desktop/hh880865(v=vs.85).aspx

Since the “Invoke” feature is optional and disabled by default, you will need to enable it by adding the following configuration to your MODATA endpoint's web.config:

Management OData defines two OData resource sets related to PowerShell pipeline execution: CommandDescriptions and CommandInvocations. The CommandDescriptions resource set represents the collection of commands available on the server.

By enumerating the resource set, a client can discover the commands that it is allowed to execute and their parameters.

The client must be authorized to execute the Get-Command cmdlet for the CommandDescriptions query to succeed.
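The discovery step amounts to a plain OData GET against the resource set. A hedged sketch (the base URL is a placeholder; only the resource-set name comes from the text above):

```python
# Compose the discovery request against a Management OData endpoint.
# BASE is a placeholder address; CommandDescriptions is the resource set
# described above.
BASE = "http://server:7000/MODATA"

def command_descriptions_url(base: str) -> str:
    """Return the URL that enumerates the commands the client may run."""
    return base.rstrip("/") + "/CommandDescriptions"

url = command_descriptions_url(BASE)
headers = {"Accept": "application/json"}  # or application/atom+xml
```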

An entry returned for Get-Process, for example, indicates that the client is allowed to execute the Get-Process command.

The CommandInvocations resource set represents the collection of commands or pipelines that have been invoked on the server. Each entity in the collection represents a single invocation of some pipeline. To invoke a pipeline, the client sends a POST request containing a new entity. The contents of the entity include the PowerShell pipeline itself (as a string), the desired output format (typically “xml” or “json”), and the length of time to wait synchronously for the command to complete. A pipeline string is a sequence of one or more commands, optionally with parameters and delimited by a vertical bar character.

For example, if the server receives the pipeline string “Get-Process -Name iexplore” with the output type specified as “xml”, then it will execute the Get-Process command (with the optional parameter Name set to “iexplore”) and send its output to ConvertTo-Xml.

The server begins executing the pipeline when it receives the request. If the pipeline completes quickly (within the synchronous-wait time) then the server stores the output in the entity’s Output property, marks the invocation status as “Complete”, and returns the completed entity to the client.

If the synchronous-wait time expires while the command is executing, then the server marks the entity as “Executing” and returns it to the client. In this case, the client must periodically request the updated entity from the server; once the retrieved entity’s status is “Complete”, then the pipeline has completed and the client can inspect its output.

The client should then send an ODATA DeleteEntity request, allowing the server to delete resources associated with the pipeline.
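The poll-until-complete flow above can be sketched like this (a hedged Python illustration; fetch_entity stands in for the client library's query call, and the entity keys "Status" and "Output" follow the description above):

```python
import time

def wait_for_completion(fetch_entity, entity_id, poll_seconds=2.0, timeout=300.0):
    """Poll the CommandInvocations entity until its Status is Complete.

    fetch_entity(entity_id) is assumed to return a dict with at least
    'Status' and 'Output' keys, mirroring the invocation entity.
    """
    deadline = time.monotonic() + timeout
    while True:
        entity = fetch_entity(entity_id)
        if entity["Status"] == "Complete":
            return entity["Output"]
        if time.monotonic() >= deadline:
            raise TimeoutError("pipeline did not complete in time")
        time.sleep(poll_seconds)
```

After retrieving the output, the client would issue the DeleteEntity request so the server can reclaim the invocation's resources.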

There are some important restrictions on the types of commands that can be executed. Specifically, requests that use the following features will not execute successfully:

script blocks

parameters using environment variables, such as "Get-Item -Path $env:HOMEDRIVE\Temp"

Authorization and PowerShell initial session state are handled by the same CLR interfaces as for other Management ODATA resources. Note that every invocation calls some ConvertTo-XX cmdlet, controlled by the OutputFormat property of the invocation. The client must be authorized to execute this cmdlet in order for the invocation to succeed.

Here is the code snippet that shows how to send a request to create a PowerShell pipeline invocation and how to get the cmdlet execution result:
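As a language-neutral sketch of that exchange, here is the same POST expressed with only the Python standard library. The endpoint URL and the entity property names (Command, OutputFormat, WaitMsec) are assumptions based on the description above, not the documented MODATA schema:

```python
import json
import urllib.request

def build_invocation(pipeline: str, output_format: str = "xml",
                     wait_ms: int = 10000) -> bytes:
    """Serialize a hypothetical CommandInvocations entity as JSON."""
    entity = {
        "Command": pipeline,        # the pipeline string to execute
        "OutputFormat": output_format,
        "WaitMsec": wait_ms,        # synchronous-wait time
    }
    return json.dumps(entity).encode("utf-8")

def post_invocation(base_url: str, body: bytes):
    """POST the new entity; the response carries the created invocation."""
    req = urllib.request.Request(
        base_url.rstrip("/") + "/CommandInvocations",
        data=body,
        headers={"Content-Type": "application/json",
                 "Accept": "application/json"},
        method="POST",
    )
    return urllib.request.urlopen(req)

body = build_invocation("Get-Process -Name iexplore")
```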

I’ve got more cool news to share with you today about Windows Azure Active Directory (AD). We’ve been hard at work over the last 6 weeks improving the service and today we’re sharing the news about three major enhancements to our developer preview:

The ability to create a standalone Windows Azure AD tenant

A preview of our Directory Management User Interface

Write support in our GraphAPI

With these enhancements, Windows Azure Active Directory changes from a compelling promise into a standalone cloud directory with a user experience, supported by a simple yet robust set of developer APIs.

Here’s a quick overview of each of these new capabilities and links that let you try them out and read more about the details.

New standalone Windows Azure AD tenants

First, we’ve added the ability to create a new Windows Azure AD tenant for your business or organization without needing to sign up for Office 365, Intune, or any other Microsoft service. Developers or administrators who want to try out the Windows Azure Active Directory developer preview can now quickly create an organizational domain and user accounts using this page. For the duration of the preview, these AD tenants are available free of charge.

User Interface Preview

Second, I’m excited to share with you that we have also added a preview of the Windows Azure Active Directory Management UI. It went live as a preview last Friday to support the preview of Windows Azure Online Backup. With this new user interface administrators of any service that uses Windows Azure AD as its directory (Windows Azure Online Backup, Windows Azure, Office 365, Dynamics CRM Online and Windows InTune) can use the preview portal at https://activedirectory.windowsazure.com. Administrators can use this UI to manage the users, groups, and domains used in their Windows Azure Active Directory, and to integrate their on-premises Active Directory with their Windows Azure Active Directory. The UI is a standalone preview release. As we work to enhance it over the coming months, we will move it into the Windows Azure Management portal to assure developers and IT Admins have a single place to go to manage all their Windows Azure services.

Please note that if you log in using your existing Windows Azure AD account from Office 365 or another Microsoft service, you’ll be working with actual live data. So any changes made through this UI will affect live data in the directory and will be available in all the Microsoft services your company subscribes to (e.g. Office 365, InTune, etc.). That is of course the entire purpose of having a shared directory, but during the preview, you might want to create a new tenant and new set of users rather than experimenting with mission critical live data. Also note that the existing portals that you already use for identity management in these different apps will continue to work as is providing a dedicated in-service experience.

Write access for the Windows Azure Active Directory Graph API

In our first preview release of the Graph API we introduced the ability for 3rd party applications to Read data from Windows Azure Active Directory. Today we released the ability for applications to easily Write data to the directory. This update includes support for:

Create, Delete, and Update operations for Users, Groups, and Group Membership

User License assignments

Contact management

Returning the thumbnail photo property for Users

Setting JSON as the default data format
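To make the new create operation concrete, here is a hedged sketch of a user-creation request body. The endpoint shape and the property names mirror later Graph API documentation and are assumptions here, not details taken from this preview announcement:

```python
import json

def new_user_payload(display_name: str, mail_nickname: str,
                     upn: str, password: str) -> str:
    """Build a hypothetical JSON body for a Graph API 'create user' POST."""
    return json.dumps({
        "accountEnabled": True,
        "displayName": display_name,
        "mailNickname": mail_nickname,
        "userPrincipalName": upn,
        "passwordProfile": {
            "password": password,
            "forceChangePasswordNextLogin": True,
        },
    })

# All values below are placeholders for illustration.
body = new_user_payload("Alice Example", "alice",
                        "alice@contoso.onmicrosoft.com", "placeholder-password")
# The client would POST this body, with an OAuth bearer token, to the
# tenant's users resource (e.g. https://graph.windows.net/<tenant>/users).
```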

An updated Visual Studio sample application is available from here, and a Java version of the sample application is available from here. Please try the new capabilities and provide feedback directly to the product team; the download pages include a section where you can submit questions and comments and report any issues found with the sample applications.

For more detailed information on the new capabilities for Windows Azure AD Graph API, visit our updated MSDN documentation.

It’s exciting to share these new enhancements with you. We really hope you’ll find them useful for building and managing your organization’s cloud-based applications. And of course, we’d love to hear any feedback or suggestions you might have.

"Project 'Brooklyn' is a networking on-ramp for migrating existing Enterprise applications onto Windows Azure. Project 'Brooklyn' enables Enterprise customers to extend their enterprise networks into the cloud by enabling customers to bring their IP address space into WA and by providing secure site-to-site IPsec VPN connectivity between the Enterprise and Windows Azure. Customers can run 'hybrid' applications in Windows Azure without having to change their applications."

(The "into WA" part of this description means into Microsoft's own Azure datacenters, I'd assume.)

Brooklyn got a brand-new mention this week in a blog post about High Performance Computing (HPC) Pack 2012, which is built on top of Windows Server 2012. (Microsoft is accepting beta applicants for the HPC Pack 2012 product as of September 10.) Among the new features listed as part of HPC Pack 2012 is Project Brooklyn.

Brooklyn fits with Microsoft's goal of convincing users that they don't have to create Azure cloud apps from scratch (which was Microsoft's message until it added persistent virtual machines to Azure earlier this year). Microsoft's intention is to make it easier for users to bring existing apps to the Azure cloud and/or bridge their on-premises apps with Azure apps in a hybrid approach.

And as to why I am writing about this now, since it was unveiled a few months ago: it's all about the discovery of the codename for this codename queen.

The Project “Brooklyn” codename has been around (but under the covers) for quite a while.

On 10/28/2012, the current CA certificate used by the Windows Azure Connect endpoint software will expire. To continue to use Windows Azure Connect after this date, the Connect endpoint software on your Windows Azure roles and on-premises machines must be upgraded to the latest version. Depending on your environment and configuration, you may or may not need to take any action.

For your Web and Worker roles, if they are configured to upgrade to the new guest OS automatically, you don’t need to take any action. When the new OS is rolled out later this month, the Connect endpoint software will be refreshed to the new version automatically. To upgrade the endpoint software manually, you can set the setting below to true in your .cscfg and then choose “Upgrade”.

Editor’s Note: Today’s post comes from Sridhar Nanjundeswaran, Software Engineer at 10gen [pictured at right]. 10gen develops MongoDB, and offers production support, training, and consulting for the open source database.

We at 10gen are excited about our ongoing collaboration with Microsoft. We are actively leveraging new features in Windows Azure to ensure our common customers using MongoDB on Windows Azure have access to the latest features and the best possible experience.

In early June, Microsoft announced the preview version of Windows Azure Virtual Machines, which enables customers to deploy and run Windows and Linux instances on Windows Azure. This provides more control over actual instances as opposed to using Worker Roles. Additionally, this is the paradigm that is most familiar to users who run instances in their own private clouds or on other public clouds. In conjunction with Windows Azure’s release, 10gen and Microsoft are now delivering the MongoDB Installer for Windows Azure.

The MongoDB Installer for Windows Azure automates the process of creating instances on Windows Azure, deploying MongoDB, opening the relevant ports, and configuring a replica set. The installer currently works when run on a Windows machine and can be used to deploy MongoDB replica sets to Virtual Machines on Windows Azure. Note, however, that the installer uses the instance OS drive to store MongoDB data, which limits storage and performance. As such, we recommend that customers use the installer only for experimental purposes at this stage.

There are also tutorials that walk users through deploying a single standalone MongoDB server to Windows Azure Virtual Machines, for both Windows Server 2008 R2 and CentOS. In both cases, the implementation uses Windows Azure data disks; because the disks are persistent, the data survives instance crashes and reboots.

Furthermore, Windows Azure’s triple-replication of the data guards against storage corruption. Neither of these solutions, however, takes advantage of MongoDB’s high-availability features. To deploy MongoDB to be highly available, one can leverage MongoDB replica sets; more information on this high-availability feature can be found here.

Finally, customers who would like to deploy MongoDB replica sets to CentOS VMs can follow these basic steps:

The article includes many of the samples and concepts I’ve presented in my recent DevConnections and That Conference talks, including using Node.js as a simple alternative to ASP.NET/WCF 4.5. In fact, I plan to update all of the samples to run in Windows Azure Compute Services (Worker Role) as soon as Server 2012/.NET 4.5 is rolled out to Windows Azure Compute Services, and hopefully show them at Desert Code Camp in November.

BTW, if you run a user group or are involved in a code camp or other event and would be interested in passing out some complimentary copies of CODE magazine, please let me know and I'll get you hooked up with the right folks.

Did you know that as a Windows Azure customer, you can easily access a highly scalable email delivery solution that integrates into applications on any Windows Azure environment? Getting started with SendGrid on Windows Azure is easy.

The first thing you’ll need is a SendGrid account. You can find SendGrid in the Windows Azure Marketplace here. Just click the green “Learn More” button and continue through the signup process to create a SendGrid account. As a Windows Azure customer, you are automatically given a free package to send up to 25,000 emails/month.

And then “Sign up Now” to create your SendGrid account.

Now that your SendGrid account has been created, it’s time to integrate it into a Windows Azure Web Site. Log in and click the “+ New” button at the bottom of the page to create a new Web Site:

After the website has been created, select it in the management portal and find the “Web Matrix” button at the bottom of the page. This will open your new website in WebMatrix, installing it if necessary. WebMatrix will alert you that your site is empty:

Choose “Yes, install from the template Gallery” and select the “Empty Site” template.

Choose the “Files” view on the left-hand side, and then from the File menu, select “New” -> “File”, and choose the “Web.Config (4.0)” file type.

Next, we need to tell ASP.NET to use the SendGrid SMTP server. Add this text to the new web.config, inside the <configuration> tag:
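A minimal sketch of that block (the element names are the standard .NET system.net mail settings; the SMTP host is SendGrid's smtp.sendgrid.net, and the credentials shown are placeholders for your SendGrid username and password):

```xml
<system.net>
  <mailSettings>
    <smtp deliveryMethod="Network" from="from@domain.com">
      <!-- Placeholder credentials: substitute your SendGrid
           account's username and password. -->
      <network host="smtp.sendgrid.net"
               port="587"
               userName="your-sendgrid-username"
               password="your-sendgrid-password" />
    </smtp>
  </mailSettings>
</system.net>
```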

Change “from@domain.com” to your email address, “to@domain.com” to the email address you are sending to, and edit the “subject” and “body” parameters to your desired content.

All that’s left is to click “Publish” to push your changes back to Windows Azure, and to browse to the site. Clicking “Send Email” will cause the email to be sent.

That’s it! With SendGrid, developers like you can focus on building better systems and earning more revenue, while reducing costs in engineering and infrastructure management. Go build the next big application. Let SendGrid take care of your email.

Slightly more than a third (36%) of active Cloud developers have used or are using Microsoft’s Azure Cloud platform, according to a new Evans Data survey of over 400 developers actively developing for or in the Cloud. This makes it the leader over other competing services such as Google Storage (29%) and Amazon Web Services (28%) in a market that remains fragmented.

The independent syndicated survey, conducted in July, explored use patterns and intentions for Cloud development. Another finding, which has implications for the future, showed that over half of all Cloud developers who develop within a specific Cloud service also deploy their apps to that service, while 27% deploy to a different service and just over 10% deploy to a hybrid model that includes an on-premises element.

“Microsoft was very aggressive with its introduction of Azure to the development community a few years ago and that has paid off,” said Janel Garvin, CEO of Evans Data Corp. “Additionally, the large established MSDN community and the fact that Visual Studio is still the most used development environment are huge assets to Microsoft in getting developers to adopt the Azure platform. However, Cloud platform use is still very much fragmented with lots of players laying claim to small slivers of share. It will take more time before a clear landscape of major Cloud vendors shakes out.”

The Evans Data Cloud Development Survey is conducted twice yearly and examines usage patterns, adoption intentions, and other issues relating to Cloud development such as Cloud as a development platform, Cloud development tools, ALM, Big Data in the Cloud, Security and Governance, Cloud configurations, mobile Cloud clients, and more.

Recently, one of my .NET developers who was involved in a Windows Azure Project came and asked me two questions:

1. Why does it take longer to debug or run a Windows Azure Project than a typical ASP.NET project? It takes about 15 to 30 seconds to start debugging a Windows Azure Project, but only 8 to 15 seconds for an ASP.NET project.

The emulator is designed to simulate the cloud environment so that developers don’t have to be connected to the cloud at all times. The two emulators are the Compute Emulator, which simulates the Azure fabric environment, and the Storage Emulator, which simulates Windows Azure storage. Apart from the emulators, the two important command-line tools are CSPack, which prepares and packages the application for deployment, and CSRun, which deploys and manages the application locally. Other command-line tools can be found here.

Apart from the SDK, there’s an add-in called Windows Azure Tools for Microsoft Visual Studio that extends Visual Studio 2010 to enable the creation, configuration, building, debugging, running, packaging, and deployment of scalable web applications and services on Windows Azure. After installing it, you will find a new “cloud” template (as can be seen in Figure 3) when adding a new project. Furthermore, it encapsulates the complexity of running the tools and other commands behind the scenes when we build, run, and publish a Windows Azure Project with Visual Studio.

Figure 3 – Windows Azure project template

The reason why it takes longer

It is true that it takes more time to debug or run a Windows Azure Project than a typical ASP.NET project.

In fact, there’s a reasonable rationale behind it. When we debug or run a Windows Azure cloud project, all the associated role projects (web / worker roles) are compiled and packaged into a csx directory. Afterwards, Visual Studio lets CSRun deploy and run your package. The Compute Emulator then sets up and hosts as many copies of your web application in IIS as you specify in the Instance Count property.

Figure 4 – Websites are being set up in IIS when running Windows Azure Project

Since the full IIS capability was introduced in SDK 1.3, web applications on Windows Azure involve two processes: w3wp.exe, which runs your actual ASP.NET application, and WaIISHost.exe, which runs your RoleEntryPoint in WebRole.cs / WebRole.vb.

As can be seen, there are more steps involved when debugging or running a Windows Azure Project. This explains why it takes longer to debug or run a Windows Azure Project on the Compute Emulator than to debug or run an ASP.NET project on IIS or the ASP.NET Development Server (Cassini), which is more straightforward.

2. Can I debug or run the ASP.NET project instead of the Windows Azure Project when developing a Windows Azure Project?

Jumping to the next question: is it possible to debug or run the ASP.NET project instead of the Windows Azure project?

The answer is yes. You can do so simply by setting the ASP.NET project as the startup project. However, there are some caveats:

People often store settings in ServiceConfiguration.cscfg in their Windows Azure Project. You can get a setting value by calling RoleEnvironment.GetConfigurationSettingValue(“Setting1”). However, you will run into an error when debugging or running the ASP.NET project on its own.

The reason for this error is that the ASP.NET project is unable to recognize and call GetConfigurationSettingValue, as the settings belong to the Windows Azure Project.

The Resolution

To resolve this error, there’s a trick we can apply, as shown in the following code fragments. The idea is to encapsulate the settings retrieval in a get property. With RoleEnvironment.IsAvailable, we can determine whether the code is currently running in a Windows Azure environment or as a typical ASP.NET project. If it isn’t running in a Windows Azure environment, we get the value from web.config instead of ServiceConfiguration.cscfg. Of course, we also need to store the setting somewhere else, such as in appSettings in the web.config file.
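The shape of that fallback can be illustrated language-neutrally (the original fragments are C#; in this Python sketch the two reader callables stand in for RoleEnvironment.GetConfigurationSettingValue and the web.config appSettings lookup):

```python
# Generic sketch of the fallback pattern described above: read from the
# cloud service configuration when running under Windows Azure, otherwise
# fall back to the local appSettings store.

def get_setting(name, role_environment_available,
                read_cloud_setting, read_app_setting):
    """Return the setting from the cloud config or the local fallback."""
    if role_environment_available:
        return read_cloud_setting(name)   # ServiceConfiguration.cscfg
    return read_app_setting(name)         # web.config <appSettings>

# Simulate running outside the Azure environment.
value = get_setting(
    "Setting1",
    role_environment_available=False,
    read_cloud_setting=lambda n: None,
    read_app_setting={"Setting1": "local-value"}.get,
)
# → "local-value"
```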

2. Loading a storage account

We normally store the storage account connection string in a Service Configuration setting as well.

Figure 6 – Setting Storage Connection String in Service Configuration

As such, you might run into a similar error when running the ASP.NET project.

The Resolution

We use a similar technique to resolve this, but with a slightly different API. If RoleEnvironment.IsAvailable returns false, we get the value from appSettings in web.config. If we find that it uses development storage, we load CloudStorageAccount.DevelopmentStorageAccount; otherwise, we parse the connection string loaded from appSettings in the web.config file. The following code fragments illustrate how you should write your code and configuration.
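The parsing half of that fallback can be sketched like this (a hedged Python illustration of what CloudStorageAccount.Parse does with the standard connection-string format; the account name and key below are placeholders):

```python
# Parse a Windows Azure storage connection string of the form
# "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...".
# "UseDevelopmentStorage=true" selects the local storage emulator instead.

def parse_storage_connection(conn: str) -> dict:
    parts = dict(p.split("=", 1) for p in conn.split(";") if p)
    if parts.get("UseDevelopmentStorage") == "true":
        return {"development": True}
    return {
        "development": False,
        "protocol": parts["DefaultEndpointsProtocol"],
        "account": parts["AccountName"],
        "key": parts["AccountKey"],
    }

acct = parse_storage_connection(
    "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=abc123=="
)
```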

Catches and Limitations

Although these tricks work in most cases, there are several catches and limitations to be aware of:

The technique is only applicable to the ASP.NET Web Role, not the Worker Role.

Apart from the two issues identified, logging with Windows Azure Diagnostics may not work. This may not be a serious concern, as we are talking about the development phase, not production.

You are unable to simulate multiple instances when debugging or running ASP.NET project.

Conclusion

To conclude, this article answers two questions. We have identified some caveats as well as the tricks to overcome these issues.

Although this technique is useful for avoiding debugging or running a Windows Azure Project, it doesn’t mean you never need to run the Windows Azure Project again. I would still recommend occasionally running the Windows Azure Project to ensure that your ASP.NET project targets Windows Azure properly.

Back in June, I walked you through the steps to define and design a cloud computing API or service. My goal was to get those who build private, public, and hybrid clouds to think a bit more around how these APIs are designed, developed, deployed, and managed.

The core problem is that APIs are services, which are typically used in the context of a service-oriented architecture, but SOA is not as cool as it was 10 years ago. To properly design services, you have to consider how resources should be used in service-oriented ways, including how well they work and play within infrastructure and application architecture.

The focus must shift away from fine-grained APIs that provide some type of primitive service, such as pushing data to a block of storage or perhaps making a request to a cloud-rooted database. To go beyond primitives, you must understand how these services should be used in a much larger architectural context. In other words, you need to understand how businesses will employ these services to form real workplace solutions -- inside and outside the enterprise.

What are cloud providers and IT staff building private clouds to do? Once you've (re)considered how these services will be designed in the context of the larger use cases where cloud computing can provide measurable value to the business, it's a matter of decomposing the desired functions into core services, or cloud APIs. The more use cases considered, the more likely that the lower-level services will nail most of the business requirements. In other words, this is a top-down problem.

In my real-world encounters as a consultant, I find service design to be a more haphazard process. However, that need not be the case if you understand the use cases and how all these elements should exist in architecture. But few organizations have reached that level of thinking. As we all smarten up, count on major redesign work for your services.

In my previous post, I discussed separate storage accounts and the locality of those accounts, as well as transfer, sample, and trace logging levels, as ways to optimize the use of Windows Azure Diagnostics. This post discusses six additional ways to optimize your Windows Azure Diagnostics experience.

Data selection – Carefully select the minimal amount of diagnostic data you need to monitor your application. That data should contain only the information required to identify and troubleshoot an issue. Logging excess data clutters the logs you must sift through while troubleshooting and costs more to store in Windows Azure.

Purge Azure diagnostic tables – Periodically purge the diagnostic tables of stale data that you will no longer need, to avoid paying storage costs for dormant bits. You can store the data back on-premises if you feel you will need it later for historical or auditing purposes. There are tools to help with this, including the System Center Monitoring Pack for Windows Azure.

Set Perfmon Counters during role OnStart – Diagnostics are configured per role instance, and due to scalability needs the number of role instances can increase or decrease. By putting the initialization of the Perfmon counters in the OnStart method (which is invoked when a role instance is instantiated), you ensure your role instance always starts configured with the correct Perfmon counters. If you don't specifically set up the counters in OnStart, the configurations might fall out of sync. This is a common problem for customers who do not define the Perfmon counters in OnStart.

Optimize Performance Counters – Sometimes diagnostics are like gift purchases before a Christmas sale: you need only a few, but thanks to the lower prices you end up buying more than you need. The same goes for performance counters. Be sure that what you gather is meaningful to your application and will be used for alerts or analysis. Windows Azure provides a subset of the performance counters available for Windows Server 2008, IIS, and ASP.NET. Here are some of the categories of commonly used Perfmon counters for Windows Azure applications; for each of these categories there can be more than one actual counter to track:

Manage max buffer size – When configuring Perfmon counters in your role’s OnStart method, you can specify a buffer size using the PerformanceCounters.BufferQuotaInMB property of the DiagnosticMonitorConfiguration object. If you set this to a value that fills up before the buffer is transferred from local to Azure storage, you will lose the oldest events. Make sure your buffer size has room to spare to prevent loss of diagnostic data.

Consider the WAD config file – There are some cases where you may want to put the calls that configure Perfmon counters or logs in a config file instead of in the role's code. For instance, if you are using a VM that does not have a startup routine, or you need non-default diagnostic operations, you can use the WAD config file to manage that. The settings in the config file are applied before the OnStart method is called.

Since publishing my Windows Azure Cookbook series, I’ve received a number of requests to update the reference architecture diagram for the platform to include the features in the June release. Here is my latest version of the diagram.

(Click on it [in the original post] to get a larger version suitable for printing.)

Starting with the good news: we are currently looking at about seven or so years of successful implementation and deployment of cloud computing, seven years! Although there seems to be a lot of hype in this young and still immature field, that hype is mostly found in the press and in the layperson's view. In contrast, our science, engineering, and business communities are moving into clouds in big steps, driven by the many benefits of factors like virtualization and easy access, and accelerated by an ever-increasing number of cloud use cases, success stories, cloud start-ups, and established IT firms offering cloud services. Many users are often not even aware that the services they use today sit right in the cloud. They enjoy cloud benefits, like business flexibility, scalability of IT, reduced cost, and resource availability on demand with pay-per-use, at their fingertips. So far so good!

Looking closer at the current growing cloud offerings and the use of clouds in research and industry, we see a whole set of barriers to cloud adoption. To name a few major ones: lack of trust in service providers, caused mainly by security concerns; the attitude of 'never change a running system'; painful legal regulations when crossing political boundaries; existing software licensing models and costs; and securing intellectual property and other corporate assets. Some of these issues are currently being addressed by the Uber-Cloud Experiment.

And another cloud challenge arises on the horizon, beyond the current state of the mega-providers' monolithic clouds. With more and more cloud service providers, and with richer and deeper cloud services crowding the market, how in the near future do I get my data out of one cloud to continue processing it in another, to support, for example, workflow or failover? How does an independent service provider (or cloud broker) interconnect different services from different cloud providers most efficiently? Such scenarios are common, for example, with federated (Web) services, which consist of different service components sitting in different clouds. How do I manage such a cloud workflow? How do I monitor, control, and manage the underlying cloud infrastructure and the complex applications running there? How far can I get with the least manual intervention, while taking into account user requirements and service-level agreements?

These important topics are covered in a new book, Achieving Federated and Self-Manageable Cloud Infrastructures: Theory and Practice (2012), by Massimo Villari, Ivona Brandic, and Francesco Tusa. In 20 chapters, written exclusively by renowned experts in their fields, the book thoroughly discusses the concepts of federation in clouds, resource management and brokerage, new cloud middleware, monitoring in clouds, and security concepts. The text also presents practical implementations, studies, and solutions, such as cloud middleware implementations and use, monitoring in clouds from a practical point of view, enterprise experience, energy constraints, and applicable solutions for securing clouds.

And that's what makes this book so valuable: for the researcher, for the practitioner who develops and operates these cloud infrastructures, and for the user of these clouds. For the researcher, it contributes to the current and open research areas in federated clouds mentioned above. For the practitioner and user, it provides real use cases demonstrating how to build, operate, and use federated clouds, based on the authors' own experience, along with practical insight and guidance, lessons learned, and recommendations.

Microsoft's Private Cloud is built on the industry-leading foundation of Windows Server and System Center. The System Center 2012 Service Pack 1 (SP1) update enables System Center to run on and manage the final version of Windows Server 2012, released earlier this week. System Center 2012 SP1 brings System Center manageability to many new capabilities in Windows Server 2012, as well as a range of other improvements. You can read about System Center 2012 SP1 on the product overview page.

Windows Server 2012 and SQL Server 2012 Support

With this Beta release, all System Center 2012 SP1 components are now enabled to manage and run in a Windows Server 2012 environment. System Center 2012 SP1 Beta also now supports the use of SQL Server 2012.

Network Virtualization

With System Center 2012 SP1 you can take advantage of Virtual Machine Manager's ability to manage Hyper-V network virtualization across multiple hosts, simplifying the creation of entire virtual networks.

Hybrid Cloud Management and the Service Provider Foundation API

System Center 2012 already enables optimization of your organization's private cloud and Windows Azure resources from a single pane of glass, using the AppController component. In System Center 2012 SP1 we've extended AppController's capabilities to include cloud resources offered by hosting service providers, giving you the ability to integrate and manage a wide range of custom and commodity IaaS cloud services into the same single pane of glass.

Service Provider Foundation API

The Service Provider Foundation (SPF) API is a new, extensible OData REST API in System Center 2012 SP1 that enables hosters to integrate their System Center installation into their customer portal; it is automatically integrated with customers' on-premises installations of AppController. A simple exchange of credentials enables enterprises to add the service provider's cloud to App Controller for consumption alongside private and public cloud resources. SPF also has multi-tenancy built in, enabling operation at massive scale and control of multiple scale units built around Virtual Machine Manager.

Windows Azure Virtual Machine management

System Center 2012 SP1 now integrates with Windows Azure Virtual Machines, enabling you to move on-premises virtual machines to run in Windows Azure and then manage them from your on-premises System Center installation, enabling a range of workload distribution and remote operations scenarios.

Enhanced backup and recovery options

System Center 2012 SP1 Data Protection Manager adds the option to host server backups in the Windows Azure cloud, helping to protect against data loss and corruption while integrating directly into the existing backup administration interface in System Center. More details.

Global Service Monitor Support

System Center 2012 SP1 includes support for a new Windows Azure-based service called "Global Service Monitor" (GSM). GSM extends the application monitoring capabilities in System Center 2012 SP1 using Windows Azure points of presence around the globe, giving a true reflection of end-user experience of your application. Synthetic transactions are defined and scheduled using your on-premises System Center 2012 SP1 Operations Manager console; the GSM service executes the transactions against your web-facing application and reports back the results (availability, performance, functionality) to your on-premises System Center dashboard. You can integrate this perspective with other monitoring data from the same application, taking action as soon as any issues are detected in order to achieve your SLA. To evaluate System Center 2012 SP1 with GSM, sign up for a customer preview of GSM.

It may be heresy, but not every organization needs or desires all the benefits of cloud.

There are multiple trends putting pressure on IT today to radically change the way they operate. From SDN to cloud, market pressure on organizations to adopt new technological models or utterly fail is immense.

That's not to say that new technological models aren't valuable or won't fulfill promises to add value, but it is to say that the market often overestimates the urgency with which organizations must view emerging technology.

Also, mired in its own importance and benefits, the market often overlooks that not every organization has the same needs, goals, or business drivers. After all, everyone wants to reduce costs and simplify provisioning processes!

And yet goals can often be met through application of other technologies that carry less risk, which is another factor in the overall enterprise adoption formula – and one that's often overlooked.

There are two models competing for data center attention today: dynamic data center and cloud computing. They are closely related, and both promise similar benefits with cloud computing offering "above and beyond" benefits that may or may not be needed or desired by organizations in search of efficiency.

The dynamic data center originates with the same premises that drive cloud computing: the static, inflexible data center models of the past inhibit growth, promote inefficiency, and are fraught with operational risk. Both seek to address these issues with more flexible, dynamic models of provisioning, scale and application deployment.

The differences are actually quite subtle. The dynamic data center is focused on NOC and administration, with enabling elasticity and shared infrastructure services that improve efficiency and decrease time to market. Cloud computing, even private cloud, is focused on the tenant and enabling for them self-service capabilities across the entire application deployment lifecycle.

A dynamic data center is able to rapidly respond to events because it is integrated and automated to enable responsiveness. Cloud computing is able to rapidly respond to events because it necessarily must provide entry points into the processes that drive elasticity and provisioning, enabling the self-service aspects that have become the hallmark of cloud computing.

DATA CENTER TRANSFORMATION: PHASE 4

You may recall the cloud maturity model, comprising five distinct steps of maturation from initial virtualization efforts through a fully cloud-enabled infrastructure.

A highly virtualized data center, managed via one of the many available automation and orchestration frameworks, may be considered a dynamic data center. When the operational processes codified by those frameworks are made available as services to consumers (business and developers) within the organization, the model moves from dynamic data center to private cloud.

This is where the dynamic data center fits in the overall transformational model. The thing is that some organizations may never desire or need to continue beyond phase 4, the dynamic data center.

While cloud computing certainly brings additional benefits to the table, these may be benefits that, when evaluated against the risks and costs to implement (or adopt if it's public) simply do not measure up.

And that's okay. These organizations are not some sort of technological pariah because they choose not to embark on a journey toward a destination that does not, in their estimation, offer the value necessary to compel an investment.

Their business will not, as is too often predicted with an overabundance of hyperbole, disappear or be in danger of being eclipsed by more agile, younger rivals who take to cloud like ducks take to water.

… A hybrid application is one where the marketing website scales up and runs in the cloud environment, and where the high-value, high-touch customer interactions can still securely connect and send messages to the core backend systems and run a transaction. We built Windows Azure Service Bus and the “Service Bus Connect” capabilities of BizTalk Server for just this scenario. And for scenarios involving existing workloads, we offer the capabilities of the Windows Azure Connect VPN technology.

Hybrid applications are also those where data is spread across multiple sites (for the same reasons as cited above) and is replicated and updated into and through the cloud. This is the domain of SQL Azure Data Sync. And as workloads get distributed across on-premises sites and cloud applications beyond the realms of common security boundaries, a complementary complexity becomes the management and federation of identities across these different realms. Windows Azure Access Control Service provides the solution to this complexity by enabling access to the distributed parts of the system based on a harmonized notion of identity.

This guide provides in-depth guidance on how to architect and build hybrid solutions on and with the Windows Azure technology platform. It represents the hard work of a dedicated team who collected good practice advice from the Windows Azure product teams and, even more importantly, from real-world customer projects. We all hope that you will find this guide helpful as you build your own hybrid solutions. …

Cloud Security and Governance

It's got a new name and new digs, but it's the same tremendous event and experience! Yes, Boston Code Camp (formerly New England Code Camp) is now open for business!

If you’re unfamiliar with the event, it’s a day of technical presentations and networking held by the community for the community completely free of charge. This area gave birth to the concept over eight years ago, and twice a year 250 or more area technologists have gathered to take advantage of the knowledge transfer in an open format and informal setting.

Session Submission is Open!

At this point general registration is not yet open, but the organizers are looking for speakers to present on whatever technology areas they may be passionate about. Whether you’re a Distinguished Toastmaster or your last time on stage was as a rock in the 3rd grade play, there’s a place for you (well, as long as you’ve worked on the rock act a bit since then).

The sessions page has a host of topic areas to give you some ideas, but it’s far from an exhaustive list, so figure out where you can contribute your expertise and enthusiasm, and submit an abstract between now and October 12th.

To do so…

…start by creating a site account via the Presenters sidebar. Your account will be tied to a Windows Live ID, Google or Yahoo identity, so you don’t have yet another username and password to remember!

Once you’re logged in you’ll see a page like the following asking for name and e-mail and containing a Captcha challenge. Depending on what identity provider you used – Google, Live ID, or Yahoo – some of this information may or may not be filled in. Go ahead and complete the form and submit.

After you’ve registered, you’ll notice the sidebar changes, and you can now create your account profile.

With your profile complete, you can submit one or multiple sessions.

After you’ve entered your session details, you can pick from a number of tags (or create your own) to categorize your content and help other prospective presenters and attendees easily discover your session.

Now that your session submission is complete, it appears on the Sessions page for all prospective attendees to view.

Note that a submission does not guarantee you'll be presenting. The event organizers' goal is to include a diverse selection of presenters and topics; therefore, the actual slate of sessions and speakers will not be announced until after the submission period closes on October 12th. Look for the list to be published by the end of the day on October 15th.

Pulling an event of this scale together is not a trivial effort, so thanks go to the organizers including Bob Goodearl, Patrick Hynds, Chris Pels, and John Zablocki for their contribution to the local developer community.

And a special shout out to Bob for pulling together the fantastic new web site, built with ASP.NET MVC and Entity Framework and hosted on Windows Azure. I think that’s a session-worthy topic right there!

So join me at The Cloud OS Signature Event Series for a free, one-day event, and check out the launch of our newest, most exciting products. You’ll get the opportunity to engage with experts, get hands on with the new technology, and learn how to build your modern data center with the Microsoft platform: Windows Server 2012, Windows Azure, Microsoft Visual Studio 2012, and Microsoft System Center 2012. I will be at the Detroit and Columbus events.

If you've been following the story of high performance computing on AWS, you'll see that we added instances with general purpose GPUs just under two years ago. In that time customers have been using these powerful, multi-core processors for a broad range of applications which can take advantage of the massive parallel computing capabilities, from rendering to transcoding, computer vision to molecular modeling.

Today, we're making that same functionality available more broadly:

The Cluster GPU instances (cg1.4xlarge) are now available in the EU West (Ireland) region. That means that customers who have data or additional supporting resources deployed in this region can now accelerate their projects with these high performance instances.

Additionally, today Cluster GPU instances are also available to launch inside Amazon VPC. If you're using VPC to build public and private subnets for your computational resources, you can add GPUs into the mix in just a few API calls.

These instances are available on-demand, as reserved capacity or via the spot market for even more bang for your buck.

Getting started

You can fire up a GPU instance and run some molecular dynamics computations across all those cores in just a few clicks:

This CloudFormation template will launch a GPU instance with a custom-made AMI from the OpenMM team at Stanford University, who recently ran a molecular modeling workshop using this instance type to get their students access to all those cores quickly and easily. You can find instructions, test code, and other examples in the home directory of your new instance.

I hate looking at my AT&T Wireless bill each month, because it tallies up all my unused rollover minutes. Sure, it might be nice to know I have them just in case I decide to have a marathon long-distance conversation, but realistically, it's a reminder that I am overspending on talk time. Even worse is when it reminds me of the expiration date for those minutes. They are basically throwing my inefficiencies in my face. Thanks, AT&T. :(

But at least they are upfront in providing me visibility into this waste. If the overspend were high enough, I could change calling plans and waste less. But why can't I give those unused minutes to a relative or friend who keeps going over their allotment? Or sell them to someone willing to pay a lower rate than AT&T's full fare but who can't afford the mobile plan I am on?

Now you can correct these errors in planning by selling the overage back to the market. Simply list the Reserved Instances you have purchased (and held as active for at least 30 days) and let the market suck them up. You get back what you paid for them (minus a 12% service fee that AWS takes). The rub? You need to determine that you bought too much early enough for the instances to be valuable to the market. So don't be thinking you can sell them on December 29th when they expire January 1st.
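To make the seller's side of that trade concrete, the arithmetic is simple. The sketch below is illustrative only; the function name is made up and not part of any AWS SDK, and the 12% figure is the service fee mentioned above:

```python
def seller_proceeds(upfront_price, service_fee_rate=0.12):
    """Net amount a seller receives for a Reserved Instance listing,
    after AWS deducts its service fee (12% per the post above).
    Illustrative helper -- not part of any AWS SDK."""
    return upfront_price * (1 - service_fee_rate)

# Selling an RI listed at a $1,000 upfront price nets the seller about $880.
print(round(seller_proceeds(1000.0), 2))
```

The takeaway: recovering 88% of an unneeded commitment beats recovering nothing, but only if you list early enough for the remaining term to be worth buying.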

What's most interesting about this move is that it is a business innovation — not a technology innovation — and one that we expect will drive up AWS customer loyalty and differentiation for enterprises.

Let's face it, we have been overspending on IT for decades. We buy software we don't use fully. We buy more seats than we use because we might need them and we certainly buy more infrastructure and utilize it less than we should. We do it because it's better to have more than you need than less; and frankly, it's more painful to go back through our own financial systems to buy more — quickly. So we accept these inefficiencies. We also accept them because there really hasn't been a better way to buy or an out for our overspending. Well now there is.

Another big benefit that comes from the Reserved Instance Marketplace is the ability to buy AWS instances at a much lower price than the public rate without having to accept the unpredictability of Spot Instances. And you can avoid the multi-year commitment required to get multi-year discounts. Say you know you have a new app launching this quarter and you know it will have a persistent footprint of 30 instances, but you don't know whether the application will still be needed after the Christmas holidays. That's a lousy case for buying Reserved Instances. And even if you did, you would only be able to get the 12-month discount at best. Now you can shop the marketplace for discounted three-year RIs that are due to expire December 31. That's good business planning.

So far, these types of Infrastructure-as-a-Service business innovations are big differentiators for AWS. I had fully expected other cloud platform competitors to match both Spot Instances and RIs by now, but they haven't. With the Marketplace, AWS is further differentiating themselves from other cloud players and heavily leveraging cloud economics to do so.

EC2 Options

I often tell people that cloud computing is equal parts technology and business model. Amazon EC2 is a good example of this; you have three options to choose from:

You can use On-Demand Instances, where you pay for compute capacity by the hour, with no upfront fees or long-term commitments. On-Demand instances are recommended for situations where you don't know how much (if any) compute capacity you will need at a given time.

If you know that you will need a certain amount of capacity, you can buy an EC2 Reserved Instance. You make a low, one-time upfront payment, reserve it for a one or three year term, and pay a significantly lower hourly rate. You can choose between Light Utilization, Medium Utilization, and Heavy Utilization Reserved Instances to further align your costs with your usage.

You can also bid for unused EC2 capacity on the Spot Market with a maximum hourly price you are willing to pay for a particular instance type in the Region and Availability Zone of your choice. When the current Spot Price for the desired instance type is at or below the price you set, your application will run.
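The three purchase models above reduce to simple cost formulas. The snippet below sketches the comparison; the helper names and all prices are hypothetical, not actual AWS rates:

```python
def effective_hourly_cost(upfront, hourly_rate, term_hours):
    """Blend a Reserved Instance's one-time upfront fee into an
    effective hourly rate over the full term."""
    return upfront / term_hours + hourly_rate

def hours_run_on_spot(bid, hourly_spot_prices):
    """A Spot request runs in any hour where the Spot Price is at or
    below your bid (prices here are made up for illustration)."""
    return sum(1 for price in hourly_spot_prices if price <= bid)

# Hypothetical one-year RI: $400 upfront plus $0.05/hour, used 24x7.
hours_per_year = 24 * 365
print(round(effective_hourly_cost(400.0, 0.05, hours_per_year), 4))

# With a $0.05 bid against a fluctuating (made-up) Spot Price series:
print(hours_run_on_spot(0.05, [0.03, 0.05, 0.08, 0.04, 0.12, 0.05]))
```

Note how the RI's effective rate depends entirely on utilization: the same upfront fee spread over fewer running hours quickly erodes the discount, which is exactly why the Light/Medium/Heavy tiers exist.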

Reserved Instance Marketplace

Today we are increasing the flexibility of the EC2 Reserved Instance model even more with the introduction of the Reserved Instance Marketplace. If you have excess capacity, you can list it on the marketplace and sell it to someone who needs additional capacity. If you need additional capacity, you can compare the upfront prices and durations of Reserved Instances on the marketplace to the upfront prices of one- and three-year Reserved Instances available directly from AWS. The Reserved Instances in the Marketplace are functionally identical to other Reserved Instances and carry the then-current hourly rates; they will just have less than a full term and a different upfront price. Transactions in the Marketplace are always between a buyer and a seller; the Reserved Instance Marketplace hosts the listings and allows buyers and sellers to locate and transact with each other.

You can use this newfound flexibility in a variety of ways. Here are a few ideas:

Switch Instance Types. If you find that your application has put on a little weight (it happens to the best of us), and you need a larger instance type, sell the old RIs and buy new ones from the Marketplace or from AWS. This also applies to situations where we introduce a new instance type that is a better match for your requirements.

Buy Reserved Instances on the Marketplace for your medium-term needs. Perhaps you are running a cost-sensitive marketing promotion that will last for 60-90 days. Purchase the Reserved Instances (which we sometimes call RIs), use them until the promotion is over, and then sell them. You'll benefit from RI pricing without the need to own them for the full one- or three-year term. Keep the RIs as long as they continue to save you money.

Relocate. Perhaps you started to run your application in one AWS Region, only to find out later that another one would be a better fit for the majority of your customers. Again, sell the old ones and buy new ones.

In short, you get the pricing benefit of Reserved Instances and the flexibility to make changes as your application and your business evolves, grows, or (perish the thought) shrinks.

Dave Tells All

I interviewed Dave Ward of the EC2 Spot Instances team to learn more about this feature and how it will benefit our users. Watch and learn:

The Details

Now that I've whetted your appetite, let's take a look at the details. All of the functions described below are supported by the AWS Management Console, the EC2 API (command line) tools, and the EC2 APIs.

After registration, any AWS customer (US or non-US legal entity) can buy and sell Reserved Instances. Sellers will need to have a US bank account, and will need to complete an online tax interview before they reach 200 transactions or $20,000 in sales. You will need to verify your bank account as part of the registration process; this may take up to two weeks depending on your bank. You will not be able to receive funds until the verification process has succeeded.

Reserved Instances can be listed for sale after you have owned them for at least 30 days, and after we have received and processed your payment for them. The RI's state must be displayed as Active in the Reserved Instance section of the AWS Management Console:

You can list the remainder of your Reserved Instance term, rounded down to the nearest month. If you have 11 months and 13 days remaining on an RI, you can list the 11 months. You can set the upfront payment that you are willing to accept for your RI, and you can also customize the month-over-month price adjustment for the listing. You will continue to own (and to benefit from) the Reserved Instance until it is sold.
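A listing is therefore just a schedule of upfront asking prices, one per remaining month. The helper below sketches one possible pricing policy (a fixed monthly decline); the console lets you set each month's price individually, the function name is invented, and the dollar amounts are made up:

```python
def listing_price_schedule(start_price, months_remaining, monthly_drop):
    """Asking price for each month in which the listing could sell,
    declining by a fixed amount per month and never going below zero.
    Illustrative policy only -- not an AWS API."""
    return [round(max(start_price - m * monthly_drop, 0.0), 2)
            for m in range(months_remaining)]

# An RI with 11 months and 13 days left can be listed for 11 full months;
# start at $600 and drop the asking price $40 for each month that passes:
print(listing_price_schedule(600.0, 11, 40.0))
```

The declining schedule mirrors the economics: a buyer in month six is purchasing less remaining discounted term than a buyer in month one, so the upfront ask should shrink accordingly.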

As a seller, you will receive a disbursement report if you have activity on a particular day. This report is a digest of all Reserved Instance Marketplace activity associated with your account and will include new Reserved Instance listings, listings that are fully or partially fulfilled, and all sales proceeds, along with details of each transaction.

When your Reserved Instance is sold, funds will be disbursed to your bank account after the payment clears, less a 12% seller fee. You will be informed of the purchaser's city, state, country, and zip code for tax purposes. As a seller, you are responsible for calculating and remitting any applicable transaction taxes such as sales tax or VAT.

As a buyer, you can search and browse the Marketplace for Reserved Instances that best suit your needs with respect to location, instance type, price, and remaining time. Once acquired, you will automatically gain the pricing and capacity assurance benefits of the instance. You can later turn around and resell the instance on the Marketplace if your needs change.

When you purchase a Reserved Instance through the Marketplace, you will be charged for Premium Support on the upfront fee. The upfront fees will also count toward future Reserved Instance purchases using the volume discount tiers, but the discounts do not apply to Marketplace purchases.

Visual Tour for Sellers

Here is a visual tour of the Reserved Instance Marketplace from the seller's viewpoint, starting with the process of registering as a seller and listing an instance for sale. The Sell Reserved Instance button initiates the process:

The console outlines the entire selling process for you:

Here's how you set the price for your Reserved Instances. As you can see, you have the ability to set the price on a month-by-month basis to reflect the declining value of the instance over time:

You will have the opportunity to finalize the listing, and it will become active within a few minutes. This is the perfect time to acquire new Reserved Instances to replace those that you have put up for sale:

Your listings are visible within the Reserved Instances section of the Console:

Here's a video tutorial on the selling process:

Visual Tour for Buyers

Here is a similar tour for buyers. You can purchase Reserved Instances in the Console. You start by searching for instances with the characteristics that you need and adding the most attractive ones to your cart:

You can then review the contents of your cart and complete your purchase:

Here's a video tutorial on the buying process:

I hope that you enjoy (and make good use of) the additional business flexibility of the Reserved Instance Marketplace.

Today we launched a new feature that enables you to buy and sell Amazon EC2 Reserved Instances. Reserved Instances are an important pricing option for AWS customers to drive costs down. If you are able to predict the capacity required to run your application, there is likely some combination of Reserved Instance options that will help you drive your costs down significantly (up to 71%) when compared to on-demand pricing. There are three options: heavy-, medium-, and light-utilization Reserved Instances, which allow you to optimize your savings depending on how much you plan to use them.
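The "up to 71%" figure is simply the gap between cumulative on-demand spend and the RI's upfront-plus-hourly total over the term. The sketch below shows the calculation with invented prices (nothing here reflects real AWS rates):

```python
def reserved_savings_fraction(on_demand_hourly, upfront, ri_hourly, term_hours):
    """Fractional savings of a Reserved Instance versus paying the
    on-demand rate for the same hours. All prices are caller-supplied
    and purely illustrative."""
    on_demand_total = on_demand_hourly * term_hours
    reserved_total = upfront + ri_hourly * term_hours
    return (on_demand_total - reserved_total) / on_demand_total

# Hypothetical three-year RI running around the clock: $0.64/hour
# on-demand versus $2,000 upfront plus $0.064/hour reserved.
term = 3 * 24 * 365
print(round(reserved_savings_fraction(0.64, 2000.0, 0.064, term), 3))
```

As with any such comparison, the savings collapse if the instance sits idle: the upfront fee is paid regardless of how many of the term's hours are actually used.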

However, sometimes businesses and architectures change such that you need to change your mix of Reserved Instances. For example, we have heard from a number of customers that they want the ability to move to a different region, change instance types, or switch from Windows to Linux. With the launch of the Reserved Instance Marketplace, you no longer need to worry about what to do with Reserved Instances you no longer need: you can sell them at a price that you set.

Customers buying on the marketplace often need Reserved Instances, but for a period different from the one- and three-year terms that AWS offers. For example, if you anticipate increased website traffic for a short period of time, or if you have remaining end-of-year budget to spend, you will be able to search for Reserved Instances with shorter durations.

Flexibility is a key advantage of AWS services over traditional IT infrastructure. In the old world of IT, once you have purchased a physical machine (or fleet of machines), it is costly and potentially impossible to change its configuration to a different number or type of processors and memory. In the new world of cloud-based IT, you can change instance types whenever the need arises, exactly matching your needs at all times. The same is true if you no longer need the capacity or your capacity needs change. In the old world, you are stuck with what you have bought. In the new world of cloud-based resources, you can buy a Reserved Instance to substantially reduce the cost of your cloud footprint, but if your needs change you are not locked into continuous use as you are with traditional IT. The Reserved Instance Marketplace enables you to sell your Reserved Instance with a single click of the "sell" button in the AWS Management Console. After a buyer purchases your Reserved Instance and Amazon Web Services has received payment from the buyer, funds will be deposited into your bank account via ACH transfer. By buying and selling Reserved Instances as needed, you can adjust your Reserved Instance footprint to match your changing needs, such as moving instances to a new AWS Region, changing to a new instance type, or selling capacity for projects that end before your term expires.

Customers looking for an RI with different terms than the standard ones can also browse the AWS Management Console to find an RI that most closely matches their needs at the price point they are looking for. The buying process is as simple as hitting the "purchase" button.

The AWS team continuously innovates to help our customers drive their costs down, and the Reserved Instance Marketplace is yet another invention that gives customers greater flexibility in their pricing.

The dual Web role application has been running in Microsoft's South Central US (San Antonio) data center since September 2009. I believe it is the oldest continuously running Windows Azure application.

About Me

I'm a Windows Azure Insider, a retired Windows Azure MVP, the principal developer for OakLeaf Systems and the author of 30+ books on Microsoft software. The books have more than 1.25 million English copies in print and have been translated into 20+ languages.

Full disclosure: I make part of my livelihood by writing about Microsoft products in books and for magazines. I regularly receive free evaluation software from Microsoft and press credentials for Microsoft Tech•Ed and PDC. I'm also a member of the Microsoft Partner Network.