Look under the covers of almost any data-focused web application — including Klout — and you’ll find Hadoop. The open-source big data platform is ideal for storing and processing the large amounts of information needed for Klout to accurately measure and score its users’ social media influence. But Klout also has another important, and very not-open-source, weapon in its arsenal — Microsoft SQL Server.

Considering the affinity most web companies have for open source software, the heavy use of Microsoft technology within Klout is a bit surprising. The rest of the Klout stack reads like a who’s who of hot open source technologies — Hadoop, Hive, HBase, MongoDB, MySQL, Node.js. Even on the administrative side, where open source isn’t always an option, Klout uses the newer and less-expensive Google Apps instead of Microsoft Exchange.

“I would rather go open source, that’s my first choice always,” Klout VP of Engineering Dave Mariani told me during a recent phone call. “But when it comes to open source, scalable analysis tools, they just don’t exist yet.”

How Klout does big data

“Data is the chief asset that drives our services,” said Mariani, and being able to understand what that data means is critical. Hadoop alone might be fine if the company were just interested in analyzing and scoring users’ social media activity, but it actually has to satisfy a customer set that includes users, platform partners (e.g., The Palms hotel in Las Vegas, which ties into the Klout API and uses scores to decide whether to upgrade guests’ rooms) and brand partners (the ones who target influencers with Klout Perks).

As it stands today, Hadoop stores all the raw data Klout collects — about a billion signals a day on users, Mariani said — and feeds it into an Apache Hive data warehouse. When Klout's analysts need to query the data set, they use the Analysis Services tools within SQL Server. But because SQL Server can't yet talk directly to Hive (or Hadoop, generally), Klout has married Hive to MySQL, which serves as the middleman between the two platforms. Klout loads about 600 million rows a day into SQL Server from Hive.

It's possible to use Hive for querying data in a SQL-like manner (that's what it was designed for), but Mariani said it can be slow, difficult and not especially flexible. With SQL Server, he said, queries across the entire data set usually take less than 10 seconds, and the product helps Klout figure out whether its algorithms are putting the right offers in front of the right users and whether those campaigns are having the desired effects.

Analysis Services also functions as a sort of monitoring system, Mariani explained. It lets Klout keep a close eye on moving averages of scores so it can help spot potential problems with its algorithms or problems in the data-collection process that affect users’ scores.

Elsewhere, Klout uses HBase to serve user profiles and scores, and MongoDB for serving interactions between users (e.g., who tweeted what, who saw it, and how it affected everyone’s profiles).

Why Microsoft is turning heads in the Hadoop world

Although Klout is using SQL Server in part because Mariani brought it along with him from Yahoo and Blue Lithium before that, Microsoft’s recent commitment to Hadoop has only helped ensure its continued existence at Klout. Since October, Microsoft has been working with Yahoo spinoff Hortonworks on building distributions of Hadoop for both Windows Server and Windows Azure. It’s also working on connectors for Excel and SQL Server that will help business users access Hadoop data straight from their favorite tools.

Mariani said Klout is working with Microsoft on the SQL Server connector, as his team is anxious to eliminate the extra MySQL hop it currently must take between the two environments.

Leland said Microsoft is trying "to provide a service that is very easy to consume for customers of any size," which means an intuitive interface and methods for analyzing data. Already, he said, Webtrends and the University of Dundee are among the early testers of Hadoop on Windows Azure, with the latter using it for genome analysis.

We're just three weeks out from Structure: Data in New York, and it looks as if our panel on the future of Hadoop will have a lot to talk about, as will everyone at the show. As more big companies get involved with Hadoop and the technology gets more accessible, it opens up new possibilities for who can leverage big data and how, as well as for an entirely new class of applications that use Hadoop like their predecessors used relational databases.

A continuing theme in Big Data is the commonality, and developmental isolation, between the Hadoop world on the one hand and the enterprise data, Business Intelligence (BI) and analytics space on the other. Posts to this blog — covering how Massively Parallel Processing (MPP) data warehouse and Complex Event Processing (CEP) products tie in to MapReduce — serve as examples.

The Enterprise Data and Big Data worlds will continue to collide and, as they do, they'll need to reconcile their differences. The Enterprise is set in its ways. And when Enterprise computing discovers something in the Hadoop world that falls short of its baseline assumptions, it may try to work around it. From what I can tell, a continuing hot spot for this kind of adaptation is Hadoop's storage layer.

Hadoop, at its core, consists of the MapReduce parallel computation framework and the Hadoop Distributed File System (HDFS). HDFS’ ability to federate disks on the cluster’s worker nodes, and thus allow MapReduce processing to work against storage local to the node, is a hallmark of what makes Hadoop so cool. But HDFS files are immutable — which is to say they can only be written to once. Also, Hadoop’s reliance on a “name node” to orchestrate storage means it has a single point of failure.

Pan to the enterprise IT pro who’s just discovering Hadoop, and cognitive dissonance may ensue. You might hear a corporate database administrator exclaim: “What do you mean I have to settle for write-once read-many?” This might be followed with outcry from the data center worker: “A single point of failure? Really? I’d be fired if I designed such a system.” The worlds of Web entrepreneur technologists and enterprise data center managers are so similar, but their relative seclusion makes for a little bit of a culture clash.

Maybe the next outburst will be “We’re mad as hell and we’re not going to take it anymore!” The market seems to be bearing that out. MapR’s distribution of Hadoop features DirectAccess NFS, which provides the ability to use read/write Network File System (NFS) storage in place of HDFS. Xyratex’s cluster file system, called Lustre, can also be used as an API-compatible replacement for HDFS (the company wrote a whole white paper on just that subject, in fact). Appistry’s CloudIQ storage does likewise. And although it doesn’t swap out HDFS, the Apache HCatalog system will provide a unified table structure for raw HDFS files, Pig, Streaming, and Hive.

Sometimes open source projects do things their own way. Sometimes that gets Enterprise folks scratching their heads. But given the Hadoop groundswell in the Enterprise, it looks like we’ll see a consensus architecture evolve. Even if there’s some creative destruction along the way.

The bottom line of this topic is that because SQL Azure Federation members can have different database schemas, you can use them to upgrade your schema incrementally. This can be handy if you have a multi-tenant application serving multiple customers: it enables you to roll out new features selectively, instead of all at once. You might want to do this if your customers vary in how quickly they wish to adopt new features. It can also be a useful way of validating your testing: go live with a small subset of customers, and the impact of any bugs will be more contained.
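To make that concrete, here is a hedged T-SQL sketch (the federation, distribution key, table and column names are all invented for illustration): you route a connection to a single federation member and alter only its schema, leaving the other members on the old version until you are ready to roll the change out more broadly.

```sql
-- Connect to the federation root database, then route this connection to the
-- one member that covers the customer with key 1001 (names are hypothetical).
USE FEDERATION CustomerFederation (cust_id = 1001) WITH FILTERING = OFF, RESET
GO

-- Upgrade just this member's schema; the other members keep the old schema
-- until you repeat the change against them as well.
ALTER TABLE dbo.Orders ADD LoyaltyPoints int NULL
GO
```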

I'd love to hear about your experiences using this scenario: is it easy to use? Are there glitches in the process that Microsoft could improve? Our software development gets driven by customer requests and experiences, so let us know what you think.

This topic describes one scenario that comes from being able to have different database schemas in federation members. But perhaps you have discovered a new scenario where you can use this feature. If so, we'd love to hear about it.

If you are new to SQL Azure then I will point you to this resource list by Buck Woody (blog | @buckwoody). If you are not new, but simply cloud-curious, then join me for the next 19 days as we journey into the cloud and back again. Also, Glenn Berry (blog | @GlennAlanBerry) has a list of handy SQL Azure queries, and between his blog and Buck Woody's I have compiled the items that will follow for the next 19 days.

How To Get Started With SQL Azure

It really isn’t that complex. Here’s what you do:

Go to http://windows.azure.com

Either create an account or use your Windows Live ID to sign in

You can select the option to start a free 90-day trial

Create a new SQL Azure server

Create a new database

Get the connection string details and use them inside of SSMS to connect to SQL Azure

Last year I started with the syslogins table. Well, that table doesn’t exist in SQL Azure, but sys.sql_logins does. But wait…before I even go there…how do I know what is available inside of SQL Azure? Well, I could go over to MSDN and poke around the documentation all day. Or, I could take 10 minutes to open up an Azure account and just run the following query:

SELECT name, type_desc
FROM sys.all_objects

You should understand that you only get to connect to one database at a time in SQL Azure. The exception: when you initially connect in SSMS to the master database, you *can* toggle to a user database in the dropdown inside of SSMS, but after that you are stuck in that user database. I created a quick video to help show this better.

You need to open a new connection in order to query a new database, including master. Why am I telling you this? Because you won’t be able to run this from a user database:

SELECT name, type_desc
FROM master.sys.all_objects

You will get back this error message:

Msg 40515, Level 15, State 1, Line 16
Reference to database and/or server name in ‘master.sys.all_objects’ is not supported in this version of SQL Server.

I found that master has 242 objects listed in the sys.all_objects view but only 210 are returned against a user database. So, which ones are missing? Well, you’ll have to go find out on your own. I will give you one interesting item…

It would appear that the master database in SQL Azure has two views named columns and COLUMNS. Why the two? I have no idea. From what I can tell they return the exact same sets of data. So where is the difference? It's all about the schema. If you run the following:

SELECT name, schema_id, type_desc
FROM master.sys.all_objects
WHERE name = 'COLUMNS'

You will see that there are two different schema ids in play, a 3 (for the INFORMATION_SCHEMA schema) and a 4 (for the sys schema).
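If you would rather see the schema names than the ids, you can join to sys.schemas in the same connection to master; here is a quick sketch:

```sql
-- Resolve each object's schema_id to its name via sys.schemas.
SELECT s.name AS schema_name, o.name, o.type_desc
FROM master.sys.all_objects AS o
JOIN master.sys.schemas AS s
    ON s.schema_id = o.schema_id
WHERE o.name = 'COLUMNS'
```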

Of course, now I am curious to know about the collation of this instance, but we will save that for a later post. Let’s get back to those logins. To return a list of all logins that exist on your SQL Azure instance you simply run:

SELECT *
FROM sys.sql_logins

You can use this view to gather details periodically to make certain that the logins to your SQL Azure instance are not being added/removed/modified, for example.

OK, that’s enough for day one. Tomorrow we will peek under the covers of a user database.

An inherent limitation placed on retrieving data via CRM 2011's OData endpoint (OrganizationData.svc) is that the response can only include up to a maximum of 50 records (http://msdn.microsoft.com/en-us/library/gg334767.aspx#BKMK_limitsOnNumberOfRecords_). For the most part, this does not impact your ability to retrieve the primary entity data sets because each response will provide the URI for a subsequent request that includes a paging boundary token. As you can see below, the URI for the next page of results is provided in the JSON response object:

Here is what the full URI looks like from the example response shown above, including the $skiptoken paging boundary parameter:

Similarly for Silverlight, a DataServiceContinuation<T> object is returned in the IAsyncResult which contains a NextLinkUri property with the $skiptoken paging boundary. The CRM 2011 SDK includes examples for retrieving paged results both via JavaScript and Silverlight:

Nevertheless, this 50 record limit can become problematic when using $expand/.Expand() to return related entity records in your query results. The expanded record sets do not contain any paging boundary tokens, thus you will always be limited to the first 50 related records.
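To stay under the limit, you can instead query the related entity as its own request and walk the paging links. Here is a minimal sketch (mine, not from the SDK): the fetch function is injected and shown as synchronous for clarity, where the real JavaScript samples use an asynchronous XMLHttpRequest.

```javascript
// Accumulates all records from an OData endpoint by following paging links.
// Each JSON response may carry d.__next, a URI containing the $skiptoken for
// the next page; we keep requesting until that link is absent.
// 'getJson' is an injected function: url -> parsed JSON response object.
function retrieveAllPages(getJson, url) {
  var results = [];
  var next = url;
  while (next) {
    var response = getJson(next);
    var records = response.d.results;
    for (var i = 0; i < records.length; i++) {
      results.push(records[i]);
    }
    next = response.d.__next; // undefined on the last page
  }
  return results;
}
```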

The next logical question: can this limit be increased? Yes, it can. The 50-record default can be updated via an advanced configuration setting stored in MSCRM_Config. Note that increasing the limit may only delay the inevitable: you may still encounter result sets that exceed it. It may also cause your responses to exceed the message limits on your WCF bindings, depending on how you have those configured. To avoid these scenarios, you may instead consider breaking up your expanded request into a series of requests where each result set can be paged. Also, before proceeding, I'd be remiss not to point out the following advisory from the CRM 2011 SDK:

**Warning**
You should use the advanced settings only when indicated to do so by the Microsoft Dynamics CRM customer support representatives.

That said, now on with how to modify the setting…

The specific setting that imposes the 50 record limit is called ‘MaxResultsPerCollection’ and it’s part of the ServerSettings configuration table. Go here for the description of MaxResultsPerCollection and other ServerSettings available: http://msdn.microsoft.com/en-us/library/gg334675.aspx.

Advanced configuration settings can be modified either programmatically via the Deployment Service or via PowerShell cmdlet. The CRM 2011 SDK provides an example of how to set an advanced configuration setting via the Deployment Service here: http://msdn.microsoft.com/en-us/library/gg328128.aspx

To update the setting via PowerShell, execute each of the following at the PowerShell command-line:
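A sketch of those commands follows (hedged: run this on the CRM server as a Deployment Administrator, double-check the cmdlet and property names against the SDK documentation linked above, and note that the value 200 is just an example):

```powershell
# Load the CRM 2011 PowerShell snap-in.
Add-PSSnapin Microsoft.Crm.PowerShell

# Fetch the ServerSettings configuration entity, raise the limit, save it back.
$settings = Get-CrmSetting -SettingType ServerSettings
$settings.MaxResultsPerCollection = 200
Set-CrmSetting -Setting $settings
```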

In this tutorial, you learn how to expose Oracle Database data via WCF Data Services and OData through Oracle's Entity Framework support.

You will start by creating a new EDM from the HR schema. Next, you will create a WCF Data Service that uses OData to expose this EDM via the Web. Last, you will run the Web application and execute URL queries to retrieve data from the database.

Microsoft is continuing its plan to make OData as ubiquitous as web services. Their latest offering allows LightSwitch 2 to both produce and consume OData services.

LightSwitch is a rapid application development platform based on .NET with an emphasis on self-service. Based around the concept of the “citizen programmer”, Visual Studio LightSwitch is intended to allow for sophisticated end users to build their own business applications without the assistance of an IT department. So why should a normal office worker knocking out a CRUD application care about OData?

OData brings something to the table that neither REST nor Web Services offer: a standardized API. While proxy generation tools can format WS requests and parse the results, they don't offer any way of actually hooking the service into the application. Instead developers have to painstakingly map the calls and results into the correct format. And REST is even lower on the stack than that, usually offering only raw XML or JSON.

By contrast, every OData API looks just like any other OData API. The requests are always made the same way through the same conventions; the only significant difference is the columns being returned. This allows tools like LightSwitch to work with OData much in the same way it works with database drivers. In fact, consuming an OData service from LightSwitch is done using the same Attach Data Source wizard used for databases.

To expose OData from LightSwitch one simply needs to deploy the application in one of the three-tier configurations. From there you can consume the OData service from not only LightSwitch but also any other OData-enabled application. Beth Massi's example shows importing LightSwitch data into Excel using the PowerPivot extension.

Another new feature in LightSwitch is the ability to redefine relationships between tables that come from external data sources. John Stallo writes,

One of the most popular requests we received was the ability to allow data relationships to be defined within the same data container (just like you can add relationships across data sources today). It turns out that there are quite a few databases out there that don't define relationships in their schema – instead, a conceptual relationship is implied via data in the tables. This was problematic for folks connecting LightSwitch to these databases because while a good number of defaults and app patterns are taken care of for you when relationships are detected, LightSwitch was limited to only keying off of relationships pre-defined in the database. Folks wanted the ability to augment the data model with their own relationships so that LightSwitch can use a richer set of information to generate more app functionality. Well, problem solved – you can now specify your own user-defined relationships between entities within the same container after importing them into your project.

LightSwitch offers a “Metro-inspired Theme” but at this time it doesn’t actually build Windows 8 or Windows Phone Metro-style applications. It does, however, offer options for hosting LightSwitch applications on Windows Azure.

Welcome to the last walkthrough (for now) of the new WIF tools for Visual Studio 11 Beta! This is my absolute favorite, where we show you how to take advantage of ACS2 from your application with just a few clicks.

Let's say that you downloaded the new WIF tools (well done!) and you at least checked out the first walkthrough. That test stuff is all fine and dandy, but now you want to get to something a bit more involved: you want to integrate with multiple identity providers, even if they come in multiple flavors.

Open the WIF tools dialog, and from the Provider tab pick the “Use the Windows Azure Access Control Service” option.

You'll get to the UI shown below. There's not much, right? That's because in order to use ACS2 from the tools you first need to specify which namespace you want to use. Click on the "Configure…" link.

You get a dialog which asks you for your namespace and for the management key.
Why do we ask you for those? Well, the namespace is your development namespace: that's where you will save the trust settings for your applications. Depending on the size of your company, you might not be the one who manages that namespace; you might not even have access to the portal, and the info about namespaces could be sent to you by one of your administrators.

Why do we ask for the management key? As part of the workflow followed by the tool, we must query the namespace itself for info and we must save back your options in it. In order to do that, we need to use the namespace key.

As you can see, the tool offers the option of saving the management key: that means that if you always use the same development namespace, you'll need to go through this experience only once.

As mentioned above, the namespace name and management key could be provided to you by your admin; however, let's assume that your operation is not enormous, and you wear both the dev and the admin hats. Here's how to get the management key value from the ACS2 portal.

Navigate to http://windows.azure.com, sign in with your Windows Azure credentials, 1) pick the Service Bus, Access Control and Cache area, 2) select access control 3) pick the namespace you want to use for dev purposes and 4) hit the Access Control Service button for getting into the management portal.

Once here, pick the management service entry on the left navigation menu; choose "management client", then "symmetric key". Once there, copy the key text to the clipboard (be careful not to pick up extra blanks!).

Now get back to the tool, paste in the values and you’re good to go!

As soon as you hit OK, the tool downloads the list of all the identity providers that are configured in your namespace. In the screenshot below you can see that I have all the default ones, plus Facebook, which I added to my namespace. If I had other identity providers configured (local ADFS2 instances, OpenID providers, etc.), I would see them as checkboxes as well. Let's pick Google and Facebook, then click OK.

Depending on the speed of your connection, you’ll see the little donut pictured below informing you that the tools are updating the various settings.

As soon as the tool closes, you are done! Hit F5.

Surprise surprise, you get straight to the ACS home realm discovery page. Let’s pick Facebook.

Here's the familiar FB login…

…and we are in!

What just happened? Leaving the key acquisition out for a minute, let me summarize.

You went to the tools and picked ACS as your provider

You got a list of checkboxes, one for each of the available providers, and you selected the ones you wanted

You hit F5, and your app showed that it is now configured to work with your providers of choice

Now, I am biased: but to me this seems very, very simple; definitely simpler than the flow that you had to follow until now.

Of course this is just a basic flow: if you need to manage the namespace or do more sophisticated operations, the portal or the management API are still the way to go. However, now if you just want to take advantage of those features you are no longer forced to learn how to go through the portal. In fact, dev managers can now just hand out the namespace credentials without necessarily giving all the dev staff access to the Windows Azure portal.

Let's say that you downloaded the new WIF tools (well done!) and you went through the first walkthrough, and you are itching to go deeper down the rabbit hole. Pronto, good Sir/Ma'am!

Let’s go back to the tool and take a look at the Configuration tab. What’s in there, exactly?

In V1 the tools operated in fire-and-forget fashion: they were a tool for establishing a trust relationship with a WS-Federation or WS-Trust STS, and every time you opened them it was expected that your intention was to create a new relationship (or override most of an existing one).

The WIF tools for .NET 4.5 aspire to be something more than that: when you re-open them, you’ll discover that they are aware of your current state and they allow you to tweak some key properties of your RP without having to actually get to the web.config itself.

The main settings you find here are:

Realm and AudienceUri: The Realm and AudienceUri are automatically generated assuming local testing; however, before shipping your code to staging (or packaging it in a cspack) you'll likely want to change those values. Those two fields help you do just that.
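Under the covers, those two fields map to entries in web.config. A minimal sketch of the relevant .NET 4.5 configuration sections (the URLs are placeholders) looks something like this:

```xml
<system.identityModel>
  <identityConfiguration>
    <audienceUris>
      <add value="https://staging.contoso.com/MyApp/" />
    </audienceUris>
  </identityConfiguration>
</system.identityModel>
<system.identityModel.services>
  <federationConfiguration>
    <wsFederation issuer="https://sts.contoso.com/"
                  realm="https://staging.contoso.com/MyApp/"
                  requireHttps="false" />
  </federationConfiguration>
</system.identityModel.services>
```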

Redirection Strategy: For most business app developers, WIF's default strategy of automatically redirecting unauthenticated requests to the trusted authority makes a lot of sense. In business settings it is very likely that the authentication operation will be silent, and the user will experience single sign-on (e.g., they type the address of the app they want, and the next thing they see is the app UI).
There are, however, situations in which the authentication experience is not transparent: maybe there is a home realm discovery experience, or there is an actual credentials-gathering step. In that case, for certain apps or audiences, the user could be disoriented (e.g., they type the address of the app they want, and the next thing they see is the STS UI). In order to handle that, the tool UI offers the possibility of specifying a local page (or controller) which will take care of handling the authentication experience. You can see this in action in the ClaimsAwareMVCApplication sample.

Flags (HTTPS, web farm cookies): The HTTPS flag is pretty self-explanatory: by default we don't enforce HTTPS, given the assumption that we are operating in a dev environment; this flag lets you turn the mandatory HTTPS check on.
The web farm cookie flag needs a bit of background. In WIF 4.5 we have a new cookie transform based on MachineKey, which you can activate by simply pasting the appropriate snippet in the config. That's what happens when you check this flag.
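For reference, the config snippet in question replaces the default session token handler with the MachineKey-based one (a sketch; verify the type and assembly details against your installation's documentation):

```xml
<system.identityModel>
  <identityConfiguration>
    <securityTokenHandlers>
      <!-- Swap the default SessionSecurityTokenHandler for the
           MachineKey-based transform, so cookies are readable
           by every machine in the farm sharing the same machine key. -->
      <remove type="System.IdentityModel.Tokens.SessionSecurityTokenHandler, System.IdentityModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />
      <add type="System.IdentityModel.Services.Tokens.MachineKeySessionSecurityTokenHandler, System.IdentityModel.Services, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />
    </securityTokenHandlers>
  </identityConfiguration>
</system.identityModel>
```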

Those are of course the most basic settings: we picked them because of how often we observed people having to change them. Did we get them right? Let us know!

Let's say that you downloaded the new WIF tools (well done!) and you are itching to see them in action. Right away, good Sir/Ma'am!

Fire up Visual Studio 11 as Administrator (I know, I know… I’ll explain later) and create a new ASP.NET Web Form Application.

Right-click on the project in Solution Explorer and you'll find a very promising entry which sounds along the lines of "Identity and Access". Go for it!

You get to a dialog which, in (hopefully) non-threatening terms, suggests that it can help you handle your authentication options.
The default tab, Providers, offers three options.

The first option sounds pretty promising: we might not know what an STS exactly is, but we do want to test our application. Let’s pick that option and hit OK.

That's it? There must be something else I have to do, right? Nope. Just hit F5 and witness the magic of claims-based identity unfold in front of your very eyes. (OK, this is getting out of hand. I'll tone it down a little.)
As you hit F5, keep an eye on the system tray: you’ll see a new icon appear, informing you that “Local STS” is now running.

Your browser opens on the default page, shows the usual signs of redirection, and lands on the page with an authenticated user named Terry. Ok, that was simple! But what happened exactly?

Stop the debugger and go back to the Identity and Access dialog, then pick the Local Development STS tab.

The Local STS is a test endpoint, provided by the WIF tools, which can be used on the local machine for getting claims of arbitrary types and values in your application. By choosing “Use the local development STS to test your application” you told the WIF tools that you want your application to get tokens from the local STS, and the tools took care to configure your app accordingly. When you hit F5, the tools launched an instance of LocalSTS and your application redirected the request to it. LocalSTS does not attempt to authenticate requests, it just emits a token with the claim types and values it is configured to emit. In your F5 session you got the default claim types (name, surname, role, email) and values: if you want to modify those and add your own the Local Development STS tab offers you the means to do so, plus a handful of other knobs.

What does this all mean? Well, for one: you no longer need to rely on the kindness of strangers (i.e., your admins) to set up a test/staging ADFS2 endpoint to play with claim values; you also no longer need to create a custom STS and then modify the code directly in order to get the values you need to test your application.

Also: all the settings for the LocalSTS come from one file, LocalSTS.exe.config, which lives in your application's folder. That means that you can create multiple copies of those files with different settings for your various test cases; you can even email values around for reproducing problems and the like. We think it's pretty cool.

Now: needless to say, this is absolutely for development-time and test-time activities only. This is absolutely not fit for production; in fact, the F5 experience is enabled by various defaults which assume that you'll be running this far, far away from production ("you don't just walk into Production"). In v1 the tools kind of tried to enforce some production-level considerations, like HTTPS, and your loud-and-clear feedback was that at development time you don't want to be forced to deal with those and that you'll handle them in your staging/production environment. We embraced that; please let us know how that works for you!

The first version of Windows Identity Foundation was released in November 2009, in the form of an out-of-band package. There were many advantages in shipping out of band, the main one being that we made WIF available to both .NET 3.5 and 4.0, to SharePoint, and so on. The other side of the coin was that it complicated redistribution (when you use WIF in Windows Azure you need to remember to deploy WIF's runtime with your app) and that it imposed a limit on how deeply claims could be wedged into the platform. Well, things have changed. Read on for some announcements that will rock your world!

No More Moon in the Water

With .NET 4.5, WIF ceases to exist as a standalone deliverable. Its classes, formerly housed in the Microsoft.IdentityModel assembly & namespace, are now spread across the framework as appropriate. The trust channel classes and all the WCF-related entities moved to System.ServiceModel.Security; almost everything else moved under some sub-namespace of System.IdentityModel. A few things disappeared, and some new classes showed up; but in the end this is largely the WIF you got to know in the last few years, just wedged deeper into the guts of the .NET framework. How deep?

Very deep indeed.

To get a feeling of it, consider this: in .NET 4.5 GenericPrincipal, WindowsPrincipal and RolePrincipal all derive from ClaimsPrincipal. That means that now you’ll always be able to use claims, regardless of how you authenticated the user!
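A tiny illustrative sketch (mine, not from the post): code that builds a plain GenericPrincipal can now cast it to ClaimsPrincipal and enumerate claims, because the old principal types all derive from it in .NET 4.5.

```csharp
using System.Security.Claims;
using System.Security.Principal;

class Demo
{
    static void Main()
    {
        // An old-style principal, as produced by classic authentication code...
        IPrincipal principal =
            new GenericPrincipal(new GenericIdentity("terry"), new[] { "users" });

        // ...is also a ClaimsPrincipal in .NET 4.5, so its name and roles
        // surface as claims without any WIF-specific plumbing.
        var claimsPrincipal = (ClaimsPrincipal)principal;
        foreach (Claim claim in claimsPrincipal.Claims)
        {
            System.Console.WriteLine("{0}: {1}", claim.Type, claim.Value);
        }
    }
}
```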

In the future we are going to talk more at length about the differences between WIF1.0 and 4.5. Why did we start talking about this only now? Well, because unless your name is Dominic or Raf chances are that you will not brave the elements and wrestle with WS-Federation without some kind of tool to shield you from the raw complexity beneath. Which brings me to the first real announcement of the day.

Brand New Tools for Visual Studio 11

I am very proud to announce that today we are releasing the beta version of the WIF tooling for Visual Studio 11: you can get it from here, or directly from within Visual Studio 11 by searching for “identity” directly in the Extensions Manager.

The new tool is a complete rewrite, which delivers a dramatically simplified development-time experience. If you are interested in a more detailed brain dump on the thinking that went into this new version, come back in a few days and you'll find a more detailed "behind the scenes" post. To give you an idea of the new capabilities, here are a few highlights:

The tool comes with a test STS which runs on your local machine when you launch a debug session. No more need to create custom STS projects and tweak them in order to get the claims you need to test your apps! The claim types and values are fully customizable. Walkthrough here

Modifying common settings directly from the tooling UI, without the need to edit the web.config. Walkthrough here

Establish federation relationships with ADFS2 (or other WS-Federation providers) in a single screen. Walkthrough here

My personal favorite. The tool leverages ACS capabilities to offer you a simple list of checkboxes for all the identity providers you want to use: Facebook, Google, Live ID, Yahoo!, any OpenID provider and any WS-Federation provider… just check the ones you want and hit OK, then F5; both your app and ACS will be automatically configured and your test application will just work. Walkthrough here

No more preferential treatment for web sites. Now you can develop using web application project types and target IIS Express; the tools will gracefully follow.

No more blanket protection-only authentication, now you can specify your local home realm discovery page/controller (or any other endpoint handling the auth experience within your app) and WIF will take care of configuring all unauthenticated requests to go there, instead of slamming you straight to the STS.

Lots of new capabilities, all the while trying to do everything with fewer steps and simpler things! Did we succeed? You guys let us know!

In V1 the tools lived in the SDK, which combined the tool itself and the samples. When venturing into Dev11 land, we decided there was a better way to deliver things to you: read on!

New WIF Samples: The Great Unbundling

If you had the chance to read the recent work of Nicholas Carr, you'll know he is especially interested in the idea of unbundling: in a (tiny) nutshell, traditional magazines and newspapers were sold as a single product containing a collection of different content pieces, whereas the 'net (as in the 'verse) can offer individual articles, with important consequences on business models, filter bubbles, epistemic closure and the like (good excerpt here).
You'll be happy to know that this preamble is largely useless; I just wanted to tell you that instead of packing all the samples in a single ZIP, you can now access each and every one of them as an individual download, again both via browser (code gallery) and from Visual Studio 11's Extensions Manager. The idea is that all samples should be easily discoverable, instead of being hidden inside an archive; also, the browse code feature is extremely useful when you just want to look up something without necessarily downloading the whole sample.

Also in this case we did our best to act on the feedback you gave us. The samples are fully redesigned, accompanied by exhaustive readmes, and their code is thoroughly documented. Ah, and the look & feel does not induce that "FrontPage called from '96, it wants its theme back" feeling. While still very simple, as SDK samples should be, they attempt to capture tasks and mini-scenarios that relate to the real-life problems we have seen you using WIF for in the last couple of years.

ClaimsAwareWebApp
This is the classic ASP.NET application (as opposed to web site), demonstrating basic use of authentication externalization (to the local test STS from the tools).

ClaimsAwareMvcApplication
This sample demonstrates how to integrate WIF with MVC, including some juicy bits like non-blanket protection and code which honors the forms authentication redirects out of the LogOn controller.

ClaimsAwareWebFarm
This sample is an answer to the feedback we got from many of you guys: you wanted a sample showing a farm-ready session cache (as opposed to a token replay cache) so that you can use sessions by reference instead of exchanging big cookies; and you asked for an easier way of securing cookies in a farm.
For the session cache we included a WCF-based one: we would have preferred something based on Windows Azure blob storage, but we are .NET 4.5 only, hence for the time being we went the WCF service route.
For the session-securing part: in WIF 4.5 we have a new cookie transform based on MachineKey, which you can activate by simply pasting the appropriate snippet in the config. The sample demonstrates this new capability. I think I can see Shawn in the audience smiling from ear to ear, or at least that's what I hope. Note: the sample itself is not "farmed", but it demonstrates what you need for making your app farm-ready.

ClaimsAwareFormsAuthentication
This very simple sample demonstrates that in .NET 4.5 you get claims in your principals regardless of how you authenticate your users. It is a simple sample, no alliteration intended, but it makes an important point. You can of course generalize the same concept to other authentication methods as well (extra points… ehhm… claims, if you are in a Windows 8 domain and you do Kerberos authentication).

ClaimsBasedAuthorization
In this sample we show how to use your own ClaimsAuthorizationManager class and the ClaimsAuthorizationModule for applying your own authorization policies. We decided to keep it simple and use basic conditions, but we are open to feedback!

FederationMetadata
In this sample we demonstrate both dynamic generation (on a custom STS) and dynamic consumption (on a very trusting RP: apply this with care!) of metadata documents. As simple as that.

CustomToken
Finally, we have one sample showing how to build a custom token type. Not that we anticipate many of you will need this all that often, but just in case…
We factored the sample in a way that shows how you would likely consume an assembly with a custom token (and satellite types, like resolvers) from a claims issuer (custom STS) and a consumer (in this case a classic ASP.NET app). Ah, and since we were at it, we picked a token type that might occasionally come in useful: the Simple Web Token, or SWT.

Icing on the cake: if you download a sample from Visual Studio 11, you'll automatically get the WIF tools. Many of the samples have a dependency on the new WIF tools, as every time we need a standard STS (i.e. we don't need to show features that can be exercised only by creating a custom STS) we simply rely on the local STS.

The Crew!!!

Normally at this point I would encourage you to go out and play with the new toys we just released, but while I have your attention I would like to introduce you to the remarkable team that brought you all this, as captured at our scenic Redwest location:

…and of course there’s always somebody that does not show up at picture day, but I later chased them down and immortalized their effigies for posterity:

Thank you guys for an awesome, wild half year! Looking forward to fighting again at your side.

A key aspect of any solution that spans the on-premises infrastructure of an organization and the cloud concerns the way in which the elements that comprise the solution connect and communicate. A typical distributed application contains many parts running in a variety of locations, which must be able to interact in a safe and reliable manner. Although the individual components of a distributed solution typically run in a controlled environment, carefully managed and protected by the organizations responsible for hosting them, the network that joins these elements together commonly utilizes infrastructure, such as the Internet, that is outside of these organizations’ realms of responsibility.

Consequently the network is the weak link in many hybrid systems; performance is variable, connectivity between components is not guaranteed, and all communications must be carefully protected. Any distributed solution must be able to handle intermittent and unreliable communications while ensuring that all transmissions are subject to an appropriate level of security.

The Windows Azure™ technology platform provides technologies that address these concerns and help you to build reliable and safe solutions. This appendix describes these technologies.

I start this post by installing Node Package Manager (NPM). This is your gateway to Node nirvana. NPM will allow you to leverage thousands of lines of other people's code to make your Node development efficient.

After installing NPM, we will download the library called underscore. This library has dozens of utility functions; I will walk you through about 12 of them. This will provide a quick background and allow you to start using underscore almost immediately in your code.
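To give a flavor of what underscore offers, here is a minimal, self-contained sketch that hand-rolls two of its best-known helpers, pluck and countBy (these re-implementations are mine, for illustration only; in real code you would run `npm install underscore`, do `var _ = require('underscore');`, and call `_.pluck` / `_.countBy` directly):

```javascript
// Hand-rolled versions of two underscore utilities, for illustration only.
// With the real library: var _ = require('underscore');

// pluck(list, key): extract a single property from each object in a list
function pluck(list, key) {
  return list.map(function (item) { return item[key]; });
}

// countBy(list, fn): count items grouped by the value fn returns for each
function countBy(list, fn) {
  var counts = {};
  list.forEach(function (item) {
    var group = fn(item);
    counts[group] = (counts[group] || 0) + 1;
  });
  return counts;
}

var stooges = [
  { name: 'moe', age: 40 },
  { name: 'larry', age: 50 },
  { name: 'curly', age: 60 }
];

console.log(pluck(stooges, 'name'));
// [ 'moe', 'larry', 'curly' ]

console.log(countBy([1, 2, 3, 4, 5], function (n) {
  return n % 2 === 0 ? 'even' : 'odd';
}));
// { odd: 3, even: 2 }
```

The real library versions behave the same way for these inputs, plus handle edge cases (null lists, property shorthand for countBy) that this sketch skips.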

The next post will continue with some new modules. I recommend doing what I did: I tweaked and combined different examples to better learn how they work together. I plan to post a few more samples on the other modules soon. As always, life is busy. I hope this gets you one step closer to using Node effectively.

This sample implements retry logic to protect the application from crashing in the event of transient errors in Windows Azure. It uses the Transient Fault Handling Application Block to implement the retry mechanism. …

Introduction

When using cloud-based services, it is very common to receive exceptions similar to the one below while performing cache operations such as get and put. These are called transient errors.

Developers are required to implement retry logic to successfully complete their cache operations.

ErrorCode<ERRCA0017>:SubStatus<ES0006>:There is a temporary failure. Please retry later. (One or more specified cache servers are unavailable, which could be caused by busy network or servers. For on-premises cache clusters, also verify the following conditions. Ensure that security permission has been granted for this client account, and check that the AppFabric Caching Service is allowed through the firewall on all cache hosts. Also the MaxBufferSize on the server must be greater than or equal to the serialized object size sent from the client.)

This sample implements retry logic to protect the application from crashing in the event of transient errors. It uses the Transient Fault Handling Application Block to implement the retry mechanism.
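The pattern the application block implements is worth seeing in miniature. Below is a language-neutral sketch in JavaScript of retry-with-exponential-backoff; the function names, delays, and the flaky demo operation are my own inventions, not the block's API (the real block also distinguishes transient errors from permanent ones before retrying):

```javascript
// Minimal sketch of the retry-with-backoff pattern used for transient errors.
// Names and parameters are illustrative, not the Transient Fault Handling
// Application Block's actual API.
function executeWithRetry(operation, maxRetries, baseDelayMs) {
  var attempt = 0;
  while (true) {
    try {
      return operation();                   // e.g. a cache get or put
    } catch (err) {
      attempt++;
      if (attempt > maxRetries) throw err;  // retries exhausted: give up
      var delay = baseDelayMs * Math.pow(2, attempt - 1); // 10, 20, 40, ...
      busyWait(delay);                      // back off before the next attempt
    }
  }
}

function busyWait(ms) {
  // Synchronous wait for demo purposes; production code would use
  // setTimeout or async/await instead of spinning.
  var end = Date.now() + ms;
  while (Date.now() < end) { /* spin */ }
}

// Demo: an operation that fails twice with a "transient" error, then succeeds.
var calls = 0;
function flakyCacheGet() {
  calls++;
  if (calls < 3) throw new Error('ErrorCode<ERRCA0017>: temporary failure');
  return 'My Cache';
}

console.log(executeWithRetry(flakyCacheGet, 5, 10)); // prints "My Cache"
```

The application block wraps exactly this loop (plus transient-error detection and configurable retry strategies) behind a reusable policy object, so your cache calls don't each need their own hand-written loop.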

Building the Sample

2) Modify the highlighted cacheNamespace and authorizationInfo attributes under the DataCacheClient section of web.config and provide the values of your own cache namespace and authentication token. Steps to obtain the authentication token and cache namespace values can be found here

Running the Sample

Click the “Add To Cache” button to add a string object to the Azure cache. Upon successful operation, the message “String object added to cache!” will be printed on the webpage

Click the “Read From Cache” button to read the string object from the Azure cache. Upon successful operation, the value of the string object stored in the Azure cache will be printed on the webpage. By default it will be “My Cache” (if no changes are made to the code)

Using the Code

1. Define the required objects globally, so that they are available to all code paths within the module.

Windows 8 recently reached the Consumer Preview milestone. ISVs are downloading the preview, and developers are eagerly building and testing new Metro style app experiences and sketching out what unique Windows 8 Metro style apps can add to their portfolios.

The new version of the Windows Azure Toolkit for Windows 8 helps Azure applications reach Windows 8 and helps Windows developers leverage the power of Azure. In just minutes, developers can download working code samples that use Windows Azure and the Windows Push Notification Service to send Toast, Tile and Badge notifications to a Windows 8 Metro Style application.

After a long time of preparation, my company Computer Life and I are ready to publish the next, commercial, version of CLASS Extensions.

In this SkyDrive location you can find two videos briefly presenting the capabilities of this new version. In less than a week a page will be started here, where you will be able to buy the package you prefer. There will be two packages: one without source code, and one with source code and code documentation. Pricing will be announced along with the official release.

In these videos you will find a brief demonstration of the following controls and/or business types:

Color Business Type and Control. This was available also in the first free version. It has been improved to automatically detect grid or detail and read-only or not.

BooleanCheckbox and BooleanViewer controls. These were also available in the first version. No changes to these.

Rating Business Type and Control. A new business type to support ratings. You can set the maximum rating and star size, show/hide stars and/or labels, and allow/disallow half stars.

Image Uri Business Type and Control. A new business type to support displaying images from a URI location instead of binary data.

Audio Uri Business Type and Control. A new business type to support playing audio files from a URI location.

Video Uri Business Type and Control. A new business type to support playing video files from a URI location.

Drawing Business Type and Control. A new business type that allows the user to create drawings using an image URI (not binary data) as background. The drawing can be saved either as a property of an entity (the underlying type is string) or in the IsolatedStorage of the application. Also, every time the drawing is saved, a PNG snapshot of it is available and can be used any way you want, with two lines of additional code. The drawings, whether stored locally or provided by a data service, can be retrieved at any time, modified, and saved back again.

FileServer Datasource Extension. A very important RIA Services datasource provided to smoothly integrate all the URI-based features. This datasource exports two EntityTypes: ServerFile and ServerFileType. It also provides methods to retrieve ServerFile entities representing the content files stored at your application server (Image, Audio, Video, and Document are the ServerFileTypes supported in the current version). You can also delete ServerFile entities. Upload (insert/update) is not implemented in this version, as it has to be done differently depending on whether your application is running out of browser or not.

Last but not least. There is a list of people whose code and ideas helped me a lot in the implementation of this version. Upon official release they will be properly mentioned as community is priceless.

Microsoft Visual Studio 11 Beta and the Windows 8 Consumer Preview were made available on Feb 29th, 2012. They will be valid through June 2012.

The Xpert360 development team took the beta for a spin with the latest versions of the Xpert360 Lightning Series product builds: WCF Ria Service data source extensions for LightSwitch and .NET 4 that connect to salesforce and Dynamics CRM Online instances.

VS11 Beta Premium Applying: LightSwitch Beta Core

After the VS11 install a quick rebuild of the data extensions VSIX in VS2010 pulled in the latest software versioning as shown below:

Clip for LightSwitch Data Extension vsixmanifest

The new VSIX files now prompt for the version of Visual Studio if not already installed and within ten minutes of the VS11 install we are building our first Visual Studio LightSwitch 11 application to interact with our CRM test systems.

LightSwitch project templates in VS11 Beta

Then we create new data connections with the Xpert360 Lightning Data Extensions.

LightSwitch designer - Choose a WCF Ria Service!

… and we move on and choose some of the CRM entities exposed by the service.

The chosen entities appear against the data source in the LightSwitch designer and can be explored and manipulated as usual. Notice the automatically available entity relationships between the salesforce entities.

The Xpert360 Lightning Series data extensions will unleash the true power of LightSwitch onto your salesforce CRM and Dynamics CRM Online data very soon. They are currently undergoing private beta testing, which will now be extended to include the VS11 Beta as this platform has a go-live license.

In this blog post, I’m going to describe how LightSwitch developers can programmatically access the security data contained in a LightSwitch application. Having access to this data is useful for any number of reasons. For example, you can use this API to programmatically add new users to the application.

This data is exposed in code as entities, just like your own custom data, so it's an easy and familiar API to work with. And it's available from both client and server code.

First I’d like to describe the security data model. Here’s a UML diagram describing the service, entities, and supporting classes:

SecurityData

This is the data service class that provides access to the security entities as well as a few security-related methods. It is available from your DataWorkspace object via its SecurityData property.

SecurityData is a LightSwitch data service and behaves in the same way as the data service that gets generated when you add a data source to LightSwitch. It exposes the same type of query and save methods. It just operates on the built-in security entities instead of your entities.

Some important notes regarding having access to your application’s security data:
In a running LightSwitch application, users which do not have the SecurityAdministration permission are only allowed to read security data; they cannot insert, update, or delete it. In addition, those users are only able to read security data that is relevant to themselves. So if a user named Bob, who does not have the SecurityAdministration permission, queries the UserRegistrations entity set, he will only see a UserRegistration with the name of Bob. He will not be able to see that there also exists a UserRegistration with the name of Kim since he is not logged in as Kim. Similarly with roles, if Bob queries the Roles entity set, he can see that a role named SalesPerson exists because he is assigned to that role. But he cannot see that a role named Manager exists because he is not assigned to that role.
Users which do have the SecurityAdministration permission are not restricted in their access to security data. They are able to read all stored security data and have the ability to modify it.

In addition to entity sets, SecurityData also exposes a few useful methods:

ChangePassword
This method allows a user to change their own password. They need to supply their old password in order to do so. If the oldPassword parameter doesn’t match the current password or the new password doesn’t conform to the password requirements, an exception will be thrown. This method is only relevant when Forms authentication is being used. This method can be called by any user; they do not require the SecurityAdministration permission.

GetWindowsUserInfo
This method validates and resolves a Windows account name into a normalized value and retrieves the full name (display name) of that user. As an example, if you passed “kim@contoso.com” as the parameter to this operation, it would return a WindowsUserInfo object with the FullName property set to “Kim Abercrombie” and the NormalizedUserName property set to “contoso\kim”. If the service is unable to resolve the username parameter to a known account, an exception will be thrown. This method is only relevant when Windows authentication is being used. It can only be called by users that have the SecurityAdministration permission.

IsValidPassword
This method checks whether the supplied password meets the password requirements of the application. This happens automatically when attempting to add a new user or updating their password but this method allows the caller to preemptively check the password which can be useful for validation UI purposes. This method is only relevant when Forms authentication is being used. This method can only be called by users that have the SecurityAdministration permission.

UserRegistration

The UserRegistration entity represents a user that has access to the application. (In VS 11, it can also represent an Active Directory security group when Windows authentication is being used). When using Forms authentication, all the UserRegistration properties (UserName, FullName, Password) are required. When using Windows authentication, only the UserName property is required; FullName is populated dynamically based on the UserName and Password is irrelevant.

Role

The Role entity represents a grouping of users with common characteristics. Examples of roles in an application include “Sales Person” and “Manager”. You can then configure your application security around these roles.

Here is some example code using the Role entity:

C#:

// Create a new role
Role role = this.DataWorkspace.SecurityData.Roles.AddNew();
role.Name = "Manager";
this.DataWorkspace.SecurityData.SaveChanges();

// Iterate through all Roles
foreach (Role role in this.DataWorkspace.SecurityData.Roles)
{
}

VB:

' Find which roles Kim is assigned to
Dim user As UserRegistration = Me.DataWorkspace.SecurityData.UserRegistrations_Single("contoso\kim")
For Each roleAssignment As RoleAssignment In user.RoleAssignments
    Dim role As Role = roleAssignment.Role
Next

Permission

The Permission entity represents an action that a logged-in user is permitted to perform within the application. Examples of permissions in an application include “CanRejectSalesOrder” and “CanUpdateInventory”. Permissions differ from the rest of the security data in that they are read-only. The permissions that are available map to the permissions defined within the Access Control tab of the application properties of a LightSwitch project:

Here is some example code using the Permission entity:

C#:

// Iterate through all Permissions
foreach (Permission permission in this.DataWorkspace.SecurityData.Permissions)
{
}

VB:

' Find which permissions are assigned to the Manager role
Dim role As Role = Me.DataWorkspace.SecurityData.Roles_Single("Manager")
For Each rolePermission As RolePermission In role.RolePermissions
    Dim permission As Permission = rolePermission.Permission
Next

Conclusion

Using the SecurityData service and the entities it exposes, you can accomplish all kinds of interesting scenarios that require dynamic management of your application’s security data. I hope you’ve found this blog post helpful in your work. Happy coding.

We took your feedback and drew inspiration from the Silverlight Cosmopolitan theme in designing the extension. The shell and theme provide a modern UI with a more immersive feel, simple and clean styling for the controls, and many other improvements:

Branding. The shell now displays your corporate logo (provided as part of your LightSwitch project) at the top of the application.

Navigation menu. Similar to many web-style applications, the navigation menu is now at the top of the application. When you click on a menu item, a dropdown will appear with a list of screens. This frees up more screen real estate to be devoted to the active screen, providing a more immersive experience for the users of your applications.

Command bar. We moved the screen command bar to the bottom of the application. This provides a better visual association between the commands and the screen.

List/Grid. We optimized the UI and removed clutter in the List and Grid.

The team plans to make this the default shell and theme when we ship VS11 (you will still have the option to use the existing Office-style shell and theme). We also plan to release the source code for the new shell and theme extension to the community at that time.

We'd love to hear your feedback! Please give it a try and help us improve the extension in preparation for shipping VS11!

One of the questions I often get around Windows Azure is: “Is Windows Azure interesting for me?”. It’s a tough one, because most of the time when someone asks that question they currently already have a server somewhere that hosts 100 websites. In the full-fledged Windows Azure model, that would mean 100 x 2 (we want the SLA) = 200 Windows Azure instances. And a stroke at the end of the month when the bill arrives. Microsoft’s DPE team have released something very interesting for those situations though: the Windows Azure Accelerator for Web Roles.

In short, the WAAWR (no way I'm going to write Windows Azure Accelerator for Web Roles out every time!) is a means of creating virtual web sites on the IIS server running on a Windows Azure instance. Add "multi-instance" to that and you have a free tool that creates a server farm for you! Among its features:

A web administrator portal for managing web sites deployed to the role

The ability to upload and manage SSL certificates

Simple logging and diagnostics tools

Interesting… Let’s go for a ride!

Obtaining & installing the Windows Azure Accelerator for Web Roles

Installing the WAAWR is as easy as download, extract, run buildme.cmd, and you're done. After that, Visual Studio 2010 (or Visual Web Developer Express!) features a new project template:

Click OK, enter the required information (such as a storage account that will be used for synchronizing the different server instances, and an administrator account), enable remote desktop, and publish. That's it. I've never set up a web farm more quickly than that.

Creating a web site

After deploying the solution you created in Visual Studio, browse to the live deployment and log in with the administrator credentials you created when creating the project. This will give you a nice looking web interface which allows you to create virtual web sites and have some insight into what’s happening in your server farm.

I’ll create a new virtual website on my server farm:

After clicking create we can try to publish an ASP.NET MVC application.

Publishing a web site

For testing purposes I created a simple ASP.NET MVC application. Since the default project template already has a high enough “Hello World factor”, let’s immediately right-click the project name and hit Publish. Deploying an application to the newly created Windows Azure webfarm is as easy as specifying the following parameters:

One Publish click later, you are done. And deployed on a web farm instance, I can now see the website itself but also… some statistics :-)

Conclusion

The newly released Windows Azure Accelerator for Web Roles is, IMHO, the easiest, fastest way to deploy a multi-site web farm on Windows Azure. Other options, like the ones proposed by Steve Marx on his blog, do work, but are only suitable if you are billing your customer by the hour.

The fact that it uses Web Deploy to publish applications to the system, and that this just works behind a corporate firewall and an annoying proxy, is simply fabulous!

This also has a downside: if I want to push my PHP applications to Windows Azure in a jiffy, chances are this will be a problem. Not on Windows (though not ideal there either), but certainly when publishing apps from other platforms. Is that a self-imposed challenge? Nope. Web Deploy does not seem to have an open protocol (that I know of), and while reverse engineering it is possible, I will not risk the legal consequences :-) However, some reverse engineering of the WAAWR itself taught me that websites are stored as a ZIP package on blob storage, and there's a PHP SDK for that. A nifty workaround is thus possible, if you can get your head around the ZIP file folder structure.

My conclusion in short: if you ever receive the question "Is Windows Azure interesting for me?" from someone who wants to host a bunch of websites on it, the answer is: it is. And it's easy.

The report says, "The personal cloud will begin a new era that will provide users with a new level of flexibility with the devices they use for daily activities, while leveraging the strengths of each device, ultimately enabling new levels of user satisfaction and productivity. However, it will require enterprises to fundamentally rethink how they deliver applications and services to users."

OK, what the hell is a personal cloud?

Gartner's conclusions build on the topic of my last blog, which I wrote before seeing this report. My concept was that the use of cloud-based resources provides us with the flexibility to work with many different devices, and the personal cloud will become the new center of our digital universe.

I've been living this life for some time now, with several computers and mobile devices all accessing the same cloud services: document sharing, file system, email, and so on. I'm not alone. In the past, I carried around one laptop with everything on it, and further back, I carried a box of 3.5-inch disks between my work and home tower computers.

Things have changed for the better, and the use of consumer-oriented personal, public, and private clouds is the driving force. I agree with Gartner on this one: the game is shifting.

The trouble is that most enterprises don't see this one coming, which the Gartner report also points out. But users do. Traditional IT needs to understand the value and the use of personal clouds, as well as how they can work and play with existing enterprise data and application resources. These plans need to be created now. Otherwise, this wave will crush enterprise IT. You've been warned.

Microsoft recently came clean about the root cause of its Windows Azure outage and promised a 33% reduction in many customers’ monthly bills. While this generally impressed analysts, partners and customers, some Azure shops have reservations about the public cloud service.

The company fixed the Azure service disruptions relatively quickly and released a "root cause analysis" on March 9, more than one week after the Feb. 29 Azure outage. Microsoft made efforts to go beyond the recompense promised in its service-level agreements (SLAs), but some complained those moves were a bit tardy.

Further, one source with access to the Azure team and knowledge of what has become known as the "Leap Year Outage" said it has cost Microsoft some credibility -- and money.

"Azure is losing some customers [over the outage]," the source, who requested anonymity, said, adding that the outage was "not crippling" for users.

But most industry watchers say the outage wasn’t significant enough to cause customers to jump ship.

No data was lost, but it's embarrassing that one line of code could cause an outage.

Rob Sanfilippo, analyst.

Roger Jennings, a Windows Azure MVP and developer, said the outage only affected one of his demo apps, and that was only down for 35 minutes. “I doubt if Microsoft lost any customers over the leap day outage; the same is true for Amazon with their last extended outage," Jennings added.

One analyst agrees with Jennings.

"I haven't heard of any customers leaving due to this," said Rob Sanfilippo, research vice president at Directions on Microsoft, an independent analysis firm based in Kirkland, Wash.

Microsoft did not respond to questions regarding whether the company lost customers from the outage at time of publication.

The Azure outage began late in the afternoon of February 28 (00:00 on February 29, Greenwich Mean Time) when an SSL security certificate that had not been properly coded to deal with the extra day in the month failed, causing a rolling service outage. In response, Microsoft technicians disabled cloud management services globally to keep customers from damaging running processes.

The upshot was virtual machines (VM) that were already running would continue to run but could not be managed, and new VMs could not be started. Technicians got most of the systems repaired within about 12 hours but, rushing to put a fix in place, they inadvertently caused a secondary outage that stretched over 24 hours for some customers in three major sub-regions -- North Central U.S., South Central U.S., and North Europe.

Ten days later, Bill Laing, corporate vice president of Microsoft's Server and Cloud Division, posted the incident post mortem on the Windows Azure Team Blog. Besides providing a blow-by-blow description of the incident, including the human errors, and discussion of steps Microsoft is taking to avoid future problems, he also announced the customer rebates.

"We have decided to provide a 33% credit to all customers of Windows Azure Compute, Access Control, Service Bus and Caching for the entire affected billing month(s) for these services, regardless of whether their service was impacted," Laing said.

That's a fair offer which, while it doesn't reimburse customers for lost income, is at least more than the payouts guaranteed under Azure's SLA terms, Sanfilippo said. "No data was lost, but it's embarrassing that one line of code could cause an outage."

Anecdotally at least, some customers bear out that assertion.

"We did not lose any income, data or receive any complaints from clients," said John Anastasio, partner and CTO at KGS Buildings, an Azure customer, adding that he was generally "open minded" about the outage.

Mark Eisenberg, director at Microsoft Silver Partner Fino Consulting, said most cloud customers recognize that cloud computing is still nascent, and tend to be more forgiving of outages due to cloud technologies' early adopter status.

"Coming clean after the fact was the right thing to do," Eisenberg said. "It was just a bad day."

It’s a stretch to base a headline on a quote from one anonymous source when several attributed sources say the opposite.

Well, the day has finally come; we’re at the end of this series of Azure training. In this final episode RMitch talks Topics, AppFabric Composition, Windows Azure Platform Appliance, and German data laws.

One thing we’re looking at is Topics, an extension of Queues that supports multiple subscribers: you can have one publisher with lots and lots of subscribers. This is very different from a Queue. Messages, say jpeg or gif processing jobs, can be routed to different worker roles depending on what you want to happen.
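The fan-out idea can be sketched in a few lines. This is a toy model of the Topic concept, not the actual Azure Service Bus SDK; the class and method names are invented for illustration:

```python
from collections import defaultdict

class Topic:
    """Toy model of a Service Bus Topic: one publisher, many
    filtered subscriptions. Illustrative only, not the Azure SDK."""

    def __init__(self):
        self.subscriptions = defaultdict(list)  # name -> delivered messages
        self.filters = {}                       # name -> predicate

    def subscribe(self, name, predicate=lambda msg: True):
        # Each subscription can declare a filter over message properties.
        self.filters[name] = predicate
        self.subscriptions[name]  # ensure the subscription exists

    def publish(self, msg):
        # Unlike a Queue, where one consumer takes each message,
        # every subscription whose filter matches gets its own copy.
        for name, predicate in self.filters.items():
            if predicate(msg):
                self.subscriptions[name].append(msg)

topic = Topic()
topic.subscribe("jpeg-workers", lambda m: m["type"] == "jpeg")
topic.subscribe("gif-workers",  lambda m: m["type"] == "gif")
topic.subscribe("audit",        lambda m: True)

topic.publish({"type": "jpeg", "blob": "photo1"})
topic.publish({"type": "gif",  "blob": "anim1"})

print(len(topic.subscriptions["jpeg-workers"]))  # 1
print(len(topic.subscriptions["audit"]))         # 2
```

The key design point is that the publisher never knows who is listening; adding a new worker role is just a new subscription with its own filter.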

When designing an application, you figure you need some basic things (storage, tables, and so on), which can become very complicated to set up and manage. With AppFabric Composition you can design everything in one place, simplifying things for you.

There’s also a bit about some weird German laws, but that’s for another time.

I normally don't announce commercial classes... but this is a special case. On Thursday (3/22) and Friday (3/23) there is going to be a Kanban for DevOps class here in Silicon Valley (Sunnyvale, CA to be precise).

The class is being taught by Dominica Degrandis who has been following the DevOps movement for David J. Anderson and Associates (widely recognized as an authoritative source of Kanban knowledge and training... probably because they wrote the book).

Kanban has been around for a while in the Dev world (and of course for decades in the manufacturing world). Recently, the idea of using Kanban to bring Dev and Ops together under a common visualization of flow is picking up steam in the DevOps community. This is going to be the first class of its kind. In addition to Dominica's curriculum, I'm just as excited about the interaction with the other attendees (Gene Kim, Alex Honor, John Willis, etc.). The discount rate is $800 (use the code DEVOPSDELIVER). I have no financial stake in the class. I just think it will be well worth it. If you have questions please direct them to Dominica at dominica@djaa.com.

But wait, there is more...

On the evening of Thursday (3/22), we are going to be having a special edition of the Silicon Valley DevOps Meetup. Most of the attendees from the Kanban for DevOps class are going to be there. Of course we'll talk a lot about DevOps and Kanban, but Gene Kim is also going to be there to discuss the research (and perhaps do some live reading) from his new book "When IT Fails: The Novel". Like all SV DevOps Meetups, this is a free event (and rumor has it that Enstratus is springing for the pizza).

The IT thought leaders of tomorrow are today building hybrid clouds spanning extremely efficient, vertically scalable data centers with powerful and increasingly software-centric infrastructure. They’re building private clouds as their base and then renting public clouds for the spike. Today they are experiencing the challenges that will be rippling through Fortune 500 companies during the next 5 years as enterprises become more IT-centric and data centers replace ever more factories and file cabinets.

The cloud leaders of today are leading the development of advanced electrical and mechanical infrastructures and cooling systems that deliver strategic operating advantage in the form of higher availability and significantly reduced operating (power) expenses, while they rent the spike from the public players. Once they cross 500 kW of power, aligning electrical and mechanical design with the infrastructure becomes a material issue. At 1 MW it’s a strategic issue.

So we’re seeing public cloud-enabled players migrate into private clouds as they cross the 500 kW threshold and lay out plans for exceeding 1 MW. As the public cloud players reduce prices, I cannot help but wonder whether hybrid cloud emigration was a factor in their decision.

The business case for vanilla data centers (containerized or modular, one-design-fits-all architectures for all operating models) starts to break down, especially for highly custom infrastructures in high-growth environments. So we’ll see the development of data center campuses with varied electrical and mechanical architectures and power densities, allowing for scale and customization, supported by dedicated substations (substation power pricing, 2N high voltage from substation to the server floor) and a host of new innovations.

Note: Contrast this vision of a dynamic mesh of electrons pulsating between customers, vendors and partners today in the Internet sector with this recent “chilling” report from Computerworld on IT at the White House.

Also: I’m assembling a panel for the upcoming Future in Review conference on what every CIO should know about cloud. We’re lining up some of the best minds on cloud from companies like Boeing, VMware and PayPal to talk about when and where cloud can deliver strategic advantage over legacy IT (including floppy drives, LOL).

Welcome to the latest edition of our weekly roundup of the latest community-driven news, content and conversations about cloud computing and Windows Azure. Let me know what you think about these posts via comments below, or on Twitter @WindowsAzure. Here are the highlights from last week.

Send us articles that you’d like us to highlight, or content of your own that you’d like to share. And let us know about any local events, groups or activities that you think we should tell the rest of the Windows Azure community about. You can use the comments section below, or talk to us on Twitter @WindowsAzure.

I'm pleased to announce that the AWS Storage Gateway is now available in our South America (São Paulo) Region. The AWS Storage Gateway service connects an on-premises software appliance with cloud-based storage to integrate your existing on-premises applications with the AWS storage infrastructure in a seamless, secure, and transparent fashion. As I wrote in my overview when we launched the service in January, I see the AWS Storage Gateway being put to use in a number of interesting ways, including:

Disaster Recovery and Business Continuity - You can reduce your investment in hardware set aside for Disaster Recovery using a cloud-based approach. You can send snapshots of your precious data to the cloud on a frequent basis and use our VM Import service to move your virtual machine images to the cloud.

Backup - You can back up local data to the cloud without worrying about running out of storage space. It is easy to schedule the backups, and you don't have to arrange to ship tapes off-site or manage your own infrastructure in a second data center.

Data Migration - You can move data from your data center to the cloud, and back, with ease. For example, if you’re running development and test environments in EC2, you can use the Storage Gateway to provide these environments with ongoing access to the latest data from your production systems on-premises.

Using the AWS Storage Gateway, data stored in your current data center can be backed up to Amazon S3 over an encrypted channel, where it is stored as Amazon EBS snapshots. Once there, you will benefit from S3's low cost and intrinsic redundancy. In the event you need to retrieve a backup of your data, you can easily restore these snapshots locally to your on-premises hardware. You can also access them as Amazon EBS volumes, enabling you to easily mirror data between your on-premises and Amazon EC2-based applications.
With today’s release, the AWS Storage Gateway is now available in all seven of the public AWS Regions. You can choose to upload your on-premises data to the Region (full map here) that is closest to you. You can also select the Region location that best addresses your special regulatory or business constraints for storing data.

You can get started easily with our free 60-day trial of the AWS Storage Gateway. Simply visit the Management Console to begin. After your trial ends, there is a charge of $125 per month for each activated gateway. Snapshot storage pricing starts at $0.125 per gigabyte-month in the US-East Region and $0.170 per gigabyte-month in our South America Region. If you have an hour or so, please watch the recorded version of our Storage Gateway webinar.
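As a quick sanity check on the quoted prices, here is a sketch of the monthly arithmetic. It uses only the figures stated in the post and deliberately omits anything not quoted there (data transfer, request charges, and so on), so treat it as illustrative rather than an official calculator:

```python
def monthly_cost(gateways: int, snapshot_gb: float, region: str = "us-east") -> float:
    """Estimate monthly AWS Storage Gateway charges from the
    prices quoted in the post: $125 per activated gateway, plus
    $0.125/GB-month (US-East) or $0.170/GB-month (South America)."""
    per_gb = {"us-east": 0.125, "sa-east": 0.170}[region]
    return gateways * 125 + snapshot_gb * per_gb

# One gateway holding 400 GB of snapshots:
print(monthly_cost(1, 400))                       # 175.0 in US-East
print(round(monthly_cost(1, 400, "sa-east"), 2))  # 193.0 in South America
```

The gateway fee dominates at small snapshot volumes; snapshot storage only overtakes it past roughly 1 TB in US-East.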

Extending our cloud storage service into our customers’ datacenters has very interesting challenges. We have taken the first step by simplifying the connection of customers’ existing on-premises applications with AWS storage, but we have plans to do a lot more. If you want to play a part in this exciting space, consider joining the AWS Storage Gateway team. Below are our open positions (these jobs are all based in Seattle):

The dual Web role application has been running in Microsoft's South Central US (San Antonio) data center since September 2009. I believe it is the oldest continuously running Windows Azure application.

About Me

I'm a Windows Azure Insider, a retired Windows Azure MVP, the principal developer for OakLeaf Systems and the author of 30+ books on Microsoft software. The books have more than 1.25 million English copies in print and have been translated into 20+ languages.

Full disclosure: I make part of my livelihood by writing about Microsoft products in books and for magazines. I regularly receive free evaluation software from Microsoft and press credentials for Microsoft Tech•Ed and PDC. I'm also a member of the Microsoft Partner Network.