Today, during my presentation at Microsoft DevDays “Make Web not War” I got a pretty good question about concurrency, and I left the answer somewhat blurry and without a straight conclusion. Sorry, but we were changing subjects so fast that I missed it, and I only realized it on my way back.

The answer is yes, there is concurrency. If you examine a record in your table storage you’ll see that there is a Timestamp field, the so-called “ETag”. Windows Azure uses this field to apply optimistic concurrency to your data. When you retrieve a record from the database, change a value, and then call “UpdateObject”, Windows Azure will check whether the timestamp field has the same value on your object as it does in the table; if it does, the update goes through just fine. If it doesn’t, it means someone else changed the record and you’ll get an exception which you have to handle. One possible solution is to retrieve the object again, apply your changes, and push it back. The final approach to concurrency is entirely up to the developer and varies between different types of applications. …
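To make the retry pattern concrete, here is a minimal, self-contained sketch in Python (not the Azure SDK) of the retrieve-mutate-update loop with an ETag check. The `Table`, `PreconditionFailed`, and `update_with_retry` names are illustrative stand-ins for the real storage client, which signals a conflict with an HTTP 412 response:

```python
class PreconditionFailed(Exception):
    """Raised when the stored ETag no longer matches ours (simulates HTTP 412)."""

class Table:
    """Toy table that enforces optimistic concurrency with an ETag counter."""
    def __init__(self):
        self._rows = {}  # key -> (etag, entity dict)

    def insert(self, key, entity):
        self._rows[key] = (0, dict(entity))

    def get(self, key):
        etag, entity = self._rows[key]
        return etag, dict(entity)  # hand back a copy plus its ETag

    def update(self, key, entity, etag):
        current_etag, _ = self._rows[key]
        if etag != current_etag:
            raise PreconditionFailed(key)  # someone changed the row since our read
        self._rows[key] = (current_etag + 1, dict(entity))

def update_with_retry(table, key, mutate, max_attempts=3):
    """Retrieve-mutate-update loop: on a conflict, re-read and try again."""
    for _ in range(max_attempts):
        etag, entity = table.get(key)
        mutate(entity)
        try:
            table.update(key, entity, etag)
            return entity
        except PreconditionFailed:
            continue  # someone else won the race; re-read and retry
    raise RuntimeError("gave up after %d attempts" % max_attempts)
```

How many times to retry, or whether to surface the conflict to the user instead, is exactly the per-application decision the answer above refers to.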

Brad provides more details on each new feature and links to additional documentation.

• Mike Flasko announced that v1.5 of ADO.NET Data Services will be released as “an in-place update of the data services binaries that shipped with the .NET Framework 3.5 SP1 (instead of a SxS release as was the case for the v1.5 CTP1 release)” in his Data Services Release Plan Update post of 11/25/2009:

One area we heard consistent feedback on was the desire to limit the number of Data Service releases where possible. To address this need we have decided to change the ship vehicle for the release to be an in-place update of the data services binaries that shipped with the .NET Framework 3.5 SP1 (instead of a SxS release as was the case for the v1.5 CTP1 release). The release will ship as a redistributable update to the .NET Framework 3.5 SP1, which will include enhanced data services runtime assemblies and an updated datasvcutil.exe command line tool. As you’d expect, since the release is shipping as an update to the .NET Framework 3.5 SP1, all the binaries shipped with the release will be named as they were in the .NET Framework 3.5 SP1 release (i.e. System.Data.Services.*.dll). In addition to shipping an update to the .NET Framework 3.5 SP1, we’ll release a new Silverlight 3-compatible client library and the .NET-related client and server features from the .NET 3.5 SP1 update release will be directly included in the subsequent .NET Framework 4.0 release.

We will also be changing the release name to better convey that it is an in-place update to the .NET Framework 3.5 SP1. We’ll be dropping “v1.5” from the name and simply calling it the Data Services Update for the .NET Framework 3.5 SP1.

At PDC last week, we introduced two communication-related capabilities: a) inter-role communication and b) external endpoints on worker roles. These capabilities enable new application patterns in Windows Azure-hosted services.

Inter-role Communication

While loosely coupled communication via Queues remains the preferred method for reliable message processing, roles can now communicate directly using TCP, HTTP, or HTTPS connections. In addition, roles are notified as role instances within the deployment are added or removed, enabling elasticity. A common application pattern enabled by this is client-server, where the server could be an application such as a database or a memory cache.

External Endpoints on Worker Roles

Worker roles can now contain external facing endpoints, or InputEndpoints. You can bind to these endpoints either directly in the worker role or from within a process that you spawn from the worker role. Unlike the InternalEndpoints used by inter-role communication, InputEndpoints are load balanced.

A common application type enabled by this is a self-hosted Internet-exposed service, such as a custom application server.

Note that the port that actually gets assigned to the instances is different from the port that gets exposed via the load balancer. This port can be discovered via the RoleEnvironment.CurrentRoleInstance property.

Microsoft Corp. is creating technology to give businesses more fine-grained control over access to data stored in the company's upcoming SQL Azure database-as-a-service, a senior engineer said Tuesday.

If you want to use the ASP.NET Providers (membership, role, personalization, profile, web event provider) with SQL Azure, you'll need to use the following scripts or the aspnet-regAzure.exe tool to set up the database: http://support.microsoft.com/default.aspx/kb/2006191

Currently the only provider which is not supported on SQL Azure is the session state provider.

Personally, I like using SSMS 2008 R2 to connect to SQL Azure and running the scripts from the Query window. (If you already have SSMS 2008 installed, you can use that as well; just connect from the Query window itself, not the Object Explorer, as that will fail.)

Now you can actually see a quick demo of one of these services and read some of the media coverage.

You can see the product demo if you watch the recording of Kim Cameron’s identity keynote session (the Quest OnDemand demo starts approximately at the 35:00 mark). If you don’t have Silverlight, here are the recording files in downloadable format:

In the same spirit of experimentation shown here, over the last year I’ve been using another fairly original presentation technique. The original aim was to mitigate my being chronically late in turning in slides for events, but it turned out to be something that audiences actually like :-).

The technique is easy to explain, and I am sure that somebody is using it already (although I’ve never stumbled on anybody doing it so far). Instead of having fully baked slides, you have just a few elements appearing at strategic moments; you hand-draw everything else on the fly, directly during the presentation. I finally got a good recording of a session using the technique, the “Windows Identity Foundation Overview” I gave last week at PDC09. It went really well, and judging from the comments the drawing was a contributing factor (BTW, thanks for all the nice comments on Twitter and in the evals! :-)) [Emphasis Vibro’s.]

The .NET Service Bus is part of the new Microsoft cloud computing Windows Azure initiative, and arguably it is the most accessible, ready-to-use, powerful, and needed piece. The service bus allows clients to connect to services across any machine, network, firewall, NAT, router, load balancer, virtualization layer, IP address, and DNS configuration as if they were part of the same local network, and it does all that without compromising on the programming model or security. The service bus also supports callbacks, event publishing, authentication, and authorization, all in a WCF-friendly manner.

This session will present the service bus programming model; how to configure and administer service bus solutions; how to work with the dedicated relay bindings, including the available communication modes; how to rely on authentication in the cloud for local services, and the various authentication options; and how to provide end-to-end security through the relay service. You will also see some advanced WCF programming techniques, original helper classes, productivity-enhancing utilities and tools, as well as a discussion of design best practices and pitfalls.

Apparently, the new Windows Azure AppFabric Service Bus and Access Control nomenclature hasn’t gotten out to everyone. (Windows Server AppFabric is a set of integrated technologies that “make it easier to build, scale and manage Web and composite applications that run on IIS.”)

If you want the full gory details, check out the .NET Services team blog post here. What follows below are some of the things that I think are most crucial to understand both for new developers and for developers unfortunate enough to be in a position of having to migrate a lot of code. Quite possibly the single most important thing to note is this:

If you bought a book on Windows Azure that has already been released or will be released within the next month or two, it is out of date and completely irrelevant. PDC (along with the changes I'm going to outline below) will substantially change all of the Azure offerings.

I've trimmed a little bit because some of the breaking changes are fairly minor and don't have too much impact on developers.

Kevin continues with his list of important breaking changes in the AppFabric.

I disagree that Cloud Computing with the Windows Azure Framework is “out of date and completely irrelevant.” Kevin appears to base his judgment only on changes to the Azure AppFabric (nee .NET Services), which are dramatic. However, only minor and, for the most part, non-breaking changes have been made to Azure Data Services: Blobs, Queues, and Tables, as evidenced by several of the book’s sample applications running on the South Central US data center. SQL Azure received only a few minor updates in its November 2009 preview. The book’s sample applications will be updated and updates will be made to online chapters as time permits. I expect my book to continue to be a primary source of development know-how for Azure’s commercial version.

Books that publish before Visual Studio 2010’s release will be subject to late-breaking changes in Azure’s programming tools. The Azure team expects to deliver significant updates to the Azure Services Platform’s commercial version every eight months. Thus some elements of any Azure book will become outdated by the time it’s published. An investment in a book-length Windows Azure programming guide will return many times its cost in development time and effort saved.

Recently I installed the Beta 2 version of "Geneva", or ADFS 2.0. All of my machines are now Windows 7 machines, including just about all of my VHDs and virtual machines. The only time I use Win2k8 R2 is when the product I'm installing specifically requires me to do that. So when I installed Geneva on my Win7 box, I thought everything would be fine.

Then I rebooted. The "Modify STS Reference..." and "Update federation metadata" menu items that are supposed to be added to the list of available options on an ASP.NET web application were gone. They were there before I rebooted but they were gone after. I also noticed something funny with the Identity training kit install. Every single directory and file in there was marked as "read only". I would unset the read-only flag, right click the file, get properties, and sure enough, it was still set to read only. WTF?

He goes on to explain the problem with “blocked content” from the Internet and how to unblock it to prevent the issues he describes.

New Guide

Back in August we released a guide that explained how to use WIF to add SSO and claims-based identity capabilities to your web role via WS-Federation. That guide contained a number of workarounds made necessary by the limitations of the bits publicly available back then. A lot of you wrote back saying that the guide was helpful in getting you going with identity & the cloud and experimenting with the scenario (I believe that was the case here, for example) while waiting for more complete guidance. That’s great, because that was precisely the intent.

Since then both WIF and Windows Azure have evolved quite a bit: today the scenario described by the original guide can be set up in significantly fewer steps, and above all you are no longer forced to implement the unsafe workarounds that were needed back then.

The new Microsoft.WindowsAzure.StorageClient release is pending, but that takes a bit more work. The goal at AzureContrib where cloud storage is concerned is to enable so-called Persistence Ignorance (PI). The PI idea is about being blissfully ignorant of exactly where my data is stored and how it is stored. Instead of depending on a specific storage technology, those who adhere to PI depend on an abstraction of storage that gives us the abstract functions we require, such as save, load, select, etc. This part of AzureContrib will be reviewed later (soonish). …
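A minimal sketch of what such a Persistence Ignorance abstraction might look like, in Python for brevity (the `Storage` interface and `InMemoryStorage` backend are hypothetical illustrations, not AzureContrib code):

```python
from abc import ABC, abstractmethod

class Storage(ABC):
    """Abstract storage: callers depend on these operations, not on where
    or how the data is actually kept (table storage, blobs, SQL, ...)."""

    @abstractmethod
    def save(self, key, entity): ...

    @abstractmethod
    def load(self, key): ...

    @abstractmethod
    def select(self, predicate): ...

class InMemoryStorage(Storage):
    """One concrete backend; a cloud-table backend would satisfy the same contract."""
    def __init__(self):
        self._data = {}

    def save(self, key, entity):
        self._data[key] = entity

    def load(self, key):
        return self._data[key]

    def select(self, predicate):
        return [e for e in self._data.values() if predicate(e)]
```

Application code written against `Storage` can be pointed at a different backend (local, cloud, or fake for tests) without changing a line of business logic, which is the whole point of PI.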

There's quite a lot of buzz around smart grid lately - many companies jumping on the green tech band wagon and investing resources in making all kinds of applications for smart grid - the next-generation energy infrastructure. Microsoft is no exception - it's developing Microsoft Hohm - a consumer-oriented service to help home residents reduce their energy bills based on a detailed usage report created from the data their utilities provide. My group is also helping our partner ISVs to develop smart-grid solutions based on Microsoft products and technologies. Most recently I helped Invensys to develop a Smart Grid Pilot (SGP) application on top of Windows Azure, AppFabric (the new name for .NET Services) and Silverlight. The idea behind Smart Grid Pilot is pretty simple - connect all smart grid participants (energy producers, utilities and consumers - businesses and homes) into one distributed network that can reach massive scale but would be easy to use.

So, why Azure, AppFabric and Silverlight? The main obstacle on the path to adoption of smart grid is not the outdated power infrastructure (it's actively being modernized), laws (many countries have adopted very favorable laws for smart grid businesses) or lack of willingness of consumers to adopt (it means more savings for them). It is the lack of software infrastructure. Simply put, there's no smart grid software solution right now that could scale to millions of homes, businesses, and most importantly devices. It's not just about connectivity (the Internet is everywhere energy might be these days) but mostly about applications and the resources they use to serve the myriad of users, components and data streams in real time, changing constantly in both volume and distribution patterns. …

A correspondent pointed me to this document, dated March 30, 1965, in which an executive with Western Union, the telegraph company, lays out the company's ambitious plan to create "a nationwide information utility, which will enable subscribers to obtain, economically, efficiently, immediately, the required information flow to facilitate the conduct of business and other affairs."

The idea of a "computing utility" was much discussed in the 1960s, but this document nonetheless provides a remarkably prescient outline of what we now call cloud computing. …

When the history of cloud computing is written, it may be that Western Union will play the role that Xerox now plays in the history of the personal computer: the company that saw the future first, but couldn't capitalize on its vision.

If you’re shopping around for a company that monitors your website transactions, servers or networks, there are many points of comparison that would be worth your time covering in a request for proposal process. But here are some major points that are worthwhile asking a potential vendor:

How quickly and efficiently can you update your monitoring tools to reflect the latest technological innovations?

Does the process require downtime or reduced bandwidth that will take resources away from your ability to monitor my site or server?

Hovhannes Avoyan is the CEO of Monitis, Inc., a provider of on-demand systems management and monitoring software. OakLeaf uses their free mon.itor.us service to keep tabs on uptime of demo projects running in Microsoft’s South Central US data center.

••• Maureen O’Gara reports in Azure Gets its First Commercial ERP App of 11/26/2009: “Earlier this month [Microsoft] took a swing at Salesforce and Oracle CRM On Demand with a ‘six months free’ deal for their users.”

While Microsoft is webifying bits and pieces of its client/server Dynamics ERP solution, it ain't gonna put any full-blown Dynamics ERP on Azure. Too much customization and integration to make a good candidate apparently.

Enter Acumatica, a potentially competitive third-party ERP solution that compares itself to NetSuite - except that NetSuite is wholly SaaS, while Acumatica, out only since June but one of the few programs already in production on the still-in-beta Azure, straddles both a client's on-premises site and the Microsoft cloud.

The Acumatica software is the same in the cloud as it is on-premise, and experimenting with it on site is supposed to make accounts more comfortable with the idea of using it in the cloud. …

Last week we announced the availability of some great new Windows Azure features in the November Windows Azure SDK. One of these features enables Worker Roles to receive network traffic from both external and internal endpoints using HTTP, HTTPS and TCP. This new feature enables many new scenarios, one of them being the ability to run existing applications that receive traffic over sockets in Windows Azure.

Using these capabilities as a foundation we have shown the ability to run various applications and technologies such as MySQL, Mediawiki, Memcached and Tomcat. We have also provided a number of solution accelerators – which you can find links to here – in order to make it more straightforward to get up and running. There are a couple of great PDC sessions here and here that demonstrate and explain how to get going with these technologies.

One of the questions I’ve heard from a number of customers and partners over the last few months has been “Is it possible to run Ruby on Rails on Windows Azure?” Well, the answer is now yes. Using these new features and the approach used in the solution accelerators I have Ruby on Rails running at http://rubyonrails.cloudapp.net. There is also an incredibly simple test application running with a SQLite database at http://rubyonrails.cloudapp.net/posts

In my next post I will walk through the steps I took to get this working.

Microsoft’s public relations team published 53 case studies that contained Azure as a keyword during the Professional Developers Conference 2009. This post contains links to and abstracts of the second (primarily earlier) 20 of these case studies.

Back from PDC 2009 with a lot of information on Windows Azure, I did an MSDN Live Meeting on ASP.NET and Windows Azure today. Here's the slide deck and demo code.

Abstract: "Put your stuff in the cloud! Windows Azure allows you to take advantage of cloud computing infrastructure for hosting, computing, and storage of your applications. In this demo-filled session we take an existing ASP.NET application and move it to be hosted in Windows Azure, while taking advantage of Windows Azure storage."

Yesterday, Dr. Blumenthal, head of the Office of the National Coordinator (ONC), who is tasked with the roll-out (setting policy) for all that HIT stimulus funding under the HITECH Act, launched his own blog: Health IT Buzz. With over 20 comments so far, the blog has generated a ready following. Now the question is: can he/ONC maintain momentum and truly engage the HIT public at large? A quick scan of the comments revealed not a single comment from an HIT vendor (though there were a few from systems integrators).

Good to see this type of outreach by ONC and do hope that this forum lends itself to a deeper engagement with all stakeholders in HIT, consumers included.

At the bottom of this post you’ll find the DinnerNow version that I’ve been using for my PDC09 talk. The video of that talk is now available at http://microsoftpdc.com/Sessions/SVC18 and I recommend that you listen to the talk for context.

The DinnerNow drop I’m sharing here is a customized version of the DinnerNow 3.1 version that’s up on CodePlex. If I were you, I’d install the original version and then unpack my zip file alongside of it and then use some kind of diff tool (the Windows SDK’s WinDiff tool is a start) to look at the differences between the versions. That will give you a raw overview of what I had to do. You’ll find that I had to add and move a few things, but that the app didn’t change in any radical way. …

The Microsoft Sync Framework Power Pack for SQL Azure November CTP contains a series of new components that improve the experience of synchronizing with SQL Azure. This download includes runtime components that optimize performance and simplify the process of synchronizing with the cloud. The Sync Framework Power Pack for SQL Azure contains a database provider for Sync Framework that is specifically tuned for SQL Azure and a stand-alone utility for SQL Server that enables synchronization between an on-premise SQL Server database and SQL Azure. Additionally, the Sync Framework Power Pack for SQL Azure contains a Visual Studio plug-in that demonstrates how to add offline capabilities to applications which synchronize with SQL Azure by using a local SQL Server Compact database. The Microsoft Sync Framework Power Pack for SQL Azure November CTP is comprised of the following:

SqlAzureSyncProvider: SqlAzureSyncProvider is a new database provider created by Microsoft for Sync Framework 2.0 that adds first class support for SQL Azure. This new provider performs efficiently, lowers the barrier to entry, and ensures reliability when synchronizing with SQL Azure by intelligently handling some SQL Azure-specific complexities that occur on multi-tenant systems. Specifically, the provider decreases the number of round-trips to the server by using table-valued parameters (TVPs) to apply changes. In addition, when SQL Azure uses its throttling mechanism to minimize the impact of run-away operations, SqlAzureSyncProvider responds by using a “back-off algorithm” which automatically reduces batch sizes from the default of 5,000 rows during synchronization. A helpful side-effect of the use of this algorithm is that changes are viewable before synchronization is complete, and synchronization progress can be displayed in real time.
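As a rough sketch of the back-off idea (not the actual provider code), here is a Python loop that halves the batch size whenever the server throttles; `Throttled` and `apply_changes` are illustrative names:

```python
class Throttled(Exception):
    """Stand-in for the server rejecting a batch as too expensive."""

def apply_changes(rows, apply_batch, initial_batch_size=5000, min_batch_size=1):
    """Apply rows in batches, halving the batch size whenever the server
    throttles, so synchronization keeps making progress with smaller requests."""
    batch_size = initial_batch_size
    applied = 0
    while applied < len(rows):
        batch = rows[applied:applied + batch_size]
        try:
            apply_batch(batch)     # one round-trip (a TVP call in the real provider)
            applied += len(batch)  # committed work is visible batch by batch
        except Throttled:
            if batch_size <= min_batch_size:
                raise              # cannot shrink any further; give up
            batch_size = max(min_batch_size, batch_size // 2)
    return batch_size              # the size the loop settled on
```

Because each successful batch is committed before the next begins, a monitoring UI can show partial progress, which matches the side-effect described above.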

Sql Azure Offline Visual Studio Plug-In: This plug-in extends the functionality of Visual Studio 2008 Professional SP1 by providing a new Visual Studio item template called SqlAzureDataSyncClient. Using this tooling, developers can quickly add the ability to cache data stored in SQL Azure for use in their own application. Data can be cached in SQL Compact using the tooling and the code generated can be extended to support SQL Express as well. Projects that use this item are populated with both the appropriate assembly references and a generated class which contains a method named Synchronize that the developer can call in their application to automatically refresh data from the cloud.

SQL Azure Data Sync Tool for SQL Server: This tool contains a wizard that walks users through the SQL Azure connection process, automating both the provisioning and synchronization of data between SQL Server and SQL Azure. This tool is targeted at database administrators and database developers who want the ability to quickly synchronize their existing on-premise database with the cloud efficiently, reliably and without having to write any code.

New SQL Azure Events: When writing an application that uses synchronization, developers rely on events to automatically detect and handle necessary synchronization operations. Existing events provided by Sync Framework, such as ApplyChangeFailed, ApplyMetadataFailed, and all change enumeration events are here, but so are new events like AzureBatchApplied, which is fired when changes are successfully applied to the SQL Azure database.

Automated Provisioning: Getting off the ground has largely been automated by the SqlAzureSyncScopeProvisioning and SqlAzureSyncTableProvisioning classes. Everything that needs to happen cloud-side and on-premise is taken care of by detecting the presence of, and (if necessary) creating, all of the appropriate metadata tables. This is taken a step further by a new plug-in for Visual Studio and SQL Azure Data Sync Tool for SQL Server, both of which automate the setup of SQL Azure connectivity, provisioning and filtering. …

In a previous post [Windows Azure - Upgrade your Service without disruption (VIP Swap)] I described what a VIP Swap is and how you can use it as an updating method to avoid service disruption. This particular method doesn’t apply to all possible scenarios; most of the time, if not always, during protocol updates or schema changes you’ll need to upgrade your service while it’s still running, chunk by chunk and without any downtime or disruption.

By In-Place, I mean upgrades that take place during which both versions (old version and new version) are running side-by-side. In order to better understand the process below, you should read my “Upgrade domains” post [Windows Azure - What is an upgrade domain?] in which there is a detailed description of what Upgrade domains are, how they affect your application, how you can configure the number of domains etc. [Links added.] …

Lots of discussion lately about the need for virtualization in a cloud computing context. On one side you have people saying it's not necessary and adds extra complexity; on the other you have people (vendors) saying that virtualization is inherently a cloud infrastructure. Some even go as far as saying that virtualization and cloud computing are one and the same. I'm here to tell you that neither is true. My position is that Virtualization Doesn't Make the Cloud; it makes the cloud better. Sure, you could manage raw servers Google-style, but why? For me, it comes down to two main aspects of scale: scaling up and scaling out.

First let's look at scaling out, or scaling horizontally, which basically means adding more nodes to a distributed system, such as new servers or storage (which is easier). These could be in the form of physical or virtual servers. An example might be scaling out from one web server to many dedicated slave machines. Google has made an art form of scaling out. They have data centers around the globe geared toward this one core task - just-in-time hardware provisioning - but for most this is a very difficult and costly endeavour. Virtualization makes this sort of instant replication & provisioning of many virtual machines much easier. …

What is an application platform? Why is it important? And how should we think about application platforms in a world of cloud computing? In this session, David Chappell looks at all of these topics, providing a general model for both on-premises and cloud platforms. He then uses this model to examine several important issues in this area, including the competition between .NET and Java, why SOA is failing, and how the Microsoft platform compares with its on-premises and cloud competitors. …

The presentation includes interesting architectural insights into SOA, Azure, Amazon EC2, et al. David says in this post to his personal blog: “Now that I think about it, a better title for this session might have been Things David Thinks are Interesting in the Application Platform World Today.”

… At its annual Professional Developers’ Conference, Microsoft formally announced its Azure Cloud platform plans and availability. The Azure platform combines cloud-based developer capabilities with storage, computational and networking infrastructure services, all hosted on servers operating within Microsoft datacenters. Microsoft-focused developers can deploy applications in the cloud or on-premise. Within the Azure envelope, Microsoft announced AppFabric, a method of inter-connecting services in the Azure cloud, or through the Azure cloud to other services connected to Azure. Microsoft also offered more insight into its “Dallas” information services initiative, a push to offer third party content and information services. And Microsoft made it clear that all its efforts are focused on a very wide and inclusive definition of The Cloud, including Windows desktops, mobile phones and TVs. In sum, Microsoft laid out a broad and deep strategy for its own future as a Cloud IT Master Brand. …

This year has been one of relatively grand alliances between emerging cloud computing vendors as they fill holes in their capabilities and try to create appealing one-stop enterprise cloud services.

We’ve seen major announcements so far from IBM and Juniper, Cisco/EMC/VMware, and most recently BMC and Salesforce. There are many other smaller initiatives that have formed as well and all of these efforts underscore several key points for those businesses trying to understand the real strategic benefits of the cloud including cost, agility, and scalability. …

Strange that Dion mentions Amazon, Google App Engine, and Eucalyptus but not Microsoft or Azure.

Microsoft last week launched its first serious effort to build IT into its cloud plans by introducing technologies that help connect existing corporate networks and cloud services to make them look like a single infrastructure.

The concept began to come together at Microsoft's Professional Developers Conference. The company is attempting to show that it wants to move beyond the first wave of the cloud trend, which is defined by the availability of raw computing power supplied by Microsoft and competitors such as Amazon and Google. Microsoft's goal is to supply tools, middleware and services so users can run applications that span corporate and cloud networks, especially those built with Microsoft's Azure cloud operating system.

"Azure is looking at the second wave," says Ray Valdes, an analyst with Gartner. "That wave is what happens after raw infrastructure. When companies start moving real systems to the cloud and those systems are hybrid and they have to connect back in significant ways to legacy environments. It's a big challenge and a big opportunity for Microsoft." …

Innovation around the management of large data sets is coming from the cloud, such as through MapReduce and Hadoop.

InfoWorld's own Pete Babb provided some good coverage around the "analytics cloud" recently debuted by IBM, called Blue Insight. You can think of Blue Insight as a system that gathers data from those who use it and externalizes the data to those who need it, doing so on a cloud -- a private cloud.

However, IBM clearly does not have a lock on "big data." There has been movement in this direction for some time now, including some innovative approaches to leveraging data such as MapReduce. For those of you unfamiliar with the concept, MapReduce is a software framework brought to us by Google to support large distributed data sets on clusters of computers. What's unique about MapReduce is that it can process both structured and unstructured data and, through the use of a distributed "share nothing"-type query-processing system, return result sets in record time. …
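To illustrate the framework's core idea, here is a toy word count in Python showing the three phases - map, shuffle, reduce - over in-memory lists (real MapReduce distributes these phases across a cluster of machines):

```python
from collections import defaultdict
from itertools import chain

def map_phase(document):
    """Map: emit a (word, 1) pair for every word in one document."""
    return [(word, 1) for word in document.split()]

def shuffle(pairs):
    """Shuffle: group all emitted values by key, across all mappers."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    """Reduce: combine the grouped values for one key (here, sum the counts)."""
    return key, sum(values)

def mapreduce_word_count(documents):
    """Run the three phases sequentially; a cluster runs map and reduce in parallel."""
    mapped = chain.from_iterable(map_phase(d) for d in documents)
    return dict(reduce_phase(k, v) for k, v in shuffle(mapped).items())
```

Because each map call sees only its own document and each reduce call only its own key, neither phase shares state with its siblings, which is the "share nothing" property that lets the real framework parallelize across thousands of nodes.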

If you have something you need from Windows Azure, please tell us what it is and vote for others' ideas. I put a few things on the list just to get you started, but feel free to add your own! We want to better understand what you need from Windows Azure and to build plans around how we make the things that "bubble to the top" a reality for our customers in the future. Comments which aren't feature requests will be moderated. It’s that simple.

Below is an article I wrote many months ago prior to all the Nicholas Carr “electricity ain’t Cloud” discussions. The piece was one from a collection that was distributed to “…the Intelligence Community, the DoD, and Congress” with the purpose of giving a high-level overview of Cloud security issues.

It is very likely that should one develop any interest in Cloud Computing (“Cloud”) and wish to investigate its provenance, one would be pointed to Nicholas Carr’s treatise “The Big Switch” for enlightenment. Carr offers a metaphoric genealogy of Cloud Computing, mapped to, and illustrated by, a keenly patterned set of observations from one of the most important catalysts of a critical inflection point in modern history: the generation and distribution of electricity.

Carr offers an uncannily prescient perspective on the evolution and adaptation of computing by way of this electric metaphor, describing how the scale of technology, socioeconomic, and cultural advances were all directly linked to the disruptive innovation of a shift from dedicated power generation in individual factories to a metered utility of interconnected generators powering distribution grids feeding all. He predicts a similar shift from insular, centralized, private single-function computational gadgetry to globally-networked, distributed, public service-centric collaborative fabrics of information interchange. …

What better way to emerge from your (Wild) Turkey stupor than to join the PDC crew and guest Christofer Hoff live at 20:30 EST on Friday November 27th for Episode 177 of PaulDotCom Security Weekly! We promise not to ask you to pass the gravy or overstay our welcome in exchange for your agreement to not Hassle the Hoff.

As a special treat, the PDC crew will be recording from Larry’s barn! At least, Larry told us it’s his barn (Social Engineering paranoia sets in after a while & we begin to question just about everything these days). …

Because the new Azure toolkit allows certificates to be uploaded and associated with roles, and supports multiple web roles, it is possible to set up two sites (two web roles): one secure and one unsecured.
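As a rough sketch of that two-role setup, a service definition might look something like the fragment below. The service, role, endpoint, and certificate names are all hypothetical, and the exact schema can differ between toolkit versions, so treat this as an outline rather than a copy-paste configuration:

```xml
<!-- Hypothetical sketch; element names and attributes may vary by SDK version -->
<ServiceDefinition name="TwoSiteService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <!-- Secure site: HTTPS endpoint backed by an uploaded certificate -->
  <WebRole name="SecureWebRole">
    <InputEndpoints>
      <InputEndpoint name="HttpsIn" protocol="https" port="443" />
    </InputEndpoints>
    <Certificates>
      <Certificate name="SslCert" storeLocation="LocalMachine" storeName="My" />
    </Certificates>
  </WebRole>
  <!-- Unsecured site: plain HTTP endpoint, no certificate -->
  <WebRole name="PublicWebRole">
    <InputEndpoints>
      <InputEndpoint name="HttpIn" protocol="http" port="80" />
    </InputEndpoints>
  </WebRole>
</ServiceDefinition>
```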

… Companies in every sector of the economy including health care have begun moving operations and sensitive data to the cloud. As the trend accelerates, some experts have questioned whether the cloud is a safe place to store personal data.

What is no longer in question, though, is that client-server systems sitting in providers' offices are inherently unsafe places for such data. Every one of the largest breaches of confidential patient information that took place last year, for example, could not have happened had the data been stored in the cloud.

And those events pale in comparison to the massive breach of patient data that was announced last week by company officials from Health Net. These officials reported that a portable, external hard drive containing seven years' worth of personal, medical, and financial information on 1.5 million customers had been lost. …

…[Customer dialogs are] the Cloud equivalent of eHarmony.com’s 29 dimensions of compatibility; it’s such a multidimensional problem in large enterprises that have a huge number of applications (thousands) and a ton of sunk infrastructure, mature decades-old operational practices, cultural dispositions, and economic pressures that it’s hard to figure out what to do.

For large enterprises (and the service providers who cater to them) Cloud is not a simple undertaking, at least not to those who have to deal with bridging the gap between the “old world” and the new shiny bits glimmering off in the distance.

Consider that the next time you hear a story of cloud successes and scrutinize what that really means.

I wanted to quickly let everyone know about an upcoming CloudCamp in Seoul December 16th. We're currently looking for a few additional sponsors to help cover some of the costs. If you're interested in helping out, please get in touch. Registration: http://cloudcamp-seoul-09.eventbrite.com.

Let me point out a few of the more interesting points of my trip to the land of the rising sun. As I mentioned in my previous post about the opportunities for Cloud Computing in Asia, if my schedule is any indication of the demand for cloud products, there is a tremendous amount. Every minute of my trip was accounted for with non-stop meetings. I will also point out that the Japanese know how to entertain. As you can probably tell, I do a lot of traveling and am quite frequently taken to fancy restaurants, but nothing comes close to the fine restaurants of Tokyo. Duck sashimi, anyone?

As for CloudCamp Tokyo, it was well attended, with more than 160 in attendance. One of the more interesting aspects of the Camp was how the Japanese interact in an unconference setting. To put it simply, they don't. Getting them to speak publicly was a challenge. A few asked questions, but generally it was a one-way conversation: I spoke, my translator spoke. The lightning presentations were also very well received. After the main unconference is when things got interesting. We had an open bar, which probably helped loosen things up a bit. In orderly single file, almost every one of the 160 or so attendees proceeded to introduce themselves to me, handing me their business cards with both hands, followed by a bow and a "Hajimemashite" (a polite 'Hello, I am pleased to make your acquaintance,' which you use only the very first time you meet).

Abstract: Cloud computing is one of the hottest topics in information technology today. With all the confusion surrounding acronyms ending in "aaS," like Platform as a Service (PaaS), Infrastructure as a Service (IaaS), and Software as a Service (SaaS), it can be intimidating even for seasoned IT professionals. This presentation will briefly discuss the different types of cloud platforms and then address one of the key business scenarios for the cloud: Software as a Service.

Software as a Service is a business model for making applications available over the Internet. One of the key tenets of SaaS is multi-tenancy, or software designed to be used by multiple parties. Designing SaaS applications touches on many of the technologies that comprise the Azure platform: Processing, Storage, Workflow, Database, and, most importantly, security. This presentation will discuss how each of these technologies can be utilized to define a flexible architecture for multi-tenant solutions.
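One simple way to express the multi-tenancy tenet in code is to make the tenant identifier part of every storage key, so that queries are always scoped to a single tenant. The sketch below is a toy in-memory model in Python, in the spirit of using a table's partition key as the tenant boundary; the class and method names are illustrative, not the Azure storage API:

```python
class MultiTenantTable:
    """Toy model of multi-tenant storage: the tenant id acts as the
    partition key, so one tenant can never see another's rows."""

    def __init__(self):
        self._rows = {}  # (tenant_id, row_key) -> entity

    def insert(self, tenant_id, row_key, entity):
        # Every write is tagged with the owning tenant.
        self._rows[(tenant_id, row_key)] = entity

    def query(self, tenant_id):
        # Every read is filtered by tenant -- isolation is enforced
        # in the data model itself, not left to each caller.
        return {k[1]: v for k, v in self._rows.items() if k[0] == tenant_id}

table = MultiTenantTable()
table.insert("contoso", "1", {"plan": "gold"})
table.insert("fabrikam", "1", {"plan": "trial"})
print(table.query("contoso"))
```

The design choice here is that tenant isolation lives in the storage layer, which also helps performance: partitioning by tenant keeps each tenant's data physically grouped for scale-out.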

Channel 9 Learning Centers

Coinciding with PDC, we have released the first wave of learning content on Channel 9. The new Ch9 learning centers feature content for the Windows Azure Platform, as well as a course specifically designed for the identity developer. The content on both sites will continue to be developed by the team over the coming weeks and months. Watch out for updates and additions.

Downloadable Training Kits

To complement the learning centers on Ch9, we continue to maintain the training kits on the Microsoft download center, which allow you to download and consume the content offline. You can download the Windows Azure Platform training kit here, and the Identity training kit here. The next update is planned for mid-December.

As noted in my SharePoint Nightmare: Installing the SPS 2010 Public Beta on Windows 7 post of 11/27/2009, I am running 64-bit Windows Server 2008 R2 with Hyper-V as the host OS with the hope of running the Office Professional and SharePoint Server 2010 Public Beta versions in a 64-bit Windows 7 virtual machine. The earlier post includes detailed information about the developer computer’s Intel processor and mother board, as well as other components.

Exchanging the physical NICs displayed a warning message about loss of connectivity, but there was no immediate indication of any problem.

I was able to successfully activate the Office 2010 Public Beta with a Multiple Access Key (MAK), run Windows Update, join the oakleaf.org domain, sign on to Windows Messenger, and sign into connect.microsoft.com with my Live ID credentials. However, I still can’t sign into skydrive.live.com. Guess I should have tried Twitter earlier!

P.S.: Liam owns a company, Tiger Computer Services Ltd, which is an Independent Software Vendor (ISV) providing .NET software solutions to clients in the London area. Here’s a link to his blog.

Update 11/29/2009: By exchanging NICs for the External Network, I was able to join the oakleaf.org domain and have updated the forum thread.

Attempts to sign into sites with LiveID authentication, such as connect.microsoft.com and skydrive.live.com fail in the guest OS VM but succeed in the host OS.

Possible Network Configuration Issues

The Windows Server 2003 R2 domain controller is multihomed with RRAS NAT between an AT&T /24 commercial bank of five fixed IP addresses and the private 10.7.0.0 network (at 10.7.5.2), connected by an 8-port 10/100-Mbps switch. This configuration has been working as expected since the release of Windows 2000 Server. (It’s described on pp. 1,056 to 1,065 of my Special Edition Using Windows 2000 Server book.)

Here’s the Network and Sharing diagram for the host OS:

And the Network diagram, which in this case doesn’t show the OAKLEAF-DC1 domain controller. Notice that the Network Location is Unidentified Network, not oakleaf.org 3 or Domain Network:

The Windows 7 guest OS obtains its 10.7.5.69 address from the OakLeaf-DC1 DHCP server’s range of 10.7.5.64 to 10.7.5.127:

Ping discloses one connectivity problem with the private network or internet:

Although I can access the DRoot share on OAKLEAF-WV21 and save files (with long response times), for unknown reasons I can no longer ping the workstation. I’m also unable to open .png files saved to \\OAKLEAF-WV21\DRoot by Paint.net.

I’d appreciate any assistance Hyper-V experts can provide to overcome these problems.

The dual Web role application has been running in Microsoft's South Central US (San Antonio) data center since September 2009. I believe it is the oldest continuously running Windows Azure application.

About Me

I'm a Windows Azure Insider, a retired Windows Azure MVP, the principal developer for OakLeaf Systems and the author of 30+ books on Microsoft software. The books have more than 1.25 million English copies in print and have been translated into 20+ languages.

Full disclosure: I make part of my livelihood by writing about Microsoft products in books and for magazines. I regularly receive free evaluation software from Microsoft and press credentials for Microsoft Tech•Ed and PDC. I'm also a member of the Microsoft Partner Network.