• Michael Rys (@SQLServerMike) asked “How do large-scale sites and applications remain SQL-based?” as a preface to his Scalable SQL article for the June 2011 issue of Communications of the ACM. From the Introduction:

One of the leading motivators for NoSQL innovation is the desire to achieve very high scalability to handle the vagaries of Internet-size workloads. Yet many big social Web sites (such as Facebook, MySpace, and Twitter) and many other Web sites and distributed tier 1 applications that require high scalability (such as e-commerce and banking) reportedly remain SQL-based for their core data stores and services.

The question is, how do they do it?

The main goal of the NoSQL/big data movement is to achieve agility. Among the variety of agility dimensions—such as model agility (ease and speed of changing data models), operational agility (ease and speed of changing operational aspects), and programming agility (ease and speed of application development)—one of the most important is the ability to quickly and seamlessly scale an application to accommodate large amounts of data, users, and connections. Scalable architectures are especially important for large distributed applications such as social networking sites, e-commerce Web sites, and point-of-sale/branch infrastructures for more traditional stores and enterprises where the scalability of the application is directly tied to the scalability and success of the business.

These applications have several scalability requirements:

Scalability in terms of user load. The application needs to be able to scale to a large number of users, potentially in the millions.

Scalability in terms of data load. The application must be able to scale to a large amount of data, whether produced by a few users or as the aggregate of many.

Computational scalability. Operations on the data should be able to scale for both an increasing number of users and increasing data sizes.

Scale agility. In order to scale to increasing or decreasing application load, the architecture and operational environment should provide the ability to add or remove resources quickly, without application changes or impact on the availability of the application.

Michael (mrys@microsoft.com) is principal program manager on the SQL Server RDBMS team at Microsoft. He is responsible for the Beyond Relational Data and Services scenario that includes unstructured and semi-structured data management, search, Spatial, XML, and others.

The relational model of data was proposed in 1970 by Ted Codd as the best solution for the DBMS problems of the day—business data processing. Early relational systems included System R and Ingres, and almost all commercial relational DBMS (RDBMS) implementations today trace their roots to these two systems.

Michael Stonebraker (stonebraker@csail.mit.edu) is an adjunct professor in the Computer Science and Artificial Intelligence Laboratory at the Massachusetts Institute of Technology, consultant and founder, Paradigm4, Inc., consultant and founder, Goby, Inc., and consultant and founder, VoltDB, Inc.

Rick Cattell (rick@cattell.net) is a database technology consultant at Cattell.Net and on the technical advisory board of Schooner Information Technologies.

As I mentioned in my previous post OData Updates in Windows Phone “Mango”, new methods have been added to the DataServiceState class that improve performance and functionality when storing client state. You can now serialize nested binding collections as well as any media resource streams that have not yet been sent to the data service. But what does this new behavior look like?

Storing state in the State dictionary of the PhoneApplicationService essentially involves serializing a DataServiceState object, including a typed DataServiceContext object, one or more DataServiceCollection&lt;T&gt; objects, and all the entity data referenced by these objects. To make this behavior more robust, the old SaveState method is replaced with a new static Serialize method. This new method returns, quite simply, a string that is the XML serialized representation of the stored objects. This works much better for storing in the state dictionary because the DataServiceState is able to explicitly serialize everything before it gets stored. Also, nested collections should now work (this was broken in the Windows Phone 7 version).

// Code to execute when the application is activated (brought to foreground).
// This code will not execute when the application is first launched.
private void Application_Activated(object sender, ActivatedEventArgs e)
{
    // If data is not still loaded, try to get it from the state store.
    if (!App.ViewModel.IsDataLoaded)
    {
        if (PhoneApplicationService.Current.State.ContainsKey("ApplicationState"))
        {
            // Get back the stored dictionary.
            Dictionary<string, string> appState =
                PhoneApplicationService.Current.State["ApplicationState"]
                as Dictionary<string, string>;

            // Use the returned dictionary to restore
            // the state of the data service.
            App.ViewModel.RestoreState(appState);
        }
    }
}
// Restores the view model state from the supplied state dictionary.
public void RestoreState(IDictionary<string, string> appState)
{
    // Create a dictionary to hold any stored binding collections.
    Dictionary<string, object> collections;

    if (appState.ContainsKey("DataServiceState"))
    {
        // Deserialize the DataServiceState object.
        DataServiceState state
            = DataServiceState.Deserialize(appState["DataServiceState"]);

        // Restore the context and binding collections.
        var context = state.Context as NetflixCatalog;
        collections = state.RootCollections;

        // Get the binding collection of Title objects.
        DataServiceCollection<Title> titles
            = collections["Titles"] as DataServiceCollection<Title>;

        // Initialize the application with stored data.
        App.ViewModel.LoadData(context, titles);

        // Restore other view model data.
        _currentPage = Int32.Parse(appState["CurrentPage"]);
        _totalCount = Int32.Parse(appState["TotalCount"]);
    }
}

Note that with multi-tasking and the new fast application switching functionality in Mango, it is possible that your application state will be maintained in a “dormant” state in memory when your app loses focus. This means that even though the Activated event is still raised, the application data is still there and is immediately redisplayed. This is much faster than tombstoning because you don’t have to deserialize and re-bind everything (and the bound images don’t have to be downloaded again—hurray!). For more information, see Execution Model Overview for Windows Phone in the Mango beta documentation.

Describing OData

Our world is awash in data. Vast amounts exist today, and more is created every year. Yet data has value only if it can be used, and it can be used only if it can be accessed by applications and the people who use them.

Allowing this kind of broad access to data is the goal of the Open Data Protocol, commonly called just OData. This paper provides an introduction to OData, describing what it is and how it can be applied. The goal is to illustrate why OData is important and how your organization might use it.

The Problem: Accessing Diverse Data in a Common Way

There are many possible sources of data. Applications collect and maintain information in databases, organizations store data in the cloud, and many firms make a business out of selling data. And just as there are many data sources, there are many possible clients: Web browsers, apps on mobile devices, business intelligence (BI) tools, and more. How can this varied set of clients access these diverse data sources?

One solution is for every data source to define its own approach to exposing data. While this would work, it leads to some ugly problems. First, it requires every client to contain unique code for each data source it will access, a burden for the people who write those clients. Just as important, it requires the creators of each data source to specify and implement their own approach to getting at their data, making each one reinvent the wheel. And with custom solutions on both sides, there's no way to create an effective set of tools to make life easier for the people who build clients and data sources.

Thinking about some typical problems illustrates why this approach isn't the best solution. Suppose a Web application wishes to expose its data to apps on mobile phones, for instance. Without some common way to do this, the Web application must implement its own idiosyncratic approach, forcing every client app developer that needs its data to support this. Or think about the need to connect various BI tools with different data sources to answer business questions. If every data source exposes data in a different way, analyzing that data with various tools is hard -- an analyst can only hope that her favorite tool supports the data access mechanism she needs to get at a particular data source.

Defining a common approach makes much more sense. All that's needed is agreement on a way to model data and a protocol for accessing that data -- the implementations can differ. And given the Web-oriented world we live in, it would make sense to build this technology with existing Web standards as much as possible. This is exactly the approach taken by OData.

The Solution: What OData Provides

OData defines an abstract data model and a protocol that let any client access information exposed by any data source. Figure 1 shows some of the most important examples of clients and data sources, illustrating where OData fits in the picture.

As the figure illustrates, OData allows mixing and matching clients and data sources. Some of the most important examples of data sources that support OData today are:

Custom applications: Rather than creating its own mechanism to expose data, an application can instead use OData. Facebook, Netflix, and eBay all expose some of their information via OData today, as do a number of custom enterprise applications. To make this easier to do, OData libraries are available that let .NET Framework and Java applications act as data sources.

Cloud storage: OData is the built-in data access protocol for tables in Microsoft's Windows Azure, and it's supported for access to relational data in SQL Azure as well. Using available OData libraries, it's also possible to expose data from other cloud platforms, such as Amazon Web Services.

Content management software: For example, SharePoint 2010 and Webnodes both have built-in support for exposing information through OData.

Windows Azure Marketplace DataMarket: This cloud-based service for discovering, purchasing, and accessing commercially available datasets lets applications access those datasets through OData.

While it's possible to access an OData data source from an ordinary browser -- the protocol is based on HTTP -- client applications usually rely on a client library. As Figure 1 shows, the options supported today include:

Web browsers: JavaScript code running inside any popular Web browser, such as Internet Explorer or Firefox, can access an OData data source. An OData client library is available for Silverlight applications as well, and other rich Internet applications can also act as OData clients.

Mobile phones: OData client libraries are available today for Android, iOS (the operating system used by iPhones and iPads), and Windows Phone 7.

Business intelligence tools: Microsoft Excel provides a data analysis tool called PowerPivot that has built-in support for OData. Other desktop BI tools also support OData today, such as Tableau Software's Tableau Desktop.

Custom applications: Business logic running on servers can act as an OData client. Support is available today for code created using the .NET Framework, Java, PHP, and other technologies.

The fundamental idea is that any OData client can access any OData data source. Rather than creating unique ways to expose and access data, data sources and their clients can instead rely on the single solution that OData provides.

OData was originally created by Microsoft. Yet while several of the examples in Figure 1 use Microsoft technologies, OData isn't a Microsoft-only technology. In fact, Microsoft has included OData under its Open Specification Promise, guaranteeing the protocol's long-term availability for others. While much of today's OData support is provided by Microsoft, it's more accurate to view OData as a general purpose data access technology that can be used with many languages and many platforms.

How OData Works: Technology Basics

Providing a way for all kinds of clients to access all kinds of data is clearly a good thing. But what's needed to make the idea work? Figure 2 shows the fundamental components of the OData technology family.

Figure 2: An OData service exposes data via the OData data model, which clients access with an OData client library and the OData protocol.

The OData technology has four main parts:

The OData data model, which provides a generic way to organize and describe data. OData uses the Entity Data Model (EDM), the same approach that's used by Microsoft's Entity Framework (EF)[1].

The OData protocol, which lets a client make requests to and get responses from an OData service. At bottom, the OData protocol is a set of RESTful interactions -- it's just HTTP. Those interactions include the usual create/read/update/delete (CRUD) operations, along with an OData-defined query language. Data sent by an OData service can be represented on the wire today either in the XML-based format defined by Atom/AtomPub or in JavaScript Object Notation (JSON).
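Because the protocol is just HTTP, even a simple script can compose an OData request by hand. The sketch below (in Python; the Netflix catalog address from the earlier phone example is used purely as an illustrative endpoint) builds a query URL from OData's system query options:

```python
from urllib.parse import urlencode

# Root of an OData collection; the Netflix catalog feed mentioned
# earlier is used here only as an illustrative address.
base = "http://odata.netflix.com/Catalog/Titles"

# OData system query options are ordinary key/value pairs appended
# to the resource URL ("$" is percent-encoded as "%24" on the wire).
options = {
    "$filter": "ReleaseYear ge 2010",
    "$orderby": "AverageRating desc",
    "$top": "10",
}
url = base + "?" + urlencode(options)
print(url)
```

An ordinary HTTP GET against that URL returns the matching entries as an Atom feed (or JSON, if requested), which is exactly what the client libraries do under the covers.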

OData client libraries, which make it easier to create software that accesses data via the OData protocol. Because OData relies on REST, using an OData-specific client library isn't strictly required. But most OData clients are applications, and so providing pre-built libraries for making OData requests and getting results makes life simpler for the developers who create those applications.

An OData service, which exposes an endpoint that allows access to data. This service implements the OData protocol, and it also uses the abstractions of the OData data model to translate data between its underlying form, which might be relational tables, SharePoint lists, or something else, into the format sent to the client.

Given this basic grasp of the OData technology, it's possible to get a better sense of how it can be used. The best way to do this is to look at some representative OData scenarios. …

The "realist" view on Microsoft's future is that Windows and Microsoft Office licenses will continue to be the company's bread and butter, and that enterprise-focused cloud initiatives like Azure and Office 365 will supplement this growth. In this view Microsoft's struggles in mobile, the rapid growth of Apple and the proliferation of Linux aren't real threats to the company. After all, even though OSX and Linux are growing faster than Windows, Windows is still growing. And it's too early to write Microsoft out of the mobile game, before its partnership with Nokia comes to fruition and before it even releases its tablets. It's a reasonable view of where things are going.

Then there's the other vision, which we might call the Cassandra version.

In this vision, Microsoft loses out in the operating system wars, Office becomes less relevant in the market and Microsoft's mobile ambitions were doomed from the start. This view sees Office 365 losing out to Google Docs and Azure losing out to the plethora of alternatives. Bing might be Microsoft's only hope in this view, and even that's a long shot.

In short: in this post I will show you how you can leverage the OnStart event of a WebRole to enable changing the WIF config settings even after deployment.

Since the very first time Hervey and I made the first foray into Windows Azure with WIF, all the way to the latest hands-on labs, books and whitepapers, one of the main challenges of using WIF in a WebRole has always been the impossibility of updating the settings in <microsoft.identityModel> without redeploying (or preparing in advance for a pool of alternative <service> elements fully known at deployment time).

Last Friday I was chatting with Wade about how to solve this very problem for some future deliverables in the toolkit, and it just came to me: why don’t we just leverage the WebRole lifecycle and use OnStart for setting the values we want even before WIF reads the web.config? All we need to do is create suitable <Setting> entries in the ServiceConfiguration.cscfg file, which can be modified without the need to redeploy, and use the events in WebRole.cs to ensure that our app picks up the new values. Simple!

I created a new WebRole, hooked it to a local SelfSTS, and started playing with ServiceDefinition.csdef, ServiceConfiguration.cscfg and WebRole.cs. I just wanted to make sure the idea works, hence I didn’t pour much care into writing clean (or exhaustive) code. Also, I totally ignored all the considerations about HTTPS, NLB session management and all those other things you learned you need to do in Windows Azure. None of those really interferes with the approach, hence for the sake of simplicity I left them all out.

First, I created <Setting> entries in the .csdef for every WIF config parameter generated by the Add STS Reference you’d likely want to control:
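The original snippet is not reproduced here, but the entries the post describes would look roughly like this; the setting names below are illustrative, not necessarily the exact ones from the post:

```xml
<WebRole name="WebRole1">
  <ConfigurationSettings>
    <!-- Illustrative names; one <Setting> per WIF parameter you want
         to be able to override from ServiceConfiguration.cscfg. -->
    <Setting name="AudienceUri" />
    <Setting name="IssuerUri" />
    <Setting name="Realm" />
    <Setting name="TrustedIssuerThumbprint" />
  </ConfigurationSettings>
  <!-- Needed so that OnStart can write back to web.config. -->
  <Runtime executionContext="elevated" />
</WebRole>
```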

When OnStart runs, the WebRole application itself hasn’t had a chance to do anything yet. What I want to do here is to get my hands on the web.config file, override the WIF settings with all the non-empty values I find in ServiceConfiguration.cscfg, and save the file back even before WIF gets to read <microsoft.identityModel>.

What I do above with Linq to XML for modifying the WIF settings is pretty dirty, very brittle, and definitely tied to the assumption that the config we’ll be working with is the one that comes out of a typical Add STS Reference run. I tried to use ConfigurationManager at first, but it complained that <microsoft.identityModel> has no schema, hence I just went the quicker, easier, more seductive route of “let’s just see if it works”. But remember, for the one among you who caught the reference: the dark side is not stronger. No no no.
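Language aside, the trick is nothing more than "parse the config, overwrite a few attributes, save the file back before WIF reads it." A Python sketch of the same manipulation, using a minimal made-up stand-in for the <microsoft.identityModel> section, looks like this:

```python
import xml.etree.ElementTree as ET

# A minimal, made-up stand-in for the WIF section of web.config.
config_xml = """\
<configuration>
  <microsoft.identityModel>
    <service>
      <audienceUris>
        <add value="http://127.0.0.1:81/" />
      </audienceUris>
    </service>
  </microsoft.identityModel>
</configuration>
"""

root = ET.fromstring(config_xml)

# Override every audience URI with the value that would come from
# ServiceConfiguration.cscfg, just as the OnStart code does.
new_audience = "http://myapp.cloudapp.net/"
for add in root.iter("add"):
    add.set("value", new_audience)

# In the real WebRole the tree is saved back to web.config before
# WIF reads it; here we just re-serialize it to a string.
patched = ET.tostring(root, encoding="unicode")
```

The brittleness the post mentions is visible even here: the code assumes it knows exactly which elements to overwrite, which only holds for a config shaped like the Add STS Reference output.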

Aaanyway. The element.Save(configFilePath) call is the line that will fail if you forgot to add the elevated directive in the csdef; you’ve been warned.

The RoleEnvironmentChanging handler hookup at the beginning of OnStart, and the handler itself, are meant to ensure that when you change the values in ServiceConfiguration.cscfg Windows Azure will properly restart the role. If you don’t add that, just changing the config will not drive changes in the WebRole behavior until a stop & restart occurs. Technically there are a few things you may try to do to get WIF to pick up the new settings in mid-flight, but all of those would entail changing the application code, and that’s exactly what I am trying to avoid with all this brouhaha.

BTW, you can thank Nick Harris for the RoleEnvironment.Changing trick.
Nick just joined the Windows Azure Evangelism team and he is already doing an awesome job.

That should be all. Now, try to ignore the impulse that would make you change the config before deploying, and publish the project in Windows Azure staging “as is”.

That’s right. WIF is still configured for the address the application had in the environment formerly known as devfabric (now Windows Azure simulation environment), as described in the realm entry, hence SelfSTS (which behaves like the WIF STS template if there’s no wreply in the signin message) sends the token back there instead of http://eddb883659d04d0bbbb570f17c52ea01.cloudapp.net. Normally we’d be pretty stuck at this point, but thanks to the modification we made we can fix the situation.

All you need to do is navigate to the Windows Azure portal, select the deployment and hit the Configure button.

You’ll see the portal updating the instance for a few moments. As soon as it reports the role as ready, navigate to its URL and, surprise surprise, this time the authentication flow ends up in the right place! In the screenshot below you can see (thanks to the SecurityTokenVisualizerControl, which you can find in all the latest ACS labs in the identity training kit) that the audienceURI has been changed as well.

I think that’s pretty cool.

Now, you may argue that this scenario is an artifact of how the WIF STS template handles things, and that if you had been dealing with an STS (like ACS) which keeps realm and return URLs well separated you could have solved the matter at the STS side. All true, but beside the point.

Here I used the staging & realm example because, with its unknowable-until-it’s-too-late GUID in the URL, it is (was?) the paradigmatic example of what can be challenging when using WIF with Windows Azure; but of course you can use the technique you saw here for pushing out any post-deployment changes, including pointing the WebRole to a different STS, updating certificate thumbprints as key rollovers take place, or any other setting you may want to modify.

Please use this technique with caution. I haven’t used it extensively yet, hence I am not 100% sure whether there are gotchas just waiting to be found, but so far it seems to solve the problem pretty nicely.

Steve Marx, Tactical Strategist for Microsoft, talks about interoperability on Azure. Steve writes Python and Ruby apps that he hosts on Azure and says you can too. He explains why it makes sense for Microsoft to support developers of all languages... not just on the .Net Stack.

We have recently updated and reorganized content in Managing Certificates in Windows Azure to make it easier to find help on certificates. The most significant change is the addition of two new topics on using SSL certificates:

Devart today unveiled a new, powerful Visual Studio add-in for editing T4 templates with syntax highlighting, intellisense, code outlining, and all the features of a first-class text editor within Visual Studio. It offers high performance and makes creating T4 templates easier and faster. With this new add-in, Devart provides a fast and easy way to create and edit T4 templates, with multilevel template inclusion, convenient template navigation, and rich code editing features.

Intellisense

Devart T4 editor provides comprehensive intellisense, including all Visual Studio C# and Visual Basic intellisense features (tooltips, parameter info, and code completion), and additionally supports a completion list for template directives. T4 editor intellisense lists all available C# classes and members, even those in included template files and in referenced assemblies.

Syntax Highlighting

With highlighting of template directives and of C# and Visual Basic code, you can easily distinguish text from function calls. Fonts and colors for templates can be customized as in any Visual Studio code editor.

Goto

Devart T4 Editor allows you to navigate to definitions and declarations of objects and members if they are present in the template file or included files.

Include

Devart T4 Editor supports multilevel template inclusion. All classes from included templates are available in intellisense, and you can navigate to them with Go To menu commands.

Ever since NuGet came out, I’ve been thinking about leveraging it in a corporate environment. I've seen two NuGet server implementations appear on the Internet: the official NuGet gallery server and Phil Haack’s NuGet.Server package. As good as these both are, there’s one thing wrong with them: you can't be lazy! You have to do some stuff you don’t always want to do, namely: configure and deploy.

After discussing some ideas with my colleague Xavier Decoster, we decided it’s time to turn our heads into the cloud: we’re providing you NuGet-as-a-Service (NaaS)! Say hello to MyGet.

MyGet offers you the possibility to create your own, private, filtered NuGet feed for use in the Visual Studio Package Manager.

It can contain packages from the official NuGet feed as well as your private packages, hosted on MyGet. Want a sample? Add this feed to your Visual Studio package manager: http://www.myget.org/F/chucknorris
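If you would rather register the feed in configuration than through the Visual Studio options dialog, a NuGet.config entry along these lines does the same thing (the key is an arbitrary display name):

```xml
<configuration>
  <packageSources>
    <!-- "MyGet" is just a display name; the value is the feed URL
         from the post. -->
    <add key="MyGet" value="http://www.myget.org/F/chucknorris" />
  </packageSources>
</configuration>
```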

But wait, there’s more: we’re open sourcing this thing! Feel free to fork over at CodePlex and extend our "product". We've already covered some feature requests we would love to see, and Xavier has posted some more on his blog. In short: feel free to add your own most-wanted features, and provide us with bugfixes (pretty sure there will be a lot, since we hacked this together in a very short time). We're hosting on Windows Azure, which means you should install the Windows Azure SDK prior to contributing. Unless you feel that you can write code without locally debugging :-)

Microsoft’s enhanced PHP support is a prime example of this new definition of open: the SDK is open source and does give non-.NET users access to more features of the Azure platform, but Azure itself is neither open source nor particularly open, in general. However, by giving developers what Microsoft calls “a ‘speed dial’ library to take full advantage of Windows Azure’s coolest features,” the company is trying to prove that it supports freedom of choice. PHP developers now can have their cake and eat it, too.

Microsoft has SDKs for Java and Ruby, as well, but they’re less full-featured than this latest version for PHP, which aims to give PHP developers an experience comparable to that of .NET developers within Windows Azure.

PHP happens to be a great place to start really eliminating the barriers for non-.NET development within Azure. Many Facebook apps are written in PHP, after all (including Hotel Peeps, which Microsoft counts as a customer) and it’s among the most-popular web-development languages overall. Microsoft certainly didn’t fail to notice that PHP, once relatively underserved in the Platform as a Service space, now has a dedicated PaaS in PHP Fog and support from others, including RightScale, Red Hat (OpenShift) and DotCloud.

The Libcloud case presents a different take on cloud openness. Libcloud gives developers an API to perform a set of standard actions — such as create, reboot, deploy and destroy — to cloud servers across a variety of providers. However, it’s written in Python (there’s also a less-capable Java version) and requires developers to work in Python, which does limit the scope of who’s likely to use Libcloud. It’s an open-source project that opens up developers’ abilities to manage resources across clouds, but it’s only open to the Python community.
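To make the "standard actions across providers" idea concrete, here is a plain-Python illustration of the pattern Libcloud embodies. This is a mock, not actual Libcloud code; the real library's per-provider drivers speak to each cloud's API over HTTP behind the same kind of uniform interface:

```python
class NodeDriver:
    """The uniform interface: every provider driver exposes the same
    standard actions, whatever its API looks like underneath."""
    def create_node(self, name):
        raise NotImplementedError
    def reboot_node(self, name):
        raise NotImplementedError
    def destroy_node(self, name):
        raise NotImplementedError

class FakeCloudDriver(NodeDriver):
    """A mock 'provider' that just tracks node names in memory."""
    def __init__(self):
        self.nodes = set()
    def create_node(self, name):
        self.nodes.add(name)
        return name
    def reboot_node(self, name):
        return name in self.nodes
    def destroy_node(self, name):
        self.nodes.discard(name)

# Client code is written once against the common interface and can
# be pointed at any driver, regardless of the provider behind it.
driver = FakeCloudDriver()
node = driver.create_node("web-1")
rebooted = driver.reboot_node(node)
driver.destroy_node(node)
```

Swapping `FakeCloudDriver` for a driver backed by a real provider would leave the client code unchanged, which is precisely the portability Libcloud is after.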

Deltacloud, which Red Hat initiated and which is currently an Apache Incubator project, arguably takes a broader approach to cross-cloud management with its REST-based API that works with any application type. Both projects, however, fall short of the ultimate goal of interoperability standards among cloud providers, whereby users could actually move applications and data between clouds without first having to pull back in-house and reload it into its new home.

Whatever open actually means in cloud computing, it’s undeniable that we’re making progress. From open source software such as OpenStack to projects such as Libcloud to just multi-language support on PaaS platforms, we’re at least at a place where cloud users can be confident their applications can be moved from cloud to cloud, if need be, and where users might not even have to learn new APIs along the way. We might never have widely accepted cloud standards, but it looks like we’re coming along nicely on choice.

With that in mind, I’ll be talking at Structure 2011 next month with leaders from the OpenStack and Cloud Foundry projects, who have a lot more insight into how open source projects might ultimately change ideas around both interoperability and economics in the cloud.

Hopefully this is because LightSwitch is still in beta (as of this writing on 5/31/11) but from our research and testing, it seems that the documentation in the following article is correct:

"Publishing a 3-tier application requires that you have administrative access to a server that is running IIS and is preconfigured for LightSwitch, and also that you have administrative access to a computer that is running SQL Server." (http://msdn.microsoft.com/en-us/library/ff872288.aspx)

That's a tremendous failure (if not addressed) of the product. It means that publishing a LightSwitch application to a shared host is not an option (though I've heard of possible manual configuration workarounds). It also means that in a corporate world you either need a trusted deployment team with elevated permissions, or you need to give your developers or QA people administrative access to servers (generally a no-no).

LightSwitch has a Publish Application Wizard that seems to leverage Web Deploy so hopefully the LightSwitch team will address this publishing challenge so that any host supporting standard Web Deploy services can accept and support LightSwitch applications.

• Chris Czarnecki asked What is Azure ? in a 5/31/2011 post to the Learning Tree blog:

Last week I read an excellent article about Microsoft Azure and the death of the data centre [see post below]. The author of the article, Debra Littlejohn Shinder, hit on a point that I have realised as part of my consultancy activities for some time too – that is, Azure is one of the most misunderstood product offerings from Microsoft ever.

It is interesting to consider why this is so. Firstly, the materials that Microsoft produces to describe Azure are often confusing. Secondly, there is a degree of paranoia amongst organisations about handing over control of IT resources to a third party, as well as a perceived threat to the job security of administrators. Those that are vocal propagate the confusion. This is highlighted in a sample of the comments posted by readers of Debra Littlejohn Shinder’s article. I have summarised a few below:

In the event of problems, Microsoft will blame the ISP, the ISP will blame Microsoft, and no resolution found

Microsoft cannot keep HotMail running, what chance Azure

I would rather own my own assets and manage them and secure my own data

Clouds are unreliable, look how often Amazon AWS is down

All of these arguments can easily be dismissed when one has a thorough understanding of Cloud Computing and Microsoft Azure. The technology and business benefits and risks can then be considered and an informed decision made. Uninformed comments are harmful not primarily to Microsoft, or to Cloud Computing in general, but to the organisations that the uninformed work for. These employees may be holding back their organisations from becoming more agile and responsive to their customers’ needs, missing business opportunities and losing ground to competitors. Equally, Cloud Computing is not a solution for every business. If you would like to learn more about Cloud Computing and its business and technical benefits, why not consider attending Learning Tree’s Cloud Computing course. If you would like to learn more about the details of Azure, Doug Rehnstrom has developed an excellent 4-day hands-on course that will provide you with the skills to use Azure to the maximum benefit of your organisation.

It appears to many people, both inside and outside of the company, that Microsoft is putting most of its development efforts into Azure, its cloud-computing platform, rather than Windows Server. This makes sense as part of its oft-declared “all in with the cloud” philosophy, but in many ways Azure is still a mystery, especially to IT professionals.

Some see it — and cloud computing in general — as a threat to their jobs, as I discussed in a previous post: “IT Pros Are Not Feeling the Love from Microsoft.” Some, who apparently don’t understand what Azure is, are afraid it will subsume Windows Server, and I’ve even heard some predict that “the next release of Windows Server will be the last.”

But does this focus on Azure really mean the death of the datacenter, and of the Windows servers that empower it? Or could it actually signal the rebirth of Windows as a fresh, new, more flexible foundation for both public cloud offerings and the private cloud-based, on-premises datacenters of the future?

Understanding Azure

Ralph Waldo Emerson once said, “To be great is to be misunderstood.” Perhaps Microsoft can take comfort in that, because based on my casual discussions with many in the IT industry, Azure is one of the most misunderstood products the company has produced (and that’s saying a lot). Microsoft’s own descriptions of Azure often leave you more confused than ever.

For instance, the video titled What Is Windows Azure that’s linked from their Windows Azure page talks about three components: the “fabric,” the storage service, and the “developer experience.” Huh? To confuse matters more, many of the papers you’ll find on Azure, such as Introducing Windows Azure by David Chappell & Associates, list its three parts as fabric, storage service, and compute service.

The most common point of puzzlement is over whether Azure is or isn’t an operating system. Mary Jo Foley, in her Guide for the Perplexed, called Azure the base operating system that used to be codenamed “Red Dog” and was designed by a team of operating system experts at Microsoft. But she goes on to say that it networks and manages the set of Windows Server 2008 machines that comprise the cloud. That leaves us wondering whether Azure is an OS on which applications run or an OS on which another operating system runs.

Microsoft’s own web sites and documents, in fact, rarely call Azure an operating system, but refer to it as a “platform.” That’s a word that is used in many different ways in the IT industry. We have hardware platforms such as x86/x64, RISC, and ARM. We have software platforms such as .NET and Java. We have mobile platforms such as BlackBerry, Android, and WinMo. And we have OS platforms such as Windows, Linux, Mac OS X, Solaris, and so forth.

So what sort of platform is Azure? According to MSDN, it’s “an Internet-scale cloud computing and services platform hosted in Microsoft datacenters” that comprises three developer services: Windows Azure, Windows Azure AppFabric, and SQL Azure. So there we have yet another, different list of Azure’s three components.

It’s important to note that although they differ in other respects, every one of these lists includes the “fabric” component. And that’s also the most mysterious of the components to those who are new to Azure. Mark Russinovich likens Azure to a “big computer in the sky” and explains that the Fabric Controller is analogous to the kernel of the operating system. He has a unique ability to take this very complex subject and make sense out of it, and his Channel 9 discussion of Azure, cloud OS, and PaaS is well worth watching if you want a better understanding of the Fabric Controller and what it does.

How it all fits together

If you’re familiar with the basics of cloud computing, you know there are three basic service models: Software as a Service (SaaS), Infrastructure as a Service (IaaS), and Platform as a Service (PaaS). Azure solutions exist for both the IaaS and the PaaS service models, with the former providing only compute, network, and storage services and the latter providing everything that the application code needs to run on.

This platform is not an operating system in the traditional sense; it has no OS interfaces like the Control Panel or Server Manager on Windows Server. However, it does perform some of the functions in the cloud that conventional operating systems perform locally, such as managing storage and devices and providing a run-time environment for applications, which is hosted by the Fabric Controller.

Where does Windows Server come in? You might remember that back in December 2009, Microsoft brought the Windows Server and Windows Azure teams together to form the Windows Server and Cloud Division within the Server and Tools Business that was led by Bob Muglia until the recent reorganization and is now under the wing of Satya Nadella. The fact that these two products are part of the same division should be a clue that they’re closely related. And in fact, if you take a look inside an instance of Windows Azure, you’ll find that they’re more closely related than you might have guessed.

So, after all the angst, we find that the operating system with which you interact in an Azure environment isn’t some brand-new, mysterious OS after all. It’s a virtualized version of Windows Server 2008 that has been preconfigured with a specific amount of resources (CPU, RAM, and storage) and delivered to you for a monthly fee. IT pros can breathe a sigh of relief; if you’re a Windows Server 2008 admin, you already know how to manage the OS that’s “visible” in Azure. Developers use the same tools and programming languages to create applications for Azure. The big difference is that the Fabric Controller manages the cloud environment, so applications must be structured with that in mind. For the nitty-gritty details about those differences, check out IaaS, PaaS and the Windows Azure Platform by Keith Pijanowski.

Microsoft has released a beta of the next version of its capacity planning tool, the Microsoft Assessment & Planning (MAP) Toolkit. This version is the follow-up to version 5.5, which was released earlier this year and introduced assessment for migration to the Windows Azure and SQL Azure platforms.

Version 6.0 will include assessment and planning for evaluating workloads for public and private cloud platforms, identifying the workload and estimating the infrastructure size and resources needed for both Windows Azure and Hyper-V Fast Track. In addition, this version will include support for assessment of Microsoft’s Software as a Service (SaaS) offering for Office, called Office 365, enhanced VMware inventory, and Oracle schema discovery and reporting for migration to SQL Server. Readiness assessment for migration to Internet Explorer 9 is also included.

The beta review period will run through mid-July 2011; the beta is accessible through Microsoft Connect.

That's just the list of topics! For every topic there's the slide deck used to teach the class, a main list of papers and a second list of optional papers. So there's a lot to choose from. Happy reading! If any of the papers really stand out for you, please share.

Cloud computing allows computer users to conveniently rent access to fully featured applications, to software development and deployment environments, and to computing infrastructure assets such as network-accessible data storage and processing.

This document reviews the NIST-established definition of cloud computing, describes cloud computing benefits and open issues, presents an overview of major classes of cloud technology, and provides guidelines and recommendations on how organizations should consider the relative opportunities and risks of cloud computing. Cloud computing has been the subject of a great deal of commentary.

Attempts to describe cloud computing in general terms, however, have been problematic because cloud computing is not a single kind of system, but instead spans a spectrum of underlying technologies, configuration possibilities, service models, and deployment models. This document describes cloud systems and discusses their strengths and weaknesses.

Depending on an organization's requirements, different technologies and configurations are appropriate.

To understand which part of the spectrum of cloud systems is most appropriate for a given need, an organization should consider how clouds can be deployed (deployment models), what kinds of services can be provided to customers (service models), the economic opportunities and risks of using cloud
services (economic considerations), the technical characteristics of cloud services such as performance and reliability (operational characteristics), typical terms of service (service level agreements), and the security opportunities and risks (security).

Deployment Models. A cloud computing system may be deployed privately or hosted on the premises of a cloud customer, may be shared among a limited number of trusted partners, may be hosted by a third party, or may be a publically accessible service, i.e., a public cloud. Depending on the kind of cloud
deployment, the cloud may have limited private computing resources, or may have access to large quantities of remotely accessed resources. The different deployment models present a number of tradeoffs in how customers can control their resources, and the scale, cost, and availability of resources.

Service Models. A cloud can provide access to software applications such as email or office productivity tools (the Software as a Service, or SaaS, service model), or can provide a toolkit for customers to use to build and operate their own software (the Platform as a Service, or PaaS, service model), or can provide
network access to traditional computing resources such as processing power and storage (the Infrastructure as a Service, or IaaS, service model). The different service models have different strengths and are suitable for different customers and business objectives. Generally, interoperability and portability of customer workloads is more achievable in the IaaS service model because the building blocks of IaaS offerings are relatively well-defined, e.g., network protocols, CPU instruction sets, legacy device interfaces.

Economic Considerations. In outsourced and public deployment models, cloud computing provides convenient rental of computing resources: users pay service charges while using a service but need not pay large up-front acquisition costs to build a computing infrastructure. The reduction of up-front costs reduces the risks for pilot projects and experimental efforts, thus reducing a barrier to organizational flexibility, or agility. In outsourced and public deployment models, cloud computing also can provide elasticity, that is, the ability for customers to quickly request, receive, and later release as many resources as needed. By using an elastic cloud, customers may be able to avoid excessive costs from overprovisioning, i.e., building enough capacity for peak demand and then not using the capacity in non-peak periods. Whether or not cloud computing reduces overall costs for an organization depends on a careful analysis of all the costs of operation, compliance, and security, including costs to migrate to and, if necessary, migrate from a cloud.
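To make the overprovisioning arithmetic above concrete, here is a toy cost comparison; the prices, capacity, and demand pattern are invented for illustration:

```python
# Toy comparison of fixed (peak-sized) capacity vs. elastic pay-per-use.
# All prices and loads are assumed, purely for illustration.

HOURS_PER_MONTH = 730

def overprovisioned_cost(peak_servers: int, price_per_server_hour: float) -> float:
    """Fixed capacity sized for peak demand, paid for around the clock."""
    return peak_servers * price_per_server_hour * HOURS_PER_MONTH

def elastic_cost(hourly_demand: list[int], price_per_server_hour: float) -> float:
    """Pay only for the servers actually running each hour."""
    return sum(hourly_demand) * price_per_server_hour

# Assume a workload needing 20 servers during 4 peak hours a day
# and only 4 servers the remaining 20 hours.
demand = ([20] * 4 + [4] * 20) * 30  # one month of hourly samples
price = 0.12                         # assumed $/server-hour

fixed = overprovisioned_cost(20, price)
elastic = elastic_cost(demand, price)
print(f"fixed:   ${fixed:,.2f}")
print(f"elastic: ${elastic:,.2f}")
```

With these assumed numbers the elastic bill is roughly a third of the peak-provisioned one, which is the "avoid paying for idle capacity" argument in miniature.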

Operational Characteristics. Cloud computing favors applications that can be broken up into small independent parts. Cloud systems generally depend on networking, and hence any limitations on networking, such as data import/export bottlenecks or service disruptions, reduce cloud utility, especially for applications that are not tolerant of disruptions.

Service Level Agreements (SLAs). Organizations should understand the terms of the SLA, their responsibilities, and those of the service provider, before using a cloud service.

Security. Organizations should be aware of the security issues that exist in cloud computing and of applicable NIST publications such as NIST Special Publication (SP) 800-53. As complex networked systems, clouds are affected by traditional computer and network security issues such as the needs to provide data confidentiality, data integrity, and system availability. By imposing uniform management practices, clouds may be able to improve on some security update and response issues. Clouds, however, also have the potential to aggregate an unprecedented quantity and variety of customer data in cloud data centers. This potential vulnerability requires a high degree of confidence and transparency that cloud providers can keep customer data isolated and protected. Also, cloud users and administrators rely heavily on Web browsers, so browser security failures can lead to cloud security breaches. The privacy and security of cloud computing depend primarily on whether the cloud service provider has implemented robust security controls and the sound privacy policy its customers desire, on the visibility that customers have into the provider’s performance, and on how well the service is managed.

Inherently, the move to cloud computing is a business decision in which the business case should consider the relevant factors, some of which include readiness of existing applications for cloud deployment, transition costs and life-cycle costs, maturity of service orientation in existing infrastructure, and other factors including security and privacy requirements.

Recent Forrester inquiries from enterprise infrastructure and operations (I&O) professionals show that there's still significant confusion between infrastructure-as-a-service (IaaS) private clouds and server virtualization environments. As a result, there are a lot of misperceptions about what it takes to get your private cloud investments right and drive adoption by your developers. The answers may surprise you; they may even be the opposite of what you're thinking.

From speaking with Forrester clients who have deployed successful private clouds, we've found that your cloud should be smaller than you think, priced more cheaply than the ROI math would justify, and actively marketed internally: no, private clouds are not a Field of Dreams. Our latest report, "Q&A: How to Get Private Cloud Right," details this unconventional thinking, and you may find that internal clouds are much easier than you think.

First and foremost, if you think the way you operate your server virtualization environment today is good enough to call a cloud, you are probably lying to yourself. Per the Forrester definition of cloud computing, your internal cloud must be:

Highly standardized - meaning that the key operational procedures of your internal IaaS environment (provisioning, placement, patching, migration, parking and destroying) should all be documented and conducted the same way every time.

Highly automated - to make sure the above standardized procedures are performed the same way every time, you need to take these tasks out of error-prone human hands and turn them over to automation software.

Self-service to developers - We've found that many I&O pros are very much against this concept for fear that it will lead to chaos in the data center. But the reality is just the opposite, thanks to the standardization and automation above. When you standardize what can be deployed into the cloud and how, you eliminate the risk of chaos.

Shared and metered - for your internal cloud to be cost-effective and have a strong ROI, you need it to be highly utilized - much more so than your traditional virtualization environment. The way to get there is to share a single cloud among all departments inside your company, and the way to cost-justify the cloud is to at least track everyone's consumption, if not charge back for it.
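The "shared and metered" requirement lends itself to a small sketch; the departments, usage figures, and internal rate below are invented purely for illustration:

```python
from collections import defaultdict

# Toy metering/chargeback model: record each department's VM-hours in a
# shared internal cloud and bill them at a flat (assumed) internal rate.

RATE = 0.10  # assumed internal $/VM-hour

class Meter:
    def __init__(self) -> None:
        self.usage: dict[str, float] = defaultdict(float)  # dept -> VM-hours

    def record(self, department: str, vm_hours: float) -> None:
        """Accumulate metered consumption for one department."""
        self.usage[department] += vm_hours

    def chargeback(self) -> dict[str, float]:
        """Monthly bill per department, proportional to metered use."""
        return {dept: hours * RATE for dept, hours in self.usage.items()}

m = Meter()
m.record("marketing", 1200)
m.record("engineering", 8400)
m.record("marketing", 300)
bills = m.chargeback()
print(bills)
```

Even if no money actually changes hands, this kind of per-department tally is what makes the shared cloud's utilization, and therefore its ROI, defensible.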

Forrester ForrSights surveys show that 29 percent of I&O shops have put a high or critical priority on building a private cloud this year. You can successfully deploy and operate a private cloud, whether you start with a cloud solution or build one yourself; ignoring these truths about IaaS environments, however, will keep success at bay.

A small book of deep insight, Visible Ops Private Cloud tackles the why and how of moving enterprise IT from virtualization to private cloud. Private cloud—better tailored, less costly, more secure than public cloud—means automation, dynamic workload management, and a service approach to IT consumption.

Throughout the book, this concept of service takes center stage. To gain the benefits of a private cloud, the authors argue, enterprise IT must offer to the business a service catalog of standardized offerings—bundles of compute power, memory, networking, and service—tailored and continuously adapted to align with business needs. From this catalog, users shall make one-touch orders, orders that through rules and automation lead to the dynamic provisioning of IT services, moving the business rapidly from idea to reality.

The private cloud in Visible Ops Private Cloud goes beyond technology; it represents a vision of extraordinary IT efficiency and value creation. Enterprise IT achieves all this through the discipline of process, the rigorous formulation, documentation, adaptation, and adherence to policies and procedures around service standardization and delivery.

The four practical steps promised by the book’s subtitle depict a journey to ever greater process discipline and efficiency. Authors Andi Mann, Kurt Milne, and Jeanne Morain effectively distill lessons learned from leading organizations, making the book remarkably concrete and comprehensive for its size.

However, as I consider the importance of the service catalog, I wonder whether the private cloud as described in Visible Ops Private Cloud produces the kind of services that business users should care about. As a step beyond virtualization, this private cloud produces servers—virtual, dynamic, and effectively managed servers. But users do not so much care about servers, nor even about IT workload management. Rather, they want solutions to their problems. An effective IT department understands the business well enough to analyze the problem, see its connection to business processes, and collaborate with users in the design of creative solutions.

So while I appreciate the efficiency gains that a private cloud might provide for the management of IT infrastructure, I’d be concerned that basing a service catalog around this infrastructure would mean speaking in a language that only IT understands. The private cloud, if it is to have business meaning, must be framed as a solution to specific business problems, rather than just efficient IT management.

That being said, this book’s focus on process and IT efficiency offers tremendous food for thought. Take this book as a guide to action, though keep in mind that the subtitle reads four practical steps, not four easy steps.

Strangely, James didn’t include a link to the book; here’s the Amazon page.

Hybrid clouds are achieving almost universal buy-in as the way enterprises use the cloud. As we’ve described previously, the hybrid model federates internal and external resources so customers can choose the most appropriate match for workload requirements. The approach is already transforming enterprise computing, enabling a new generation of dynamic applications and deployments, such as:

Using multiple clouds for different applications to match business needs

Moving an application to meet requirements at specific stages in its lifecycle, from early development through UAT, scale testing, pre-production and ultimately full production scenarios

Moving workloads closer to end users across geographic locations, including user groups within the enterprise, partners and external customers

Meeting peak demands efficiently in the cloud while the low steady-state is handled internally

While everybody’s talking about the hybrid cloud, making it work is another story. Enterprise deployment can require extensive reconfiguring to adapt a customer’s internal environment to a given cloud. The result, when it’s finally running, is a hybrid deployment limited to the customer’s internal infrastructure and one particular cloud, for one particular application.

Most cloud architectures are built with layer-3 (routing) topologies, where each cloud is a separate network with its own addressing scheme and set of attributes. This means that all address settings for applications deployed to the cloud have to be changed to those assigned by the cloud provider. It also means that applications and services running internally that need to interact with the cloud have to be updated to match the cloud provider’s requirements. The result is lots of re-configuring and re-architecting so the organization’s core network can communicate with the new external resources – exactly the opposite of the agile environment that cloud computing promises to deliver.

In our discussions with enterprise customers and technology leaders, we’re now seeing a broad recognition that cloud federation requires layer-2 (bridging) connectivity. We’ve always believed that layer-2 is the right way to enable cloud federation. This week’s announcement of Cloud Bridge by Citrix is a confirmation that tight network integration is critical for successful cloud deployments. Although it’s great to see others now starting down the path of better cloud networking, it is critical that enterprises realize that this level of network integration also requires heightened security for cloud deployments – remember that you are now blending the cloud networks with your internal networks. This is why CloudSwitch has developed a comprehensive solution that not only provides full network control independent of what networking gear the cloud provider has chosen, but also secures and isolates customers’ data and communications completely through our Cloud Isolation Technology™.

In contrast to layer-3, layer-2 networking is location-independent, allowing the network in the cloud to become a direct extension of the network in the data center. It does this by preserving IP and MAC addresses so that all servers have the same addresses and routing protocols, wherever they physically run. Users can select where they want to run their applications, locally or in the cloud, without the need to reconfigure their settings for different environments.

Don’t Change Anything

CloudSwitch is unique in providing layer-2 connectivity between the data center and the cloud, with innovation that resolves previous addressing and security challenges. Our Cloud Isolation Technology automatically creates a layer-2 overlay network that encrypts and encapsulates the network traffic in the cloud as a seamless extension of the internal environment. The customer has full control over the cloud network and server addressing, even in clouds that don’t natively support this capability. No configuration changes are required. You don’t have to update router or firewall settings for every subnet or cloud deployment. You don’t have to change address settings, or keep up with changes in the cloud providers’ networks – everything “just works.”
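To illustrate the general idea of a layer-2 overlay (a simplified, VXLAN-like sketch, not CloudSwitch's actual Cloud Isolation Technology or wire format), the inner Ethernet frame is carried untouched inside an outer header, so its MAC and IP addresses survive the trip unchanged:

```python
import struct

# Toy layer-2-in-layer-3 encapsulation. The 8-byte header carries a 24-bit
# virtual network identifier (VNI); the layout is simplified for illustration.

def encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Prefix the untouched layer-2 frame with an overlay header."""
    header = struct.pack("!II", 0x08000000, vni << 8)  # flags word + VNI
    return header + inner_frame

def decapsulate(packet: bytes) -> tuple[int, bytes]:
    """Strip the overlay header; the inner frame comes back byte-for-byte."""
    _, vni_field = struct.unpack("!II", packet[:8])
    return vni_field >> 8, packet[8:]

# An abbreviated Ethernet frame: dst MAC, src MAC, EtherType, payload.
frame = (bytes.fromhex("aabbccddeeff")
         + bytes.fromhex("112233445566")
         + b"\x08\x00" + b"payload")

wire = encapsulate(frame, vni=5001)
vni, recovered = decapsulate(wire)
assert vni == 5001 and recovered == frame  # addresses survive unchanged
```

Because the inner frame is opaque to the transport underneath, the servers on either end keep their original addressing, which is exactly the location-independence property the article attributes to layer-2 federation.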

While layer-2 connectivity is essential for full integration of the hybrid model, some companies and applications will still want to use layer-3 routing for their cloud deployments. Some practical applications for layer-3 connectivity include:

Cloud-only networks – providing access to the tiers of an application running in a cloud-only network

Remote access to cloud resources – VPN services for remote developers or users, branch office integration with the cloud resources where different network settings are required

Protected networks – for cases where the enterprise wants to centrally control who can access a specific network (utilizing their core switches and routers)

Keep in mind, though, that most of these layer-3 deployments have use for layer-2 connectivity in the background as well. For the cloud-only networks, other network tiers in the same deployment can benefit from a layer-2 connection back to the primary data center for application and database tiers. For remote access deployments, management, operation, and maintenance of the cloud resources are greatly simplified by having a layer-2 connection to the data center in addition to the layer-3 access for remote users.

The CloudSwitch recommendation, and the way we’ve architected our product, is to offer layer-2, with support for layer-3 as an option. Our customers can choose to interact with their servers in the cloud using an automated layer-2 connection, or use layer-3 to create specific rules and routing to match their application and even infrastructure design. We believe that enterprises should have the freedom to create arbitrary networks and blend layer-2 and layer-3 deployments as they need, independent of the networking gear and topologies selected both by the cloud and their own IT departments.

Making Federation Work

For hybrid computing to succeed, the cloud needs to appear like a resource on the customer network, and an application running in the cloud needs to behave as if it’s running in the data center. The ability to federate these disparate environments by mapping the data center configuration to the cloud can only happen at layer-2 in the networking stack. With innovations that make the cloud a seamless, secure extension of the internal environment, CloudSwitch helps customers turn the hype around hybrid cloud into reality.

Join us on June 4 for Code for Oakland, the first ever Oakland hackathon/bar camp dedicated to building applications that meet the needs of our local community.

This event is being organized by Oakland Local, Urban Strategies Council, Innovative Oakland, Code for America and a slew of others.

Not a software developer but have great ideas? You can help!

What’s going to happen?

The Knight Foundation and the FCC came to Oakland, CA in April 2011 to announce a major new tech competition called Apps for Communities that will award $100,000 in prizes to reward mobile and web-based applications that use government and public data to "deliver personalized, actionable information to people that are least likely to be online."

So, that’s where Code for Oakland comes in. We’re pulling together a low-cost one-day Bar Camp that will bring government officials, developers, designers, and interested parties together for a day that will be devoted to looking at local datasets of use to people in Oakland, and that gives teams a chance to talk through, brainstorm and prototype their ideas before the competition closes on July 11.

Come win prize money for your ideas! Over $2,500 in potential awards will be available on the day to support building an application to submit to Apps for Communities.

If you are a coder, designer, developer, database guru, or hacker, then read about the bar camp event below...

If you are not a coder but represent local government, the nonprofit community or you are an Oakland resident and have ideas that you would love to see built into powerful applications to help your community, then we invite you to join our

Susan is the founder of Oakland Local. She is also a circuit rider for the Community Information Challenge, a program of the John S. and James L. Knight Foundation, and a consultant to non-profit and community organizations.

Event ID: 1032487803

Language(s): English.

Product(s): Windows Azure.

Audience(s): Pro Dev/Programmer.

In this webcast, we explore how you can use local weather data to create compelling applications and revolutionize business intelligence. We focus on Weather Central's differentiating science, the ease of data integration offered through Windows Azure DataMarket, and several use case applications. Weather impacts every aspect of how we live, work, and play and will continue to be the focus of many next-generation technologies. Streamlined access to local weather information will empower product development ranging from consumer experiences like www.myweather.com to complex machine learning algorithms.
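As background on how DataMarket-style OData feeds are typically consumed, here is a sketch of building a query URL and parsing the JSON envelope that OData services return; the service URL, entity set, and field names are hypothetical placeholders, not Weather Central's actual schema:

```python
import json
from urllib.parse import urlencode

# Sketch of consuming an OData feed of the kind DataMarket exposes.
# BASE, the entity set, and the field names are invented for illustration.
BASE = "https://api.datamarket.example/WeatherCentral/Observations"

def build_query(zip_code: str, top: int = 24) -> str:
    """Build an OData query URL using standard $filter/$top/$format options."""
    params = {
        "$filter": f"ZipCode eq '{zip_code}'",
        "$top": str(top),
        "$format": "json",
    }
    return f"{BASE}?{urlencode(params)}"

# A canned response in the OData JSON shape: {"d": {"results": [...]}}.
sample = '{"d": {"results": [{"ZipCode": "94107", "TempF": 61.0}]}}'
rows = json.loads(sample)["d"]["results"]

print(build_query("94107"))
print(rows[0]["TempF"])
```

The point of the sketch is simply that an OData feed is queried with uniform URL conventions and returns a predictable envelope, which is what makes "streamlined access" to a third-party dataset feasible from any HTTP-capable client.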

Ben Zimmerman is the director of Business Development for Weather Central and is an expert in weather technology. His primary focus is developing and implementing strategic solutions that enable companies to improve operations and increase profit with the integration of next-generation weather data and services. Ben's areas of expertise include renewable energy, public safety, telematics, GIS, consumer applications, and corporate operations. He holds a bachelor of science degree in Atmospheric and Oceanic Sciences from Iowa State University and has extensive experience with climate reanalysis, forecast modeling, radar interpretation, remote sensing, and data integration.

The Second Annual Ingram Micro Cloud Summit is set to run June 1-2 in Phoenix. So what’s on tap for the summit — and what cloud-related surprises should VARs and managed services providers expect? Here are eight trends and themes that Talkin’ Cloud expects to emerge at the conference.

1. Managed Services Meets Cloud Computing: There’s a reason why Ingram Micro VP of Managed Services and Cloud Computing Renee Bergeron has such a distinct business title. She joined Ingram Micro in September 2010. And since that time, the lines between Ingram Micro Seismic (a managed services push) and Ingram Micro Cloud have blurred. My best guess: The line will ultimately fade away…

Like we said: Distributors are making lots of cloud noise. We’ll see if Ingram can continue to stand out from the crowd.

3. MSP Software: Ingram has a longstanding RMM (remote monitoring and management) software relationship with Nimsoft. To the best of my knowledge, Ingram also resells Level Platforms though it no longer hosts that platform for MSPs. Ingram also has a SaaS relationship with Kaseya in Australia. My best guess: Ingram will make at least two moves this week with MSP software providers in North America…

4. The Countdown: I suspect Microsoft will release Office 365 to the masses within the next few weeks — before the Microsoft Worldwide Partner Conference (WPC) begins July 10 in Los Angeles. The SaaS suite — including everything from Exchange Online to SharePoint Online — will start at $6 per user per month. For VARs and MSPs, it’s time to get educated. And there will be plenty of Office 365 talk at Ingram Micro Cloud Summit.

5. A Cloud Bill of Rights: Some VARs and MSPs are calling on the industry to create a cloud bill of rights for the channel. Among the items mentioned to me, channel partners should have the right to…

What else? I’ll be asking Ingram Micro Cloud Summit attendees for their thoughts.

6. The Business Model: Much like the managed services market before it, VARs and MSPs are asking how to set up pricing, sales force compensation and service level agreements for cloud solutions. I suspect IPED veteran Ryan Morris, now running Morris Management Partners, will be on hand to share some guidance.

7. Mergers and Acquisitions: Some Ingram cloud partners have been acquired. For instance, Oak Hill Capital Partners last week acquired Intermedia, the hosted Exchange specialist. Intermedia will be on hand at Ingram Micro Cloud Summit and the company remains committed to its channel partners. As I tour the conference I will certainly wonder: Who’s next on the M&A front?

8. Get Moving: The bottom line… There’s lots of cloud computing noise but the market is real. Our own Talkin’ Cloud 50 — which tracks the top VARs and MSPs navigating cloud computing — shows cloud computing revenues growing nearly 50 percent in 2010 vs. 2009. (The complete Talkin’ Cloud 50 report will debut on this site within days.) Savvy VARs and MSPs are already making their cloud bets. And we’ll get an update on those bets during the Ingram Micro Cloud Summit.

That’s all for now. I land soon in Phoenix for the summit. And Talkin’ Cloud will be blogging live throughout the conference.

Hi all. If you're in the Bay Area and want to get up to speed on Windows Azure -- whether you want to learn how to use it or whether you want to validate your own approach! -- MVP Scott Klein of Blue Syntax Software (blog) (author and co-author of many books, including Pro SQL Azure) is offering a free two-day hands-on training course covering all of Windows Azure in downtown San Francisco on June 13-14 (registration and information here).

I'll also be presenting and discussing the forthcoming release of the Windows Azure AppFabric June CTP, including the AppFabric Development Platform, and will show you how to get your distributed cloud-based applications up and running quickly. In addition, my colleague Brian Swan will also be there to discuss using PHP, OData, and Java in Windows Azure.

Scott has tons of experience to help you understand Azure, its services, and get you started building applications over two days -- few could be better to learn from. I am really looking forward to it. If you're in the area and interested, please come.

Heroku, the platform-as-a-service provider that Salesforce.com acquired last year, has added Node.js to its existing Ruby offering as part of its new public beta called Celadon Cedar. Other new features include consolidated logging, real-time dynotype monitoring and instant roll-backs.

The first question asked at the press and analyst Q&A session at Dreamforce on Salesforce.com's acquisition of Heroku was how long it would be until Heroku/Salesforce.com had a Node.js PaaS. We now have our answer.

Heroku has been a popular choice among Ruby developers and its name has become practically synonymous with PaaS, but it faces increased competition in the PaaS marketplace as companies like Red Hat and VMware bring Ruby-capable PaaSes online.

In short, Heroku is in a strong position and the new beta brings several important new features, but the company has its work cut out for it.

• Herman Mehling asserted “New integration platform as a service (iPaaS) offerings aim to relieve the pain of SaaS and cloud integration, which has been so onerous that many organizations have pulled the plug on SaaS projects” as a deck for his Can Nascent iPaaS Solve Cloud and SaaS Integration Problems? article of 5/31/2011 for DevX:

New integration platform as a service (iPaaS) offerings aim to relieve the pain of SaaS and cloud integration, which has been so onerous that many organizations have pulled the plug on SaaS projects.

The number one cloud and SaaS challenge for many developers and organizations might just surprise you. It's not security, lack of standards, or even reliability, but... dramatic drum roll... integration.

Recently, Gartner did a study of companies transitioning to SaaS. The study found that many businesses were actually pulling their data back out of cloud-based applications, so Gartner asked why.

The research firm asked 270 executives "Why is your organization currently transitioning from a SaaS solution to an on-premises solution?" For 56 percent of respondents, the number one reason was the unexpectedly significant requirements of integration.

More than half of the people who tried moving their businesses to a cloud-based application and pulled back did so because integrating those applications with the rest of their business proved too challenging to be worthwhile.

Based on this apparent pain point, Gartner has predicted that at least 35 percent of all large and midsize organizations worldwide will be using one or more integration platform as a service (iPaaS) offerings by 2015.

Established software vendors face a difficult balancing act between meeting customer demands for pay-per-usage cloud pricing models and guarding against revenue erosion on traditionally priced offerings. If Amazon’s price for Oracle Database on RDS becomes the norm for price discrimination between traditional and pay-per-usage licenses, IT buyers could find themselves paying a premium of more than 100 percent for the flexibility of pay-per-usage pricing.

Note, I am only using Oracle as an example here because the pricing of Amazon RDS for Oracle Database is public. This post intends to make no judgments on Amazon or Oracle’s price points whatsoever.

Pay-per-use software pricing limited to entry-level product

Amazon RDS for Oracle Database offers two price models, “License Included” or “Bring Your Own License (BYOL)”. The License Included metric is fancy terminology for pay-per-usage, and includes the cost of the software, including Oracle Database, underlying hardware resources and Amazon RDS management.

Three editions of Oracle Database are offered by Amazon, Standard Edition One (SE1), Standard Edition (SE) and Enterprise Edition (EE), listed in order of lowest to highest functionality.

It’s important to note that pay-per-use pricing is only offered on the lowest function edition, namely, Oracle Database SE1. This should not be a surprise as Oracle, like other established vendors, is still experimenting with pay-per-usage pricing models. Customers can also run Standard Edition One using a BYOL model. This fact, along with Oracle’s list pricing, helps us do some quick and interesting calculations.

Oracle Database SE1 software price-per-hour ranges between $0.05 and $0.80

The License Included and BYOL prices both include the cost of the underlying hardware resources, OS and Amazon RDS management. The only difference between the two options is the price of the Oracle Database software license.

This allows us to calculate the per hour cost of Oracle Database Standard Edition One as follows:

The Oracle list price for Oracle Database SE1 is $5,800, plus 22 percent, or $1,276 per year, for software update, support and maintenance. As with most enterprise software, customers could expect a discount between 25 and 85 percent. For lower-priced software like Oracle Database SE1, let’s assume a 50 percent discount, although most customers buying Oracle software are encouraged to enter into Unlimited License Agreements (ULAs), which frequently offer discounts at the higher end of that spectrum.

All told, after a 50 percent discount, Oracle Database SE1 would cost a customer $3,538 (($5,800 + $1,276) x 50%) for 1 year or $4,814 (($5,800 + $1,276 + $1,276 + $1,276) x 50%) for 3 years on a single-socket quad-core machine like this low-end Dell server. Note that Oracle doesn’t use its typical processor core factor pricing methodology for products identified as Standard Edition or Standard Edition One, as they are targeted at lower-performance servers.

A single-socket quad-core machine would offer performance somewhere between the Amazon “Double Extra Large DB Instance” and the “Quadruple Extra Large DB Instance”.

Consider the long-term costs of pay-per-usage

Using “Double Extra Large DB Instance” pricing, with our calculated cost of $0.40/hr for an Oracle Database SE1 software license on Amazon, we can calculate a 1 year cost of $3,504 and a 3 year cost of $10,512. These figures represent a 1 percent lower and a 118 percent higher cost, respectively, of using Amazon’s pay-per-usage offering versus licensing Oracle Database SE1 through Oracle for on-premises deployment or BYOL deployment on Amazon RDS.

There are obviously multiple caveats to consider, like the ability to get lower or higher discounts from Oracle, or comparing with the “Quadruple Extra Large DB Instance” price point.

A customer that is unable to get a 50 percent discount from Oracle could save licensing costs by using Amazon’s pay-per-usage offering for Oracle Database SE1. For instance, with only a 25 percent discount from Oracle, the customer could save up to 34 percent on a 1 year basis, but stands to pay an extra 46 percent on a 3 year basis.

Comparing the cost of Oracle Database SE1 under traditional on-premises licensing with Amazon’s pricing through RDS, it appears that customers should look hard at Amazon’s pay-per-usage offering for terms of up to 1 year, but stick with Oracle’s traditional pricing model if the software is going to be used for the typical 3 to 5 year period over which companies like to amortize costs.

The obvious rebuttal to the above calculations would be that a customer electing for a pay-per-usage model would not necessarily run for 24 hours a day for a full year. While this is true, buyers should understand the long term cost implications before making short term decisions.
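The arithmetic above can be sketched in a few lines. All figures come from the discussion: the $5,800 SE1 list price, 22 percent annual support, and the implied $0.40/hr software premium (License Included minus BYOL on a Double Extra Large instance), assuming 24x7 operation; the function names are mine.

```python
LIST_PRICE = 5800.00          # Oracle Database SE1 license, list price
SUPPORT = LIST_PRICE * 0.22   # annual update/support/maintenance (~$1,276)
RDS_SW_PER_HOUR = 0.40        # implied SE1 software premium on RDS, $/hr
HOURS_PER_YEAR = 24 * 365     # 8,760 hours, assuming round-the-clock use

def oracle_cost(years, discount):
    """Discounted cost of a perpetual license plus `years` of support."""
    return (LIST_PRICE + SUPPORT * years) * (1 - discount)

def rds_cost(years):
    """Pay-per-usage software cost at 24x7 utilization."""
    return RDS_SW_PER_HOUR * HOURS_PER_YEAR * years

# At a 50% Oracle discount: roughly $3,538 vs. $3,504 over 1 year
# (RDS about 1% cheaper), but $4,814 vs. $10,512 over 3 years
# (RDS about 118% more expensive).
for years in (1, 3):
    print(years, round(oracle_cost(years, 0.50)), round(rds_cost(years)))
```

Lowering the utilization assumption (fewer hours per year) shifts the break-even point in Amazon's favor, which is exactly the rebuttal noted above.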

Pop over to iCloud.com today and you’ll see a doomed web page. The domain, which redirects to Xcerion’s CloudMe software, is sitting on some prime real estate, namely Apple’s new iCloud service.

In a short release, Apple confirmed the existence and name:

Apple® CEO Steve Jobs and a team of Apple executives will kick off the company’s annual Worldwide Developers Conference (WWDC) with a keynote address on Monday, June 6 at 10:00 a.m. At the keynote, Apple will unveil its next generation software – Lion, the eighth major release of Mac OS® X; iOS 5, the next version of Apple’s advanced mobile operating system which powers iPad®, iPhone® and iPod touch®; and iCloud®, Apple’s upcoming cloud services offering.

We’ve been hearing about the potential cloud services for months now and it seems the stars have finally aligned. The MobileMe service recently received some considerable upgrades to improve performance and stability and there has been oodles of talk about a potential music service in the cloud similar to Rdio or Spotify. That we now know it’s called iCloud, officially, is just icing on the cake.

What will iCloud include? It will probably be a considerable revamp of the Me.com services including calendar and email syncing. As TUAW notes, many parts of MobileMe will probably be available for free leaving us to wonder what the rest of the service will include.

We’ve also discovered that Apple is signing partners to offer what amounts to a mirrored version of your iTunes database, a service that will be considerably improved over current “locker” models used by Amazon and Google. However, there are currently plenty of those cloud-based sharing services on offer, which suggests Apple may have a trick or two up its sleeve.

This would probably also replace the nearly useless iDisk offering currently available with MobileMe. With competitors like Dropbox, the old ways just won’t cut it.

We’ll be there live on Monday June 6 but until then get out your prophesying hats and start prophesying in comments!

Citrix will soon launch “Project Olympus,” a platform that aims to help businesses build an infrastructure for private cloud computing behind their own firewall, even while running workloads with a service provider on a public cloud. The announcement was made at Citrix Synergy 2011, held last week in San Francisco, and is just one of several from Citrix, a company known for its virtual desktop infrastructure (VDI).

Citrix also made enhancements to its existing portfolio to bring VDI to even small-to-medium enterprises, and it hopes to run VDI on various portable devices such as wireless laptops and netbooks, mobile phones and tablet PCs.

Project Olympus grew out of the recent OpenStack conference held in Santa Clara, California. It is based on OpenStack, a common open source platform for cloud computing built by a collaboration of vendors and service providers in which Citrix is a participant.

Project Olympus will operate together with Citrix XenServer and will also support other virtualization platforms such as Microsoft’s Hyper-V and VMware’s vSphere. These platforms, combined with Citrix XenDesktop, will tie everything together to create a strong virtual platform.

Sameer Dholakia, Vice President of Product Marketing for Data Center and Cloud Computing at Citrix, told reporters at the conference, “We’re serious about open, about giving people choice and leveraging the investments they have already made and so they don’t get locked into the legacy server virtualization.”

Just before the start of the conference, Citrix CEO Mark Templeton announced that the company had completed its acquisition of Kaviza, maker of “VDI-in-a-box,” which is expected to deliver cloud computing to desktops for small-to-medium enterprises. He said, “It’s simple and easy to install and yet has all the capabilities and user experience that Citrix is famous for with Xen Desktop,” and added, “complexity is optional.”

Citrix also announced a new upgrade to Citrix Receiver, its widely deployed software that delivers PC images, data and enterprise applications to end-user devices. Receiver now supports 1,000 Macs and other PCs, 149 mobile phones, 37 tablet PCs and 10 client desktops. With this announcement Citrix focused on the consumerization-of-IT trend, in which employees use their personal devices for work, so IT departments need to find ways to make that work.

Citrix has also improved GoToMeeting, adding collaboration features that can be used during a meeting. A new GoToMeeting beta will include HD Faces, high-definition audio-video conferencing delivered via XenDesktop.

Paul Burrin, a Citrix vice president, said during the reporters’ briefing, “We want telepresence, we think it’s a fantastic capability but we can’t afford these high-end systems.” He also mentioned that Citrix hopes to target SMBs with this software.

Last of the announcements was a preview of Citrix HDX technology for improved performance in media-rich VDI environments, including 3-D graphics and better audio-video multitasking in XenDesktop and XenApp.

The dual Web role application has been running in Microsoft's South Central US (San Antonio) data center since September 2009. I believe it is the oldest continuously running Windows Azure application.

About Me

I'm a Windows Azure Insider, a retired Windows Azure MVP, the principal developer for OakLeaf Systems and the author of 30+ books on Microsoft software. The books have more than 1.25 million English copies in print and have been translated into 20+ languages.

Full disclosure: I make part of my livelihood by writing about Microsoft products in books and for magazines. I regularly receive free evaluation software from Microsoft and press credentials for Microsoft Tech•Ed and PDC. I'm also a member of the Microsoft Partner Network.