When reading Transact-SQL documentation, I usually skip the Backus–Naur Form (BNF) grammar at the top of the documentation and go directly to the samples. So, to add to Cihan Biyikoglu's blog post about the new SQL Azure database sizes available June 28, 2010, I want to show some samples of the new CREATE DATABASE syntax.

You can still create a database without any parameters; this will generate the smallest database of the web edition:

CREATE DATABASE Test

This is the same as declaring:

CREATE DATABASE Test (EDITION='WEB', MAXSIZE=1GB)

The database created can hold up to 1 gigabyte of data; attempts to add more data beyond that will return error 40544. See Cihan's blog post for more details.

You can also create a web edition database with a larger maximum size of 5 gigabytes like this:

CREATE DATABASE Test (EDITION='WEB', MAXSIZE=5GB)

Business edition databases start with a maximum size of 10 gigabytes:

CREATE DATABASE Test (EDITION='BUSINESS')

However, they can be increased to 50 gigabytes using the maximum size parameter:

CREATE DATABASE Test (EDITION='BUSINESS', MAXSIZE=50GB)

The valid MAXSIZE settings for WEB edition are 1 and 5 GB. The valid options for BUSINESS edition are 10, 20, 30, 40, and 50 GB. …

Alter Database

Most of the time you will know the database size you need before you deploy to SQL Azure. However, if you are in a growth scenario, you can start with a web edition database and change it to business edition as it grows, which will save you some money. To change the database edition, you can use the ALTER DATABASE syntax like this:
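A sketch of the MODIFY clause (the edition and size values here are just an illustration of the upgrade scenario described above):

```sql
-- Upgrade a web edition database to business edition with a 50 GB cap
ALTER DATABASE Test
MODIFY (EDITION='BUSINESS', MAXSIZE=50GB)
```

The same clause works in the other direction if you ever need to drop back down to a smaller, cheaper size.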

Most browsers today will automatically enable their RSS/Atom reader option when you're on a page that has a feed in it. This is because the page has one or more <link> elements pointing to RSS/Atom endpoints, for example:
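A typical feed-discovery element looks something like this (the href values are placeholders):

```html
<link rel="alternate" type="application/rss+xml" title="RSS feed" href="/feeds/rss" />
<link rel="alternate" type="application/atom+xml" title="Atom feed" href="/feeds/atom" />
```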

It would be great if all OData clients could automatically discover the location of the data feed that has the data represented by the current web page, or more generally, by the document fetched through some arbitrary URL. We had this discussion with Scott some time ago and he rolled the results into the NerdDinner.com site, but I failed to post about it. So that it stays documented somewhere and others can follow it, here it goes.

Servers can advertise the OData endpoints that correspond to a resource using two mechanisms. Ideally servers would implement both, but sometimes limitations in the hosting environment and such may make it hard to support the header-based approach.

Using <link> elements

The first mechanism consists of adding one or more <link> elements to any HTML web page that shows data, where that data can also be accessed as an OData feed through some other URL. There are two kinds of links for OData: those that direct clients to the service root (usually where the service document lives) and those that point at the specific dataset that corresponds to the data displayed in the page (e.g. a specific collection, including whatever filters and sort order are used for display). Taking the example from the home page of NerdDinner.com:
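The NerdDinner markup would look something like this (the service paths are illustrative; check the live site for the exact URLs):

```html
<link rel="odata.service" title="NerdDinner OData service"
      href="http://www.nerddinner.com/Services/OData.svc" />
<link rel="odata.feed" title="NerdDinner dinners feed"
      href="http://www.nerddinner.com/Services/OData.svc/Dinners" />
```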

In this example the relation "odata.service" indicates that the href points to the root of a service, and the relation "odata.feed" specifies the URL of the specific dataset for the page (dinners in this case). Note that it's perfectly fine to have multiple of each, although I would expect the common thing to do would be to have one "odata.service" and zero or more "odata.feed" links, depending on the specific page you're looking at.

If your web page has the typical syndication icon to indicate that it has feeds in it, and you'd like to indicate visually that it also has links to OData feeds you can use the OData icon like NerdDinner.com does (next to the syndication icon on the right in the screenshot):

Using links in response headers

There is a proposal currently in-flight to standardize how servers can send links to related resources of a given resource using response headers instead of including them as part of the content of the response body. This is great because it means you don't have to know how to parse the content-type of the body (e.g. HTML) in order to obtain a link. In our context, your OData client wouldn't need to do any HTML parsing to obtain the OData feed related to a web page or some other random resource obtained through a user-provided URL.

The "Web Linking" proposal is described in the current draft in the IETF web site:

As for the specifics for OData we would define two relations, "http://odata.org/link/service" and "http://odata.org/link/feed", corresponding to the "odata.service" and "odata.feed" relations used in the <link> element above. So for NerdDinner.com these would look like this in the response header section:
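The headers would look something like this (the URLs are illustrative; the bracket-and-rel syntax follows the Web Linking draft):

```http
Link: <http://www.nerddinner.com/Services/OData.svc>; rel="http://odata.org/link/service"
Link: <http://www.nerddinner.com/Services/OData.svc/Dinners>; rel="http://odata.org/link/feed"
```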

For all folks hosting OData services, please consider adding one or, if possible, both of these to advertise your services. For anybody writing a client, this is the best way to discover services given an arbitrary URL that may point to a web page or some other resource; you can still redirect nicely to the right location and things will "just work" for your users.

The true power of OData is that the programming model is the same for any feed. I spend a lot of time building and demoing my own feeds, usually building an OData service around Northwind or AdventureWorks. To realize the power of OData you also need to know that you can consume public feeds. Let's take a look at consuming the Microsoft TechEd Sessions OData Service. The TechEd service can be found here: http://odata.msteched.com/sessions.svc/

Being a RESTful service, it lets us drill down a little and investigate our data. I will do some URL querying and look at a list of all the speakers as well as their sessions. For example, I can drill down to see all speakers named "Forte":
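A query along these lines would do it (the entity set and property names here are my guesses based on common OData conventions; inspect the service's $metadata document for the real ones):

```
http://odata.msteched.com/sessions.svc/Speakers?$filter=SpeakerLastName eq 'Forte'
```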

This is the beauty of OData: we don't know how the service was built, and we don't care. All we care about is whether we can consume it easily. Let's do so with an ASP.NET application and the OData client library.

To get started, create a new ASP.NET application. In the application, right-click the References folder of the project in the Solution Explorer and select "Add Service Reference". Put in the public URL of the TechEd 2010 OData Service. This creates a proxy so you can code against the service locally and not know the difference.

Next, set a reference to System.Data.Services.Client. This enables the OData client library and LINQ on the ASP.NET client. Then drag a TextBox, a Button, and a GridView onto the ASP.NET page. We'll fill the GridView with the speaker data, filtered on the last-name field based on what was typed into the TextBox. We accomplish this with the following code in the button's Click handler.

//set a reference to ServiceReference1 and System.Data.Services.Client
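The Click handler itself might look something like the following sketch. The context type (ODataTENA09Entities) and the SpeakerLastName property are assumptions; the actual names come from whatever the generated service reference produced in your project.

```csharp
// Hypothetical names: ODataTENA09Entities and SpeakerLastName come from the
// generated proxy; substitute whatever "Add Service Reference" created for you.
protected void Button1_Click(object sender, EventArgs e)
{
    var context = new ServiceReference1.ODataTENA09Entities(
        new Uri("http://odata.msteched.com/sessions.svc/"));

    // LINQ against the proxy is translated into an OData $filter query
    var speakers = from s in context.Speakers
                   where s.SpeakerLastName.StartsWith(TextBox1.Text)
                   orderby s.SpeakerLastName
                   select s;

    GridView1.DataSource = speakers;
    GridView1.DataBind();
}
```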

I wanted to watch the TechEd 2010 videos, but the problem I had was going to the site manually to download files for offline viewing. I was also interested only in Dev sessions at level 300/400. Thanks to the OData feed for TechEd (http://odata.msteched.com/sessions.svc/), I could write three statements in LINQPad and have them all downloaded using wget:
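Something along these lines (a sketch only: the SessionType, SessionLevel, and SessionCode property names are guesses, and the video URL pattern is a placeholder, not the real download location):

```csharp
// LINQPad "C# Statements" sketch against a data service reference to
// http://odata.msteched.com/sessions.svc/ with a Sessions entity set.
var codes = Sessions
    .Where(s => s.SessionType.StartsWith("DEV")
             && (s.SessionLevel == "300" || s.SessionLevel == "400"))
    .Select(s => s.SessionCode)
    .ToList();

var commands = codes.Select(c =>
    string.Format("wget http://example.com/videos/{0}.wmv", c));

File.WriteAllLines(@"C:\temp\download.cmd", commands.ToArray());
```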

On a plane between Philadelphia and Oslo: I am flying there for NDC2010, where I have a couple of sessions (on WIF. Why do you ask? :-)). I've wanted to visit Norway for ages, and I can't tell you how grateful I am to the NDC guys for having me!

This is literally the 50th flight I am on since last August, the last fiscal year has been cR@Zy. Good crazy, but still crazy. As a result, I am astonishingly behind on my Programming WIF book and it’s now time to wrap the manuscript; I am writing every time I have a spare second, which means I have very little time for any “OOB” activity, including blogging. One example: yesterday I got a mail from Dinesh, a guy who attended the WIF workshop in Redmond, asking me about sliding sessions. That’s definitely worth a blog post, but see above re:time; hence I decided to share here on the blog the DRAFT of the section of the book in which I discuss sliding sessions. That’s yet to be reviewed, both for language and technical scrub, I expect that the final form will have much shorter sentences, less passive forms, consistent pronouns, and in general will be cleansed from all the other flaws of my unscripted style that Peter and the (awesome!) editorial team at MS Press mercilessly rubs my snout in (ok, this one is intentional exactly for making a point… say hi to Godel :-)). Also, the formatting (especially for the code and reader aids like notes) is a complete mess, but hopefully the content will be useful!

More about Sessions

I briefly touched the topic of sessions at the end of Chapter 3, where I showed you how you can keep the size of the session cookie independent of the size of its originating token by saving a reference to session state stored server-side. WIF's programming model goes well beyond that, allowing you complete control over how sessions are handled. Here I would like to explore with you two notable examples of that principle in action: sliding sessions and network load-balancer friendly sessions.

Sliding Sessions

By default, WIF will create SessionSecurityTokens whose validity is based on the validity of the incoming token. You can overrule that behavior without writing any code, by adding to the <microsoft.identityModel> element in the web.config something to the effect of the following:
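The relevant fragment would be along these lines (a sketch: the handler's full assembly-qualified type name is abbreviated here, and the two-minute lifetime matches the discussion that follows):

```xml
<microsoft.identityModel>
  <service>
    <securityTokenHandlers>
      <remove type="Microsoft.IdentityModel.Tokens.SessionSecurityTokenHandler, Microsoft.IdentityModel" />
      <add type="Microsoft.IdentityModel.Tokens.SessionSecurityTokenHandler, Microsoft.IdentityModel">
        <sessionTokenRequirement lifetime="00:02:00" />
      </add>
    </securityTokenHandlers>
  </service>
</microsoft.identityModel>
```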

Note: the lifetime property can only restrict the validity expressed by the token to begin with. In the snippet above I set the lifetime to 2 minutes, but if the incoming security token was valid for just 1 minute the session token will have 1 minute validity. If you want to increase the validity beyond what the initial token specified, you need to do so in code (by subclassing SessionSecurityTokenHandler or by handling SessionSecurityTokenReceived).

Now, let's say that you want to implement a more sophisticated behavior. For example, you want to keep the session alive indefinitely as long as the user is actively working with the pages; however, you want to terminate the session if you did not detect user activity in the last 2 minutes, regardless of whether the initial token would still be valid. This is a pretty common requirement for Web sites that display personally identifiable information (PII), handle banking operations, and the like. Those are cases in which you want to ensure that the user is in front of the machine and the pages are not abandoned at the mercy of anybody walking by.

In Chapter 3 I hinted at the scenario, suggesting that it could be solved by subclassing the SessionAuthenticationModule: that would be the right strategy if you expect to reuse this functionality over and over across multiple applications, given that it neatly packages the behavior in a class you can include in your codebase. In fact, SharePoint 2010 offers sliding sessions and implemented them precisely that way. If instead this is an improvement you need only occasionally, or you own just one application, you can obtain the same effect simply by handling the SessionSecurityTokenReceived event. Take a look at the following code.
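A sketch of that handler, reconstructed from the description that follows (member names such as ReissueToken mirror the text; the exact constructor overloads and property names varied across WIF drops, so check the API surface of your version):

```csharp
// global.asax of the RP application (sketch). Renews the session for another
// 2 minutes when the request arrives in the second half of the validity window.
void SessionAuthenticationModule_SessionSecurityTokenReceived(object sender,
    SessionSecurityTokenReceivedEventArgs e)
{
    DateTime now = DateTime.UtcNow;
    SessionSecurityToken token = e.SessionToken;
    double halfSpan = (token.ValidTo - token.ValidFrom).TotalSeconds / 2;

    // Only rewrite the cookie once we are past the midpoint of the window
    if (now < token.ValidTo && now > token.ValidFrom.AddSeconds(halfSpan))
    {
        e.SessionToken = new SessionSecurityToken(
            token.ClaimsPrincipal, token.Context, now, now.AddMinutes(2));
        e.ReissueToken = true; // persist the new settings into the cookie
    }
}
```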

As you certainly guessed, this is a fragment of the global.asax file of the RP application. SessionSecurityTokenReceived gets called as soon as the session cookie is deserialized (or resolved from the cache if we are in session mode). Here I verify whether we are within the second half of the validity window of the session token: if we are, I extend the validity for another 2 minutes, starting from now. The change takes place on the in-memory instance of the SessionSecurityToken; setting ReissueToken to true instructs the SessionAuthenticationModule to persist the new settings in the cookie once execution leaves SessionSecurityTokenReceived. Let's say the token is valid between 10:00am and 10:02am: if the current time falls within the second half of that window, say at 10:01:15, the code sets the new validity boundaries to go from 10:01:15 to 10:03:15 and saves those in the session cookie.

Note: Why renew the session only during the second half of the validity interval? Well, writing the cookie is not free; this is just a heuristic for reducing the number of times the session gets refreshed, but you can certainly choose to apply different strategies.

If the current time is outside the validity interval, this implementation of SessionSecurityTokenReceived has no effect; the SessionAuthenticationModule will take care of handling the expired session right after. Note that an expired session does not elicit any explicit sign-out process. If you recall the discussion about SSO and Single Sign-Out just a few pages earlier, you'll realize that if the STS session outlives the RP session, the user will silently re-obtain the authentication token and have the session renewed without even realizing anything happened. …

Vibro continues with a detailed “Sessions and Network Load Balancers” section. I wondered why he was so quiet lately.

Jeffrey Schwartz claims “The new Microsoft Active Directory Federation Services release promises to up the ante on cloud security” in his ADFS 2.0 Opens Doors to the Cloud post for the June 2010 issue of Redmond Magazine:

Microsoft Active Directory Federation Services (ADFS) 2.0, a key add-in to Windows Server 2008, was released in May. It promises to simplify secure authentication to multiple systems, as well as to the cloud-based Microsoft portfolio. In addition, the extended interoperability of ADFS 2.0 is expected to offer the same secure authentication now provided by other cloud providers, such as Amazon.com Inc., Google Inc. and Salesforce.com Inc.

ADFS 2.0, formerly known as "Geneva Server," is the long-awaited extension to Microsoft Active Directory that provides claims-based federated identity management. By adding ADFS 2.0 to an existing AD deployment, IT can allow individuals to log in once to a Windows Server, and then use their credentials to sign into any other identity-aware systems or applications.

Because ADFS 2.0 is already built into the Microsoft cloud-services portfolio -- namely Business Productivity Online Suite (BPOS) and Windows Azure -- applications built for Windows Server can be ported to those services while maintaining the same levels of authentication and federated identity management.

"The bottom line is we're streamlining how access should work and how things like single sign-on should work from on-premises to the cloud," says John "J.G." Chirapurath, senior director in the Microsoft Identity and Security Business Group.

While ADFS 2.0 won't necessarily address all of the security issues that surround the movement of traditional systems and data to the cloud, by all accounts it removes a key barrier -- especially for applications such as SharePoint, and certainly for the gamut of applications. Many enterprises have expressed reluctance to use cloud services, such as Windows Azure, because of security concerns and the lack of control over authentication.

"Security [issues], particularly identity and the management of those identities, are perhaps the single biggest blockers in achieving that nirvana of cloud computing," Chirapurath says. "Just like e-mail led to the explosive use of Active Directory, Active Directory Federation Services will do the same for the cloud."

Because ADFS 2.0 is already built into Windows Azure, organizations can use claims-based digital tokens, or identity selectors, that will work with both Windows Server 2008 and the cloud-based Microsoft services, enabling hybrid cloud networks. The aim is to let a user authenticate seamlessly into Windows Server or Windows Azure and share those credentials with applications that can accept a SAML 2.0-based token.

Windows 7 and Windows Vista have Windows CardSpace built in, which allows users to input their identifying information. Developers can also make their .NET applications identity-aware with Microsoft Windows Identity Foundation (WIF).

WIF provides the underlying framework of the Microsoft claims-based Identity Model. Implemented in the Windows Communication Foundation of the Microsoft .NET Framework, apps developed with WIF present authentication schema, such as identification attributes, roles, groups and policies, along with a means of managing those claims as tokens. Applications built by enterprise developers and ISVs based on WIF will also be able to accept these tokens.

Pass-through authentication in ADFS 2.0 is enabled by accepting tokens based on the Web Services Federation (WSFED), WS-Trust, and SAML standards. While Microsoft has long promoted WSFED, it only agreed to support the more widely adopted SAML spec 18 months ago. …

When I first started working with Windows Azure, this book was one of my first purchases. Not only does it cover the basics of working with Azure, such as:

Azure Roles

Table and blob storage

Queues

Moving from the Enterprise to the Cloud

Security and authentication

SQL Azure Services

but it also gives a very in-depth explanation of the inner workings of the Windows Azure platform, so you can actually understand what happens physically when you upload and publish your Azure solution.

There are plenty of nuts and bolts code segments and samples to show you how things are done and Wrox Press makes these samples available for download here. Additionally, chapters 12 and 13, covering SQL Azure, are actually in an online-only format and can be found at the same site.

The only issues I had with the book were some late-breaking technology changes that shipped in the final Windows Azure release and differed from the community technology preview (CTP) releases, so some examples didn't quite work the same. Overall, these issues were quite minor, and a little digging into the RTM samples showed me the new and proper way of doing things.

Overall, Cloud Computing with the Windows Azure Platform is a great addition to your programming library should you be leaning toward Windows Azure as a solution.

Final Note

Roger’s company, OakLeaf Systems, has a blog that posts a daily summary of Azure-related articles and announcements. I have no idea how long it takes someone to assemble each post, but I would imagine it takes quite a bit of time. I find these posts invaluable and greatly appreciate each summary.

Gunther Lenz, ISV Architect Evangelist with Microsoft, interviews Jim Zimmerman, CTO of Thuzi and author of the Windows Azure Toolkit for Facebook and the CloudPoll reference application. Learn what the toolkit has in store and check out the CloudPoll reference application (http://bit.ly/CloudPoll), built on the Windows Azure Toolkit for Facebook and free to use for any Facebook user.

The difference between fault isolation and fault tolerance is not necessarily intuitive. The differences, though subtle, are profound and have a substantial impact on data center architecture.

Fault tolerance is an attribute of systems and architectures that allows them to continue performing their tasks in the event of a component failure. Fault tolerance of servers, for example, is achieved through redundancy in power supplies, hard drives, and network cards. In an architecture, fault tolerance is also achieved through redundancy by deploying two of everything: two servers, two load balancers, two switches, two firewalls, two Internet connections. A fault-tolerant architecture includes no single point of failure: no component that can fail and cause a disruption in service. Load balancing, for example, is a fault-tolerance-based strategy that leverages multiple application instances to ensure that failure of one instance does not impact the availability of the application.

Fault isolation, on the other hand, is an attribute of systems and architectures that confines the impact of a failure so that only a single system, application, or component is affected. Fault isolation allows a component to fail as long as it does not impact the overall system. That sounds like a paradox, but it's not. Many intermediary devices employ a "fail open" strategy as a method of fault isolation. When a network device is required to intercept data in order to perform its task – a common web application firewall configuration – it becomes a single point of failure in the data path. To mitigate that risk, if something causes the device to crash it "fails open" and acts like a simple network bridge, forwarding packets on to the next device in the chain without performing any processing. If the same component were deployed in a fault-tolerant architecture, two devices would be deployed, ideally leveraging non-network-based failover mechanisms.

Similarly, application infrastructure components are often isolated through a contained deployment model (like sandboxes) that prevent a failure – whether an outright crash or sudden massive consumption of resources – from impacting other applications. Fault isolation is of increasing interest as it relates to cloud computing environments as part of a strategy to minimize the perceived negative impact of shared network, application delivery network, and server infrastructure. …

Lori continues with a SIMILARITIES and DIFFERENCES topic and then:

HERE COMES the FENG SHUI

Data center Feng Shui is about the right solution in the right place in the right form factor. So when we look at application delivery controllers (a.k.a. load balancers) we need to look at both the physical (pADC) and the virtual (vADC) and how each one might – or might not – meet the needs for each of these fault-based architectures.

In general, when designing an architecture for fault tolerance, provisions need to be made to address any single component-level failure. Hence the architecture is redundant, comprising two of everything. The mechanisms through which fault tolerance is achieved are failover and finely grained monitoring capabilities, from the application layer through the networking stack down to the hardware components that make up the physical servers. pADC hardware designs are carrier-hardened for rapid failover and reliability. Redundant components (power, fans, RAID, and hardware watchdogs) and serial-based failover make for extremely high uptimes and MTBF numbers.

vADCs are generally deployed on commodity hardware and will lack the redundancy, serial-based failover, and finely grained hardware watchdogs, as these types of components are costly and would negate much of the savings achieved through standardization on commodity hardware for virtualization-based architectures. Thus if you are designing specifically for fault tolerance, a physical (hardware) ADC should be employed.

Conversely, a vADC more naturally allows for isolation of application-specific configurations à la architectural multi-tenancy. This means fault isolation can be readily achieved by deploying a virtualized application delivery controller on a per-application or per-customer basis. This level of fault isolation cannot be achieved on hardware-based application delivery controllers (nor on most hardware network infrastructure today) because the internal architecture of these systems is not designed to completely isolate configuration in a multi-tenant fashion. Thus if fault isolation is your primary concern, a vADC is the logical choice.

It follows, then, if you are designing for both fault-tolerance and fault-isolation that a hybrid virtualized infrastructure architecture will be best suited to implementing such a strategy. An architectural multi-tenant approach in which the pADC is used to aggregate and distribute requests to individual vADC instances serving specific applications or customers will allow for fault tolerance at the aggregation layer while ensuring fault isolation by segregating application or customer-specific ADC functions and configuration.

A recent interview I did with Alex Bewley of Uptime Software is finally available. Although the podcast is nominally about cloud computing for mid-tier enterprises, we actually cover much broader ground. Alex’s blog posting lists the core topics as:

what kinds of businesses are using cloud

how you should go about evaluating it

how to avoid being outsourced as an IT department

what are the barriers to adoption

monitoring in the cloud (near and dear to our hearts)

designing applications for failure awareness

where he thinks the cloud is going

More important, for me personally, is that I think this is one of my better podcasts. The audio is clear, my responses, while long, are reasonably crisp, and you can tell that the general thinking around here has evolved a lot. Some key messages, which I think still aren't well understood, come through loud and clear:

Cloud computing isn’t about virtualization

This is disruptive sea change, be the disrupter, not the disrupted

Whole new areas of opportunity, applications, etc. are opening up that didn’t exist before

I really think it’s worth a listen. It’s a little less than 20 minutes and moves pretty quickly. Please enjoy and a big thanks to Alex who did a great job with the interview. Head over to the original blog post to listen to the podcast with Flash in your browser or you can download the MP3 directly if you are using a non-flash capable system.

Bernd Harzog recently wrote a blog entry examining whether "the CMDB [is] irrelevant in a Virtual and Cloud based world". If I can paraphrase, his conclusion is that there will be something that looks like a CMDB, but the current CMDB products are ill-equipped to fulfill that function. Here are the main reasons he gives for this prognosis:

A whole new class of data gets created by the virtualization platform – specifically how the virtualization platform itself is configured in support of the guests and the applications that run on the guest.

A whole new set of relationships between the elements in this data get created – specifically new relationships between hosts, hypervisors, guests, virtual networks and virtual storage get created that existing CMDB’s were not built to handle.

New information gets created at a very rapid rate. Hundreds of new guests can get provisioned in time periods much too short to allow for the traditional Extract, Transform and Load processes that feed CMDB’s to be able to keep up.

The environment can change at a rate that existing CMDB’s cannot keep up with. Something as simple as vMotion events can create thousands of configuration changes in a few minutes, something that the entire CMDB architecture is simply not designed to keep up with.

Having portions of IT assets running in a public cloud introduces significant data collection challenges. Leading edge APM vendors like New Relic and AppDynamics have produced APM products that allow these products to collect the data that they need in a cloud friendly way. However, we are still a long way away from having a generic ability to collect the configuration data underlying a cloud based IT infrastructure – notwithstanding the fact that many current cloud vendors would not make this data available to their customers in the first place.

The scope of the CMDB needs to expand beyond just asset and configuration data and incorporate Infrastructure Performance, Applications Performance and Service assurance information in order to be relevant in the virtualization and cloud based worlds.

William continues with a highly detailed critique of Bernd's essay.

Wayne Walter Berry's Transferring Assets in the Cloud post of 6/15/2010 explains the benefit of easy asset transfer to an acquiring corporation with Windows Azure and SQL Azure:

Every web startup plans to hit the big payday and exit through IPO or acquisition; SQL Azure and the Windows Azure Platform can make the acquisition process easier.

What most young entrepreneurs do not understand is that when they sell their web site for $100 million, they do not get all the money up front when they sign the contract. In fact, full payment does not come until all the assets are transferred. Usually these assets include domain names, intellectual property, physical assets like desks and chairs, and digital assets like source code, web sites, and databases. It can take up to a year to transfer all the assets, delaying payment considerably. One of the hardest assets to transfer is the physical servers in your datacenter.

Your assets on Windows Azure and SQL Azure can be transferred as easily as changing the service account.

Typically, when creating the startup, the entrepreneur spends a lot of time designing the computer systems for growth and scaling, including vetting the data center, purchasing the machines, installation, tuning, routing, backups, and failover. When your business is acquired, the purchaser wants to consolidate resources, usually moving your servers to their datacenter to be maintained by their IT staff.

If you are leasing servers and renting data center space, moving datacenters can be a considerable hassle. You need to plan for downtime, the physical transfer of the servers (potentially shipping them across the country), getting them installed, and bringing the new IT staff up to speed. The headaches can be enormous, and the risks great, since you usually receive final payment only once the complete asset transfer is done. Imagine shipping a server that is literally worth $20 million if it arrives safely.

SQL Azure makes the transfer of web servers and databases easier than the domain name transfer. Just change the service account to the purchaser's information and you are done. You can do this by modifying the service account at the Microsoft Online Customer Portal.

In addition, the purchaser knows they are running in a trusted, redundant, and scalable environment on the Windows Azure Platform.

TechNet is kicking off the new "TechNet On" feature series with an in-depth look at securing and deploying applications in the cloud. You'll find new articles and videos in three tracks, including a background track on the Windows Azure platform, a security track with best practices on enterprise-class security for the cloud, and a strategy track for understanding your options and getting started. Read TechNet program manager Mitch Ratcliffe's blog for more on the new TechNet On approach to content.

From the Feature Package: Securing and deploying applications in the cloud

As part of our Azure Security Guidance project, we explored setting up SSL. To do so, we created a self-signed certificate and deployed it to Azure. This is a snapshot of the rough steps we used:

Step 1 - Create and Install a test certificate

Step 2 - Create a Visual Studio project

Step 3 - Upload the certificate to Windows Azure Management portal

Step 4 - Publish the project to Windows Azure

Step 5 - Test the SSL

Step 1 - Create and Install a test certificate

Open a Visual Studio command prompt

Change the current directory to the location where you wish to place your certificate files
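For instance, a self-signed test certificate can be created with makecert along these lines (the CN value and file name are placeholders; matching the CN to your *.cloudapp.net address reduces browser warnings later):

```bat
makecert -r -pe -n "CN=azuressl" -sky exchange -ss my azuressl.cer
```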

In the Windows Explorer window that pops up, copy the path to the directory displayed into the clipboard

Switch to your browser with the Windows Azure Management portal open

If you are still in the manage certificates screen, return to the service management screen

Click the "Deploy" button

Under "Application Package" area, select the "Browse" button

In file open dialog that pops up, paste the path from your clipboard to navigate to your VS package

Select the AzureSSL.cspkg, and click "Open"

Under the "Configuration Settings" area, select the "Browse" button

Select the ServiceConfiguration.cscfg file, and click "Open"

At the bottom of the Deploy screen, enter AzureSSL in the textbox

Click "Deploy"

When the deployment completes, click the "Run" button

Step 5 - Test the SSL

Once the Web Role has completed initializing, click on the "Web Site URL" link

Change the URL scheme to HTTPS (in other words change http to https), and open the page

Your results may vary here based on your browser, but you'll most likely see a warning about the certificate being for a different site, or not being from a trusted source. If you permit access to the site, the page will render empty and your browser should indicate that the page was delivered over SSL with a lock icon or something similar.

There's so much fear, uncertainty, and doubt about the topic of security in the cloud that I wanted to dedicate a post to the topic, inspired in part by the security-related comments to last week's post.

Let's start by acknowledging that, yes, technology can fail. But this happens regardless of how it is deployed. Massive amounts of data are lost every day through the failure of on-premise technology. Anyone who's worked at a big company knows how often e-mails or files on your local or shared drives are lost or corrupted. Or how easy it is in many companies to plug into their network without credentials. And this doesn't even take into account the precious data walking out the door every day on thumb drives and lost or stolen laptops. But these incidents are primarily kept quiet inside company walls, or worse, not even noticed at all.

When public cloud technology fails, on the other hand, it makes headlines. That's part of what keeps the leading cloud providers at the top of their game. Cloud leaders such as Salesforce, Amazon, and Google spend millions of dollars on security and reliability testing every year, and employ some of the best minds out there on these topics. The public cloud providers' business absolutely depends on delivering a service that exceeds the expectations of the most demanding enterprises in this regard.

The fact of the matter is your data is probably safer in a leading cloud platform than it is in most on-premise data centers. I love what Genentech said at Google I/O: "Google meets and in many cases exceeds the security we provide internally"

For some people data just "feels" safer when you have it in your own data center (even if it's co-located), where you think it's under your control. It's similar to keeping your money hidden under your mattress. It "feels" safer to have it there in your bedroom where you can physically touch and see it.

That feeling of security is an illusion. That's why banks exist -- a bank is a much safer place to keep your money, even if the occasional bank robbery makes headlines. Examining why banks are safer sheds some light on the topic of security and the public cloud. Consider these three reasons:

Expertise: Banks are experts at security. They hire the best in the business to think about how to keep your money safe and (hopefully) working for you.

Efficiency: Even if you knew as much about security as your bank, it simply wouldn't be efficient for you to secure your bedroom the way banks can secure a single facility for thousands of customers.

Re-use / Multi-tenancy: Both of the above arguments also apply to "single tenant" safety deposit boxes. But there's an additional benefit to putting your money into a checking account, a "multi-tenant" environment where your money is physically mixed together with everyone else's. Here, the security of your individual dollar bill isn't important -- what matters is your ability to withdraw that dollar (+ some interest!) when you want.

Of course, one of the reasons we feel comfortable putting our money in a bank is that it is insured -- a level of maturity that hasn't come to the public cloud yet. But remember, your on-premise technology doesn't come with any sort of insurance policy either. When you buy a hard drive, there's no insurance policy to cover the business cost if you lose the data on it. You may get your money back (or at least a new hard drive) if the one you buy is defective, but no one is going to write you a check to compensate you for the productivity or data lost.

How do companies handle this risk with their existing on-premise technology? They take reasonable precautions to prevent the loss (e.g., encrypting data, making backups) and then do what is referred to as "self insurance." They suck it up and get on with business. And that's exactly what you have to do in the cloud today as well -- self-insure.

But that's today -- the public nature of the cloud drives a much faster rate of innovation around security than we've seen with on-premise technology. Gartner predicts "cloud insurance" services will soon be offered from an emerging set of cloud brokerages, a topic that I've blogged on in the past. Two-factor authentication is sure to be standard on cloud applications before on-premise applications. And any improvement in a cloud provider's security is instantly available to all their customers because everyone is on the updated version.

So where are you going to keep your most precious asset ... your company's information? Under a mattress? Or in a bank with top notch security? Enhanced security is rapidly becoming a reason to adopt cloud solutions, despite all the F.U.D. to the contrary.

Ryan is the Vice President of Cloudsourcing and Cloud Strategy for Appirio.

When adding an OData service to Visual Studio (Service reference) you can select the “View Diagram” menu option to generate a nice metadata view of the service

MSFT has a standardized wire format for expression trees (LINQ to URL), enabling translation of a LINQ expression tree to OData URL syntax and the reverse on the server

Had a very brief discussion with Jonathan around how finance is building RIAs using streaming servers instead of web services. Net out: Microsoft is going after the 80% data case, and streaming (push) of data in the financial world (real-time web) is a small subset of the world today and hence not central to the current vision

Windows "Dallas" – the iTunes store for data. Provides security, a business model, and hosting. There are a number of data providers leveraging this infrastructure today; curious to see if a financial services firm gets on this bandwagon (maybe from a trade research perspective)

Douglas's personal view appears to be that the web (browser) is the only cross-platform solution. Steve Jobs's stance against Adobe Flash would therefore appear to be the correct view – HTML5 and open web standards. Adobe Flash and Microsoft Silverlight (the RIA world) are native platforms. Hence OData is betting on HTTP, and is geared to the web.

Abstract

The Open Data Protocol (OData) is an open protocol for sharing data. It provides a way to break down data silos and increase the shared value of data by creating an ecosystem in which data consumers can interoperate with data producers in a way that is far more powerful than currently possible, enabling more applications to make sense of a broader set of data. Every producer and consumer of data that participates in this ecosystem increases its overall value.

OData is consistent with the way the Web works - it makes a deep commitment to URIs for resource identification and commits to an HTTP-based, uniform interface for interacting with those resources (just like the Web). This commitment to core Web principles allows OData to enable a new level of data integration and interoperability across a broad range of clients, servers, services, and tools. OData is released under the Open Specification Promise to allow anyone to freely interoperate with OData implementations.

In this talk Chris will provide in-depth knowledge of this protocol, how to consume an OData service, and finally how to implement an OData service on Windows using the WCF Data Services product.

Bio: Chris Woodruff (or Woody, as he is commonly known) has a degree in Computer Science from Michigan State University's College of Engineering. Woody has been developing and architecting software solutions for almost 15 years and has worked on many different platforms and tools. He is a community leader, helping with such events as Day of .NET Ann Arbor, West Michigan Day of .NET, and CodeMash. He was also instrumental in bringing the popular Give Camp event to Western Michigan, where technology professionals lend their time and development expertise to assist local non-profits. As a speaker and podcaster, Woody has spoken on and discussed a variety of topics, including database design and open source. He is a Microsoft MVP in Data Platform Development. Woody works at RCM Technologies in Grand Rapids, MI as a Principal Consultant.

Woody is the co-host of the popular podcast “Deep Fried Bytes” and blogs at www.chriswoodruff.com. He is the President of the West Michigan .NET User Group and also is a co-founder of the software architecture online portal nPlus1.org.

Angela is the Midwest district’s Developer Tools technical specialist and has been part of the DPE organization for over 2 years.

Intel and Univa announced by a 6/16/2010 e-mail message an Executive Roundtable: Cloud Computing to be held on 6/22/2010 from 3:00 PM to 7:00 PM at the Mission Bay Conference Center, San Francisco, CA:

Intel and Univa invite you to attend a roundtable discussion and cocktail reception with experts from our cloud technology teams -- along with a cloud computing end user from Broadcom who will be present to discuss how his company evaluated and plans to use their cloud solution.

At this 2-hour discussion and Q&A session, our experts in cloud technology and delivery will discuss the reality of where cloud computing can benefit you and how your company can best evaluate options to validate the business case.

Eucalyptus Systems and its open source private cloud software are going to support Windows as well as Linux virtual machines in the new Eucalyptus Enterprise Edition (EE) 2.0, the major upgrade of the company's commercial software for private and hybrid cloud computing released Tuesday.

Windows support will let users integrate any application or workload running on the Windows operating system into a Eucalyptus private cloud. The widgetry covers images running on Windows Server 2003, 2008 and Windows 7, along with an installed application stack. Users can connect remotely to their Windows VMs via RDP and use Amazon get-password semantics.

The rev also provides new accounting and user group management features that provide a new level of permissioning control and cost tracking for different groups of users throughout an enterprise, enhancing its usability for large-scale corporate deployments.

A Eucalyptus administrator can define a group of users, for instance, by departments such as "development" or "operations," and allocate different levels of access based on the group's needs. Groups can be associated with a specific server cluster to further refine access within a Eucalyptus cloud. There are new capabilities to track cloud usage and costs per group, which can be used in a charge-back model or for greater overall visibility.

Ubuntu is using Eucalyptus as its cloud. VMware, a Eucalyptus competitor or soon to be one, seems to be leaning toward SUSE since it started OEMing the Linux distro from Novell last week, raising speculation that it might try to buy it.

When reading Transact-SQL documentation, I usually skip the Backus–Naur Form (BNF) at the top of the documentation and go directly to the samples. So, to add on to Cihan Biyikoglu's blog post about the new SQL Azure database sizes available June 28, 2010, I want to show some samples of the new CREATE DATABASE syntax.

You can still create a database without any parameters; this will generate the smallest database of the web edition:

CREATE DATABASE Test

This is the same as declaring:

CREATE DATABASE Test (EDITION='WEB', MAXSIZE=1GB)

The database created can hold up to 1 Gigabyte of data and will return a 40544 error when you try to add more data. See Cihan's blog post for more details.

You can also create a web edition database with a larger maximum size of 5 Gigabytes like this:

CREATE DATABASE Test (EDITION='WEB', MAXSIZE=5GB)

Business edition databases will start with a maximum size of 10 Gigabytes:

CREATE DATABASE Test (EDITION='BUSINESS')

However, they can be increased to 50 Gigabytes using the maximum size parameter:

CREATE DATABASE Test (EDITION='BUSINESS', MAXSIZE=50GB)

The valid MAXSIZE settings for WEB edition are 1 and 5 GB. The valid options for BUSINESS edition are 10, 20, 30, 40, and 50 GB. …

Alter Database

Most of the time you will know the database size you need before you deploy to SQL Azure; however, if you are in a growth scenario you can start out with a web edition database and change it to business edition as it grows. This will save you some money. To change the database edition you can use the ALTER DATABASE syntax like this:
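The ALTER DATABASE statement itself did not survive in this excerpt. Based on the CREATE DATABASE samples above and the SQL Azure syntax of the time, upgrading the Test database to business edition would look something like this:

```sql
-- upgrade the web edition database to business edition with a 50 GB cap
ALTER DATABASE Test MODIFY (EDITION='BUSINESS', MAXSIZE=50GB)
```

The same statement with a smaller MAXSIZE can be used to move in the other direction, subject to the database's current size fitting under the new cap.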

Most browsers today will automatically enable their RSS/Atom reader option when you're on a page that has a feed in it. This is because the page has one or more <link> elements pointing to RSS/Atom endpoints, for example:

It would be great if all OData clients could automatically discover the location of the data feed that has the data represented by the current web page, or more generally, by the document fetched through some arbitrary URL. We had this discussion with Scott some time ago and he rolled the results into the NerdDinner.com site, but I failed to post about it. So that it stays documented somewhere and others can follow, here it goes.

Servers can advertise the OData endpoints that correspond to a resource using two mechanisms. Ideally servers would implement both, but sometimes limitations in the hosting environment and such may make it hard to support the header-based approach.

Using <link> elements

The first mechanism consists of adding one or more <link> elements to any HTML web page that shows data where that data can also be accessed as an OData feed using some other URL. There are two kinds of links for OData: those that direct clients to the service root (usually where the service document lives) and those that point at the specific dataset that corresponds to the data displayed in the page (e.g. point to a specific collection and include whatever filters and sort order are used for display). Taking the example from the home page of NerdDinner.com:
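The NerdDinner markup itself is not reproduced in this excerpt; based on the two relations discussed here, the <link> elements would have looked something to this effect (the href values are illustrative guesses at NerdDinner's endpoints):

```html
<link rel="odata.service" title="NerdDinner OData Service"
      href="http://www.nerddinner.com/Services/OData.svc/" />
<link rel="odata.feed" title="NerdDinner Dinners"
      href="http://www.nerddinner.com/Services/OData.svc/Dinners" />
```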

In this example the relation "odata.service" indicates that the href attribute contains a URL pointing to the root of a service, and the relation "odata.feed" specifies the URL to the specific dataset for the page (dinners in this case). Note that it's perfectly fine to have multiple of each, although I would expect that the common thing to do would be to have one "odata.service" and zero or more "odata.feed" links, depending on the specific page you're looking at.

If your web page has the typical syndication icon to indicate that it has feeds in it, and you'd like to indicate visually that it also has links to OData feeds you can use the OData icon like NerdDinner.com does (next to the syndication icon on the right in the screenshot):

Using links in response headers

There is a proposal currently in-flight to standardize how servers can send links to related resources of a given resource using response headers instead of including them as part of the content of the response body. This is great because it means you don't have to know how to parse the content-type of the body (e.g. HTML) in order to obtain a link. In our context, your OData client wouldn't need to do any HTML parsing to obtain the OData feed related to a web page or some other random resource obtained through a user-provided URL.

The "Web Linking" proposal is described in the current draft in the IETF web site:

As for the specifics for OData we would define two relations, "http://odata.org/link/service" and "http://odata.org/link/feed", corresponding to the "odata.service" and "odata.feed" relations used in the <link> element above. So for NerdDinner.com these would look like this in the response header section:
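The example headers are missing from this excerpt; following the Link header syntax of the Web Linking draft and the two relations just named, NerdDinner's response headers would look something like this (URLs are illustrative guesses):

```http
Link: <http://www.nerddinner.com/Services/OData.svc/>; rel="http://odata.org/link/service"
Link: <http://www.nerddinner.com/Services/OData.svc/Dinners>; rel="http://odata.org/link/feed"
```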

For all folks hosting OData services, please consider adding one or, if possible, both of these to advertise your services. For anybody writing a client, this is the best way to discover services given an arbitrary URL that may point to a web page or something like that, and you can still make it nicely redirect to the right location so things will "just work" for your users.

The true power of OData is that the programming model is the same for any feed. I spend a lot of time building and demoing my own feeds – usually building an OData service around Northwind or AdventureWorks. To realize the power of OData you also need to know that you can consume public feeds. Let's take a look at consuming the Microsoft TechEd Sessions OData Service. The TechEd service can be found here: http://odata.msteched.com/sessions.svc/

Being a RESTful service, it lets us drill down a little and investigate our data. I will do some URL querying and look at a list of all the speakers as well as their sessions. For example, I can drill down to see all speakers named "Forte".
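The query URL is not shown in this excerpt; an OData $filter query of that shape would look like the following (the entity set and property names, Speakers and SpeakerLastName, are assumptions about the TechEd service's model):

```
http://odata.msteched.com/sessions.svc/Speakers?$filter=SpeakerLastName eq 'Forte'
```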

This is the beauty of OData: we don't know how it was created, and we don't care. All we care about is whether we can consume it easily. Let's do so with an ASP.NET application and the OData client for ASP.NET.

To get started, create a new ASP.NET application. In the application, right click on the References folder of the project in the Solution Explorer and select "Add Service Reference". Put in the public URL of the TechEd 2010 OData Service. This creates a proxy so you can code against the service locally and not know the difference.

Next, set a reference to System.Data.Services.Client. This will enable us to use the OData client library and LINQ on the ASP.NET client. Then drag a TextBox, a Button, and a GridView onto the ASPX page. We'll fill the GridView with the Speaker data, filtered on the last name field based on what was typed into the TextBox. We accomplish this with the following code on the button click.

//set a reference to ServiceReference1 and System.Data.Services.Client
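The rest of the snippet is truncated here. A sketch of the complete click handler might look like the following; the context type and property names (ODataTechEdEntities, Speakers, SpeakerLastName) are assumptions about what "Add Service Reference" generates for this service:

```csharp
// set a reference to ServiceReference1 and System.Data.Services.Client
protected void Button1_Click(object sender, EventArgs e)
{
    // proxy context generated by "Add Service Reference"
    var context = new ServiceReference1.ODataTechEdEntities(
        new Uri("http://odata.msteched.com/sessions.svc/"));

    // the client library translates this LINQ query into an OData $filter URL
    var speakers = from s in context.Speakers
                   where s.SpeakerLastName == TextBox1.Text
                   select s;

    GridView1.DataSource = speakers;
    GridView1.DataBind();
}
```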

I wanted to watch the TechEd 2010 videos, but the problem I had was going to the site manually to download files for offline viewing. And I was also interested only in Dev sessions at level 300/400. Thanks to OData for TechEd (http://odata.msteched.com/sessions.svc/), I could write three statements in LINQPad and have them all downloaded using wget:
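The LINQPad statements are not reproduced in this excerpt; the approach would have been roughly the following, where the entity set and property names (Sessions, Code, Level, DownloadUrl) are guesses at the TechEd service's model:

```csharp
// LINQPad, connected to http://odata.msteched.com/sessions.svc/
var urls = from s in Sessions
           where s.Code.StartsWith("DEV")
              && (s.Level.Contains("300") || s.Level.Contains("400"))
           select s.DownloadUrl;

// emit one wget command per session for batch offline downloading
foreach (var url in urls)
    Console.WriteLine("wget " + url);
```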

On a plane between Philadelphia and Oslo: I am flying there for NDC2010, where I have a couple of sessions (on WIF. Why do you ask? :-)). I have wanted to visit Norway for a lifetime, and I can't tell you how grateful I am to the NDC guys for having me!

This is literally the 50th flight I am on since last August, the last fiscal year has been cR@Zy. Good crazy, but still crazy. As a result, I am astonishingly behind on my Programming WIF book and it’s now time to wrap the manuscript; I am writing every time I have a spare second, which means I have very little time for any “OOB” activity, including blogging. One example: yesterday I got a mail from Dinesh, a guy who attended the WIF workshop in Redmond, asking me about sliding sessions. That’s definitely worth a blog post, but see above re:time; hence I decided to share here on the blog the DRAFT of the section of the book in which I discuss sliding sessions. That’s yet to be reviewed, both for language and technical scrub, I expect that the final form will have much shorter sentences, less passive forms, consistent pronouns, and in general will be cleansed from all the other flaws of my unscripted style that Peter and the (awesome!) editorial team at MS Press mercilessly rubs my snout in (ok, this one is intentional exactly for making a point… say hi to Godel :-)). Also, the formatting (especially for the code and reader aids like notes) is a complete mess, but hopefully the content will be useful!

More about Sessions

I briefly touched on the topic of sessions at the end of Chapter 3, where I showed you how you can keep the size of the session cookie independent of the size of its originating token by saving a reference to session state stored server side. WIF's programming model goes well beyond that, allowing you complete control over how sessions are handled. Here I would like to explore with you two notable examples of that principle in action: sliding sessions and network load-balancer friendly sessions.

Sliding Sessions

By default, WIF will create SessionSecurityTokens whose validity is based on the validity of the incoming token. You can overrule that behavior without writing any code, by adding to the <microsoft.identityModel> element in the web.config something to the effect of the following:
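The configuration snippet is missing from this excerpt. In WIF this is typically done by re-registering the SessionSecurityTokenHandler with a sessionTokenRequirement; a sketch of that configuration follows (the full assembly-qualified type names are abbreviated here):

```xml
<microsoft.identityModel>
  <service>
    <securityTokenHandlers>
      <remove type="Microsoft.IdentityModel.Tokens.SessionSecurityTokenHandler, Microsoft.IdentityModel" />
      <add type="Microsoft.IdentityModel.Tokens.SessionSecurityTokenHandler, Microsoft.IdentityModel">
        <!-- cap the session token validity at 2 minutes -->
        <sessionTokenRequirement lifetime="00:02:00" />
      </add>
    </securityTokenHandlers>
  </service>
</microsoft.identityModel>
```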

Note: the lifetime property can only restrict the validity expressed by the token to begin with. In the snippet above I set the lifetime to 2 minutes, but if the incoming security token was valid for just 1 minute the session token will have 1 minute validity. If you want to increase the validity beyond what the initial token specified, you need to do so in code (by subclassing SessionSecurityTokenHandler or by handling SessionSecurityTokenReceived).

Now, let’s say that you want to implement a more sophisticated behavior. For example, you want to keep the session alive indefinitely as long as the user is actively working with the pages; however, you want to terminate the session if you did not detect user activity in the last 2 minutes, regardless of the fact that the initial token would still be valid. This is a pretty common requirement for Web sites which display personally identifiable information (PII), control banking operations and the like. Those are cases in which you want to ensure that the user is in front of the machine and the pages are not abandoned at the mercy of anybody walking by.

In Chapter 3 I hinted at the scenario, suggesting that it could be solved by subclassing the SessionAuthenticationModule: that would be the right strategy if you expect to reuse this functionality over and over again across multiple applications, given that it neatly packages it in a class you can include in your codebase. In fact, SharePoint 2010 offers sliding sessions and implemented those precisely in that way. If instead this is an improvement you need to apply only occasionally, or you own just one application, you can obtain the same effect simply by handling the SessionSecurityTokenReceived event. Take a look at the following code.
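The code sample itself did not survive in this excerpt. A sketch consistent with the description that follows would be the fragment below; the event wiring and the ReissueToken property name follow the draft text, and the SessionSecurityToken constructor overload shown is an assumption about the WIF API:

```csharp
// global.asax of the RP application
void SessionAuthenticationModule_SessionSecurityTokenReceived(
    object sender, SessionSecurityTokenReceivedEventArgs e)
{
    DateTime now = DateTime.UtcNow;
    SessionSecurityToken token = e.SessionToken;

    // halfway point of the current validity window
    DateTime halfway = token.ValidFrom.AddTicks(
        (token.ValidTo - token.ValidFrom).Ticks / 2);

    if (now > halfway && now < token.ValidTo)
    {
        // active use in the second half of the window:
        // slide the session another 2 minutes from now
        e.SessionToken = new SessionSecurityToken(
            token.ClaimsPrincipal, token.Context, now, now.AddMinutes(2));
        e.ReissueToken = true; // persist the new validity in the cookie
    }
}
```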

As you certainly guessed, this is a fragment of the global.asax file of the RP application. SessionSecurityTokenReceived gets called as soon as the session cookie is deserialized (or resolved from the cache if we are in session mode). Here I verify whether we are within the second half of the validity window of the session token: if we are, I extend the validity another 2 minutes, starting from now. The change takes place on the in-memory instance of the SessionSecurityToken: setting ReissueToken to true instructs the SessionAuthenticationModule to persist the new settings in the cookie once execution leaves SessionSecurityTokenReceived. Let's say that the token is valid between 10:00am and 10:02am: if the current time falls between 10:01am and 10:02am, say 10:01:15, the code sets the new validity boundaries to go from 10:01:15 to 10:02:15 and saves those in the session cookie.

Note: Why renew the session only during the second half of the validity interval? Well, writing the cookie is not free; this is just a heuristic for reducing the number of times the session gets refreshed, but you can certainly choose to apply different strategies.

If the current time is outside the validity interval, this implementation of SessionSecurityTokenReceived will have no effect; the SessionAuthenticationModule will take care of handling the expired session right after. Note that an expired session does not elicit any explicit sign-out process. If you recall the discussion about SSO and Single Sign-Out just a few pages earlier, you'll realize that if the STS session outlives the RP session the user will just silently re-obtain the authentication token and have the session renewed without even realizing anything ever happened. …

Vibro continues with a detailed “Sessions and Network Load Balancers” section. I wondered why he was so quiet lately.

Jeffrey Schwartz claims “The new Microsoft Active Directory Federation Services release promises to up the ante on cloud security” in his ADFS 2.0 Opens Doors to the Cloud post for the June 2010 issue of Redmond Magazine:

Microsoft Active Directory Federation Services (ADFS) 2.0, a key add-in to Windows Server 2008, was released in May. It promises to simplify secure authentication to multiple systems, as well as to the cloud-based Microsoft portfolio. In addition, the extended interoperability of ADFS 2.0 is expected to offer the same secure authentication now provided by other cloud providers, such as Amazon.com Inc., Google Inc. and Salesforce.com Inc.

ADFS 2.0, formerly known as "Geneva Server," is the long-awaited extension to Microsoft Active Directory that provides claims-based federated identity management. By adding ADFS 2.0 to an existing AD deployment, IT can allow individuals to log in once to a Windows Server, and then use their credentials to sign into any other identity-aware systems or applications.

Because ADFS 2.0 is already built into the Microsoft cloud-services portfolio -- namely Business Productivity Online Suite (BPOS) and Windows Azure -- applications built for Windows Server can be ported to those services while maintaining the same levels of authentication and federated identity management.

"The bottom line is we're streamlining how access should work and how things like single sign-on should work from on-premises to the cloud," says John "J.G." Chirapurath, senior director in the Microsoft Identity and Security Business Group.

While ADFS 2.0 won't necessarily address all of the security issues that surround the movement of traditional systems and data to the cloud, by all accounts it removes a key barrier -- especially for applications such as SharePoint, and indeed for the whole gamut of applications. Many enterprises have expressed reluctance to use cloud services, such as Windows Azure, because of security concerns and the lack of control over authentication.

"Security [issues], particularly identity and the management of those identities, are perhaps the single biggest blockers in achieving that nirvana of cloud computing," Chirapurath says. "Just like e-mail led to the explosive use of Active Directory, Active Directory Federation Services will do the same for the cloud."

Because ADFS 2.0 is already built into Windows Azure, organizations can use claims-based digital tokens, or identity selectors, that will work with both Windows Server 2008 and the cloud-based Microsoft services, enabling hybrid cloud networks. The aim is to let a user authenticate seamlessly into Windows Server or Windows Azure and share those credentials with applications that can accept a SAML 2.0-based token.

Windows 7 and Windows Vista have built-in CardSpace, which allows users to input their identifying information. Developers can also make their .NET applications identity-aware with Microsoft Windows Identity Foundation (WIF).

WIF provides the underlying framework of the Microsoft claims-based Identity Model. Implemented in the Windows Communication Foundation of the Microsoft .NET Framework, apps developed with WIF present authentication schema, such as identification attributes, roles, groups and policies, along with a means of managing those claims as tokens. Applications built by enterprise developers and ISVs based on WIF will also be able to accept these tokens.

Pass-through authentication in ADFS 2.0 is enabled by accepting tokens based on the Web Services Federation (WSFED), WS-Trust, and SAML standards. While Microsoft has long promoted WSFED, it only agreed to support the more widely adopted SAML spec 18 months ago. …

When I first started working with Windows Azure, this book was one of my first purchases. Not only does it cover the basics of working with Azure such as:

Azure Roles

Table and blob storage

Queues

Moving from the Enterprise to the Cloud

Security and authentication

SQL Azure Services

but it also gives a very in-depth explanation of the inner workings of the Windows Azure platform, so you can actually understand what is happening physically when you upload and publish your Azure solution.

There are plenty of nuts and bolts code segments and samples to show you how things are done and Wrox Press makes these samples available for download here. Additionally, chapters 12 and 13, covering SQL Azure, are actually in an online-only format and can be found at the same site.

The only issues that I had with the book were some late-breaking technology changes that shipped in the final Windows Azure release and differed from the community technology preview (CTP) releases, so some examples didn't quite work the same. Overall, these issues were quite minor, and a little digging into the RTM examples showed me the new and proper way of doing things.

Overall, Cloud Computing with the Windows Azure Platform is a great addition to your programming library should you be leaning toward Windows Azure as a solution.

Final Note

Roger’s company, OakLeaf Systems, has a blog that posts a daily summary of Azure-related articles and announcements. I have no idea how long it takes someone to assemble each post, but I would imagine it takes quite a bit of time. I find these posts invaluable and greatly appreciate each summary.

Gunther Lenz, ISV Architect Evangelist with Microsoft, interviews Jim Zimmerman, CTO of Thuzi and author of the Windows Azure Toolkit for Facebook and the CloudPoll reference application. Learn what the toolkit has in store and check out the CloudPoll (http://bit.ly/CloudPoll) reference application, built on the Windows Azure Toolkit for Facebook and free to use for any Facebook user.

The difference between fault isolation and fault tolerance is not necessarily intuitive. The differences, though subtle, are profound and have a substantial impact on data center architecture.

Fault tolerance is an attribute of systems and architectures that allows them to continue performing their tasks in the event of a component failure. Fault tolerance of servers, for example, is achieved through the use of redundancy in power supplies, in hard drives, and in network cards. In an architecture, fault tolerance is also achieved through redundancy by deploying two of everything: two servers, two load balancers, two switches, two firewalls, two Internet connections. The fault-tolerant architecture includes no single point of failure: no component that can fail and cause a disruption in service. Load balancing, for example, is a fault-tolerance-based strategy that leverages multiple application instances to ensure that failure of one instance does not impact the availability of the application.

Fault isolation on the other hand is an attribute of systems and architectures that isolates the impact of a failure such that only a single system, application, or component is impacted. Fault isolation allows that a component may fail as long as it does not impact the overall system. That sounds like a paradox, but it's not. Many intermediary devices employ a "fail open" strategy as a method of fault isolation. When a network device is required to intercept data in order to perform its task – a common web application firewall configuration – it becomes a single point of failure in the data path. To mitigate the potential failure of the device, if something should fail and cause the system to crash, it "fails open" and acts like a simple network bridge, forwarding packets on to the next device in the chain without performing any processing. If the same component were deployed in a fault-tolerant architecture, two devices would be deployed, hopefully leveraging non-network-based failover mechanisms.

Similarly, application infrastructure components are often isolated through a contained deployment model (like sandboxes) that prevents a failure – whether an outright crash or a sudden, massive consumption of resources – from impacting other applications. Fault isolation is of increasing interest as it relates to cloud computing environments as part of a strategy to minimize the perceived negative impact of shared network, application delivery network, and server infrastructure. …

Lori continues with a SIMILARITIES and DIFFERENCES topic and then:

HERE COMES the FENG SHUI

Data center Feng Shui is about the right solution in the right place in the right form factor. So when we look at application delivery controllers (a.k.a. load balancers) we need to look at both the physical (pADC) and the virtual (vADC) and how each one might – or might not – meet the needs for each of these fault-based architectures.

In general, when designing an architecture for fault tolerance, provisions must be made to address any single component-level failure. Hence the architecture is redundant, comprising two of everything. The mechanisms through which fault tolerance is achieved are failover and finely grained monitoring capabilities, from the application layer through the networking stack down to the hardware components that make up the physical servers. pADC hardware designs are carrier-hardened for rapid failover and reliability. Redundant components (power, fans, RAID, and hardware watchdogs) and serial-based failover make for extremely high up-times and MTBF numbers.

vADCs are generally deployed on commodity hardware and lack the redundancy, serial-based failover, and finely grained hardware watchdogs, as these types of components are costly and would negate much of the savings achieved through standardization on commodity hardware for virtualization-based architectures. Thus, if you are designing specifically for fault tolerance, a physical (hardware) ADC should be employed.

Conversely, a vADC more naturally allows for isolation of application-specific configurations a la architectural multi-tenancy. This means fault isolation can be readily achieved by deploying a virtualized application delivery controller on a per-application or per-customer basis. This level of fault isolation cannot be achieved on hardware-based application delivery controllers (nor on most hardware network infrastructure today) because the internal architecture of these systems is not designed to completely isolate configuration in a multi-tenant fashion. Thus, if fault isolation is your primary concern, a vADC will be the logical choice.

It follows, then, that if you are designing for both fault tolerance and fault isolation, a hybrid virtualized infrastructure architecture will be best suited to implementing such a strategy. An architectural multi-tenant approach – in which the pADC is used to aggregate and distribute requests to individual vADC instances serving specific applications or customers – will allow for fault tolerance at the aggregation layer while ensuring fault isolation by segregating application- or customer-specific ADC functions and configuration.
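The hybrid approach can be illustrated with a toy model. All names here are hypothetical, and this is a sketch of the routing idea, not of any real ADC product: a redundant aggregation tier dispatches by tenant to per-tenant vADC instances, so one tenant’s vADC failure stays contained while other tenants keep getting served.

```python
# Illustrative sketch (hypothetical names): a pADC aggregation tier routes
# requests to per-tenant vADC instances, so a failure in one tenant's vADC
# is isolated from every other tenant.

class VADC:
    """A per-application/per-customer virtual ADC instance."""
    def __init__(self, tenant: str):
        self.tenant = tenant
        self.healthy = True

    def handle(self, request: str) -> str:
        if not self.healthy:
            raise ConnectionError(f"vADC for {self.tenant} is down")
        return f"{self.tenant}: served {request}"

class PADC:
    """Aggregation tier: routes by tenant and contains per-tenant failures."""
    def __init__(self, vadcs: dict):
        self.vadcs = vadcs

    def route(self, tenant: str, request: str) -> str:
        try:
            return self.vadcs[tenant].handle(request)
        except ConnectionError:
            # Only this tenant is impacted; other tenants are unaffected.
            return f"{tenant}: unavailable (isolated failure)"

vadcs = {"app-a": VADC("app-a"), "app-b": VADC("app-b")}
padc = PADC(vadcs)
vadcs["app-a"].healthy = False  # app-a's vADC fails; app-b is untouched
```

In a real deployment the pADC layer itself would be a redundant pair (the fault-tolerance half of the strategy); the sketch shows only the isolation half.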

A recent interview I did with Alex Bewley of Uptime Software is finally available. Although the podcast is nominally about cloud computing for mid-tier enterprises, we actually cover much broader ground. Alex’s blog posting lists the core topics as:

what kinds of businesses are using cloud

how you should go about evaluating it

how to avoid being outsourced as an IT department

what are the barriers to adoption; monitoring in the cloud (near and dear to our hearts)

designing applications for failure awareness

where he thinks the cloud is going

More important, for me personally, is that I think this is one of my better podcasts. The audio is clear, my responses, while long, are reasonably crisp, and you can tell that the general thinking around here has evolved a lot. Some key messages that I think still aren’t well understood come through loud and clear:

Cloud computing isn’t about virtualization

This is a disruptive sea change; be the disrupter, not the disrupted

Whole new areas of opportunity, applications, etc. are opening up that didn’t exist before

I really think it’s worth a listen. It’s a little less than 20 minutes and moves pretty quickly. Please enjoy and a big thanks to Alex who did a great job with the interview. Head over to the original blog post to listen to the podcast with Flash in your browser or you can download the MP3 directly if you are using a non-flash capable system.

Bernd Harzog recently wrote a blog entry to examine whether “the CMDB [is] irrelevant in a Virtual and Cloud based world“. If I can paraphrase, his conclusion is that there will be something that looks like a CMDB but the current CMDB products are ill-equipped to fulfill that function. Here are the main reasons he gives for this prognosis:

A whole new class of data gets created by the virtualization platform – specifically how the virtualization platform itself is configured in support of the guests and the applications that run on the guest.

A whole new set of relationships between the elements in this data gets created – specifically, new relationships between hosts, hypervisors, guests, virtual networks and virtual storage get created that existing CMDBs were not built to handle.

New information gets created at a very rapid rate. Hundreds of new guests can get provisioned in time periods much too short for the traditional Extract, Transform and Load processes that feed CMDBs to keep up.

The environment can change at a rate that existing CMDBs cannot keep up with. Something as simple as vMotion events can create thousands of configuration changes in a few minutes, something that the entire CMDB architecture is simply not designed to handle.

Having portions of IT assets running in a public cloud introduces significant data collection challenges. Leading edge APM vendors like New Relic and AppDynamics have produced APM products that allow these products to collect the data that they need in a cloud friendly way. However, we are still a long way away from having a generic ability to collect the configuration data underlying a cloud based IT infrastructure – notwithstanding the fact that many current cloud vendors would not make this data available to their customers in the first place.

The scope of the CMDB needs to expand beyond just asset and configuration data and incorporate Infrastructure Performance, Applications Performance and Service assurance information in order to be relevant in the virtualization and cloud based worlds.

William continues with a highly detailed critique of Bernd’s essay.

Wayne Walter Berry’s Transferring Assets in the Cloud post of 6/15/2010 explains the benefit of easy asset transfer to an acquiring corporation with Windows Azure and SQL Azure:

Every web startup plans to hit the big payday and exit through IPO or acquisition; SQL Azure and the Windows Azure Platform can make the acquisition process easier.

What most young entrepreneurs do not understand is that when they sell their web site for $100 million, they do not get all the money up-front when they sign the contract. In fact, full payment does not come until all the assets are transferred. Usually these assets include domain names, intellectual property, physical assets like desks/chairs, and digital assets like source code, web sites, and databases. It can take up to a year to transfer all assets, delaying payment considerably. One of the hardest assets to transfer is the physical servers in your datacenter.

Your assets on Windows Azure and SQL Azure can be transferred as easily as changing the service account.

Typically, when creating the startup, the entrepreneur spends a lot of time designing the computer systems for growth and scaling, including vetting the data center, purchasing the machines, installation, tuning, routing, backups, and failover. When your business is acquired, the purchaser wants to consolidate resources, usually moving your servers to their datacenter to be maintained by their IT staff.

If you lease servers and rent data center space, moving datacenters can be a considerable hassle. You need to plan for downtime, the physical transfer of the servers (potentially shipping them across the country), getting them installed, and bringing the purchaser’s IT staff up to speed. The headaches can be enormous, and the risks great, since you usually receive final payment only once the complete asset transfer is done. Imagine shipping a server that is literally worth $20 million if it arrives safely.

SQL Azure makes the transfer of web servers and databases easier than the domain name transfer. Just change the server account to the purchaser’s information and you are done. You can do this by modifying the service account at the Microsoft Online Customer Portal.

In addition, the purchaser knows that they are running in a trusted, redundant, and scalable environment on the Windows Azure Platform.

TechNet is kicking off the new "TechNet On" feature series with an in-depth look at securing and deploying applications in the cloud. You'll find new articles and videos in three tracks, including a background track on the Windows Azure platform, a security track with best practices on enterprise-class security for the cloud, and a strategy track for understanding your options and getting started. Read TechNet program manager Mitch Ratcliffe's blog for more on the new TechNet On approach to content.

From the Feature Package: Securing and deploying applications in the cloud

As part of our Azure Security Guidance project, we tested setting up SSL as part of our exploration. To do so, we created a self-signed certificate and deployed it to Azure. This is a snapshot of the rough steps we used:

Step 1 - Create and Install a test certificate

Step 2 - Create a Visual Studio project

Step 3 - Upload the certificate to Windows Azure Management portal

Step 4 - Publish the project to Windows Azure

Step 5 - Test the SSL

Step 1 - Create and Install a test certificate

Open a Visual Studio command prompt

Change your working directory to the location where you wish to place your certificate files

In the Windows Explorer window that pops up, copy the path to the directory displayed into the clipboard

Switch to your browser with the Windows Azure Management portal open

If you are still in the manage certificates screen, return to the service management screen

Click the "Deploy" button

Under "Application Package" area, select the "Browse" button

In file open dialog that pops up, paste the path from your clipboard to navigate to your VS package

Select the AzureSSL.cspkg, and click "Open"

Under the "Configuration Settings" area, select the "Browse" button

Select the ServiceConfiguration.cscfg file, and click "Open"

At the bottom of the Deploy screen, enter AzureSSL in the textbox

Click "Deploy"

When the deployment completes, click the "Run" button

Step 5 - Test the SSL

Once the Web Role has completed initializing, click on the "Web Site URL" link

Change the URL scheme to HTTPS (in other words change http to https), and open the page

Your results may vary here based on your browser, but you’ll most likely see a warning about the certificate being for a different site, or not being from a trusted source. If you permit access to the site, the page will render empty and your browser should indicate that the page was delivered over SSL with a lock icon or something similar.
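The same test can be scripted instead of clicked through. A minimal Python sketch, assuming a hypothetical service URL: because the certificate is self-signed, certificate verification must be explicitly relaxed to exercise the endpoint (something you should only do against a test deployment, never production).

```python
# Hedged sketch: programmatically exercising an HTTPS endpoint secured with a
# self-signed test certificate. Disabling verification is acceptable here only
# because the cert is a known self-signed test cert.
import ssl
import urllib.request

def make_insecure_context() -> ssl.SSLContext:
    ctx = ssl.create_default_context()
    ctx.check_hostname = False        # the cert CN won't match the site name
    ctx.verify_mode = ssl.CERT_NONE   # accept the untrusted self-signed cert
    return ctx

def fetch(url: str) -> bytes:
    # e.g. url = "https://yourservice.cloudapp.net/"  (hypothetical address)
    with urllib.request.urlopen(url, context=make_insecure_context()) as resp:
        return resp.read()
```

Note the order of the two settings: `check_hostname` must be turned off before `verify_mode` can be set to `CERT_NONE`, or Python’s `ssl` module raises an error.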

There's so much fear, uncertainty, and doubt about the topic of security in the cloud that I wanted to dedicate a post to the topic, inspired in part by the security-related comments to last week's post.

Let's start by acknowledging that, yes, technology can fail. But this happens regardless of how it is deployed. Massive amounts of data are lost every day through the failure of on-premise technology. Anyone who's worked at a big company knows how often e-mails or files on your local or shared drives are lost or corrupted. Or how easy it is in many companies to plug into their network without credentials. And this doesn't even take into account the precious data walking out the door every day on thumb drives and lost or stolen laptops. But these incidents are primarily kept quiet inside company walls, or worse, not even noticed at all.

When public cloud technology fails, on the other hand, it makes headlines. That's part of what keeps the leading cloud providers at the top of their game. Cloud leaders such as Salesforce, Amazon, and Google spend millions of dollars on security and reliability testing every year, and employ some of the best minds out there on these topics. The public cloud providers' business absolutely depends on delivering a service that exceeds the expectations of the most demanding enterprises in this regard.

The fact of the matter is your data is probably safer in a leading cloud platform than it is in most on-premise data centers. I love what Genentech said at Google I/O: "Google meets and in many cases exceeds the security we provide internally"

For some people, data just "feels" safer when you have it in your own data center (even if it’s co-located), where you think it’s under your control. It’s similar to keeping your money hidden under your mattress. It "feels" safer to have it there in your bedroom where you can physically touch and see it.

That feeling of security is an illusion. That's why public banks exist -- it's a much safer place to keep your money even if the occasional bank robbery makes headlines. Examining why banks are safer sheds some light on the topic of security and the public cloud. Consider these three reasons:

Expertise: Banks are experts at security. They hire the best in the business to think about how to keep your money safe and (hopefully) working for you.

Efficiency: Even if you knew as much about security as your bank, it simply wouldn't be efficient for you to secure your bedroom the way banks can secure a single facility for thousands of customers.

Re-use / Multi-tenancy: Both of the above arguments also apply to "single tenant" safety deposit boxes. But there's an additional benefit to putting your money into a checking account, a "multi-tenant" environment where your money is physically mixed together with everyone else's. Here, the security of your individual dollar bill isn't important -- what matters is your ability to withdraw that dollar (+ some interest!) when you want.

Of course, one of the reasons we feel comfortable putting our money in a bank is that it is insured -- a level of maturity that hasn't come to the public cloud yet. But remember, your on-premise technology doesn't come with any sort of insurance policy either. When you buy a hard drive, there's no insurance policy to cover the business cost if you lose the data on it. You may get your money back (or at least a new hard drive) if the one you buy is defective, but no one is going to write you a check to compensate you for the productivity or data lost.

How do companies handle this risk with their existing on-premise technology? They take reasonable precautions to prevent the loss (e.g., encrypting data, making backups) and then do what is referred to as "self insurance." They suck it up and get on with business. And that's exactly what you have to do in the cloud today as well -- self-insure.

But that's today -- the public nature of the cloud drives a much faster rate of innovation around security than we've seen with on-premise technology. Gartner predicts "cloud insurance" services will soon be offered from an emerging set of cloud brokerages, a topic that I've blogged on in the past. Two-factor authentication is sure to be standard on cloud applications before on-premise applications. And any improvement in a cloud provider's security is instantly available to all their customers because everyone is on the updated version.

So where are you going to keep your most precious asset ... your company's information? Under a mattress? Or in a bank with top notch security? Enhanced security is rapidly becoming a reason to adopt cloud solutions, despite all the F.U.D. to the contrary.

Ryan is the Vice President of Cloudsourcing and Cloud Strategy for Appirio.

When adding an OData service to Visual Studio (Service reference) you can select the “View Diagram” menu option to generate a nice metadata view of the service

MSFT has a standardized wire form for Expression Trees (LINQ to URL), enabling translation of a LINQ expression tree to OData URL syntax and the reverse on the server
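The note above can be made concrete with a small sketch. This is not Microsoft’s actual provider code, just an illustration of the idea: a LINQ provider walks an expression tree such as `.Where(p => p.Price > 20)` and renders each comparison node in OData’s `$filter` operator syntax (`gt`, `lt`, `eq`, and so on).

```python
# Illustrative sketch (hypothetical code, real OData $filter operator names):
# rendering one comparison node of an expression tree as OData URI syntax.

ODATA_OPS = {">": "gt", "<": "lt", ">=": "ge", "<=": "le", "==": "eq", "!=": "ne"}

def to_odata_filter(prop: str, op: str, value) -> str:
    """Render a single comparison as an OData $filter query option."""
    literal = f"'{value}'" if isinstance(value, str) else str(value)
    return f"$filter={prop} {ODATA_OPS[op]} {literal}"

def build_url(service_root: str, entity_set: str, prop: str, op: str, value) -> str:
    """Compose a full OData query URL from the rendered filter."""
    return f"{service_root}/{entity_set}?{to_odata_filter(prop, op, value)}"

# build_url("http://services.odata.org/OData/OData.svc", "Products", "Price", ">", 20)
# -> "http://services.odata.org/OData/OData.svc/Products?$filter=Price gt 20"
```

The reverse direction on the server parses the `$filter` string back into an expression tree, which is what lets the same query shape round-trip over the wire.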

Had a very brief discussion with Jonathan about how finance is building RIAs using streaming servers instead of web services. Net out: Microsoft is going after the 80% data case, and streaming (push) of data in the financial world (real-time web) is a small subset of the world today and hence not in the current vision

Windows “Dallas” – the iTunes store for data. Provides security, a business model, and hosting. There are a number of data providers leveraging this infrastructure today; curious to see if a financial services firm gets on this bandwagon (maybe from a trade research perspective)

Douglas’s personal view appears to be that the web (browser) is the only cross-platform solution. Steve Jobs’s stance on Adobe Flash would therefore appear to be the correct view – HTML5 and open web standards. Adobe Flash and Microsoft Silverlight (the RIA world) are native platforms. Hence OData is betting on HTTP and is geared to the web.

Abstract

The Open Data Protocol (OData) is an open protocol for sharing data. It provides a way to break down data silos and increase the shared value of data by creating an ecosystem in which data consumers can interoperate with data producers in a way that is far more powerful than currently possible, enabling more applications to make sense of a broader set of data. Every producer and consumer of data that participates in this ecosystem increases its overall value.

OData is consistent with the way the Web works - it makes a deep commitment to URIs for resource identification and commits to an HTTP-based, uniform interface for interacting with those resources (just like the Web). This commitment to core Web principles allows OData to enable a new level of data integration and interoperability across a broad range of clients, servers, services, and tools. OData is released under the Open Specification Promise to allow anyone to freely interoperate with OData implementations.

In this talk, Chris will provide in-depth knowledge of this protocol, how to consume an OData service, and finally how to implement an OData service on Windows using the WCF Data Services product.

Bio: Chris Woodruff (or Woody, as he is commonly known) has a degree in Computer Science from Michigan State University’s College of Engineering. Woody has been developing and architecting software solutions for almost 15 years and has worked on many different platforms and tools. He is a community leader, helping with such events as Day of .NET Ann Arbor, West Michigan Day of .NET and CodeMash. He was also instrumental in bringing the popular Give Camp event to Western Michigan, where technology professionals lend their time and development expertise to assist local non-profits. As a speaker and podcaster, Woody has spoken on and discussed a variety of topics, including database design and open source. He is a Microsoft MVP in Data Platform Development. Woody works at RCM Technologies in Grand Rapids, MI as a Principal Consultant.

Woody is the co-host of the popular podcast “Deep Fried Bytes” and blogs at www.chriswoodruff.com. He is the President of the West Michigan .NET User Group and also is a co-founder of the software architecture online portal nPlus1.org.

Angela is the Midwest district’s Developer Tools technical specialist and has been part of the DPE organization for over 2 years.

Intel and Univa announced by a 6/16/2010 e-mail message an Executive Roundtable: Cloud Computing to be held on 6/22/2010 from 3:00 PM to 7:00 PM at the Mission Bay Conference Center, San Francisco, CA:

Intel and Univa invite you to attend a roundtable discussion and cocktail reception with experts from our cloud technology teams -- along with a cloud computing end user from Broadcom who will be present to discuss how his company evaluated and plans to use their cloud solution.

At this 2-hour discussion and Q&A session, our experts in cloud technology and delivery will discuss the reality of where cloud computing can benefit you and how your company can best evaluate options to validate the business case.

Eucalyptus Systems and its open source private cloud software are going to support Windows as well as Linux virtual machines in the new Eucalyptus Enterprise Edition (EE) 2.0, the major upgrade of the company's commercial software for private and hybrid cloud computing released Tuesday.

Windows support will let users integrate any application or workload running on the Windows operating system into a Eucalyptus private cloud. The widgetry covers images running on Windows Server 2003, 2008 and Windows 7, along with an installed application stack. Users can connect remotely to their Windows VMs via RDP and use Amazon get-password semantics.

The rev also provides new accounting and user group management features that provide a new level of permissioning control and cost tracking for different groups of users throughout an enterprise, enhancing its usability for large-scale corporate deployments.

A Eucalyptus administrator can define a group of users, for instance, by departments such as "development" or "operations," and allocate different levels of access based on the group's needs. Groups can be associated with a specific server cluster to further refine access within a Eucalyptus cloud. There are new capabilities to track cloud usage and costs per group, which can be used in a charge-back model or for greater overall visibility.
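The chargeback idea described above reduces to rolling up usage records by group. A minimal sketch under assumed data shapes (the record format and hourly rate are hypothetical, not Eucalyptus’s actual accounting schema):

```python
# Hypothetical sketch of per-group chargeback accounting: usage records tagged
# with a user group are aggregated so costs can be billed back per department.
from collections import defaultdict

# (group, instance_hours) records a cloud controller might emit -- illustrative
usage = [("development", 120), ("operations", 80), ("development", 40)]

def chargeback(records, rate_per_hour=0.10):
    """Sum instance-hours per group and convert to a cost at a flat rate."""
    totals = defaultdict(float)
    for group, hours in records:
        totals[group] += hours * rate_per_hour
    return dict(totals)
```

A real implementation would also carry cluster and time-window dimensions so access refinement and billing line up with the group definitions the administrator creates.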

Ubuntu is using Eucalyptus as its cloud. VMware, a Eucalyptus competitor (or soon to be one), seems to be leaning toward SUSE, since it started OEMing the Linux distro from Novell last week, raising speculation that it might try to buy it.

The dual Web role application has been running in Microsoft's South Central US (San Antonio) data center since September 2009. I believe it is the oldest continuously running Windows Azure application.

About Me

I'm a Windows Azure Insider, a retired Windows Azure MVP, the principal developer for OakLeaf Systems and the author of 30+ books on Microsoft software. The books have more than 1.25 million English copies in print and have been translated into 20+ languages.

Full disclosure: I make part of my livelihood by writing about Microsoft products in books and for magazines. I regularly receive free evaluation software from Microsoft and press credentials for Microsoft Tech•Ed and PDC. I'm also a member of the Microsoft Partner Network.