••Arnon Rotem-Gal-Oz expands on his “CRUD is bad for REST” thesis in this 6/24/2009 post to Dr. Dobbs CodeTalk. In brief, Arnon’s position is:

[T]he main reason CRUD is wrong for REST is an architectural one. One of the base characteristics(*) of REST is using hypermedia to externalize the state machine of the protocol (a.k.a. HATEOAS – Hypermedia as the Engine of Application State). The URI to URI transition is what makes the protocol tick (the transaction implementation by Alexandros discussed in the previous post shows a good example of following this principle). …
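The distinction can be sketched in a few lines. Below is a toy illustration (not Arnon's code, and the URIs and link relations are invented): a client that never constructs CRUD-style URIs itself, but only follows the links the server offers, so the server's responses externalize the legal state transitions.

```python
# Hypothetical responses keyed by URI; each response exposes the next
# legal transitions as named links -- the hypermedia *is* the state machine.
RESPONSES = {
    "/orders/42": {"state": "open", "links": {"pay": "/orders/42/payment"}},
    "/orders/42/payment": {"state": "paid", "links": {"track": "/orders/42/shipment"}},
    "/orders/42/shipment": {"state": "shipped", "links": {}},
}

def follow(uri, rel):
    """Transition by link relation; fails if the current state forbids it."""
    doc = RESPONSES[uri]
    if rel not in doc["links"]:
        raise ValueError(f"transition {rel!r} not offered in state {doc['state']!r}")
    return doc["links"][rel]

uri = "/orders/42"
uri = follow(uri, "pay")        # server offered "pay", so this succeeds
uri = follow(uri, "track")      # "track" only appears after payment
print(RESPONSES[uri]["state"])  # shipped
```

Trying `follow("/orders/42", "track")` raises immediately: the client cannot skip a protocol step, because the transition simply isn't in the representation.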

Hosting and deploying ASP.NET MVC applications on Windows Azure works like a charm. However, if you have been reading my blog for a while, you might have seen that I don’t like the fact that my ASP.NET MVC views are stored in the deployed package as well… Why? If I want to change some text or I made a typo, I would have to re-deploy my entire application for this. Takes a while, application is down during deployment, … And all of that for a typo…

Luckily, Windows Azure also provides blob storage, on which you can host any blob of data (or any file, if you don’t like saying “blob”). These blobs can easily be managed with a tool like Azure Blob Storage Explorer. Now let’s see if we can abuse blob storage for storing the views of an ASP.NET MVC web application, making it easier to modify the text and stuff. We’ll do this by creating a new VirtualPathProvider.

Note that this approach can also be used to create a CMS based on ASP.NET MVC and Windows Azure.
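The resolution pattern behind that VirtualPathProvider can be sketched language-neutrally (the real implementation is .NET; the dictionary below is only a stand-in for an Azure blob container client, and the paths are invented): check blob storage first, and fall back to the files deployed with the package.

```python
import os

# Stand-in for an Azure blob container: virtual path -> view markup.
BLOB_CONTAINER = {
    "Views/Home/Index.aspx": "<h1>Hello from blob storage</h1>",
}

def resolve_view(virtual_path, local_root="."):
    """Blob wins over the packaged file, so a typo fix needs no redeployment."""
    if virtual_path in BLOB_CONTAINER:
        return BLOB_CONTAINER[virtual_path]
    # Fall back to the view shipped in the deployment package.
    local = os.path.join(local_root, virtual_path)
    with open(local, encoding="utf-8") as f:
        return f.read()

print(resolve_view("Views/Home/Index.aspx"))
```

Editing the blob (with a tool like Azure Blob Storage Explorer) changes what the resolver returns on the next request, which is exactly why the approach also works as a lightweight CMS.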

Bruno Terkaly continues his series on Azure tables with these three posts of 6/21 – 6/22/2009:

Darned if this post hasn’t been rough to write. I don’t know if it’s my continued lack of caffeine (quit it about 10 days ago now), or the constant interruptions. At least the interruptions have been meaningful. But after 2 days of off and on again effort, this post is finally done.

As some of you reading this may already be aware, I’ve spent much of my spare time the last several weeks diving into Microsoft’s .NET Services. I’m finally ready to start sharing what I’ve learned in what I hope is a much more easily digestible format. Nothing against all the official documents and videos that are out there. They’re all excellent information. The problem is that there’s simply too much of it. :)

Brent’s summary of the three major .NET Services includes Workflow, the demise of which in Azure v1 I reported on earlier. Workflow won’t return to .NET Services until after .NET 4.0 RTMs.

The code that takes care of authentication is traditionally one of the nastiest spots of every distributed application. The current situation derives from multiple causes, from tight coupling with specific technologies to trusting non-experts to write security code. Microsoft has been among the thought leaders who proposed a strategic solution to the problem, the Identity Metasystem and its claims-based identities, achieving vast consensus across the industry. Come to this session to learn how you can finally put that vision in practice thanks to the new 'Geneva' product line.

Another such feature is the enhanced support for X.509 certificate credentials in Information Cards.

Using Information Cards backed by an X.509 certificate provides the added benefit of increased security, and with “Geneva” Server Beta 2 it becomes very easy to provision such a card. Pretty much all that you need to do is to check the “Certificate” checkbox in the Information Card Properties dialog in Geneva Server (right-click the Information Card tab in the navigation pane, and select Properties from the context menu).

[F]or a variety of reasons, an application that takes advantage of the Geneva Framework will not work “as is” when hosted in Windows Azure, including Microsoft products that were written to use the Geneva Framework. You may have heard that the new full trust settings we announced for Windows Azure at MIX would make the above scenario work; however, that’s not the case: it takes more than full trust to enable the complete range of possibilities offered by claims-based access.

My question about Geneva Beta 2 in a comment to this post remains unanswered.

••• Microsoft’s Open Government Data Initiative (OGDI) site has expanded with more Azure-hosted Washington, D.C. data sets on the Data Page, and details on the OGDI API on the Developers page. According to the Home page:

The Open Government Data Initiative (OGDI) is an initiative led by the Microsoft Public Sector Developer Evangelism team. OGDI uses the Azure Services Platform to make it easier to publish and use a wide variety of public data from government agencies. OGDI is also a free, open source ‘starter kit’ (coming soon) with code that can be used to publish data on the Internet in a Web-friendly format with easy-to-use, open APIs. OGDI-based web APIs can be accessed from a variety of client technologies such as Silverlight, Flash, JavaScript, PHP, Python, Ruby, mapping web sites, etc.

Microsoft partner IDV Solutions has created a terrific mapping overlay tool that can get map data from any KML source.

Since OGDI natively emits KML, it’s a great demonstration of web standards enabling open government data. They’ve included DC data from OGDI, and some national data (parks, earthquakes), but you can easily add any KML data set just by entering a URL.

1. Hohm is a hosted service running on Azure, Microsoft’s cloud platform. There are relatively few Microsoft services that already are running fully on top of Azure. HealthVault is one; Live Mesh is another. The calculations upon which the Hohm service is built are “really complicated,” Balterberry said, and require historical modeling. By running on Azure, Hohm can be scaled up or down, depending on demand, to use lots of compute cycles during peak demand.

2. Speaking of HealthVault, Hohm was patterned after it and uses the same security and privacy mechanisms that Microsoft’s health-information service uses. While energy consumption data doesn’t seem as much in need of guarding as patient health data is, energy usage and pricing are sensitive information to which access needs to be controlled, said Balterberry.

… To date, the majority of Microsoft's software has come paired with servers and hardware that IT departments run and manage in-house. Now, with online services, Microsoft can manage the software in its own data centers while employees at customer companies around the world access applications through a web browser.

According to Microsoft executives, companies can realize huge cost savings by not hiring staff to manage Exchange servers, or by reallocating current IT staff to other areas, a refrain software as a service (SaaS) vendors have been pushing for years now.

"IT is dominated by the people cost," says Bob Muglia, president of the Microsoft Server & Tools division. "It's the single largest expense in IT. By leveraging the scale online services can deliver, you can leverage costs and be leaner." …

Ingersoll Rand was running the e-mail system in-house. It had also developed many custom apps on the Lotus Domino server, but the cost was taking its toll, Kalka says. After looking at the on-premise, traditional version of Exchange, Kalka says "the numbers didn't look much better."

Then Microsoft approached him about an online version of Exchange. Kalka saw the low per-user price. Coupled with the fact that he didn't need to manage hardware, he decided to sign up.

"That big e-mail cost went away," he says. "We had e-mail servers all around the world. 95 percent are shut down or re-allocated for something else." …

[L]atest Silverlight-Azure reference application which is called Joint Venture. Joint Venture provides a workspace for cross-business project teams. That is, teams made up of people from more than one business who are working on some kind of business collaboration. This is an example of a Multi-Enterprise Business Application (MEBA), an app used by multiple businesses who have a relationship with each other. The cloud is an ideal place for business collaboration, providing a neutral location that can be easily and universally accessed.

David requests your Azure Developer Contest vote in his Vote for Me! post of the same date.

The thing I love about my iPhone is that it’s not a piece of technology I think about but rather, it’s the way I interact with it to get what I want done. It has its quirks, but it works…for millions of people.

The point here is that Cloud is very much like the iPhone. As Sir James (Urquhart) says “Cloud isn’t a technology, it’s an operational model.” Just like the iPhone.

Cloud is still relatively immature and it doesn’t have all the things I want or need yet (and probably never will) but it will get to the point where its maturity and the inclusion of capabilities (such as better security, interoperability, more openness, etc.) will smooth its adoption even further and I won’t feel like we’re settling anymore…until the next version shows up on shelves.

New survey results cast doubt on whether cloud computing adoption will ramp up in the next 12 months, with only 15% of corporate customers having adopted or considering adopting cloud technology over the next year.

A survey of 300 corporations worldwide found that 38% are undecided or unsure about whether they will adopt cloud services, and another 47% said they are not considering implementing cloud in the next year. Security is the biggest roadblock.

“An overwhelming 85% majority of corporate customers will not implement a private or public cloud computing infrastructure in 2009 because of fears that cloud providers may not be able to adequately secure sensitive corporate data,” writes Information Technology Intelligence Corp. principal analyst Laura DiDio in a new report.

••Stephen Lawson reports in his Cloud is Internet's next generation, HP executive says post of 6/25/2009 that Cloud Services CTO Russ Daniels says “the cloud makes the Internet more than an infrastructure for automating business processes or letting people view information.”

There was general agreement [at ISC ‘09 in Hamburg, Germany] on the benefits of cloud computing: elastic capacity, pay-per-use model, platform abstraction, economies of scale, and built-in fault tolerance. Unfortunately -- and maybe significantly -- there didn't seem to be much consensus about whether the clouds would usurp traditional HPC infrastructure as the platform of choice.

CIOs and similar high-ranking user executives see promise in Cloud Computing and, for the most part, believe that they understand what it is, and how to benefit from it. But insights from a recent four-day series of events with CIOs around the US indicate that, in reality, there are multiple definitions of Cloud Computing - and relatively few executives can see the scope of its effects.

From June 11 through June 18, Saugatuck Research VP Charlie Burns took part in four expert panel and networking reception events examining the realities of Cloud Computing, and their effects on user business and IT strategy, planning and management.

And continues with an analysis of “[d]iscussions during the events and private conversations with session attendees.”

Horizontal scaling of applications is a fairly well understood process that involves (old skool) server virtualization of the network kind: making many servers (instances) look like one to the outside world. When you start adding instances to increase capacity for your application, load balancing necessarily gets involved as it’s the way in which horizontal scalability is implemented today. …
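The load-balancing step that horizontal scaling implies can be sketched minimally (an illustrative toy, not any particular balancer; the instance names are invented): N interchangeable instances behind one front end, with capacity added by simply growing the pool.

```python
import itertools

class RoundRobinBalancer:
    """Toy round-robin front end over a pool of interchangeable instances."""

    def __init__(self, instances):
        self._cycle = itertools.cycle(instances)

    def route(self, request):
        # Each request goes to the next instance in the pool; scaling out
        # is just constructing the balancer with more entries.
        return (next(self._cycle), request)

lb = RoundRobinBalancer(["web-1", "web-2", "web-3"])
print([lb.route(f"req-{i}")[0] for i in range(4)])
# ['web-1', 'web-2', 'web-3', 'web-1']
```

Real balancers add health checks, weighting, and session affinity, but the core idea is the same: the outside world sees one endpoint no matter how many instances sit behind it.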

A new survey released by IBM and Securities Industry and Financial Markets Association (SIFMA) finds that IT budgets are tight on Wall Street, but things are loosening up, and there’s going to be plenty of demand for new technology initiatives in the near future as firms on the Street look to “transformational” solutions to help better manage risk.

The survey of more than 350 Wall Street IT professionals found a “significant” increase in interest in new technologies and computing models, in particular cloud computing, as firms seek to overcome budgetary restrictions and skills shortages. Almost half of the respondents now see cloud computing as a disruptive force. …

Increasingly, … procurement is self-educating via the Internet. I’ve been seeing this a bit in relationship to the cloud (although there, the big waves are being made by business leadership, especially the CEO and CFO, reading about cloud in the press and online, more so than Purchasing), and a whole lot in the CDN market, where things like Dan Rayburn’s blog posts on CDN pricing provide some open guidance on market pricing. Bereft of context, and armed with just enough knowledge to be dangerous, purchasing folks looking across a market for the cheapest place to source something can arrive at incorrect conclusions about what IT is really trying to source, and misjudge how much negotiating leverage they’ll really have with a vendor.

Chris provides more details of the five points, while Toby delivers more background.

Daryl Plummer is a managing vice president and chief Gartner fellow.

••Mary Jo Foley’s “All about Azure” Webcast of 6/24/2009 and slides should be available for download here, but the link doesn’t work. Will update if and when ZDNet fixes it. See the Cloud Computing Events section for more details.

One-third of 1,200 organizations (33%) plan to convert their application environments away from a traditional, client-server model to one based on virtualization and cloud computing over the next two years, according to a study commissioned by Microsoft and released today. The study sought to broadly determine global IT spending priorities.

While the survey was far from comprehensive, it did uncover a few silver-lining facts. IT spending budgets will not be cut, with 98% saying they will generally maintain or increase their planned investment. Nearly two-thirds say the economy has created reason to invest more in one or more areas of technology. And of those, virtualization, security, systems management and cloud computing are the areas of choice. Specifically:

The survey confirmed Microsoft’s in-house belief that IT budgets still have room for investment in infrastructure innovations, he said. The Redmond folks hope that will include convincing corporate IT departments, which pretty much skipped the Vista era, to finally move from Windows XP to Windows 7.

Abstract: High-scale cloud services provide economies of scale of five to ten over small-scale deployments, and are becoming a large part of both enterprise information processing and consumer services. Even very large enterprise IT deployments have quite different cost drivers and optimization points from internet-scale services. The former are people-dominated from a cost perspective whereas internet-scale service costs are driven by server hardware and infrastructure with people costs fading into the noise at less than 10%.

In this talk we inventory where the infrastructure costs are in internet-scale services. We track power distribution from 115KV at the property line through all conversions into the data center, tracking the losses to final delivery at semiconductor voltage levels. We track cooling and all the energy conversions from power dissipation through release to the environment outside of the building. Understanding where the costs and inefficiencies lie, we’ll look more closely at cooling and overall mechanical system design, server hardware design, and software techniques including graceful degradation mode, power yield management, and resource consumption shaping.
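A back-of-the-envelope version of that power-tracking exercise is easy to run yourself. The stage efficiencies below are assumed, illustrative figures (not Hamilton's numbers): each conversion between the 115 kV property line and the server silicon loses a few percent, and the losses multiply.

```python
# Assumed per-stage efficiencies for the power path into a data center.
stages = {
    "utility transformer": 0.98,
    "UPS": 0.94,
    "PDU/transformer": 0.98,
    "server power supply": 0.85,
    "voltage regulators": 0.90,
}

# Overall delivery efficiency is the product of the stage efficiencies.
overall = 1.0
for name, eff in stages.items():
    overall *= eff

print(f"fraction of utility power reaching the chips: {overall:.2f}")
# With these assumptions, roughly 0.69 -- the rest is conversion loss,
# before even counting the cooling energy needed to remove that heat.
```

The point of the inventory is that each stage looks nearly harmless in isolation; it is the chain of conversions, plus the cooling load they create, that dominates infrastructure cost.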

Recently I read an article about a traditional enterprise grid computing company that is attempting to enter the nascent cloud computing market. Without naming names, I will say the technology is probably decent; what they seem to lack is any real insight into the cost advantages that cloud computing enables. What I'm getting at is the ability to scale your resources -- hardware and software alike -- as you need them, only paying for what you need, when you need it. This is arguably one of the key advantages of cloud computing, be it a private or public cloud.

My biggest issue with enterprise software companies applying traditional software licensing to cloud infrastructure software is that by charging $1,000 per year / per node, you are in a sense applying a static costing model to a dynamic environment, which basically negates any of the cost advantages that cloud computing brings. It's almost like they're saying this is how we've always done it, so why change? To put it another way, on one hand they're saying "reinvent your datacenter" yet on the other hand they're saying "we don't need to reinvent how we bill you."
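The mismatch is easy to quantify with a toy model. All the numbers below are illustrative assumptions (the $1,000/node/year figure comes from the post; the metered rate and the load profile are invented): a workload that needs ten nodes at peak but only two off-peak must license ten nodes flat, while metered pricing tracks actual usage.

```python
FLAT_PER_NODE_YEAR = 1000.0        # the static license cost cited above
METERED_PER_NODE_HOUR = 0.15       # assumed rate, not a real vendor price

# Assumed load profile: 10 nodes for 4 peak hours/day, 2 nodes otherwise.
hours_per_year = 365 * 24
peak_hours = 365 * 4
offpeak_hours = hours_per_year - peak_hours

flat_cost = 10 * FLAT_PER_NODE_YEAR    # must license for peak capacity
metered_cost = METERED_PER_NODE_HOUR * (10 * peak_hours + 2 * offpeak_hours)

print(f"flat licensing: ${flat_cost:,.0f}")
print(f"usage-based:    ${metered_cost:,.0f}")
```

With these particular assumptions the metered bill is less than half the flat one; the exact ratio depends entirely on how spiky the load is, which is precisely why a static per-node price erases the elasticity advantage.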

Just because you have software packaged as a virtual machine and running in Amazon EC2 does not mean you have a “cloud” offering.

As easy as it sounds, in most cases when a vendor claims they have their software available as a service/cloud offering, it is just that: a virtual machine image (such as an Amazon Machine Image – AMI) and maybe a hosting partner eager to host that virtual machine for you.

10Gen is developing MongoDB, a database for the cloud that supports Ruby, Python, Java, C++, PHP, Perl, and server-side JavaScript and has more features than key-value (Entity-Attribute-Value, EAV) databases.
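What "more features than key-value" means in practice is queries on fields inside the stored value. The sketch below is only a conceptual stand-in (a plain list of dicts, with invented sample documents) for a document collection; real MongoDB code would go through a driver such as pymongo.

```python
# Stand-in for a document collection: each document is schema-free.
collection = [
    {"_id": 1, "name": "widget", "price": 10, "tags": ["blue"]},
    {"_id": 2, "name": "gadget", "price": 25, "tags": ["blue", "sale"]},
    {"_id": 3, "name": "gizmo",  "price": 7,  "tags": ["red"]},
]

def find(coll, pred):
    """Naive query: return documents whose fields satisfy a predicate."""
    return [doc for doc in coll if pred(doc)]

# A pure key-value store can only do this:
by_id = {doc["_id"]: doc for doc in collection}[2]

# A document store can also do these:
on_sale = find(collection, lambda d: "sale" in d["tags"])
cheap   = find(collection, lambda d: d["price"] < 20)
print(by_id["name"], [d["name"] for d in on_sale], [d["name"] for d in cheap])
```

The key-value model answers only "give me the value for key 2"; the document model answers "which documents are tagged for sale" or "which cost under 20" without the application maintaining its own indexes.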

Acquiring Yahoo, one employee at a time: Microsoft has recruited Kevin Timmons, former lead of Yahoo’s data center team, to head up its Data Center Services organization. Timmons was once director of Operations at GeoCities and worked his way up to VP of Operations at Yahoo, where he led the build-out of the company’s data centers and infrastructure.

[He cannot] help but ponder if one motivation for moving to the cloud was this “need” to not be limited by existing infrastructure. How many folks will look to the cloud not because of cost, or features, but simply because the near endless resources it brings mean that they are no longer bound by the constraints imposed by their existing infrastructure. They can operate outside of enterprise infrastructure governance and budgeting.

I’ve turned one of my earlier blog entries, Smoke-and-mirrors and cloud software into a full-blown research note: “Software on Amazon’s Elastic Compute Cloud: How to Tell Hype From Reality” (clients only). It’s a Q&A for your software vendor, if they suggest that you deploy their solution on EC2, or if you want to do so and you’re wondering what vendor support you’ll get if you do so. The information is specific to Amazon (since most client inquiries of this type involve Amazon), but somewhat applicable to other cloud compute service providers, too.

More broadly, I’ve noticed an increasing tendency on the part of cloud compute vendors to over-promise. It’s not credible, and it leaves prospective customers scratching their heads and feeling like someone has tried to pull a fast one on them. Worse still, it could leave more gullible businesses going into implementations that ultimately fail. This is exactly what drives the Trough of Disillusionment of the hype cycle and hampers productive mainstream adoption. …

Ben Kepes summarizes the first session of the Enterprise 2.0 2009 conference by Alistair Croll in his Cloud Computing – A Real World Guide post of 6/22/2009. Croll is co-author of Complete Web Monitoring and a principal analyst for Bitcurrent.

Reuven Cohen says “On second thoughts, ‘Multiverse’ does little to describe how each of those clouds interact” in his The Cloud Computing Metaverse post of 6/21/2009:

In describing my theory on the Cloud Multiverse, I may have missed the few obvious implications of using the prefix "multi," meaning consisting of more than one part or entity. Although the Cloud Multiverse thesis suggests there will be more than one Internet-based platform or cloud to choose from, it does little to describe how each of those clouds interacts. For this we need another way to describe how each of these virtualized interconnected environments interacts with one another.

In place of "multi" I suggest we use the prefix "Meta" (from Greek: μετά = "after", "beyond", "with", "adjacent", "self").

The cloud promises to change the way businesses, governments and consumers access, use and move data. For many organizations, a big selling point in cloud infrastructure services is migrating massive data sets to relieve internal storage requirements, leverage vast computing power, reduce or contain their data center footprint, and free up IT resources for strategic business initiatives. As we move critical and non-critical data to the cloud, reliable, secure and fast access to that information is crucial. But given bandwidth and distance constraints, how do we move and manage that data to and from the cloud, and between different cloud services, in a cost-efficient, scalable manner?
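The bandwidth constraint mentioned above is worth making concrete. The calculation below uses assumed, illustrative numbers (the link speed, utilization, and data set size are not from the source): even a modest data set takes days to push over a typical enterprise uplink.

```python
def transfer_days(terabytes, megabits_per_sec, utilization=0.8):
    """Days to move a data set over a link, assuming partial utilization."""
    bits = terabytes * 8e12                       # TB -> bits (decimal units)
    seconds = bits / (megabits_per_sec * 1e6 * utilization)
    return seconds / 86400

# Example: 10 TB over a 100 Mbps link at 80% utilization.
print(f"{transfer_days(10, 100):.1f} days")
```

At these assumed figures the answer is on the order of a week and a half, which is why migration strategies, compression, WAN optimization, and even shipping physical disks all come up when "massive data sets" meet real-world bandwidth.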

SaaSGrid℠ is a comprehensive Platform as a Service (PaaS) offering that drastically reduces time-to-market, allows organizations to build complex and powerful SaaS applications and affords them the ability to easily manage their SaaS business. SaaSGrid focuses on reducing the barrier to entry for SaaS by smashing significant technical hurdles like multi-tenancy and by providing "out of the box" application services like monetization and billing, while supplying ongoing value with an arsenal of management tools to manage a SaaS business and associated application maintenance.

Build real enterprise SaaS applications with technologies you already know. SaaSGrid applications are written using Microsoft .NET languages and the simple yet powerful SaaSGrid API. There is no need to learn new programming languages or flashy online 'drag and drop editors' that impose artificial limitations on your business. In fact, with SaaSGrid, the web-based enterprise apps you've already built using .NET are probably closer to SaaS-enabled than you think. SaaSGrid allows you to take advantage of your existing assets and knowledge, and extend them with massive SaaS-focused value. [Emphasis Apprenda’s.]

I’d certainly like to see a point-by-point comparison with Azure WebRoles and .NET Services.

"It's an evolution of the industry." And transitioning does not require overhauling all computer programs and hardware. "The first entree from a transparency perspective is to put publicly available data into the cloud. That's the least risky," Adams said.

To ensure Microsoft remains a player in the growing cloud market, company officials are developing software that is interoperable, or able to exchange information among multiple systems and services. "It's all about choices," she said. "It's going to be a hybrid world.”

“Most public clouds are run in a more secure manner than the networks enterprises maintain on their own. Not all private companies maintain the same discipline,” he said Thursday at the Structure 09 conference in San Francisco.

This is a common refrain that few CTOs, CIOs or CISOs appear to believe. Greg is CTO and Executive Vice President of Research and Development at Sun Microsystems.

An IBM researcher has solved a thorny mathematical problem that has confounded scientists since the invention of public-key encryption several decades ago. The breakthrough, called "privacy homomorphism," or "fully homomorphic encryption," makes possible the deep and unlimited analysis of encrypted information -- data that has been intentionally scrambled -- without sacrificing confidentiality.

And adds this caveat in an update:

According to a Forbes article, Gentry's elegant solution has a catch: It requires immense computational effort. In the case of a Google search, for instance, performing the process with encrypted keywords would multiply the necessary computing time by around 1 trillion, Gentry estimates. But now that Gentry has broken the theoretical barrier to fully homomorphic encryption, the steps to make it practical won't be far behind, predicts professor Rivest. "There's a lot of engineering work to be done," he says. "But until now we've thought this might not be possible. Now we know it is." [Emphasis added.]
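For intuition about what a homomorphic property even is, here is a classroom illustration, emphatically not Gentry's scheme: textbook RSA is *multiplicatively* homomorphic, meaning you can multiply two ciphertexts and the result decrypts to the product of the plaintexts. Fully homomorphic encryption extends this kind of property to arbitrary computation (both addition and multiplication), which is what makes it so hard and so valuable.

```python
# Toy RSA key (n = 61 * 53); far too small to be secure -- demo only.
n, e, d = 3233, 17, 2753

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

c1, c2 = enc(7), enc(9)
product_cipher = (c1 * c2) % n   # compute on ciphertexts only
print(dec(product_cipher))       # 63 == 7 * 9, computed while encrypted
```

The party doing the multiplication never sees 7 or 9; it works entirely on scrambled values (valid as long as the plaintext product stays below n). Gentry's result shows, in principle, how to get this effect for any computation, which is what the trillion-fold slowdown estimate refers to.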

Audit and enterprise risk - they're inextricably linked. Growing cyber threats - from the inside and out - require organizations and their regulators to pay closer attention to technology and information security.

What are some of the key audit and risk trends to track? David Melnick of Deloitte answers that question in an interview focusing on:

Top challenges for financial institutions and government agencies;

Successful strategies being deployed to mitigate threats;

Trends organizations should track as they eye 2010.

Melnick is a principal in security and privacy services within the audit and enterprise risk services practice in the Los Angeles office of Deloitte and brings more than 17 years of experience designing, developing, managing and auditing large-scale secure technology infrastructure. Melnick has authored several technology books and is a frequent speaker on the topics of security and electronic commerce.

This session will present Design Patterns for cloud computing on the Azure platform. Azure provides oodles of functionality that range from application hosting and storage to enterprise-grade security and workflow. Design patterns help you think about these capabilities in the right way and how they can be combined into composite applications. We'll cover design patterns for hosting, data, communication, synchronization, and security as well as composite application patterns that combine them. We'll be doing hands-on code demos of a number of composite applications, including a grid computing application. Azure Design Patterns Web Site.

Brandon says in his What Is Cloud Computing? post of 6/25/2009 from Structure 09 that “the word ‘cloud’ is catnip for nerds.” … “Next up on the zeitgeist watch? Attaching the word “scale” to the name of your company.”

I will be giving two session[s] at the conference, “Introduction to Clouds” and “Clouds in the Enterprise”. My “Clouds in the Enterprise” session will cover IBM’s new “Blue Cloud/Cloudbursting” announcement. If you happen to be in the Columbus area next Tuesday you should come and learn more about Cloud Computing. If you have any questions please feel free to contact me. Also, I have reserved extra tickets for Tivoli users.

Mary Jo Foley will “help sort out what Azure is (and what it isn’t) in a live Webcast on Wednesday, June 24 at 1:00 PM ET / 10:00 AM PT / 5:00 PM GMT” according to Jason Hiner. “This is a good opportunity to get up to speed on Azure before Microsoft launches it later this year.”

Jason describes the content:

ZDNet’s “All About Microsoft” blog editor Mary Jo Foley will offer an Azure primer. She’ll explain what Azure is — from the base operating system level, to the higher-level services layers, to the “user experience.” Foley will compare Azure to competing cloud platforms from Amazon, Google and other players. She will discuss how Microsoft is using and plans to use the platform itself. And Foley will differentiate between what we know about Azure from what many are anticipating from the platform.

Even if you’re dragging your heels about moving your apps and data “to the cloud,” it’s not too soon to hear more about Microsoft’s cloud plans. This Webcast will provide a high-level overview of where Microsoft has been and where it’s going in the cloud/utility computing market.

To help public sector entities meet these demands, Microsoft announced the Open Government Data Initiative (OGDI) on May 7, 2009. OGDI provides an Internet-standards-based approach to house existing public government data in Microsoft’s cloud computing platform, called Windows Azure. The approach makes the data accessible in a programmatic manner that uses open, industry-standard protocols and application programming interfaces (APIs).

Typically, federal, state and local government data is available via download from government Web sites, which requires citizen developers to host and maintain the data themselves. Through OGDI, Microsoft is highlighting the importance of programmatic access to government data (versus downloading the data).
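"Programmatic access" versus download-and-host can be sketched in a few lines. The payload below is a hypothetical stand-in for what an HTTP GET against an OGDI data set might return (the field names and records are invented); the real service follows ADO.NET Data Services conventions over HTTP.

```python
import json

# Hypothetical JSON response from an OGDI-style endpoint.
payload = json.loads("""
{"d": [
  {"agency": "DDOT", "category": "pothole", "ward": 2},
  {"agency": "DPW",  "category": "trash",   "ward": 6}
]}
""")

# The developer filters live data in place instead of mirroring the
# entire data set onto infrastructure they must host and maintain.
ward2 = [r for r in payload["d"] if r["ward"] == 2]
print(len(ward2), ward2[0]["category"])
```

The contrast with the download model is that the data stays authoritative on the government side: a client queries for exactly the slice it needs, and updates flow through automatically.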

Companies are adopting Software as a Service (SaaS) business intelligence (BI) solutions at a record pace as they upgrade from complex collections of spreadsheets and augment their existing BI deployments. Before your company jumps into the fray of deploying BI using the Cloud computing model, join industry expert Wayne Eckerson, Director of The Data Warehousing Institute (TDWI) Research, for straight talk about pitfalls to avoid and how to achieve a rapid Return on Investment (ROI).

The Orange County Azure User Group next meets on Thursday, June 25 at 6pm. The topic for this month's meeting is Silverlight and Azure. David Pallmann and Richard Fencel will both be presenting.

In David's presentation, you'll learn how to create rich Silverlight applications that are Azure-hosted and take advantage of cloud services. We'll build an Azure-hosted Silverlight application from the ground up that utilizes web services and cloud storage.

IBM recently made its most significant cloud computing announcement to date, which one executive compares to the launch of Big Blue's venerable System/360 mainframe 40 years ago. Following is my list of the top 10 things you need to know about IBM's emerging cloud strategy. …

Following his outburst against cloud computing last year, it appears that Larry Ellison has warmed up to the cloud computing model, if not the buzz phrase itself. Oracle's CEO yesterday said it's a goal to become the software industry's "number one on-demand application company."

Ellison last year lambasted cloud computing, referring to the hype around it as "idiocy," "gibberish," and "crazy." As I pointed out at the time, however, Oracle was moving into cloud computing even as its leader railed against it. During a conference call yesterday with analysts to discuss Oracle's financial results, Ellison provided evidence that Oracle is indeed making progress on this front and has ambitious goals in the software-as-a-service market.

"We think we can be the number one applications company, the number one on-premise application company, and the number one on-demand application company. That's our goal," he said. …

Throughout all of this, Ellison didn't use the term cloud computing, referring instead to on-demand software. One analyst observed, "It sounds like you're getting into cloud computing." To which Ellison, the cloud antagonist, responded: "Little bit."

Effectively, your servers are “joined” to the cloud. This is my “marketecture” view from my conversation with James and Bryan, and what they end up releasing may look very different. But if what they say is true, they may be one of the first to have actually deployed a hybrid cloud into production. That’s huge - like Santa Claus is Real kind of huge!

When it comes to data center efficiency, Yahoo has maintained a lower profile than rivals Google and Microsoft. But the Yahoo team is building a compelling data center story of its own, with innovations in cooling design and energy efficiency ratings approaching the best that Google has achieved.

Yahoo’s Adam Bechtel began telling the story yesterday at the O’Reilly Velocity 2009 conference in San Jose, Calif. Bechtel, the chief architect of Yahoo’s data center operations, shared details of a patented cold-aisle containment system that integrates an overhead cooling module, building the air conditioning units into the top of a “podule” of cabinets packed with servers. …

Right now, a huge number of service providers are making plans to launch computing clouds, and I thought it would be interesting to outline some of the requirements I often hear from prospective cloud providers here. …

Our clouds need to run on inexpensive storage.

We want to build on an Open-Source Hypervisor.

We need a way to integrate with our Billing & Provisioning apps.

We need to support both Windows and Linux VMs, and that means image based pricing.

We want an API, but also a UI that makes admin simple for end-users.

Cloud images need to be more reliable than dedicated servers.

We want a turn-key solution, not something we have to maintain.

The post includes details of the seven “requirements.” (Apparently, the original post was named “7 Challenges for the Would-Be Cloud Architect.”)

As Amazon’s cloud continues to grow, the company is investing in real-world brick-and-mortar data centers to provide additional capacity. The retail/infrastructure company recently leased a 110,000 square foot property in northern Virginia to expand its data center footprint.

The additional space will help accommodate dramatic growth for Amazon Web Services, the suite of services that allow companies to run their applications on Amazon’s infrastructure and pay based on usage. More than 500,000 developers are now using AWS, and Amazon’s S3 storage now houses more than 50 billion objects.

The issue here is that cloud computing is really about, well, cloud computing. Existing hardware and software vendors, including Microsoft, Cisco, HP, etc., and of course IBM, seem to find that thought a bit scary and continue to toss traditional hardware and software at the problem. …

I don’t believe Microsoft is throwing the same hardware into its data center as Cisco, HP and IBM want to sell to private cloud wannabees.

The diagram below gives a bit of insight into where IBM is today and where they are heading. I posted this last week, but removed the diagram at IBM’s request. Now I’m reposting it after seeing Sean Poulay of IBM present the chart at the Enterprise 2.0 Conference in Boston.

Immutable Service Containers (ISC) are an architectural deployment pattern used to describe a foundation for highly secure service delivery. ISCs are essentially a container into which a service or set of services is configured and deployed. First and foremost, ISCs are not based upon any one product or technology. In fact, an actual instantiation of an ISC can and often will differ based upon customer and application requirements. That said, each ISC embodies at its core the key principles inherent in the Sun Systemic Security framework including: self-preservation, defense in depth, least privilege, compartmentalization and proportionality.

Lately, if you have listened to the pronouncements of vendors large and small, they all are enthusiastically embracing cloud computing as the next wave of software and service delivery.

However, the Wall Street Journal’s Ben Worthen and Justin Scheck have a different take on all this happy cloud talk. The way they see it, the recent economic slump and tighter IT budgets have pushed many vendors into the cloud world, kicking and screaming. Oracle, HP, IBM, Microsoft, and SAP all run the risk of seeing business move into a lower-margin space, with a longer timeframe to see revenues, they write.

HP Software Chief Tom Hogan even offers an eye-opening comment, admitting to WSJ that the move from traditional to cloud software is “highly disruptive,” and that “shareholders don’t like it, and it’s a real conflict between business strategy and fiduciary duty.” …

Such success is driving some new players to seek the spotlight, however. I wanted to highlight two that I found most interesting. They are very different from one another, but those differences highlight the breadth of opportunity that remains in the PaaS market.

He goes on to describe AppScale, AppEngine, and TIBCO Silver, but not Azure, as PaaS players.

We weren’t able to ship these capabilities in the .NET Framework 4.0 Beta 1 so we’ve decided to release them alongside the Beta. This CTP is an early preview of these features and as such we’re looking for lots of feedback on these components. This functionality is currently not scheduled to be part of the .NET Framework 4.0 and we expect to release another CTP of these features based on the feedback we get from you. [Emphasis added.]

Model Defined Functions are a great addition to EF4. They let you add functions directly into your model rather than having to place the additional logic into business classes. This not only makes the functions “just there”; you can also use them in queries, something that you cannot do with properties defined in the classes.

This sample demonstrates how to create a WPF Forms solution that checks user input with validation code, demonstrates common controls such as DataGrid and ComboBox, and shows typical data manipulation including create, read, update, and delete. The sample solution is available in both Visual Basic and C# and is intended for use with Visual Studio 2010 Beta 1 and the .NET Framework 3.5. In the future, we will release a sample that works with the .NET Framework 4.0 Beta.

In the first part of this series I looked at how you might go about building an (incredibly tiny) domain specific language for analysing data. The context I gave was a scenario where project managers were required to work with a continuous stream of data in the form of a known schema. This ‘known’ schema is most commonly used in moving and transforming data between various systems in a domain where the central or end target is a Document Management System. The ‘known’ schema is an agreed format that all systems in this particular industry use to extract and subsequently load. It is common to see the project managers struggling with tools like Access to compose queries to analyse the data before or after these ETL processes, hence the proposition of a DSL.

Matthieu Mezil explains how to implement a “sub EntitySet” property in his SubObjectSet post of 6/16/2009. Matthieu writes:

With EF, when you use TPH or TPC inheritance mapping scenarios, the EntitySet is on the base class.

As I mentioned often in the past with EF v1, you can add a property in your context which returns the EntitySet.OfType<MySubType>().

OK, it’s interesting, but… in EF v1 the EntitySet is an ObjectQuery&lt;T&gt; property, and so is our property; in EF v2, however, the EntitySet is an ObjectSet&lt;T&gt;. This class implements the IObjectSet&lt;T&gt; interface, which has three methods to add, attach and delete entities.

One guy tells me that he wants to be able to use these methods directly on the “sub EntitySet” property.
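A minimal sketch of what such a property wrapper might look like. This is my illustration, not Matthieu’s actual SubObjectSet code, and the Party/Customer model names are hypothetical: the wrapper exposes the OfType&lt;TSub&gt;() projection for querying while forwarding the three IObjectSet&lt;T&gt;-style mutation methods to the underlying base ObjectSet.

```csharp
using System;
using System.Collections.Generic;
using System.Data.Objects;
using System.Linq;
using System.Linq.Expressions;

// Hedged sketch: a queryable "sub EntitySet" for a derived type in a
// TPH/TPC model. AddObject/Attach/DeleteObject are forwarded to the
// base set, since that's where EF tracks the entities.
public class SubObjectSet<TBase, TSub> : IQueryable<TSub>
    where TBase : class
    where TSub : class, TBase
{
    private readonly ObjectSet<TBase> _baseSet;
    private readonly ObjectQuery<TSub> _query;

    public SubObjectSet(ObjectSet<TBase> baseSet)
    {
        _baseSet = baseSet;
        _query = baseSet.OfType<TSub>();   // filter to the derived type
    }

    // The three IObjectSet<T>-style methods, delegated to the base set
    public void AddObject(TSub entity) { _baseSet.AddObject(entity); }
    public void Attach(TSub entity) { _baseSet.Attach(entity); }
    public void DeleteObject(TSub entity) { _baseSet.DeleteObject(entity); }

    // IQueryable plumbing, delegated to the OfType query
    public IEnumerator<TSub> GetEnumerator() { return ((IEnumerable<TSub>)_query).GetEnumerator(); }
    System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator() { return GetEnumerator(); }
    public Type ElementType { get { return ((IQueryable)_query).ElementType; } }
    public Expression Expression { get { return ((IQueryable)_query).Expression; } }
    public IQueryProvider Provider { get { return ((IQueryable)_query).Provider; } }
}
```

A hypothetical `Customers` property on the context could then return `new SubObjectSet<Party, Customer>(Parties)`, giving callers both the query surface and the add/attach/delete methods directly on the “sub EntitySet.”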

In this post, Faisal “look[s] at how we might be able to take our example a bit further and use some of the common patterns such as Repository and Unit Of Work so that we can implement persistence specific concerns in our example.”

The ASP.NET AJAX 4 release has some really cool features in it that can help lower the barrier to entry for developing client-side web applications (jQuery doesn’t hurt either). One of the more compelling new classes is the DataContext. Basically, the DataContext is an object that is capable of consuming a server-side resource that serves JSON data. In its most basic form, you simply give it the URI of a service and the operation name to execute and it handles making the underlying request. If you had an AJAX-enabled ASMX service like so (note: I’m using the Entity Framework)… [Emphasis added.]

We learned that a significantly large portion of customers use our partners’ ADO.NET providers for Oracle, with regularly updated support for Oracle releases and new features. In addition, many of the third-party providers are able to consistently provide the same level of quality and support that customers have come to expect from Microsoft. This is a strong testament to our partners’ support for our technologies and the strength of our partner ecosystem. It is our assessment that even if we made significant investments in ADO.NET OracleClient to bring it to parity with our partners’ providers, customers would not have a compelling reason to switch to ADO.NET OracleClient.

This week the guys talk to Damien Guard, a developer working on LINQ to SQL and Entity Framework. After discussing data access for a while, they talk about the programming font Damien publishes, Envy Code R.

As we already announced, the samples for chapters 1-8 of our LINQ in Action book are available through LINQPad. This includes the LINQ to Objects and LINQ to SQL chapters. I've been working on the LINQ to XML chapters (9-11) and hope that we will add them to the download soon. In the process, I've needed to learn a bit about how LINQPad works under the covers in order to add specialized classes. …

If you need to refer to external methods or add other classes, choose the Program option. This will add a Sub Main method and allow you to add additional methods. …
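The quote above describes the VB flavor (a `Sub Main`); for illustration, a C#-flavor Program-mode query has roughly this shape. The helper method is a made-up example, and `Dump()` is LINQPad’s built-in output extension:

```csharp
// LINQPad "C# Program" mode sketch: a Main entry point plus any
// additional helper methods you need to reference.
void Main()
{
    var words = new[] { "linq", "in", "action" };
    words.Select(Capitalize).Dump();   // renders the results in LINQPad's output pane
}

// A helper method defined alongside Main, as Program mode allows
string Capitalize(string s)
{
    return char.ToUpper(s[0]) + s.Substring(1);
}
```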

And now I’m back with yet another post on this topic, but this time with a much simpler and improved approach! The big difference is that I’m now doing the generation at design time instead of runtime. As you will see, this has a lot of advantages.

Update: current version is now 0.9.0006 (attached to his post as a zip file)

Some time ago, someone came by the MSDN Windows Azure forums and asked a question regarding performance of Azure Queues. They didn’t just want to know something simple like call performance, but wanted to know more about throughput, from initial request until final response was received. So over the last month I managed to put together something that lets me create what I think is a fairly solid test sample. The solution involves a web role for initializing the test and monitoring the results, and a multi-threaded worker role that actually performs the test. Multiple worker roles could also have been used, but I wanted to create a sample that anyone in the CTP or using the local development fabric could easily execute. …

Let me highlight some of the features of Tables as noted in their announcement:

• All users can add data simultaneously - solving one of the biggest problems with shared worksheets. All data is always up-to-date for everyone.
• Presence - lets you know who else is working on the table and where they are working.
• Private and common views - allow the team to work together, but see the information that is important to each person. Private views let you see information that is important to you, without disturbing others working on the sheet.
• Filtering is real time, so you can play with the data and adjust your filter in real time, without having to open a dialog box for every change.
• Sorting - quick, simple and always includes all of the data.

The most interesting feature for me is the idea of Private View and Common View. This feature really solves the problem encountered by people collaborating on spreadsheets online. Apart from these features, the functionality is very basic (well, that is the reason this product is still in the labs) and they have promised to add more features in the near future.

Donovan is a senior technical evangelist and a host for this very show: he has worked on identity since he joined Microsoft in 2005, and is a well-known expert in the ADFS community. In this episode Vittorio talks with Donovan about the relationship between ADFS and Geneva Server: Donovan explains in detail how to map the old terminology to the new concepts introduced in Geneva, focusing on differences and similarities in the two approaches, and in general equipping today’s ADFS expert with everything he or she needs to hit the ground running with Geneva Server.

•• Matias Woloski describes the Claims-Driven Modifier control’s expressions for ClaimValue, Condition and Mapping, which the designer ordinarily sets for you (see below for more about the control).

While pretty much everybody can understand (& appreciate) the high-level story about claims, it is not always easy to make it concrete for everybody. The developer who had to deal with code handling multiple credentials, or had to track down where a certain authorization decision happens, sees very clearly where and how claims can make his life easier; UI developers, however, may have found it challenging to bridge the gap between understanding the general story and finding tangible ways in which claims make their work easier. Until now (at least I hope).

We have put together a demo which shows an example of what you could build on top of the Geneva Framework infrastructure to further raise the level of abstraction, to the point that a web developer is empowered to take advantage of the information unlocked by the claims with just a few clicks. This touches on the theme of customization, which somehow gets less attention than authentication and authorization (for obvious reasons) but deserves its place nonetheless. In any case, it’s not rocket science: it is a simple ASP.NET control that can modify the value of properties of other controls on the page, according to the value of the incoming claims. Despite its simplicity, it allows a surprising range of tricks :-)

The single app has its own identity silo and the federated app relies on an STS (like Geneva Server). I find this analogy useful to explain how things differ from the non-federated non-claim-based world.

Do you remember the PDC session in which Kim announced all the new wave of identity products, including Geneva?

During that session I showed a pretty comprehensive demo, where all the products & services worked together to enable a fairly realistic end-to-end scenario. You have seen demos based on the same scenario at TechEd EU, TechDays and in many presentations from my colleagues in the various subsidiaries; finally, if you came by the Geneva booth at RSA, chances are that you got a detailed walkthrough of it. Since people liked it so much, we thought it would be nice to extract just the main web application from that scenario and make it available to everyone in the form of an in-depth example. You can find the code in a handy self-installing file on code gallery, at http://code.msdn.microsoft.com/FabrikamShipping (direct link here).

Mary Jo Foley’s Too many .Nets, too little time? gets the word out that the .NET Services team is dropping .NET Workflow Services until .NET 4 releases, as I reported in last week’s post.

One obstacle that administrators looking to deploy information cards in an enterprise will inevitably face is getting information cards to their users. Nobody wants to have to send an email to their users saying that in order to access a web service, they’ll need to go to an issuance website and download an information card. Things should just work. With that in mind, the “Geneva” Server and CardSpace teams created Silent Card Provisioning, a feature that uses Group Policy to deploy information cards to domain users automatically.

WF 4 ships with an activity palette that consists of many activities – some of these are control flow activities that represent the different modeling styles developers can use to model their business process. Sequence and Flowchart are a couple of modeling styles we ship in WF 4. In this post, we will present these modeling styles, learn what they are, when to use what, and highlight the main differences between them.

Leon Welicki is a Program Manager on Microsoft’s Connected Framework Team.

By now you must be aware of the significantly enhanced Windows Workflow Foundation (WF) scheduled to be released with .NET Framework 4.0. The road to WF 4.0 and the .NET Framework 4.0 Beta 1 documentation for WF can give you more details. Being a member of the team responsible for the development of the WF tracking feature, I am excited to discuss the components that constitute this feature.

In a nutshell, tracking is a feature for gaining visibility into the execution of a workflow. The WF tracking infrastructure instruments a workflow to emit records reflecting key events during the execution. For example, when a workflow instance starts or completes, tracking records are emitted. Tracking can also extract business-relevant data associated with the workflow variables. For example, if the workflow represents an order-processing system, the order id can be extracted along with the tracking record. In general, enabling WF tracking facilitates diagnostics or business analytics over a workflow execution. For people familiar with WF tracking in .NET 3.0, the tracking components are equivalent to the tracking service in WF 3. In WF 4.0 we have improved the performance and simplified the programming model for the WF tracking feature.
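As a rough illustration of the simplified WF 4 programming model described above, a custom tracking participant can be very small. This is a hedged sketch against the Beta 1-era `System.Activities.Tracking` API; details may differ between builds:

```csharp
using System;
using System.Activities;
using System.Activities.Statements;
using System.Activities.Tracking;

// A minimal tracking participant that writes every tracking record
// it receives to the console.
class ConsoleTrackingParticipant : TrackingParticipant
{
    protected override void Track(TrackingRecord record, TimeSpan timeout)
    {
        Console.WriteLine("{0}: {1}", record.EventTime, record);
    }
}

class Program
{
    static void Main()
    {
        // A trivial workflow (a single WriteLine activity) used here
        // only to generate instance started/completed records.
        var invoker = new WorkflowInvoker(new WriteLine { Text = "Hello from WF" });
        invoker.Extensions.Add(new ConsoleTrackingParticipant());
        invoker.Invoke();
    }
}
```

In a real host you would also attach a tracking profile to filter which records are emitted, rather than receiving everything.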

• Robert Le Moine calls Taking .NET Development to the Cloud “a leap of faith” in this 6/17/2009 post that describes how his employer uses the Azure Services Platform as a virtual laboratory for application development:

Cloud computing platforms, such as Microsoft Azure, offer compelling advantages for building new scalable .NET applications. But can the Cloud be used for developing existing .NET applications? In this article, I'll explain how we've made the leap to Cloud-based development for our internal applications and the lessons we've learned along the way. Specifically, I'll describe our checklist for selecting a Cloud vendor and how we've used the virtualization capabilities of the Cloud to improve our agile development process. I'll also outline the quantifiable benefits we've seen, including saving $100,000 in capital expenditure and reducing our iteration cycle times by 25%.

As the development team lead for Buildingi, a corporate real estate consultancy that specializes in back-office technology solutions to manage large portfolios, I'm responsible for building Web-based applications using Visual Studio, the Microsoft .NET Framework, and Silverlight. Last year we started looking at Cloud Computing to gain the advantages of a scalable, virtualized platform for software. …

There are a few use cases where the Cloud is not recommended for testing (see below). These include tests that require specific x86 hardware (e.g., BIOS driver tests) and some types of performance and stress testing. If an application requires an onsite Web Service behind a firewall, this can usually be accessed using a VPN connection. …

In addition to developing the interactive Taste of Chicago map, West Monroe Partners and Microsoft also partnered to create a hosting solution that could handle the web site's user load, including half a million views. The hosted Microsoft Windows Azure solution provides the equivalent capacity of 25 purchased servers, with no infrastructure investment required by the City of Chicago. [Link added.]

• Dan Griffin’s Cloud Backup application “is a disaster recovery solution that allows you to export a Hyper-V virtual machine, archive it in Azure, and later restore it,” which he posted to CodePlex on 6/16/2009. Dan’s Cloud Backup whitepaper, CloudBackup.pdf, is a fully illustrated guide to using his solution.

In this episode we walk through the demo in some detail. The Wide World Importers Conference site we use here is the main site for a fictitious conference. The self-service part of this is entirely hosted on Windows Azure. As we walk through the registration process the information is retrieved and stored directly in Dynamics CRM Online. Naturally, as we’ve said in the past, Dynamics CRM is great at managing both contact and transactional information. We also look at how, by using 3rd party web services, we can compose new capabilities into our system. In this case we show how to integrate an internet flight booking service into the attendee registration process and then store that complex flight booking information in the Dynamics CRM data store. Finally we show how to use Silverlight to build a compelling user experience for a self-service portal. This one is pretty slick.

Recently, Chris Hoff posted an interesting concept for simply defining the logical parts of a cloud computing stack. Part of his concept is something he is calling the "Metastructure" or "essentially infrastructure is comprised of all the compute, network and storage moving parts that we identify as infrastructure today. The protocols and mechanisms that provide the interface between the infrastructure layer and the applications and information above it".

Actually I quite like the concept and the simplicity he uses to describe it. Hoff's variation is the practical implementation of a meta-abstraction layer that sits nicely between existing hardware and software stacks while enabling a bridge for future, yet undiscovered / undeveloped technologies. The idea of a Metastructure provides an extensible bridge between the legacy world of hardware-based deployments and the new world of virtualized / unified computing. (You can see why Hoff is working at Cisco; he gets the core concepts of unified computing -- one API to rule them all.)

I wanted to be able to take the work I did in developing a visual model to expose the component Cloud SPI layers into their requisite parts from an IT perspective and make it even easier to understand.

Specifically, my goal was to produce another visual and some terminology that would allow me to take it up a level so I might describe Cloud to someone who has a grasp on familiar IT terminology, but do so in a visual way:

Will the cloud succumb to the same short-sighted, market pressure that doomed the ASP model and still plagues SaaS approaches?

As the hype cycle for the cloud computing continues to gather steam, an increasing number of end users are starting to see the silver lining, while others are simply lost in the fog. It is clear that the debate over the definition, business model, and benefits of cloud will continue for some time, but it is also clear that the sluggish economic environment is increasing the appeal of having someone else pay for the robust infrastructure needed to run one’s applications. Yet, all this talk of leveraging cloud capabilities, or perhaps even building one’s own cloud, whether for public or private consumption, introduces thorny problems. How can we make sure that the cloud will bring us closer to the heavenly vision of IT we search for rather than a fog that hides a complex mess? Who will make sure that the cloud vision isn’t just another reinterpretation of the Software-as-a-Service (SaaS), Application Service Provider (ASP), grid and utility computing model that provided some technical answers but didn’t simplify anything for the internal organization? Who is architecting this mess?

Back when I did a lot of security work, we used to joke around that single sign on should be called "single vulnerability". Maybe single provider cloud models should be called "single point of failure".

Toodledo went down hard last week. I rely massively on Toodledo to organize my massively complicated work and family life. But I wasn't terribly upset because my data lives in more than one place. I wrote a draft of this blog on the Toodledo site, but I could have easily written it on the equipment that houses the synchronized copy of my notes. The site being down was annoying but not, as we say in the support business, without its workaround. …

What standards do you follow if you're interested in getting started in cloud computing? The short answer is, there are few clearly defined standards in what remains a loosely defined area. Nevertheless, the main outline is clear. Follow the leaders and follow the Web.

In an InformationWeek Webcast on The Cloud and Virtualization June 16, I tried to lay out a few of the standards that will dominate cloud computing. One assumption is that cloud computing will adopt the most efficient paradigms found on the Internet, say the massive and uniformly managed server farms of Google and Amazon.

Is EDS, in fact, a cloud provider? And how will IT departments properly factor their decisions on what to keep on-premises in data centers versus placing assets and workloads on someone else's cloud infrastructure?

In this article, I will guide you through this new environment and point out some of these design challenges that the cloud presents to us. I will also propose an architectural style, and some additional guidance, that can be used to overcome many of these challenges. Furthermore I'll give you an overview of the tools offered by the Azure cloud platform that can be used to implement such a system.

She lists traits common to most cloud providers: premium equipment, VMware-based, private VLANs, private connectivity, and co-located dedicated gear, but doesn’t really get into what is – or should be – the focus of cloud offerings: services. To be more specific, infrastructure services.

A cloud provider of course wants a solid, reliable infrastructure. That’s why they tend to use the same set of “premium” equipment. But as Lydia points out differentiation requires services above and beyond simple hosting of applications in somebody else’s backyard.

Companies are adopting cloud systems infrastructure services in two different ways: job-based “batch processing”, non-interactive computing; and request-based, real-time-response, interactive computing. The two have distinct requirements, but much as in the olden days of time-sharing, they can potentially share the same infrastructure. …

Observation: Most cloud compute services today target request-based computing, and this is the logical evolution of the hosting industry. However, a significant amount of large-enterprise immediate-term adoption is job-based computing.

Dilemma for cloud providers: Optimize infrastructure with low-power low-cost processors for request-based computing? Or try to balance job-based and request-based compute in a way that maximizes efficient use of faster CPUs?

Last week, a lightning strike left part of Amazon EC2 belonging to a single zone cut off from the real world. I don't want to go into the debate over whether it was an outage or not, but rather toward a different kind of debate. Ever since Cloud Computing started gaining traction, we have had a debate in the industry about whether an instance-based setup is better or a fabric-based one. I thought I would revisit this debate in the light of the recent Amazon EC2 "it's not an outage" incident. Let me do a brief recap of the terminologies and, then, see how the debate shapes up in the aftermath of the "Amazon lightning incident". …

Federal CIO Vivek Kundra is well known for innovative approaches to government IT. He introduced Google Apps to the city of Washington, D.C. when he was its CTO back in 2007.

He's brought with him to the federal government a philosophy that cloud computing could save money, facilitate faster procurement and deployment of technologies, and allow government agencies to concentrate more on strategic IT projects.

InformationWeek sat down with him at his office last week to discuss his thoughts about cloud computing in government, and what it would take to make cloud technologies easier to adopt in the federal space.

The Michigan State Medical Society (MSMS) today announced a collaboration with Microsoft Corp., Compuware subsidiary Covisint and MedImpact Healthcare Systems, Inc., to be first in the nation to provide statewide connectivity of medical and pharmacy data for Michigan. Patients and physicians who use the medical society's electronic portal, MSMS Connect, will now have access to critical health care data in one location -- Microsoft HealthVault. This new collaboration expands MSMS' nation-leading effort to help implement electronic health care technology statewide. …

When fully implemented into MSMS Connect, the addition of HealthVault will enable patients to store their individual health data, or their whole family's health record, in one location at no cost. Through HealthVault, which is built on a privacy- and security-enhanced foundation, patients will have complete control over their electronic health data and can give permission to their physicians and other health care providers to view it. Patients can access data from their physicians, health plans, and pharmacies, as well as upload information from medical devices that monitor a number of factors including heart rate, blood pressure and blood sugar.

When Microsoft announced Azure, it said that all of the applications would be run from its data centers. However, Watson said the company is also looking at ways that partners can host cloud-based solutions.

"We've had some interesting conversations," Watson said.

Watson’s comments about enabling partners to host cloud-based solutions bodes well for potential on-site (private-cloud) Azure implementations, which would downplay cloud lock-in issues other than a choice of operating system. It’s a foregone conclusion that moving Azure projects to platforms other than Windows Server will be impossible.

Azure watchers have expected details about the Service Level Agreement (SLA) for Azure, but none of the articles about the interview mention SLAs explicitly. However, it’s likely that Azure SLAs and pricing will be interdependent.

This is about the money. Hate to say it, I really hate to say it, but what is going to make cloud computing take off is the financial - the economic realities of hardware, staff, and power consumption. Each of our money wizards has a perspective on this, and we will take them one at a time.

Cloud computing is "buzz" concept of the year for 2009. It has its place, especially for high-risk/low-capital applications like startups or small business or web sites, but for enterprise computing and — especially for improving existing core applications — I have a more jaundiced view.

As a concept, cloud computing is a pointer to the future, but there is much hype around the present. As James Maguire of Datamation put it recently: "As Cloud computing has emerged as a red hot trend, tech vendors of every stripe have painted the term 'Cloud' on their products, much like food brands all tout that they're 'low fat'."

This question comes from our cloud computing virtual conference, and asks: Where is the revenue stream in cloud computing? Who controls the money? If you are using services you are not responsible for, how will different providers receive their revenue?

Cloud models are starting to provide an attractive option for large and influential regional medical centers to get lots of small, local, laggard doctor offices trading in their paper patient files for electronic medical records. Are there clouds in your forecast?

Beth Israel Deaconess Medical Center (BIDMC), together with its Beth Israel Deaconess Physicians Organization (BIDPO), is just one of a handful of large and prestigious health care organizations in the country helping small doctor offices in their region (in this case, the Boston area) to deploy e-medical record systems.

A cloud model allows these doctor offices to use software to manage their practices and patient data, but the servers are located remotely and supported by BIDMC and Concordant, a services provider. BIDMC is covering about 85% of the non-hardware expenses for the practices to deploy the eClinicalWorks software, and the doctor offices pay a monthly subscription fee of between $500 and $600 for support.

A similar cloud plan is also being used by University Health System of Eastern Carolina to get small doctor practices in rural North Carolina using 21st century technology, says CIO Stuart James. "Most providers can't afford to hire IT people to keep these systems running," he says. "This keeps the costs down." …

Greg Ness analyzes Nick Carr's Cloud-Network Disconnect in this 6/15/2009 post that carries “Virtualization and cloud computing are promising to change the way in which IT services are delivered” as its deck.

Nicholas Carr told a recent audience at IDC Directions that "Cloud computing has become the center of investment and innovation." While he is not a technologist, his sometimes shocking insights into the transformation of IT have been prescient, even if he doesn't sweat the details of how complex IT infrastructures can morph into the equivalent of today's public utilities.

To his credit Carr has predicted the rise of the cloud computing press release, multiple cloud conferences and panels and even the SaaS repositioning exercise. He also foresaw the rise in Amazon and Google cloud announcements, perhaps years ahead of profits and/or material revenue. …

Aggregators …, such as FaceBook and Apple, are taking notice of what they are publishing to their sites these days, with a growing concern that their own brands will suffer from poor performance by association. This forces SaaS vendors to look beyond their own cool features and rethink how, and with whom, they deploy their applications. Even the leading Managed Service Providers (Rackspace, Terremark, and Savvis) and emerging Cloud Platform Providers (Amazon, IBM, and Force.com) are rushing to deliver new services to assure their customers that they have the most reliable deployment environment for SaaS-based applications. Reliability matters more today than ever!

This book is the bible for those looking to take advantage of the convergence of SOA and cloud computing, including detailed technical information about the trend, supporting technology and methods, and a step-by-step guide for doing your own self-evaluation, and, finally, reinventing your enterprise to become a connected, efficient money-making machine. This is an idea-shifting book that sets the stage for the way information technology is delivered. This is more than just a book that defines some technology; this book defines a class of technology, as well as approaches and strategies to make things work within your enterprise.

Author David S. Linthicum has written the book in such a way that IT leaders, developers, and architects will find the information extremely useful. Many examples are included to make the information easier to understand, and ongoing support from the book’s Web site is included. Prerequisites for this book are a basic understanding of Web services and cloud computing, and related development tools and technologies at a high level. However, the non-technical will find this book just as valuable as a means of understanding this revolution and how it affects your enterprise.

You can read the TOC, but nothing else, at no charge.

Mache Creeger describes his Cloud Computing: An Overview survey article for the Association for Computing Machinery (ACM) Queue magazine as a “summary of important cloud-computing issues distilled from ACM CTO Roundtables.” Topics include:

What is Cloud Computing?

CapEx vs. OpEx Tradeoff

Benefits

Use Cases

Distance Implications between Computation and Data

Data Security

Advice

Unanswered Questions

I don’t usually include survey articles in my cloud posts, but publication by ACM Queue gives this article higher than average clout.

I like Power Usage Effectiveness as a coarse measure of infrastructure efficiency. It gives us a way of speaking about the efficiency of the data center power distribution and mechanical equipment without having to qualify the discussion on the basis of the servers and storage used, utilization levels, or other issues not directly related to data center design. But there are clear problems with the PUE metric. Any single metric that attempts to reduce a complex system to a single number is going to fail to model important details, and it is going to be easy to game. PUE suffers from some of both; nonetheless, I find it useful.

In what follows, I give an overview of PUE, talk about some of the issues I have with it as currently defined, and then propose some improvements in PUE measurement using a metric called tPUE.
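For readers unfamiliar with the metric Hamilton is critiquing, PUE is simply total facility power divided by power delivered to the IT equipment. A minimal sketch of that arithmetic, with invented figures (this illustrates the standard metric only, not Hamilton's proposed tPUE refinement):

```python
# Hypothetical illustration of the standard PUE calculation.
# All numbers are invented for the example.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: always >= 1.0, and lower is better.

    total_facility_kw: everything the site draws, including cooling,
    power distribution losses, and lighting.
    it_equipment_kw: power actually consumed by servers, storage,
    and networking gear.
    """
    if it_equipment_kw <= 0:
        raise ValueError("IT equipment load must be positive")
    return total_facility_kw / it_equipment_kw

# A facility drawing 1,700 kW in total to power 1,000 kW of IT load
# has a PUE of 1.7; every watt of compute costs 0.7 watts of overhead.
print(round(pue(1700, 1000), 2))  # 1.7
```

The "easy to game" complaint follows directly from the formula: anything that reclassifies overhead load as IT load lowers the reported PUE without making the facility any more efficient.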

The Hartford has a dedicated insurance offering called CyberChoice that pays off if failure of the IT infrastructure results in liability for loss of personal information, intellectual property and the like. The insurance pays for investigation of the failure and payment of the costs of notifying customers if there is a reportable breach.

Passing the insurance company’s test of whether to insure a business is not easy, says Drew Bartkiewicz, vice president of technology and new media markets for The Hartford. Only a very few corporations – mostly Fortune 500 – even apply for the insurance, and of those who do, two thirds are turned away for coverage because they don’t live up to the requirements.

I managed to squeak out some additional time at the end of my first docking with the Mothership in San Jose next week such that I can attend Cisco Live!/Networkers the week after. I’ll be at Live! up to closing on 7/1. …

If you’re going to be there, let’s either organize a tweet-up (@beaker) or a blog-down…

Microsoft offers up security advice on how to fend off attacks against corporate IT resources by looking at ways that attackers can undermine an organization in its “IT Infrastructure Threat Modeling Guide” published today.

“Look at it from the perspective of an attacker,” says Russ McRee, senior security analyst for online services at Microsoft, the primary author of the 32-page guide that discusses the fundamentals and tactics of network defense. McRee said the “IT Infrastructure Threat Modeling Guide” is actually the outcome of a lot of thinking about the topic at Microsoft, which itself is using the guide as a reference.

The guide is not about Microsoft products and in fact “needs to be agnostic so it can work for anyone,” says McRee. “An organization has to figure out what their threats are.” The guide offers ways that IT staff—especially those without formal security training—can analyze their own wired and wireless networks, model them for security purposes, in some cases along the lines of “trust boundaries and levels,” to determine where defenses should be. …

If you’re following along thus far, you’ll also see the possibility for trusted 3rd party auditors to digitally ’sign’ individual policy statements made by cloud providers they have audited. That signature could itself reflect the assurance level you need. This in turn could help drive the nascent cyberinsurance market for cloud…assuming the auditor is open to counterclaims by the insurer ;-).

Microsoft’s SAS 70 attestations and ISO/IEC 27001:2005 certifications by the British Standards Institution (BSi), as described in Charlie McNerney’s Securing Microsoft’s Cloud Infrastructure post of 5/27/2009, are a step in the right direction.

Kevin Jackson reports on an interchange of Tweets with cloud security expert Chris Hoff (a.k.a. @Beaker) in this Maneuver Warfare in IT: A Cheerleading Pundit post of 6/15/2009. Chris has just taken a high-level job with Cisco.

Our 2nd meeting has been booked! Our first event was a fantastic success and we hope to emulate this with the next two speakers.

Richard Godfrey will demonstrate his KoodibooK product and show how it can be scaled using Azure.

Bert Craven will discuss Azure as a technical and commercial proposition for an enterprise such as EasyJet. He will also demonstrate moving a WCF service into the cloud using the .NET Service Bus and Relay Bindings.

Learn about Windows Azure and Azure services which enable developers to easily create or extend their applications and services. From consumer-targeted applications and social networking web sites to enterprise class applications and services, these services make it easy for you to give your applications and services the most compelling experiences and features.

9:00-10:30 Introduction to Azure

10:45-12:15 Azure Storage

12:15-1:15 Working Lunch - Putting it together - Building a simple Azure Application

The workshop will discuss the emergence of cloud computing and the advantages that it offers, particularly in terms of cost savings. The workshop will also highlight various challenges that need to be addressed with a special focus on connectivity, business models, efficiency, reliability, integration, security, privacy and interoperability issues.

The key objective is to clarify the rather misty concept of cloud computing for both World Bank staff and our country clients. There is a lot of confusion around this idea with over 20 definitions offered so far by various parties. The workshop will also clarify the potential role of the World Bank and other development organizations in helping developing countries to realize this opportunity.

This workshop is organized by the Global ICT Department and other partners as part of the Government Transformation Initiative, a collaboration between World Bank and the private sector aimed at supporting government leaders pursuing ICT-enabled public sector transformation.

In this informative webcast we’ll take you through the basics of implementing SOA systems that leverage cloud computing. We’ll focus on how to manage these systems, taking into account the special requirements posed by transactions flowing from the enterprise to the cloud and back.

SOA and cloud computing expert David Linthicum, author of “Cloud Computing and SOA Convergence in Your Enterprise,” will walk you through the approach of bringing transactional SOA to the clouds, and the best practices in SOA governance. Ed Horst, Vice President of Product Strategy for industry leader AmberPoint, will cover best practices for managing composite applications that leverage cloud computing.

Who knows who created the intercloud term, but it is a major development in articulating the enterprise cloud payoff. Check out this Cisco blog and intercloud preso. It is a grand and spectacular vision of where computing needs to go.

Think of the intercloud as an elastic mesh of on demand processing power deployed across multiple data centers. The payoff is massive scale, efficiency and flexibility.

Just when you thought that Google and Amazon would control the skies, along comes Cisco with a brilliant vision that amplifies the role of the network and offers enterprises a sexy alternative.

••Kevin Jackson’s Two Days with AWS Federal post describes an upcoming two days of training with Amazon Web Services (AWS) Federal:

Today, I start two days of training with Amazon Web Services (AWS) Federal. If this is the first time you've heard of an AWS Federal division, you're not alone. Held in downtown Washington, DC, the course was invite-only, and attendance was limited to IT services firms that had demonstrated a clear track record of success in the Federal market.

He then goes on to list the companies in attendance, describes AWS’s use of the term “70/30 switch,” and describes the first day’s session contents.

… Turning to the Amazon event, four Amazon customers presented and discussed their use of cloud computing (my discussion of the following is from notes and memory, as the slides are not yet available). …

BT is about to formally launch a virtualised infrastructure service called BT Virtual Data Centre, which will form the basis of its cloud-computing strategy.

VDC involves the virtualisation of servers, storage, networks and security delivered to customers via an online portal as cloud-based services. On Thursday, BT's Global Services division announced the customer rollout of VDC, which will initially target multinational corporate customers and the public sector.

"VDC is the basis of our cloud-computing offering," Neil Sutton, BT Global Services's product chief, told ZDNet UK on Thursday. "We've begun to deliver communications-as-a-service and hosted services for voice, unified communications and CRM, and we see a roadmap where people want to be able to provision an infrastructure end-to-end. We want to deliver those things as a service in a predictable and flexible manner."

• Himanshu Vasishth’s System.Data.OracleClient Update post of 6/15/2009 to the ADO.NET Team Blog announces that the System.Data.OracleClient class will be deprecated in .NET Framework 4.0 in favor of third-party versions:

… We learned that a significantly large portion of customers use our partners’ ADO.NET providers for Oracle, with regularly updated support for Oracle releases and new features. In addition, many of the third-party providers are able to consistently provide the same level of quality and support that customers have come to expect from Microsoft. This is a strong testament to our partners’ support for our technologies and the strength of our partner ecosystem. It is our assessment that even if we made significant investments in ADO.NET OracleClient to bring it to parity with our partners’ providers, customers would not have a compelling reason to switch to ADO.NET OracleClient. …

My guess is that the "Industry in a Box" vision mentioned by Charles Phillips, Oracle's co-president, will actually become the next wave of cloud computing. In a previous column, I recommended that Google get into the appliance business. My guess is Oracle will follow this path with a vengeance. Solaris will power Oracle's cloud offerings, but through appliances, Oracle will bring the cloud to the data center.

Remember that Google, the leading provider of large-scale computing services in the cloud, does so by building its own hardware and software that is integrated and optimized for the task. I believe that Oracle recognizes that there are limits to the amount of enterprise IT that can be put into the cloud. Problems such as security, disaster recovery and moving huge amounts of data are significant barriers to cloud migration. But many of the same economic and operational benefits of the cloud can be achieved through remotely managed appliances that integrate software and hardware in one box. Oracle can run these over the Net using the Smart Services model I wrote about in Mesh Collaboration. The customer gets all the benefits of the cloud without having to move data off premise.

There seems to be an endless parade of hosting companies eager to explain to me that they have an “enterprise class” cloud offering. (Cloud systems infrastructure services, to be precise; I continue to be careless in my shorthand on this blog, although all of us here at Gartner are trying to get into the habit of using cloud as an adjective attached to more specific terminology.)

If you’re a hosting vendor, get this into your head now: Just because your cloud compute service is differentiated from Amazon’s doesn’t mean that you’re differentiated from any other hoster’s cloud offering. …

Sun has been working on the Rock project for more than five years, hoping to create a chip with many cores that would trounce competing server chips from I.B.M. and Intel. The company has talked about Rock in the loftiest of terms and built it up as a game-changing product. In April 2007, Jonathan Schwartz, the chief executive of Sun, bragged about receiving the first test versions of Rock. …

This marks the second high-end chip in a row that Sun has canceled before its release. These types of products cost billions of dollars to produce, and Sun now has about a 10-year track record of investing in game-changing chips that failed to materialize.

You can bet your children’s college fund that Oracle had something to do with killing Rock.

Google Labs recently announced Google Fusion Tables, an "experimental system" for fusing data management and collaboration. In other words, it's a means to merge many data sources, including any electronic conversations around data, visualization and data queries. Fusion Tables provide a platform to analyze data along with tools for electronically collaborating about that analysis.

The use cases here are numerous, but the core idea is that users will upload data, and then analyze and visualize the data on Google Maps or mashed up with other APIs, such as the Google Visualization API. Nothing new there, right? Wrong. Fusion Tables also provide for the discussion of data at the row or column level, or even specific data elements... think database and business intelligence meets Google Docs. However, the biggest bang for this new cloud service is the ability to "fuse" multiple sets of data that are logically related and then determine patterns.
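The "fusing" idea described above is, at bottom, a join across logically related tables from different sources. A minimal sketch of that operation in plain Python, independent of any Google API and using invented sample data:

```python
# Hypothetical sketch of "fusing" two logically related data sets:
# an inner join on a shared key (city name). All data is invented.

people = [
    {"city": "Boston", "population": 675_000},
    {"city": "San Jose", "population": 1_030_000},
]
clinics = [
    {"city": "Boston", "clinics": 42},
    {"city": "San Jose", "clinics": 31},
]

# Index one table by the join key, then merge matching rows.
by_city = {row["city"]: row for row in clinics}
fused = [
    {**person, "clinics": by_city[person["city"]]["clinics"]}
    for person in people
    if person["city"] in by_city
]
print(fused)
```

What Fusion Tables layers on top of this basic join, per the post, is the collaboration piece: discussion threads attached to individual rows, columns, or cells of the fused result, plus visualization of the merged data on Google Maps.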

This looks to me like the capability that Jon Udell has been seeking for his calendar curating project the last several months.

IBM’s first true cloud computing products, announced today, consist of workload-specific clouds that can be run by an enterprise on special-purpose IBM gear, built by Big Blue on that same gear running inside the customer's firewall, or run as workloads on IBM’s hosted cloud. The offering seems like a crippled compromise between the scalability and flexibility that true computing clouds offer and what enterprises seem to be demanding when it comes to controlling their own infrastructure. I spoke today with the chief technology officer of IBM’s cloud computing division, Kristof Kloeckner, to learn more. [Emphasis added.]

Well, IBM has gone and done it: they've announced a cloud offering yet again. Actually, what's interesting about this go-round is not that they're getting into the cloud business (again) but that this time they're serious about it. And like it or not, their approach actually does kind of make sense, assuming you're within their target demographic (the large enterprise looking to save a few bucks).

My summary of the "Big Blue Cloud" is as follows: It's not what you can do for the cloud, but what the cloud can do for you. Or simply, it's about the application, duh? …

The race for your data center has already begun. Google, Microsoft, and Amazon are the leading players in a global data center build-out that has not been slowed by the current economic recession and that over next decade will change the face of both consumer computing and IT departments.

The reason why these three companies are building out data center capacity around the world at a breakneck pace is that they want to be ready with enough capacity to handle the two big developments that are preparing to transform the technology world:

Cloud computing: Applications and services delivered over the Internet

Utility computing: On-demand server capacity powered by virtualization and delivered over the Internet

With both of these trends, the biggest target is private data centers. Cloud computing wants to run the big commoditized applications (mail, groupware, CRM, etc.) so that an IT department doesn’t have to run them from a private data center. …

This week IBM is rolling out new products that begin to bring some definition to its cloud computing roadmap. IBM is offering several services enabling public cloud computing. But Big Blue’s sharpest focus is on the private cloud, which presents an opportunity to sell hardware and software rather than monthly subscriptions.

Here’s what IBM is announcing:

Public Cloud: IBM can run your application testbed in its public cloud today, and will soon offer a subscription service to host virtual desktops in its data centers. The IBM Smart Business Test Cloud service is available now, while the upcoming IBM Smart Business Desktop Cloud will establish a beachhead for expected future growth in enterprise desktop virtualization as a service delivery strategy. …

Private Cloud: IBM CloudBurst provides customers with a private cloud in a single 42U rack for about $200,000. Included is a Websphere CloudBurst Appliance that comes pre-loaded with images for quickly deploying application environments based on IBM’s WebSphere software. …

Salesforce.com announced on June 15 the release of the Force.com Free Edition, a stripped-down version of its cloud computing platform for the enterprise. By relying on cloud-based resources, Force.com clients can run Websites and build Web applications without an on-premises infrastructure.

Each client utilizing the free version of Force.com can deploy their newly built Web applications to up to 100 users. In addition, the free edition gives clients access to one Website with up to 250,000 page views per month, 10 custom objects/custom database tables per user, a sandbox development environment, free online training, and a library of sample applications.

With the addition of Force.com Sites, companies can now use Force.com to build and run applications for their internal business processes as well as public-facing Web sites - entirely on salesforce.com's real-time cloud computing platform.

The dual Web role application has been running in Microsoft's South Central US (San Antonio) data center since September 2009. I believe it is the oldest continuously running Windows Azure application.

About Me

I'm a Windows Azure Insider, a retired Windows Azure MVP, the principal developer for OakLeaf Systems and the author of 30+ books on Microsoft software. The books have more than 1.25 million English copies in print and have been translated into 20+ languages.

Full disclosure: I make part of my livelihood by writing about Microsoft products in books and for magazines. I regularly receive free evaluation software from Microsoft and press credentials for Microsoft Tech•Ed and PDC. I'm also a member of the Microsoft Partner Network.