You might be thinking, pfft, I'm never going to need to use Binary Serialization...that's old school. And you might be right, but think about this: Azure Storage charges you by how much you're storing and some aspects of Azure also charge you based on the bandwidth consumed. Do you want to store/transmit a big-ass bloated pile of XML or do you want to store/transmit a condensed binary serialization of your object graph?
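The size claim is easy to sanity-check in any language. Here's a quick Python sketch, with `pickle` standing in for .NET's BinaryFormatter (the data and numbers are illustrative, not from the author's project), comparing a binary serialization of a small object graph against a naive XML rendering of the same data:

```python
import pickle
import xml.etree.ElementTree as ET

# A small "object graph": 100 order records.
orders = [{"id": i, "sku": "WIDGET-%04d" % i, "qty": i % 7, "price": 9.99}
          for i in range(100)]

# Binary serialization (pickle stands in for .NET's BinaryFormatter here).
binary_form = pickle.dumps(orders)

# A naive XML rendering of the same records.
root = ET.Element("Orders")
for o in orders:
    rec = ET.SubElement(root, "Order")
    for key, value in o.items():
        ET.SubElement(rec, key).text = str(value)
xml_form = ET.tostring(root)

# The binary form comes out substantially smaller than the XML form.
print(len(binary_form), len(xml_form))
```

On a graph like this the binary form typically lands at roughly half the size of the XML, and the gap widens as the ratio of angle brackets to actual data grows.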

I'm using Blob and Queue storage for several things and I've actually got a couple of projects going right now where I'm using binary serialization for both Blobs and Queue messages. The problem shows up when you try to use the BinaryFormatter class's Serialize method. This method requires the Security privilege, which your code doesn't have when it's running in the default Azure configuration. [Emphasis Kevin's.]

So how do you fix this problem so that you can successfully serialize/deserialize binary object graphs and maybe save a buck or two? Easy! Turn on full-trust in your service definition for whichever role is going to be using the binary serialization (in my case both my worker and web roles will be using it...).
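For the curious, the switch lives in the service definition file. A minimal sketch, assuming the 2009-era SDK schema (the `enableNativeCodeExecution` attribute; verify against your SDK's ServiceDefinition.csdef schema before relying on it):

```xml
<!-- ServiceDefinition.csdef (sketch; attribute name per the 2009-era SDK) -->
<ServiceDefinition name="MyService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="WebRole" enableNativeCodeExecution="true">
    <!-- endpoints, configuration settings, etc. -->
  </WebRole>
  <WorkerRole name="WorkerRole" enableNativeCodeExecution="true" />
</ServiceDefinition>
```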

One of the double-edged swords of Azure is that it feels so much like building regular web applications. This is a good thing in that you can re-use so much of your existing skills, knowledge, and best practices and they will still apply in the Azure world. However, it is really easy to make assumptions about how things work that turn out to be wrong. …

and then describes when to use a service configuration setting versus a web.config setting for storage account and configuration data.

CloudBerry Lab is looking for beta testers for their CloudBerry Explorer for Azure Blob Storage. It is a freeware application that helps you manage Azure blob storage with an FTP-like interface. Currently CloudBerry Explorer is the most popular Amazon S3 client on the Windows platform, and we decided to extend it with Azure storage support.

This is an attempt to learn something of SQL Azure and VS2010 Addin development by creating an SQL Azure Explorer *Addin for Visual Studio 2010 Beta 1*. Even though Microsoft will probably provide such a tool eventually, this is for learning and having some programming fun.

This addin will only work with Visual Studio 2010 Beta 1. Also, performance for anything but small databases is very slow right now, as it does a big chunk of querying at startup. This is going to be fixed, though. This is a start :)

The Visual Studio project templates included with the Azure Tools provide a quick way to get started with a cloud-hosted web application. Unfortunately, the tools support only classic ASP.NET web projects by default. This tutorial will get you going on deploying an ASP.NET MVC web application to Azure.

Prerequisites

To get started, you’ll need to have the following tools installed on your machine:

Installing the Azure SDK will install the two important local development components – Development Fabric, which simulates the cloud on your local machine, and Development Storage, which simulates the Azure Storage components (table, blob, and queue) using SQL Server.

Let's take a look at this pretty common scenario. You're building an ASP.NET application (MVC or otherwise), you intend to publish it in the cloud, and you're using Azure Storage (not SQL Azure) as your underlying data store. You've already hooked your app up with the sample Azure-based Membership provider that comes with the Azure SDK and everything is running along nicely.

Your application has quite a bit of administrator-only functionality, so after you've been using it locally for a while you put in some safeguards to block access to the admin areas unless the user is in the Administrators role. That's awesome: ASP.NET and ASP.NET MVC both have some really great code shortcuts for enabling this kind of situation, and you can make yourself an administrator pretty darn easily.
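The shortcut in ASP.NET MVC is the `[Authorize(Roles = "Administrators")]` attribute on a controller or action. As a language-neutral illustration of the same gate-by-role pattern (the names below are invented for the sketch, not ASP.NET APIs), in Python:

```python
from functools import wraps

class Forbidden(Exception):
    """Raised when a user fails the role check (sketch-only exception)."""
    pass

def requires_role(role):
    """Decorator sketch of the [Authorize(Roles=...)] pattern."""
    def decorate(handler):
        @wraps(handler)
        def gated(user, *args, **kwargs):
            if role not in user.get("roles", ()):
                raise Forbidden("%s is not in role %s" % (user.get("name"), role))
            return handler(user, *args, **kwargs)
        return gated
    return decorate

@requires_role("Administrators")
def delete_product(user, product_id):
    # Admin-only functionality; only reached if the role check passes.
    return "deleted %s" % product_id

admin = {"name": "alice", "roles": ["Administrators"]}
print(delete_product(admin, "sku-42"))  # prints: deleted sku-42
```

A user without the Administrators role raises `Forbidden` instead of reaching the handler, which is the whole point of the safeguard described above.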

So you're an admin, you deploy your application to staging, you go to run it, and you try to log in. Whoops: your account isn't there. This is because for the last couple of weeks you've been running against your local SQL 2008 (or SQL Express) database, and you forgot that you did a few tweaks there to make yourself an administrator. You also removed the code on the site that allows users to self-register, since your application is an LOB app with a manually administered user list. …

was able to get NopCommerce running on Azure in just a few hours with relatively little fuss. In a real project, there would of course be all of the normal issues, such as setting up products, design, and such, but Azure was really not much more difficult than your typical hosting provider.

Dan continues with a detailed description of how he ported NopCommerce to an Azure Web app and SQL Azure database.

Neudesic's Azure ROI Calculator has been updated. There are two primary changes in Beta 2 of the calculator:

Compute Time now defaults to 24 hrs/day for all scenarios. After some clarification following the July pricing announcement, it's now clear that compute time charges are based not on application usage but on chronological time. Therefore, you'll always be computing your charges based on 24 hours a day for each hosting instance. The calculator now reflects this.

Vertical scrolling is now in place. Previously, you couldn't see all of the calculator on smaller resolution displays.

These fixes make the ROI calculator much easier for most folks to use.
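The compute-time change above is just wall-clock arithmetic: hours times instances times the hourly rate. A quick Python sketch, assuming the $0.12 per instance-hour compute rate announced in July 2009 (substitute current pricing as needed):

```python
HOURS_PER_DAY = 24    # billed on chronological (wall-clock) time, not usage
RATE_PER_HOUR = 0.12  # USD per instance-hour (July 2009 announced rate)

def monthly_compute_cost(instances, days=30, rate=RATE_PER_HOUR):
    """Compute charges for hosting `instances` around the clock for `days` days."""
    return round(instances * days * HOURS_PER_DAY * rate, 2)

print(monthly_compute_cost(1))  # one instance for 30 days: 86.4 (USD)
print(monthly_compute_cost(4))  # four instances for 30 days: 345.6
```

Note there is no "usage" term in the formula at all, which is exactly the clarification the calculator's Beta 2 reflects.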

The results of a recent survey suggest that Medicare beneficiaries who use Kaiser Permanente’s personal health record are overwhelmingly satisfied with the service, and are in general quite comfortable using the Internet to manage their health care online.

The health plan’s PHR—known as My Health Manager—is available only to Kaiser enrollees and, so far as we know, is the only PHR that links directly to an electronic health record (in this case HealthConnect, Kaiser’s modified version of an Epic product).

Kaiser presented the gratifying findings last week at the World Health Care Congress’ 5th Annual Leadership Summit on Medicare in Washington, D.C.

Twenty-three percent of the seniors responded to the e-mail survey, which was distributed to more than 15,000 people.

The survey examined respondents’ Internet utilization habits and comfort with computers, as well as current health status and use of prescription drugs.

Nearly 88% of survey respondents reported being satisfied or very satisfied with Kaiser’s PHR.

When CIOs debate the difficulty of installing electronic medical records, they inevitably point to Kaiser Permanente. The $40 billion healthcare organization has been deploying electronic medical records (EMR) in various pockets of its provider and insurance network for more than a decade and decided to link them all into one companywide system. System outages, physician rebellion, privacy issues: Kaiser has dealt with it all. CIO Phil Fasano, who joined Kaiser in 2006, talks about weathering the ups and downs.

Much of the work that we have collaborated on in the past several months has been centered around PHP, but rest assured we have been focused on other technologies as well. Take Java, for example. A big congratulations goes out this week to Noelios Technologies, which just released a new bridge for Java and .NET.

Microsoft collaborated with the France-based consulting services firm and provided funding to build this extension to the Restlet Framework. It’s always very exciting for me, as a French citizen living in the United States, to witness French companies like Noelios collaborating with Microsoft to develop new scenarios and bridges between different technologies. Noelios specializes in Web technologies like RESTful Web, Mobile Web, cloud computing, and Semantic Web, and offers commercial licenses and technical support plans for the Restlet Framework to customers around the world.

REST can play a key role in facilitating interoperability between Java and Microsoft environments. To demonstrate this, the Restlet team collaborated with Microsoft to build a new Restlet extension that provides several high-level features for accessing ADO.NET Data Services.

The Open Government Data Initiative (OGDI) is an initiative led by Microsoft. OGDI uses the Azure platform to expose a set of public data from several government agencies of the United States. This data is exposed via a RESTful API that can be accessed from a variety of client technologies, in this case Java with the dedicated extension of the Restlet framework. The rest of the article shows how to get started with this extension and illustrates its ease of use. … [Emphasis added.]

This looks to me like the RESTful start of a StorageClient library for Java programmers.

The Indiana University Health Center in Bloomington early this year began testing a free personal health record for students. The goal was to work out bugs, and offer the PHR to the incoming freshman class this fall (see healthdatamanagement.com/issues/2009_67/-28272-1.html).

Just weeks into the new semester, 3,100 of 7,200 incoming students--40% of the class--have activated a PHR and entered some data, says Pete Grogg, associate director at the health center. And half of those with a PHR are sharing data with the center as they start seeking treatment. "We're very happy, we weren't quite sure what to expect," Grogg says.

The university this fall expects to complete integration work and populate PHRs with pertinent patient data from the center's electronic health records system. Students presently can populate the PHR with data they receive from their primary care physician, or the health center can scan that information into the PHR. The PHR vendor, Fort Wayne-based NoMoreClipboard.com, soon will add features to enable students to request medication refills and view their financial history online.

CVS Caremark (NYSE: CVS) today announced the expansion of its partnership with Microsoft HealthVault. Now, CVS/pharmacy customers have the ability to securely download their prescription histories to their individual Microsoft HealthVault record. By visiting CVS.com, consumers who fill their prescriptions at CVS/pharmacy stores can now easily add their prescription history into their HealthVault record.

CVS Caremark has been a partner with Microsoft HealthVault since June 2008. Consumers using CVS Caremark for pharmacy benefit management services can already store, organize, and manage their prescription history information online using Microsoft's HealthVault platform. In addition, patients who receive treatment at MinuteClinic, the retail-based health clinic subsidiary of CVS Caremark, can securely import their visit summaries and laboratory test results into their personal HealthVault record. …

Steve Lohr’s E-Records Get a Big Endorsement article of 9/27/2009 describes how a New York regional hospital group plans to offer affiliated physicians up to about 90% of the maximum federal subsidy for adopting Electronic Medical Record (EMR) technology:

North Shore-Long Island Jewish Health System plans to offer its 7,000 affiliated doctors subsidies of up to $40,000 each over five years to adopt digital patient records. That would be in addition to federal support for computerizing patient records, which can total $44,000 per doctor over five years.

The federal [ARRA] program includes $19 billion in incentive payments to computerize patient records, as a way to improve care and curb costs. And the government initiative has been getting reinforcement from hospitals. Many are reaching out to their affiliated physicians — doctors with admitting privileges, though not employed by the hospital — offering technical help and some financial assistance to move from paper to electronic health records.

Efforts by hospital groups to assist affiliated doctors include projects at Memorial Hermann Healthcare System in Houston and Tufts Medical Center in Boston. But the size of the North Shore program appears to be in a class by itself, according to industry analysts and executives.

Big hospitals operators like North Shore, analysts say, want to use electronic health records that share data among doctors’ offices, labs and hospitals to coordinate patient care, reduce unnecessary tests and cut down on medical mistakes.

…[T]he reason that so many of these mainstream articles get it so wrong, is they’re trying to explain cloud computing as a consumer-oriented phenomenon, and it’s basically not. Not the exciting or “new” part, anyway. Even technology vendors drift into this as they try to tout their cloud offerings: witness a recent TV commercial from IBM entitled “My Cloud: Virtual Servers on the Horizon”, a commercial which would work just as well if it were titled “the incredible power of the Internet”, or even, “aren’t computers cool?” Similarly, that cloud computing “definition” from BusinessWeek is, quite frankly, nonsensical in its broadness: it not only completely misses the point of what makes cloud computing relevant and compelling as a game-changer, it even fails to distinguish it from the last 15+ years of the Internet in general. …

Model-Driven SOA with “Oslo”, by César de la Torre Llorente: A shortcut from models to executable code through the next wave of Microsoft modeling technology.

An Enterprise Architecture Strategy for SOA, by Hatay Tuna: Key concepts, principles, and methods that architects can practically put to work immediately to help their organizations overcome these challenges and lead them through their SOA-implementation journey for better outcomes.

Enabling Business Capabilities with SOA, by Chris Madrid and Blair Shaw: Methods and technologies to enable an SOA infrastructure to realize business capabilities, gaining increased visibility across the IT landscape.

Chris Hoff, my friend and colleague at Cisco Systems, has reached enlightenment regarding the role of the operating system and, subsequently, the need for the virtual machine in a cloud-centric world.

His post last week reflects a realization attained by those who consider the big picture of cloud computing long enough.

James concludes:

So, the problem isn't that OS capabilities are not needed, just that they are ridiculously packaged, and could in fact be wrapped into software frameworks that hide any division between the application and the systems it runs on.

The irony is that Chris Hoff’s “Incomplete Thought” is far more complete than most of mine that I intend to be complete.

I have been actively involved in discussing clouds here on my blog, as well as various customer and industry forums for a little over a year.

I've put forward some fairly definitive concepts (e.g. private cloud) as well as had plenty of time to discuss and occasionally defend my position. It's added up to quite a few posts.

I went back to one of the foundational posts I did way back in January, and was surprised at how well the thinking has held up over time.

Today, I'd like to pick up the discussion where my esteemed Cisco colleagues Chris Hoff and James Urquhart have taken the discussion, as they give me a convenient jumping-off point for some deeper topics I've been itching to get into.

… When asked in an interview Monday with Network World what the top three threats would be in 2010 for Microsoft's server and tools division, Bob Muglia, president of the unit, pulled a semantic sleight-of-hand and said he preferred to refer to them as opportunities. …

"The No. 1 opportunity we have is to look at enterprise applications and grow our share of high-end enterprise applications…" Muglia said. "We still have a disproportionally small percentage of servers and revenue associated with servers that are coming from high-end enterprise applications, which remain predominantly IBM and Oracle based."

Muglia said the second big opportunity is to help companies transition to the cloud. [Emphasis added.]

"We really are the company that should be able to do this for our customers because of the huge install base of Windows server applications that they have," Muglia said. "We should provide the best services at the best cost for customers to move into a cloud environment."

Muglia rounded out his top three opportunities for 2010 saying competition with Linux would be a major focus. …

What about new technologies like Azure, Mesh, etc? Ballmer says they're "dislocators to technology" that overlay all of these opportunities:

[Ballmer:] “I don't list the cloud because the cloud has kind of overlaid all of those opportunities. We have opportunities by offering cloud infrastructure to enhance the margins we make in our server business, in our communications and collaboration and productivity business, and that's where things like exchange online, SharePoint online, Windows Azure, they're not really new value propositions, but they are new potential margin streams and dislocators to technology shifters and some of the existing kind of customer propositions that we invest in.” [Emphasis added.]

Cloud computing promises to become a foundational element in global enterprise computing; in fact, many companies are exploring aspects of the cloud today. What leadership seeks is a strategic roadmap that allows them to capitalize on the operational benefits of current cloud offerings, while establishing a migration path towards a business and architectural vision for the role cloud computing will play in their future.

Deloitte’s Center for the Edge has spent the past year combining extensive research and industry insights to explore the topics of cloud computing and next-generation Web services. The resulting publication, Cloud Computing: A collection of working papers, explores the topic from different perspectives: business drivers, architectural models, and transformation strategies…

Michael Vizard claims “Although cloud computing, in its current form, is only a couple of years old with fairly limited adoption, it’s already becoming a commodity” in his Cloud Computing: The End Game post of 9/28/2009 to the ITBusinessEdge.com site:

Every hosting company on the planet has already jumped in, trying to forestall any potential loss of market share to any number of emerging cloud computing infrastructure providers. However, given the downturn in the economy and the simple fact that there is a lot more server capacity than there are applications to run on it, the companies that provide cloud computing services are already engaged in a bruising price war.

In response, some cloud computing service providers such as SkyTap and IBM have been moving upstream. They not only provide raw computation power, they also provide application testing capabilities and host commercial applications in the hopes of developing a portfolio of software-as-a-service applications.

That’s all well and good, but cheap computing horsepower derived from cloud computing is not the primary value proposition of cloud computing. In order to drive the next evolution of enterprise computing, cloud computing providers are going to have to evolve in a way that allows services to be dynamically embedded inside customizable business processes that can change in a matter of minutes and days, rather than in weeks and months. …

Michael continues with a list of what’s needed to shed the “commodity” stigma.

Knowledge Management tools emerged in the 90s but never got very far, because for the most part they relied on individuals to fill out forms about what they knew. Even if they were willing to do that, the forms would provide limited information or become outdated very quickly, providing little actual utility. Enterprise 2.0 tools like blogs, wikis, and micro-blogging, which you may be adding to your Intranet mix, provide a way to capture knowledge much more organically than their 90s counterparts, without people even realizing they are participating in knowledge capture.

Bill Ives, a consultant who has been working in this space for years, and who writes the Portals and KM blog, says today's tools make it much easier to capture knowledge without nearly as much effort as the older generation of knowledge management tools. …

Yesterday I had a lively discussion with Lori MacVittie about the notion of what she described as “edge” service placement of network-based WebApp firewalls in Cloud deployments. I was curious about the notion of where the “edge” is in Cloud, but assuming it’s at the provider’s connection to the Internet as was suggested by Lori, this brought up the arguments in the post above: how does one roll out compensating controls in Cloud?

and expresses the need for “security services such as DLP (data loss/leakage prevention,) WAF, Intrusion Detection and Prevention (IDP,) XML Security, Application Delivery Controllers, VPN’s, etc. … to be configurable by the consumer.”

At a recent demonstration during the Public Health Information Network Conference in Atlanta, the companies showed that security and privacy of web-based health information remains protected with a service as data is encrypted in transit and stored securely in the cloud. The demonstration was implemented over the CONNECT health information exchange platform with a Cisco Systems AXP router. …

… At Gartner we’ve long talked about the need for the “Next Generation Firewall” to deal with the new threats and the new business/IT demands. Greg Young and I are in the final stages of a note on “Defining the Next Generation Firewall” which should be available to Gartner clients next week. Today Greg opines about UTM, which isn’t NGFW – we go through the differences in the research note coming out.

There is a bit of deja vu all over again – back at [Trusted Information Systems] (TIS) in 1995, I thought by now firewalls would have proxies for every application and Moore’s law would have enabled firewalls to do deeper and broader inspection at wire speeds across all of them. As usual, what should happen always takes a back seat to what can happen, which is then further limited by what actually will happen.

• Alysa Hutnik, an attorney with the Kelley Drye firm in Washington DC, specializes in information security and privacy, counseling clients on what to do after a security breach. In Privacy and the Law: Alysa Hutnik of Kelley Drye of 9/30/2009, Alysa discusses:

Do's and don'ts following a data breach;

Privacy legislation trends for 2010;

What organizations can do today to prevent privacy/security challenges tomorrow.

Cloud security includes the obligation to meet regulations about where data is actually stored, something that is having unforeseen consequences for U.S. firms trying to do business in Canada.

Recently, several U.S. companies that wanted contracts to help a Canadian program relocate 18,000 public workers were excluded from consideration because of Canadian law about where personally identifiable information about its citizens can be stored.

The rule is that no matter the location of the database that houses the information, it cannot place the data in danger of exposure. From a Canadian perspective, any data stored in the U.S. is considered potentially exposed because of the U.S. Patriot Act, which says that if the U.S. government wants data stored in the U.S., it can pretty much get it.

That effectively rules out cloud service providers with data centers only in the U.S. from doing business in Canada.

Without an added value security layer, public cloud fails for business applications.

In this case, MPLS is an abbreviation for Multi-Protocol Label Switching, not Minneapolis. Cisco defines MPLS in their Routing GLOSSARY:

MPLS is a scheme typically used to enhance an IP network. Routers on the incoming edge of the MPLS network add an 'MPLS label' to the top of each packet. This label is based on some criteria (e.g. destination IP address) and is then used to steer it through the subsequent routers. The routers on the outgoing edge strip it off before final delivery of the original packet. MPLS can be used for various benefits such as multiple types of traffic coexisting on the same network, ease of traffic management, faster restoration after a failure, and, potentially, higher performance.

Health Information Exchanges (HIEs) have received increasing attention in recent months. They are part of the agenda of the Office of the National Coordinator (ONC) for Healthcare IT as it takes steps to create a Nationwide Health Information Network (NHIN). What is the purpose of such things? What data security risks do such networks raise? How does this relate to already-connected Internet “cloud”-based EHRs? We will attempt to address these questions in this article.

One of the problems with a health IT landscape characterized by legacy, locally-installed Electronic Health Record (EHR) systems is that medical data is segregated into practice-centered data silos, much like medical data in a paper environment – every doctor has his/her own “chart rack” (or EHR database), and a given patient may have segments of his/her medical information scattered among many different places.

There is no one coherent place where all the information about a patient is kept, and so copying needed health information and sending it to others is how data from outside the practice gets updated. Things like lab data, hospital reports, consultations from colleagues, x-ray and imaging reports – all these make their way into some of the physician’s charts, often in a hit-and-miss fashion.

Create them now and stifle innovation or create them later when it’s too late? That seems to be the breadth of the discussion on cloud standards today. Fortunately, the situation with cloud computing standards is not actually this muddy. In spite of the passionate arguments, the reality is that we need cloud standards both today and tomorrow. In this posting I’ll explore the cloud standards landscape. …

Please join me at the 7th Annual FedFocus Conference, November 5, 2009, at the Ritz Carlton in McLean, VA. This conference has been designed to provide crucial information on upcoming federal government procurement plans. I will be the morning keynote, speaking on the use of cloud computing technologies to increase government efficiency and transparency.

SQL Azure Database: Under the Hood, Jeff Currier: SQL Azure Database is a highly available and secure relational database service that offers customers a friction free provisioning interface while maintaining a compatible programming model with SQL ...

Lap Around the Windows Azure Platform, Manuvir Das: Come hear how the Windows Azure Platform provides a scalable compute and storage environment with Windows Azure, a fully relational database with SQL Azure, and a service bus and access control …

Design considerations for storing data in the cloud with Windows Azure - Wed 30th Sept, 2pm The Microsoft Azure Services Platform includes not one but two (arguably three) ways of storing your data. In this session we will look at the implications of Windows Azure Storage and SQL Data Services on how we will store data when we build applications deployed in the Cloud. We will cover “code near” vs “code far”, relational vs. non-relational, blobs, queues and more.

There are two cloud-related sessions in the “community” section of Microsoft TechEd Europe 2009 and you need to vote for them here if you are attending the conference (and obviously if you want them in the agenda).

Basically, both are on cloud computing: one for developers and the other for IT professionals:

Going to the Cloud: Are we crazy?

Are cloud services about efficiency or negligence? About being able to outsource commodity services and concentrate on core competence, or losing control and risking getting out of compliance? Which IT services can be safely moved to the cloud and which should stay in house? Let’s get together and discuss the present and the future of Software + Services use in our companies, share success stories and lessons learned, and discuss concerns and best practices.

Developing on Azure: Stories from the Trenches

Have you given Windows Azure a try? Whether it was just kicking the tires or you are deep in the enterprise application development, let’s get together and share the lessons we learned on the way.

Both topics are near and dear to my heart, and as a matter of fact, will be moderated by me should they get into the agenda.

There will be breakout sessions on the security issues that are unique to the Cloud, such as the crucial distinction between Private and Public clouds. Expert speakers from government and the software industry alike will be looking at issues such as the requirements for how companies can handle government information and how information can be most successfully shared by multiple clouds. Doing more with less is the new reality for most IT departments, and the Government is no exception. So the cost-effectiveness of technologies such as Virtualization will also be foremost on the agenda.

In our session, aimed at Developers & Technical decision makers, David Chappell looks at the Windows Azure platform and how it compares with Amazon Web Services, Google AppEngine, and Salesforce.com’s Force.com.

Following on from David Chappell’s talk David Gristwood & Eric Nelson from Microsoft will provide a deeper technical insight & update on Windows Azure & SQL Azure. The goal is to provide a foundation for thinking about the Windows Azure platform, then offer guidance on how to make good decisions for using it.

With over a hundred speakers and plenty of new live demos and technologies on display on stage and in the exhibit hall, you’ll get a sweeping overview of the ways that information technology and the web are changing healthcare in areas from online search to health focused online communities and social networks that connect patients and clinicians.

Aneesh Chopra, Chief Technology Officer, U.S. Federal Government, will present the opening keynote. Other presentations include:

Clinical Groupware and the Next Generation of Clinician-Patient Interaction Tools

Following the passing of the stimulus and the debate over meaningful use, there’s been lots of tension between the “cats” (the major IT vendors) & “dogs” (the web-based “clinical groupware” vendors). The real question is how the new wave of EMRs is going to integrate with the consumer facing and population management tools. Can there be unity around the common themes of better health outcomes through physician and patient use of technology? Or will the worlds of Health 2.0 and the EMR move down separate paths? We have three very outspoken leaders to debate the question.

IBM has been investing in cloud computing for several years, although Willy Chiu, VP of IBM Cloud Labs, acknowledges it may be difficult for those outside IBM to develop a picture of what its cloud initiative will finally look like.

That's because so far IBM has chosen to make point announcements of limited cloud products. Its CloudBurst appliance, announced in June, is a blade server that can be loaded with IBM software and used as a cloud building block.

At Structure 09, the June 25 cloud computing conference sponsored by GigaOm in San Francisco, Chiu said: "Cloud computing is a new way of consuming IT." That's a radical view, a step ahead of the evolutionary view that the cloud will start out as an IT supplement. That is, it will absorb specific workloads, such as business intelligence or a new consumer-facing application. In the long run, Chiu said, it will host many IT activities and services.

In a recent interview, Chiu elaborated. IBM's systems management software, Tivoli, has been given a set of services to administer the cloud. They include Services Automation Manager, Provisioning Manager, and Monitoring Manager. So far these services are designed to provision and manage workloads running in VMware virtual machines, but there is no restriction that limits Tivoli to VMware file formats. …

"The availability of our products and services depends on the continuing operation of our information technology and communications systems. Our systems are vulnerable to damage or interruption from earthquakes, terrorist attacks, floods, fires, power loss, telecommunications failures, computer viruses, computer denial of service attacks, or other attempts to harm our systems.

"Some of our data centers are located in areas with a high risk of major earthquakes. Our data centers are also subject to break-ins, sabotage, and intentional acts of vandalism, and to potential disruptions if the operators of these facilities have financial difficulties. Some of our systems are not fully redundant, and our disaster recovery planning cannot account for all eventualities," the company writes. [Emphasis added.]

David Linthicum describes Microsoft's one chance to move to the cloud with Microsoft Office Web Apps in this 9/24/2009 post with a “Microsoft could give Google Docs a run for its money -- if it's really serious about the cloud” deck:

… As Office Web Apps moves out as a "technical preview," last week there were reports that Google Docs is "widely used" at 1 in 5 workplaces. That's killing Office Web Apps, in my book. As I've stated a few times in this blog, I'm an avid Google Docs user, leveraging it to collaborate on documents and cloud development projects, as well as run entire companies. Although Google Docs provides only a subset of features and functions you'll find in Microsoft Office, it's good enough to be productive. But the collaborative features are the real selling point. …

If Microsoft can provide most of its Office features in the cloud, it has an opportunity to stop Google's momentum, and even perhaps take market share. After all, one of the values of being in the cloud is the ability to change clouds quickly just by pointing your browser someplace else. If Microsoft has a better on-demand product, and the price is right, I'll switch. …

… The economic advantages of the cloud computing model, comparisons of lifecycle costs (TCO) of services vs. acquisition + ongoing maintenance costs of legacy business models, costs of delay, and other detractors of legacy business models compared to the benefits of a public cloud offering like Salesforce.com, as well as the insights and impact of this coming paradigm shift.

We spoke of public and private clouds, advantages and disadvantages of the models, current industry concerns - security, fail-over, real-time mirroring, and several examples of platform application development speed with Force.com (Starbucks example), which is approximately 5X that of other approaches. …

Xerox, based in Norwalk, Conn., has suffered from declining sales of copiers and printers, and the accompanying diminishing uses of ink, toner and paper. The deal for Dallas-based ACS is expected to triple Xerox’s services revenue to an estimated $10 billion next year from 2008’s $3.5 billion.

The move also represents the first bold move by Xerox Chief Executive Ursula Burns, who took over on July 1. Ms. Burns, who became the first African-American woman to head a Fortune 500 company, called the deal “a game-changer” for her company.

Xerox’s agreement comes a week after Dell Inc. agreed to buy information-technology service provider Perot Systems Corp. for $3.9 billion. The sector’s recent merger activity — which includes Hewlett-Packard Co.’s purchase last year of Electronic Data Services — leaves Accenture PLC, Computer Sciences Corp. and Unisys Corp. as some of the larger services companies still independent.

Oracle CEO Larry Ellison has bashed cloud computing hype before. So it was unsurprising but nonetheless entertaining when, during an appearance at the Churchill Club on Sept. 21, Ellison unloaded on cloud computing in response to an audience question relayed by moderator Ed Zander. “It’s this nonsense. What are you talking about?” Ellison nearly shouted. “It’s not water vapor! All it is, is a computer attached to a network.” Ellison blamed venture capitalist “nitwits on Sand Hill Road” for hype and abuse of cloud terminology. “You just change a term, and think you’ve invented technology.” …

Well, it’s been a year and the abuse of the term cloud has gone from bad to worse. As a result, when Mr. Ellison appeared at the Churchill Club last week and the question of Oracle’s possible demise at the hand of the cloud came up, he became a bit animated. Enjoy!

(I love Ed Zander’s bemusement and reactions) …

Of note is Larry’s succinct definition of cloud computing: “A computer attached to a network.” And its business model? “Rental.”

SOASTA (www.soasta.com), the leader in cloud testing, and M-Dot Network (www.mdotnetwork.com) today announced the successful completion of an unprecedented 1,000,000-user performance test using SOASTA's CloudTest On-Demand service. The test was run from the SOASTA Global Test Cloud against the M-Dot transaction application, which is deployed in Amazon EC2. CloudTest's comprehensive analytics, displayed and updated as the test was running, identified points of stress in their architecture in real time.

The M-Dot Network platform enables consumers to receive digital coupons via a retailer's web site or micro-web site on their mobile phone. Consumers can find and select coupons online or on their mobile phone. Offers are aggregated and presented directly to consumers from multiple third party digital coupon issuers and from the retailer. …

Intuit, Inc. supplements QuickBooks and QuickBase with the Intuit Workplace App Center, a putative competitor to Google Apps for small businesses, claiming:

Improve your productivity using web-based apps that help you solve everyday business challenges like finding new customers or managing your back office. Plus many of these apps sync with QuickBooks! Start saving time and money today—take these apps on a free trial run.

I didn’t find one instance of the word “cloud” in the marketing propaganda.

The Microsoft Azure Services Platform includes not one but two (arguably three) ways of storing your data. In this session we will look at the implications of Windows Azure Storage and SQL Data Services on how we will store data when we build applications deployed in the Cloud. We will cover “code near” vs “code far”, relational vs non-relational, blobs, queues and more.

Many people love to speculate on what the next Killer App might be. Juval Lowy will use his ideal killer app as the basis of his upcoming workshop at PDC09 on November 16th. He’d like to think that the EnergyNet … might provide such a cohesive blending of commercial and residential energy use, the devices which consume that energy and the resources that provide it, that this will become an important piece of our daily lives. It will save money, save energy, and perhaps even save the planet.

In his workshop, he is going to illustrate how the notion of an EnergyNet is made possible through technologies such as Windows Communication Foundation and the .NET Service Bus. [Emphasis added]

The Channel9 description page includes links for more information on the workshop and more details on EnergyNet.

It’s not all skittles and beer for the SmartMeters that Pacific Gas & Electric Co. (PG&E), our Northern California utility, is installing to measure watt-hours on an hourly basis, as Lois Henry’s 'SmartMeters' leave us all smarting article of 9/12/2009 in The Bakersfield Californian reports:

… Hundreds of people in Bakersfield and around the state reported major problems since Pacific Gas & Electric started installing so-called smart meters two years ago. Complaints have spiked as the utility began upgrading local meters with even "smarter" versions.

It's not just the bills, many of which have jumped 100, 200 -- even 400 percent year to year after the install. It's also problems with the online monitoring function and the meters themselves, which have been blowing out appliances, something I was initially told they absolutely could not do. …

••• Jayaram Krishnaswamy’s Two great tools to work with SQL Azure post of 9/24/2009 gives props to SQL Azure Manager and SQL Azure Migration Wizard as “two great tools to work with SQL Azure.” Jayaram continues:

SQL Azure Migration Wizard is a nice tool. It can connect to (local)Server and it supports running scripts. I tried running a script to create 'pubs' on SQL Azure. It did manage to bring in some tables, but not all. It does not like 'USE' in SQL statements (to know what is allowed and what is not you must go to MSDN). For running the script I needed to be in Master (but how? I could not fathom). I went through lots of "encountered some problem, searching for a solution" messages. On the whole it is a very easy to use tool.

Today, the IIS team released the Web Platform Installer 2.0 RTW. Among the many cool new things (more tools, new applications, and localization to 9 languages) is the inclusion of the Windows Azure Tools for Microsoft Visual Studio 2008.

Why should you care? As many of you know, before using the Windows Azure Tools, you need to install and configure IIS, which requires figuring out how to do that and following multiple steps. The Web Platform Installer (we call it the WebPI) makes installing the Tools, SDK and IIS as simple as clicking a few buttons.

What's notable about the open source project announced yesterday, the Simple API for cloud computing, are the names that are present (IBM, Microsoft and Rackspace) and the names that are not: Amazon, for one, is not a backer, and let's just stop right there.

The Simple API for Cloud Applications is an interface that gives enterprise developers and independent software vendors a target to shoot for if they want an application to work with different cloud environments. It is not literally a cross cloud API, enabling an application to work in any cloud. Such a thing does not exist, yet.

Essentially, if you use the "Add Service Reference..." feature in Visual Studio or svcutil against a service that is hosted on Windows Azure, either locally or in the cloud, the generated WSDL would contain incorrect URIs.

The problem has to do with the fact that in a load balanced scenario like Windows Azure, there are ports that are used internally (behind the load balancer) and ports that are used externally (i.e. internet facing). The internal ports were showing up in the URIs.

Also note that this patch is not yet in the cloud, but will be soon. i.e. it will only help you in the Development Fabric scenario for now. (Will post when the patch is available in the cloud.)
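To make the bug concrete: this is not Microsoft's actual patch, but the fix amounts to rewriting the authority portion of each generated endpoint URI so that the externally visible (internet-facing) port replaces the internal one used behind the load balancer. A minimal sketch, with a hypothetical port mapping:

```python
from urllib.parse import urlparse, urlunparse

# Hypothetical mapping from internal ports (behind the Azure load balancer)
# to the externally visible ports; real values depend on the deployment.
PORT_MAP = {20000: 80, 20001: 443}

def rewrite_endpoint(address: str) -> str:
    """Replace an internal port in a generated endpoint URI with the
    external one; URIs using no mapped port pass through unchanged."""
    parts = urlparse(address)
    if parts.port in PORT_MAP:
        parts = parts._replace(
            netloc=f"{parts.hostname}:{PORT_MAP[parts.port]}")
    return urlunparse(parts)

print(rewrite_endpoint("http://myapp.cloudapp.net:20000/MyService.svc"))
# -> http://myapp.cloudapp.net:80/MyService.svc
```

The point of the patch is that WCF's metadata export performs this kind of substitution for you, so the WSDL clients download advertises addresses they can actually reach.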

Back in March 2008, I wrote a post which hypothesised that a company, such as Microsoft, could conceivably create a cloud environment that meshes together many ISPs / ISVs and end consumers into a "proprietary" yet "open" cloud marketplace and consequently supplant the neutrality of the web.

This is only a hypothesis and the strategy would have to be straight out of the "Art of War" and attempt to misdirect all parties whilst the ground work is laid. Now, I have no idea what Microsoft is planning but let's pretend that Sauron was running their cloud strategy. …

With the growth of the Azure marketplace and applications built on this platform, a range of communication protocols will be introduced to enhance productivity in both the office platform (which will increasingly be tied into the network effect aspects of Azure) and Silverlight (which will be introduced to every device to create a rich interface). Whilst the protocols will be open, many of the benefits will only come into effect through aggregated & meta data (i.e. within the confines of the Azure market). The purpose of this approach is to reduce the importance of the browser as a neutral interface to the web and to start the process of undermining the W3C technologies. …

Following such a strategy, then it could be Game, Set and Match to MSFT for the next twenty years and the open source movement will find itself crushed by this juggernaut. Furthermore, companies such as Google, that depend upon the neutrality of the interface to the web will find themselves seriously disadvantaged. …

Seems to me to be a more likely strategy for SharePoint Server, especially when you consider MOSS is already a US$1 billion business.

I recently came across an interesting paper that is currently under review for ASPLOS. I liked it for two unrelated reasons: 1) the paper covers the Microsoft Bing Search engine architecture in more detail than I’ve seen previously released, and 2) it covers the problems with scaling workloads down to low-powered commodity cores clearly. I particularly like the combination of using important, real production workloads rather than workload models or simulations and using that base to investigate an important problem: when can we scale workloads down to low power processors and what are the limiting factors?

and continues with an in-depth analysis of the capability of a Fast Array of Wimpy Nodes (FAWN) and the like to compete on a power/performance basis with high-end server CPUs.

Virtual machines (VMs) represent the symptoms of a set of legacy problems packaged up to provide a placebo effect as an answer that in some cases we have, until lately, appeared disinclined and not technologically empowered to solve.

If I had a wish, it would be that VMs end up being the short-term gap-filler they deserve to be and ultimately become a legacy technology so we can solve some of our real architectural issues the way they ought to be solved.

That said, please don’t get me wrong, VMs have allowed us to take the first steps toward defining, compartmentalizing, and isolating some pretty nasty problems anchored on the sins of our fathers, but they don’t do a damned thing to fix them.

VMs have certainly allowed us to (literally) “think outside the box” about how we characterize “workloads” and have enabled us to begin talking about how we make them somewhat mobile, portable, interoperable, easy to describe, inventory and in some cases more secure. Cool.

There’s still a pile of crap inside ‘em.

What do I mean?

There’s a bloated, parasitic resource-gobbling cancer inside every VM. For the most part, it’s the real reason we even have mass market virtualization today.

2009 has been the year in which Cloud Computing entered mainstream industry consciousness. Cloud computing – a model of technology provision where capacity on remotely hosted, managed computing platforms is made publicly available and rented to multiple customers on a self-service basis – is on every IT vendor’s agenda, as well as entering the research agendas of many CIOs.

But how does Cloud Computing really deliver business value to your organisation, and what kinds of scenarios are best suited to it? What’s the real relationship between “Public” and “Private” Cloud offerings in value terms? This report answers these questions.

On Microsoft’s “three screens and the cloud” strategy: Ballmer says it’s a “fundamental shift in the computing paradigm.” He added “We used to talk about mainframe computer, mini computer, PC computing, client server computing, graphical computing, the internet; I think this notion of three screens and a cloud, multiple devices that are all important, the cloud not just as a point of delivery of individual applications, but really as a new platform, a scale-out, very manageable platform that has services that span security contexts, I think it’s a big deal.”

• Steve Marx said “[T]he Windows Azure Platform is part of BizSpark as well” [as Azure] in a comment to this post. Paul Krill wrote a “Microsoft launches BizSpark to boost Azure” article on 11/5/2008 for InfoWorld:

Looking to boost Web-based ventures and its new Windows Azure cloud services platform, Microsoft on Wednesday is announcing Microsoft BizSpark, a program providing software and services to startups.

"The cornerstone [of the program] is to get into the hands of the startup community all of our development tools and servers required to build Web-based solutions," said Dan'l Lewin, corporate vice president of Strategic and Emerging Business Development at Microsoft. Participants around the globe also gain visibility and marketing, Lewin said.

BizSpark will be leveraged as an opportunity to boost the Azure platform, with participants having access to the Azure Services Platform CTP (Community Technology Preview) introduced last week.

"We expect many of them will be taking advantage of cloud services," as part of their company creation, Lewin said.

Steve observed in his message to me: … “You don’t see it in the offering now because Windows Azure hasn’t launched yet (and is free for everyone).” But the Azure Services Platform CTP included with BizSpark hadn’t launched in November 2008 and was “free for everyone” also.

The question, of course, is “How many free instances of Windows Azure and SQL Azure will WebsiteSpark (and BizSpark) participants receive?”

Scott Guthrie’s Announcing the WebsiteSpark Program post of 9/24/2009 and the WebsiteSpark site list lotsa swag “for independent web developers and web development companies that build web applications and web sites on behalf of others”:

I believe Windows Azure and SQL Azure will be included in WebsiteSpark. You don’t see it in the offering now because Windows Azure hasn’t launched yet (and is free for everyone).

It seems to me that being upfront about Windows Azure and SQL Azure swag, including the number of instances, would “incentivize” a large number of Web developers and designers. (Would you believe the Windows Live Writer spell checker likes “incentivize?”)

This is 100% the right answer: Microsoft’s Chiller-less Data Center. The Microsoft Dublin data center has three design features I love: 1) they are running evaporative cooling, 2) they are using free-air cooling (air-side economization), and 3) they run up to 95F and avoid the use of chillers entirely. All three of these techniques were covered in the best practices talk I gave at the Google Data Center Efficiency Conference (presentation, video). …

Microsoft today announced the opening of its first ‘mega data centre’ in Europe to meet continued growth in demand for its Online, Live and Cloud services. The $500 million total investment is part of Microsoft’s long-term commitment in the region, and is a major step in realising Microsoft’s Software plus Services strategy.

The data centre is the next evolutionary step in Microsoft’s commitment to thoughtfully building its cloud computing capacity and network infrastructure throughout the region to meet the demand generated from its Online, Live Services and Cloud Services, such as Bing, Microsoft Business Productivity Online Suite, Windows Live, and the Azure Services Platform. [Emphasis added.]

UK, Irish and European Windows Azure and, presumably, SQL Azure users should find a substantial latency reduction as they move their projects’ location to the new Dublin data center.

I’m in sunny Dublin today (yep, it’s sunny here) for the grand opening of Microsoft’s first “mega datacenter” outside of the US. What, you may ask, is a mega datacenter? Well basically it’s an enormous facility from which we’ll deliver our cloud services to customers in Europe and beyond.

I had the chance to check the place out last month and have a full tour and it’s incredible. Okay there isn’t much to see but that’s sort of the point. It’s this big information factory that is on a scale that you’ll not see in many other places in the world and run with an astonishing level of attention to detail.

It’s also quite revolutionary and turns out to be our most efficient data center thus far. Efficiency is measured by something called PUE (Power Usage Effectiveness), which essentially compares how much power the facility draws in total with how much actually reaches the IT equipment. The ultimate PUE of course is 1.0, though the industry average is from 2-2.4. Microsoft’s data centers on average run at 1.6 PUE, but this facility takes that down to 1.25 through use of some smart technology called “air”. Most datacenters rely on chillers and a lot of water to keep the facility cool – because of the climate in Dublin, we can use fine, fresh, Irish air to do the job, which has significant benefits from an environmental point of view. Put simply, it saves 18 million litres of water each month.
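The arithmetic behind those PUE figures is simple. A short sketch, using the PUE values quoted above and a hypothetical 10 MW IT load, shows how much overhead power each efficiency level implies:

```python
# PUE (Power Usage Effectiveness) = total facility power / IT equipment power.
# A PUE of 1.0 would mean every watt drawn reaches the IT gear; everything
# above 1.0 is overhead (cooling, power distribution, lighting, ...).

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    return total_facility_kw / it_equipment_kw

def overhead_kw(it_equipment_kw: float, pue_value: float) -> float:
    """Power spent on overhead rather than computing, for a given PUE."""
    return it_equipment_kw * (pue_value - 1.0)

# Hypothetical 10 MW IT load; PUE values as quoted in the post.
it_load_kw = 10_000
for label, p in [("industry average", 2.2),
                 ("Microsoft fleet average", 1.6),
                 ("Dublin facility", 1.25)]:
    print(f"{label}: PUE {p} -> {overhead_kw(it_load_kw, p):,.0f} kW overhead")
```

At that scale, dropping from the fleet average of 1.6 to Dublin’s 1.25 trims roughly 3.5 MW of overhead power, which is where the chiller and water savings come from.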

David Chou claims Infrastructure-as-a-Service Will Mature in 2010 in this brief interview of 9/24/2009 with the Azure Cloud OS Journal’s Jeremy Geelan: “Chou speaks out on where he thinks Cloud Computing will make its impact most noticeably looking forwards”:

While acknowledging that lots of work is currently being done to differentiate and integrate private and public cloud solutions, Microsoft Architect David Chou believes that Infrastructure-as-a-service (IaaS) is the area of Cloud Computing that will make its impact most noticeably in 2010 - especially for startups, and small-medium sized businesses.

David Chou is a technical architect at Microsoft, focused on collaborating with enterprises and organizations in areas such as cloud computing, SOA, Web, distributed systems, security, etc., and supporting decision makers on defining evolutionary strategies in architecture.

What about Platform as a Service, Azure’s strong suit? When does PaaS mature, David?

My primary interest at last week's European version of The Experts Conference was Microsoft's upcoming Forefront Identity Manager 2010. If you haven't been following closely, you might know this soon-to-release product as Identity Lifecycle Manager (ILM) "2" (I don't know why there are quote marks around the number 2, but it is always written that way).

Thus, FIM is the successor to ILM, which was the successor to Microsoft Identity Integration Server (MIIS), which was the successor to the Microsoft Metadirectory Service (MMS) and on and on. None of these, if memory serves, ever reached version 2. I used to complain about how Sun Microsystems would constantly tinker with the name of their directory product, but even they occasionally got out a version 2 (or higher). …

All in all, this is the best identity product I've seen from Microsoft since CardSpace. If you're heavily invested in Microsoft technology, if you're looking at Microsoft Azure for cloud computing possibilities or if you feel that Active Directory should be the basis of your organization's identity stack, then you should definitely look at Forefront Identity Manager 2010, and even download the release candidate to take it for a test drive. [Emphasis added.]

A quick review of the technical materials available on Microsoft’s FIM Web site doesn’t expose any references to its use with Windows Azure specifically.

is now complete and well worth a read by anyone who plans to process Electronic Medical Records (EMR) or Personal Health Records (PHR) in the cloud.

••• Andrea Di Maio’s Governments on Facebook Are Off The Records post of 9/26/2009 discusses conflicts between the Watergate-inspired Presidential Records Act of 1978 and posts by administration officials to social networking sites:

On 19 September Macon Phillips, the White House Director of New Media, posted Reality Check: The Presidential Records Act of 1978 meets web-based social media of 2009 on the White House blog, addressing the important topic of how to interpret in social media terms a law passed after the Watergate scandal to ensure that any record created or received by the President or his staff is preserved and archived by NARA (the National Archives and Records Administration), which in turn releases records to the public in compliance with the relevant privacy act.

There is one very important passage in Phillips’ post:

“The White House is not archiving all content or activity across social networks where we have a page – nor do we want to. The only content archived is what is voluntarily published on the White House’s official pages on these sites or what is voluntarily sent to a White House account.” …

Master data management (MDM) is one of those topics that everyone considers important, but few know exactly what it is or have an MDM program. "MDM has the objective of providing processes for collecting, aggregating, matching, consolidating, quality-assuring, persisting and distributing such data throughout an organization to ensure consistency and control in the ongoing maintenance and application use of this information." So says Wikipedia.

I think that the lack of MDM will become more of an issue as cloud computing rises. We're moving from complex federated on-premise systems, to complex federated on-premise and cloud-delivered systems. Typically, we're moving in these new directions without regard for an underlying strategy around MDM, or other data management issues for that matter. …

… "What tends to worry people [about cloud computing] are issues like security and privacy of data -- that's definitely what we often hear from our customers," said Chris Willey, interim chief technology officer of Washington, D.C.

Willey's office provides an internal, government-held, private cloud service to other city agencies, which allows them to rent processing, storage and other computing resources. The city government also uses applications hosted by Google in an external, public cloud model for e-mail and document creation capabilities. …

"Google has had to spend more money and time on security than D.C. government will ever be able to do," Willey said. "They have such a robust infrastructure, and they're one of the biggest targets on the Internet in terms of hacks and denial-of-service attacks." …

"If I have personally identifiable information -- credit cards, Social Security numbers -- I wouldn't use cloud computing," said Dan Lohrmann, Michigan's chief technology officer. "But if it's publicly available data anyway -- [like] pictures of buildings in downtown Lansing we're storing -- I might feel like the risk is less to use cloud computing for storage." …

• Jon Oltsik’s white paper, A Prudent Approach for Storage Encryption and Key Management, according to the GovInfoSecurity Web site, “cuts through the hype and provides recommendations to protect your organization's data, with today's budget. Oltsik shows you where to start, how to focus on the real threats to data, and what actions you can take today to make a meaningful contribution to stopping data breaches.” (Free site registration required.):

The white paper covers:

What are the real threats to data today

Where do you really need to encrypt data first

How does key management fit into your encryption plans

What shifts in the industry and vendor developments will mean to your storage environment and strategy

I still cringe at that scene in Marathon Man where Laurence Olivier puts Dustin Hoffman in the dentist chair and tortures him while asking “Is it safe??” In fact, now I cringe even more because it reminds me of so many conversations between CEOs/CIOs and CISOs: “OK, we gave you the budget increase. Is it safe now???”

Of course, safety is a relative thing. As the old saw says about what one hunter said to the other when they ran into the angry bear in the woods: “I don’t have to outrun the bear, I only have to outrun you.” Animals use “herd behavior” as a basic safety mechanism – humans call it “due diligence.” …

Cloud computing and, more specifically, Microsoft Azure are questions on the minds of IT professionals everywhere. What is it? When should I use it? How does it apply to my job? Join us as we review some of the lessons Microsoft IT has learned through Project Austin, an incubation project dogfooding the use of Microsoft Azure as a platform for supporting internal line-of-business applications.

In this event we will discuss why an IT operations team would want to pursue Azure as an extension to the data center as we review the Azure architecture from the IT professional’s point of view; discuss configuration, deployment and scaling Azure-based applications; review how Azure-based applications can be integrated with on-premise applications; and how operations teams can manage and monitor Azure-based applications.

We will additionally explore several specific Azure capabilities:

The Azure roles (web, web service and worker)

Azure storage options

Azure security and identity options

If you are interested, we would like to invite you to our afternoon session for architects and developers. (See Below)

In this event we will start by reviewing cloud computing architectures in general and the Azure architecture in particular. We will then dive deeper into several aspects of Azure from the developer’s and architect’s perspective. We will review the Azure roles (web, web service and worker); discuss several Azure storage options; review Azure security and identity options; review how Azure-based applications can be integrated with on-premise applications; discuss configuration, deployment and scaling Azure-based applications; and highlight how development teams can optimize their applications for better management and monitoring.

If you are interested, we would like to invite you to our morning session for IT Professionals (see above).

MWD offers an extensive range of research reports free-of-charge to Guest Pass subscribers. The research available as part of this free service provides subscribers with a solid foundation for IT-business alignment based on our unique perspective on key IT management competencies.

Organizations understand the need for Business Continuity and Disaster Recovery in the face of natural, man-made and pandemic disasters. But what about Business Resiliency, which brings together multiple disciplines to ensure minimal disruption in the wake of a disaster?

Register for this webinar to learn:

How to assemble the Business Resiliency basics;

How to craft a proactive plan;

How to account for the most overlooked threats to sustaining your organization - and how to then test your plan effectively.

When: 9/30/2009 to 10/1/2009 Where: Royal College of Physicians, London, England, UK

Tech in the Middle will deliver a Day of Cloud conference on 10/16/2009:

Calling all software developers, leads, and architects. Join us for the day on Friday October 16, 2009 as we discuss the 'Cloud'. The day is focused on developers and includes talks on all the major cloud platforms: Google, Amazon, Sales Force & Microsoft.

Each talk will cover the basics for that platform. We will then delve into code, seeing how a solution is constructed. We cap off the day with a panel discussion. When we are done, you should have enough information to start your own experimentation. In 2010, you will be deploying at least one pilot project to a cloud platform. Kick off that investigation at Day of Cloud!

Agenda

7:30-8:30 AM Registration/Breakfast

8:30-10:00 AM Jonathan Sapir/Michael Topalovich: Salesforce.com

10:15-11:45 AM Wade Wegner: Azure

11:45-12:30 PM Lunch

12:30-2:00 PM Chris McAvoy: Amazon Web Services

2:15-3:45 PM Don Schwarz: Google App Engine

4:00-5:00 PM Panel

Early-bird tickets are $19.99, regular admission is $30.00. As of 9/24/2009, there were 28 tickets remaining.

Providing comprehensive research, along with the opportunity to connect with the analysts who’ve developed it, the Gartner Data Center Conference is the premier source for forward-looking insight and analysis across the broad spectrum of disciplines within the data center. Our team of 40 seasoned analysts and guest experts provide an integrated view of the trends and technologies impacting the traditional Data Center.

The 7-track agenda drills down to the level you need when it comes to next-stage virtualization, cloud computing, power & cooling, servers and storage, cost optimization, TCO and IT operations excellence, aging infrastructures and the 21st-century data center, consolidation, workload management, procurement, and major platforms. Key takeaways include how to:

Keep pace with future-focused trends like Green IT and Cloud Computing

Increase agility and service quality while reducing costs

Manage and operate a world-class data center operation with hyper-efficiency

Joseph M. Tucci pulled EMC Corp. out of a two-year sales slump after the dot-com bust. Now he’s gearing up for round two: an industry shakeup that he expects to be even more punishing.

Tucci, 62, says the global economic crisis and a shift to a model where companies get computing power over the Internet will drown at least some of the biggest names in computing. …

… Tucci says, EMC, the world’s largest maker of storage computers, will hold on to its roughly 84 percent stake in VMware Inc., the top maker of so-called virtualization software, which helps run data centers more efficiently. He plans to work more closely with Cisco Systems Inc. and said he will continue to make acquisitions. …

EMC is headquartered in Hopkinton, which is close to Worcester.

••• Michael Hickens explains How Cloud Computing Is Slowing Cloud Adoption in this 9/24/2009 post to the BNet.com blog:

There’s the cloud, and then there’s the cloud. The first cloud everyone talked about was really software-as-a-service (SaaS), a method for delivering applications over the Internet (the cloud) more effectively and cheaply than traditional implementations installed behind corporate firewalls, as exemplified by the likes of Salesforce.com, Successfactors, NetSuite and many others.

Then along came this other cloud, the infrastructure that you could rent by the processor, by the gigabyte of storage, and by the month, and which would expand and contract dynamically according to your needs, which Amazon, Microsoft, IBM and many other vendors offer. …

Now, however, there’s another option for enterprise IT, which is to run applications in the cloud while continuing to use the applications that have already been customized for your purposes and enjoy widespread adoption within the organization. As Lewis put it, cloud infrastructure “allows you to take your custom applications that are sticky within your organization and put them into a cloud environment. …

That might not have been a primary motivation for Microsoft to start offering Azure, its cloud infrastructure play, but I’m sure that staving off threats to its enterprise applications business went into its thinking.

Amazon EC2, the public cloud service offered by Amazon, has been growing at an amazing rate. From their early days of catering to startups, they have grown to serve diverse clients, from individuals to enterprises. Guy Rosen, the cloud entrepreneur who tracks the state of the cloud, has done some research on the resource identifiers used by Amazon EC2 and come up with some interesting stats. I thought I would add it here at Cloud Ave for the benefit of our readers.

During one 24 hour period in the US East - 1 region, 50,242 EC2 instances were requested

During the same period, 12,840 EBS volumes were requested

And, 30,925 EBS snapshots were requested

However, the most interesting aspect of Guy Rosen's analysis is his calculation that 8.4 million EC2 instances have been launched since Amazon EC2 debuted. These are pretty big numbers showing success for cloud-based computing. Kudos to Amazon for the success. …
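Rosen’s post doesn’t spell out his exact decoding method here, but the general idea of inferring activity from resource identifiers can be sketched as a toy example (the function name and the assumption of a monotonically increasing identifier counter are mine, not from his analysis):

```python
def estimate_requests(first_id_seen: int, last_id_seen: int) -> int:
    """Estimate how many resources were requested during an observation
    window, assuming identifiers are handed out from a monotonically
    increasing counter (a simplifying assumption)."""
    if last_id_seen < first_id_seen:
        raise ValueError("last identifier precedes the first")
    return last_id_seen - first_id_seen

# If the numeric part of the first instance ID observed in a 24-hour
# window were 1_000_000 and the last were 1_050_242, roughly 50,242
# instances would have been requested in between.
print(estimate_requests(1_000_000, 1_050_242))
```

Real identifiers are encoded and not strictly sequential, so any such estimate is an approximation at best.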

IBM's CTO of Cloud Computing, Kristof Kloeckner, says IBM has demonstrated software engineering as a cloud process. At the end of the process, a developer deploys his application to the cloud of choice. As of today, that cloud better be running VMware virtual machines. In the future, the choice may be broader.

One of the obstacles to cloud computing is the difficulty of deploying a new application to the cloud. If that process could be automated, it would remove a significant barrier for IT managers who want to deploy workloads in the cloud.

IBM, with years of experience in deploying virtualized workloads, is attacking the problem from the perspective of cloud computing. In China, it now has two locations where software development in the cloud is being offered as a cloud service, one in a computing center in the city of Wuxi and another in the city of Dongying.

… Taking a page out of Cisco’s book – from the chapter on so-called smart cities – IBM late Thursday announced that the city of Dongying near China’s second-largest oil field in the midst of the Yellow River Delta is going to build a cloud to promote e-government and support its transition from a manufacturing center to a more eco-friendly services-based economy.

Dongying, which can turn the widgetry into a revenue generator, means to use the cloud to jumpstart new economic development in the region.

IBM has sold the Dongying government on its scalable, redundant, pre-packaged CloudBurst 1.1 solution – effectively an instant “cloud-in-a-box” – that IBM is peddling as the basis of its Smart City blueprint around China and elsewhere. …

CloudBurst is priced to start at $220,000. Dongying is starting with two racks.

Amazon.com is clearly interested in finding government customers for its cloud computing services. The ecommerce giant has been quietly building an operation in the Washington, D.C. area and Amazon Chief Technology Officer Werner Vogels is making a big sales pitch to federal agencies. Now we're hearing that Amazon is exploring a partnership with Apptis -- a Virginia-based government IT services company -- to provide the federal government with a variety of cloud services.

Amazon and Apptis together responded to an RFQ (request for quotes) put out by the U.S. General Services Administration, Apptis spokeswoman Piper Conrad said. Conrad said the two companies are also "finalizing a general partnership." She gave no further details, and said Apptis executives would not be able to comment.

The General Services Administration (GSA) put out an RFQ seeking "Infrastructure-as-a-Service" offerings, including cloud storage, virtual machines, and cloud web hosting. The deadline for submissions was Sept. 16. …

Still no word about Microsoft’s proposal, if they made one for the Windows Azure Platform.

Amazon is adding a new feature which significantly improves the flexibility of EC2’s Elastic Block Store (EBS) snapshot facility. You now have the ability to share your snapshots with other EC2 customers using a new set of fine-grained access controls. You can keep the snapshot to yourself (the default), share it with a list of EC2 customers, or share it publicly.
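For readers curious what those fine-grained controls look like in practice, here is a minimal sketch that builds the createVolumePermission changes for a snapshot. The parameter shapes mirror EC2’s ModifySnapshotAttribute request; the helper function and sample IDs are illustrative, not from Amazon’s announcement:

```python
def snapshot_share_params(snapshot_id, user_ids=None, public=False):
    """Build a ModifySnapshotAttribute-style request that shares an
    EBS snapshot: private by default, with specific AWS account IDs,
    or publicly via the special 'all' group."""
    additions = [{"UserId": uid} for uid in (user_ids or [])]
    if public:
        additions.append({"Group": "all"})
    return {
        "SnapshotId": snapshot_id,
        "Attribute": "createVolumePermission",
        "CreateVolumePermission": {"Add": additions},
    }

# Share a snapshot with one other account (the IDs are made up):
params = snapshot_share_params("snap-1234abcd", user_ids=["111122223333"])
print(params["CreateVolumePermission"]["Add"])
```

Calling the helper with neither argument yields an empty permission list, which matches the private-by-default behavior the announcement describes.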

The Amazon Elastic Block Store lets you create block storage volumes in sizes ranging from 1 GB to 1 TB. You can create empty volumes or you can pre-populate them using one of our Public Data Sets. Once created, you attach each volume to an EC2 instance and then reference it like any other file system. The new volumes are ready in seconds. Last week I created a 180 GB volume from a Public Data Set, attached it to my instance, and started examining it, all in about 15 seconds.

Here’s a visual overview of the data flow (in this diagram, the word Partner refers to anyone that you choose to share your data with):

First and foremost, Perot enables and accelerates Dell’s expansion into large enterprise data centers. In many regards, Dell’s hardware business has been totally commoditized, and it has had limited success moving upstream into larger accounts, which typically treat PCs, x86 servers, and low-end storage as adjuncts to larger data center-focused IT operations. In fact, Dell hopes that Perot will help Dell become more involved with conversations at the CIO level, related to standardization, consolidation, and other key initiatives, which Dell’s HW sales executives may not have had access to.

The alert goes on to analyze the market impact of the acquisition. (Free site registration required.)

Together with some of my Forrester analyst colleagues earlier today I listened in on the conference call hosted by executives of both Dell and Perot Systems to explain the rationale behind Dell's announcement to buy Perot for US$ 3.9 billion cash. There has been some speculation lately about Dell possibly making such a move, but the timing and the target they finally picked came as a bit of a surprise to everyone. The speculation was rooted in some of the statements made by Steve Schuckenbrock, President of Large Enterprise and Services at Dell, earlier this year where he pronounced that Dell would get much more serious around the services business. Now, you would of course expect nothing less from someone like Steve - after all he has spent much of his professional career prior to Dell as a top executive in the services industry (with EDS and The Feld Group). To this end Steve and his team finally delivered on the expectation, even more so as this had not been the first time that Dell promised a stronger emphasis on services. …

Pascal concludes:

… But the big challenge pertains to changing the overall value proposition and brand perception of Dell. Dell’s current positioning is still that of a product company that provides limited business value beyond the production and delivery of cost efficient computing hardware and resources. While there is a lot to gain from Perot’s existing positioning the challenge will be to do so by creating a truly consistent and integrated image of the new Dell. The question then quickly becomes whether this is about following in the footsteps of IBM and/or HP or whether Dell has something really new to offer here. That is something Dell has failed to articulate so far and so the jury is still out on this one.

US Department of Energy awards the Lawrence Berkeley National Laboratory (run by the University of California) $7,384,000.00 in American Recovery and Reinvestment Act (ARRA) 2009 operations funding for the Laboratory’s Magellan Distributed Computing and Data Initiative to:

[E]nhance the Phase I Cloud Computing by installing additional storage to support data intensive applications. The cluster will be instrumented to characterize the concurrency, communication patterns, and Input/Output of individual applications. The testbed and workload will be made available to the community for determining the effectiveness of commercial cloud computing models for DOE. The Phase I research will be expanded to address multi-site issues. The Laboratory will be managing the project and will initiate any subcontracting opportunities.

Made available to what community? My house is about three (air) miles from LBNL (known to locals as the “Cyclotron.”) Does that mean I’ll get a low-latency connection?

As computation continues to move into the cloud, the computing platform of interest no longer resembles a pizza box or a refrigerator, but a warehouse full of computers. These new large datacenters are quite different from traditional hosting facilities of earlier times and cannot be viewed simply as a collection of co-located servers. Large portions of the hardware and software resources in these facilities must work in concert to efficiently deliver good levels of Internet service performance, something that can only be achieved by a holistic approach to their design and deployment. In other words, we must treat the datacenter itself as one massive warehouse-scale computer (WSC).

We describe the architecture of WSCs, the main factors influencing their design, operation, and cost structure, and the characteristics of their software base. We hope it will be useful to architects and programmers of today’s WSCs, as well as those of future many-core platforms which may one day implement the equivalent of today’s WSCs on a single board.

The acknowledgment begins:

While we draw from our direct involvement in Google’s infrastructure design and operation over the past several years, most of what we have learned and now report here is the result of the hard work, the insights, and the creativity of our colleagues at Google.

The work of our Platforms Engineering, Hardware Operations, Facilities, Site Reliability and Software Infrastructure teams is most directly related to the topics we cover here, and therefore, we are particularly grateful to them for allowing us to benefit from their experience.

Following on from my last post, Securing Applications on the Amazon Elastic Cloud, one of the biggest questions I often see asked is “Is Amazon EC2 secure as a platform?” That’s like asking, “Is my vanilla network secure?” As with your internal network, there are steps you can take to make the environment as secure as possible …

Salesforce.com has had a pretty strong showing in 2009, due in part to the company’s introduction of the Service Cloud (a SaaS application) at the beginning of this year. Early this month, Salesforce announced an upgrade to this application, Service Cloud 2, which consists of three phases to be launched from now until early 2011.

One of Service Cloud 2’s web-based options is already available to Salesforce.com customers: Salesforce for Twitter. The company integrated Twitter into its platform in March 2009—and was one of the first enterprise software developers to do so—and now the integration functions within the Service Cloud. This update allows users to track and monitor conversations in Twitter, as well as tweet from the Service Cloud.

Hot damn! Salesforce for Twitter! I can hardly wait to run the ROI on that one.

The dual Web role application has been running in Microsoft's South Central US (San Antonio) data center since September 2009. I believe it is the oldest continuously running Windows Azure application.

About Me

I'm a Windows Azure Insider, a retired Windows Azure MVP, the principal developer for OakLeaf Systems and the author of 30+ books on Microsoft software. The books have more than 1.25 million English copies in print and have been translated into 20+ languages.

Full disclosure: I make part of my livelihood by writing about Microsoft products in books and for magazines. I regularly receive free evaluation software from Microsoft and press credentials for Microsoft Tech•Ed and PDC. I'm also a member of the Microsoft Partner Network.