Applications running in the Windows Azure cloud can use existing NTFS APIs to access a network attached durable drive. The durable drive is actually a Page Blob formatted as a single volume NTFS Virtual Hard Drive (VHD).

A post by Brad Calder, Windows Azure Drive Demo at MIX 2010 on the Windows Azure Storage Team Blog, shows how you create a virtual drive in Windows 7. You can then upload the drive for access by your cloud application. Your application specifies the storage account name and its secret key in the ServiceConfiguration.cscfg in the same way as it does on your development computer.

Then you can access the drive from code in your application by:

Initializing the drive cache so all processes and threads running under that instance can mount and manipulate drives.

Creating an object that refers to the drive.

Mounting the drive.

Using the drive in your applications to read from or write to a drive letter (e.g., X:\) that represents a durable NTFS volume for storing and accessing data.

You can use the Windows Azure Drive APIs in your Windows Azure application to perform each of these operations programmatically.

James Hamilton’s Stonebraker on CAP Theorem and Databases post of 4/7/2010 continues his analysis of the NoSQL movement’s premises about the Consistency, Availability and Partition tolerance (CAP) theorem and eventual consistency:

Mike challenges this assertion, pointing out that some common database errors are not avoided by eventual consistency and that CAP really doesn’t apply in these cases. If you have an application error, administrative error, or database implementation bug that loses data, then it is simply gone unless you have an offline copy. This, by the way, is why I’m a big fan of deferred delete. This is a technique where deleted items are marked as deleted but not garbage collected until some days or preferably weeks later. Deferred delete is not full protection, but it has saved my butt more than once and I’m a believer. See On Designing and Deploying Internet-Scale Services for more detail.

CAP and the application of eventual consistency don’t directly protect us against application or database implementation errors. And, in the case of a large-scale disaster where the cluster is lost entirely, again, neither eventual consistency nor CAP offers a solution. Mike also notes that network partitions are fairly rare. I could quibble a bit on this one. Network partitions should be rare, but net gear continues to cause more issues than it should. Networking configuration errors, black holes, dropped packets, and brownouts remain a popular discussion point in post mortems industry-wide. I see this improving over the next 5 years, but we have a long way to go. In Networking: the Last Bastion of Mainframe Computing, I argue that net gear is still operating on the mainframe business model: large, vertically integrated and expensive equipment, deployed in pairs. When it comes to redundancy at scale, 2 is a poor choice.

Mike’s article questions whether eventual consistency is really the right answer for these workloads. I made some similar points in “I love eventual consistency but…” In that posting, I argued that many applications are much easier to implement with full consistency and that full consistency can be practically implemented at high scale. In fact, Amazon SimpleDB recently announced support for full consistency. Apps needing full consistency are now easier to write, and where only eventual consistency is needed, it’s available as well.

Don’t throw full consistency out too early. For many applications, it is both affordable and helps reduce application implementation errors.
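Hamilton’s deferred-delete technique is easy to sketch. In the toy store below (the store, field names, and two-week retention window are all made up for illustration), a delete only marks an item; a later garbage-collection pass makes the removal permanent, and an undelete can rescue the item in between:

```python
import time

RETENTION_SECONDS = 14 * 24 * 3600  # illustrative two-week grace period


class DeferredDeleteStore:
    """A toy key-value store that marks deletions instead of applying them."""

    def __init__(self):
        self._items = {}        # key -> value
        self._deleted_at = {}   # key -> timestamp of the delete request

    def put(self, key, value):
        self._items[key] = value
        self._deleted_at.pop(key, None)  # a new write revives the key

    def get(self, key):
        if key in self._deleted_at:
            return None  # hidden from readers, but still recoverable
        return self._items.get(key)

    def delete(self, key):
        if key in self._items:
            self._deleted_at[key] = time.time()  # mark, don't remove

    def undelete(self, key):
        """Cancel a mistaken delete while it is still within the window."""
        return self._deleted_at.pop(key, None) is not None

    def garbage_collect(self, now=None):
        """Physically remove items whose grace period has expired."""
        now = time.time() if now is None else now
        expired = [k for k, t in self._deleted_at.items()
                   if now - t >= RETENTION_SECONDS]
        for k in expired:
            del self._items[k]
            del self._deleted_at[k]
        return expired
```

As the quote notes, this is not full protection against every error class, but an accidental delete stays reversible until the collector runs.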

Mike Kelly shares his notes on David Robinson’s SQL Azure FireStarter session in a Windows Azure SQL Notes post of 4/6/2010, which begins:

Goal is to convince you that there is no difference between SQL Server and SQL Azure.

Irrespective of where your application runs, i.e., in the cloud or locally, you can simply connect to the SQL Azure database by replacing your local DB connection string with the SQL Azure connection string. The connection string for any SQL Azure database can be obtained in the "Server Administration" screen of your SQL Azure account by selecting the database and clicking the Connection String button under the Databases tab.
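The swap Kelly describes is purely a configuration change; the data-access code is untouched. A sketch of the two string shapes (the server name, database, and login below are hypothetical placeholders, not real values):

```python
# Hypothetical local and SQL Azure connection strings for the same app code.
# Only the string changes; everything downstream of it does not.

local_conn = (
    "Server=localhost;"
    "Database=Northwind;"
    "Trusted_Connection=True;"
)

# 2010-era SQL Azure strings use the tcp: prefix, a *.database.windows.net
# host, a user@server login, and require encryption.
azure_conn = (
    "Server=tcp:myserver.database.windows.net,1433;"
    "Database=Northwind;"
    "User ID=mylogin@myserver;"
    "Password=...;"
    "Trusted_Connection=False;"
    "Encrypt=True;"
)


def pick_connection_string(use_cloud: bool) -> str:
    """The app selects one string; the rest of the code is unchanged."""
    return azure_conn if use_cloud else local_conn
```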

How to resolve some of the common connectivity error messages that you would see while connecting to SQL Azure:

A transport-level error has occurred when receiving results from the server. (Provider: TCP Provider, error: 0 - An existing connection was forcibly closed by the remote host.)

System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. The statement has been terminated.

An error has occurred while establishing a connection to the server. When connecting to SQL Server 2005, this failure may be caused by the fact that under the default settings SQL Server does not allow remote connections
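The first two messages above are transient connectivity failures, and the usual SQL Azure guidance was to retry idempotent operations with a short backoff rather than fail immediately. A generic sketch of that pattern (the attempt count, delays, and message matching are illustrative choices, not an official recipe):

```python
import time

# Message fragments from the transient errors listed above.
TRANSIENT_MARKERS = (
    "transport-level error",
    "Timeout expired",
    "forcibly closed by the remote host",
)


def is_transient(exc: Exception) -> bool:
    """Crude classification: match known transient message fragments."""
    text = str(exc)
    return any(marker in text for marker in TRANSIENT_MARKERS)


def with_retries(operation, attempts=4, base_delay=0.5, sleep=time.sleep):
    """Run `operation`, retrying transient failures with linear backoff."""
    for attempt in range(1, attempts + 1):
        try:
            return operation()
        except Exception as exc:
            if attempt == attempts or not is_transient(exc):
                raise  # out of attempts, or a non-transient error
            sleep(base_delay * attempt)  # back off a little more each time
```

The third message (remote connections disabled) is a configuration problem, not a transient one, so the classifier above deliberately would not retry it.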

I've always been a data guy. I think data maintenance, sharing and analysis is the inspiration for almost all line-of-business software, and technology that makes any or all of it easier is key to platform success. That's why I've been interested in WCF Data Services (previously ADO.NET Data Services) since it first appeared as the technology code-named "Astoria." Astoria was based on a crisp idea: representing data in AtomPub and JSON formats, as REST Web services, with simple URI and HTTP verb conventions for querying and updating the data.

Astoria, by any name, has been very popular, and for good reason: It provides refreshingly simple access to data, using modern, well-established Web standards. Astoria provides a versatile abstraction layer over data access, but does so without the over-engineering or tight environmental coupling to which most data-access technologies fall prey. This elegance has enabled Microsoft to do something equally unusual: separate Astoria's protocol from its implementation and publish that protocol as an open standard. We learned that Microsoft did this at its Professional Developers Conference (PDC) this past November in Los Angeles, when Redmond officially unveiled the technology as Open Data Protocol (OData). This may have been one of Microsoft's smartest data-access plays, ever. …
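Those “simple URI and HTTP verb conventions” can be illustrated without any library: entity sets map to URL path segments, keys go in parentheses, and the HTTP verb selects the operation. A sketch (the service root and entity set names are hypothetical):

```python
SERVICE_ROOT = "https://example.com/MyService.svc"  # hypothetical OData root


def entity_set(name):
    """Collection URI: a GET here returns the feed (AtomPub by default)."""
    return f"{SERVICE_ROOT}/{name}"


def entity(name, key):
    """Single-entry URI: the key goes in parentheses after the set name."""
    return f"{SERVICE_ROOT}/{name}({key})"


def as_json(uri):
    """Ask for the JSON representation instead of AtomPub."""
    sep = "&" if "?" in uri else "?"
    return f"{uri}{sep}$format=json"


# The verb, not the URI, selects the operation:
#   GET    entity_set("Products")   -> query the collection
#   POST   entity_set("Products")   -> insert a new entry
#   PUT    entity("Products", 7)    -> replace the entry with key 7
#   DELETE entity("Products", 7)    -> delete the entry with key 7
```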

I’ve recently started using Microsoft’s WCF Data Services, which supports OData. What this means is that we can access resources by simply specifying a URI. This concept greatly simplifies building an ORM layer on a web site, as well as creating the linkage between the server-side data and the client-side application, which in my case is usually a browser.

So, the issue this blog post addresses is that if you form a URI with the parameter $top={anything}, your data will automatically be sorted. The OData documentation for $top basically says as much, but it could be clearer.
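Because $top must pick the “first N” rows, it needs a deterministic order, which is why the implicit sort appears; if you want a particular order rather than the server’s key order, state it explicitly with $orderby. A sketch of building such a query (the service root is hypothetical):

```python
SERVICE_ROOT = "https://example.com/MyService.svc"  # hypothetical OData root


def top_query(entity_set, n, order_by=None):
    """Build a $top query URI.

    Adding an explicit $orderby controls the sort instead of relying on
    the implicit key ordering that $top triggers on its own.
    """
    options = []
    if order_by:
        options.append(f"$orderby={order_by}")
    options.append(f"$top={n}")
    return f"{SERVICE_ROOT}/{entity_set}?" + "&".join(options)
```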

We're writing a new Data Provider for Data Sync that can consume an OData feed. In the example below, we're pulling in the Speakers data from the recent MIX event using the published OData feed at http://api.visitmix.com/OData.svc/.

T-10 Media claims “the Service Bus can be used to build hybrid apps which span both on-premise and cloud services” in a The Hybrid Cloud and Azure [Blog] post of 4/7/2010:

There’s been a lot of buzz about the ‘hybrid’ cloud – the blending of on-premise services with cloud-based services. CloudKick recently launched CloudKick Hybrid, a tool for monitoring cloud and on-premise servers from a single console (see story here); Nimsoft, which has a similar monitoring tool, was recently acquired by CA for $350m; and hosting provider VoxTel recently announced unified admin/monitoring tools for its cloud and server offerings.

There is an undoubted need for a hybrid architecture in many larger corporations, since migrating existing apps to the cloud is not as simple as a lot of demos show, and there is a perception (whether real or not) that data is less secure in the cloud. Enter hybrid apps – maintain the data on-premise, or consume on-premise apps from a cloud service.

Of course it is possible to communicate between on-premise data sources or apps and cloud-based apps using SOAP/REST communication protocols; however, there are two major obstacles – discovering the service endpoints (since these may change due to dynamically assigned IPs) and navigating through firewalls. These problems can be worked around by allowing apps to selectively open ports, which is inherently insecure, or by using relay systems that sit between the firewall and the apps and act as a bridge; these systems tend to be very complicated and hard to implement.

The Azure Service Bus attempts to solve this issue by providing a service with which applications that need to communicate with each other can register. The requesting app is given a Service Bus endpoint to communicate with the data source/service app. Essentially, the services are provided by service apps running behind the firewall, and the connection endpoints are provided by the Azure Service Bus. It should be noted that the Service Bus allows communication with non-.NET services, so Linux/UNIX-hosted apps can register with the Service Bus and be consumed by .NET apps.

Security is provided by the Azure AppFabric Access Control, which applies user-defined rules to ensure security when an app claims tokens via the STS service provided by the Access Control.

Thus the Service Bus can be used to build hybrid apps which span both on-premise and cloud services.
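The relay rendezvous T-10 Media describes can be pictured in miniature: the listener registers outbound, and requesters address it only through a relay-issued endpoint, so no inbound firewall port is opened. A toy in-process model of the idea (this is only an illustration of the pattern; the real Service Bus API is a .NET/WCF library, not this Python class):

```python
class ToyRelay:
    """In-process stand-in for a relay: listeners register via an
    outbound call, and requesters address them only through the
    endpoint name the relay hands out, never a raw IP or open port."""

    def __init__(self):
        self._endpoints = {}

    def register(self, service_name, handler):
        """Called from 'behind the firewall': outbound registration only."""
        endpoint = f"relay://{service_name}"
        self._endpoints[endpoint] = handler
        return endpoint  # the requester is given this name

    def send(self, endpoint, request):
        """A requester talks to the endpoint; the relay bridges the call."""
        handler = self._endpoints.get(endpoint)
        if handler is None:
            raise LookupError(f"no listener registered at {endpoint}")
        return handler(request)
```

In the real service, the security step the next paragraph mentions (Access Control issuing tokens) would sit between `send` and the handler.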

Microsoft is working not only on the imminent release of .NET Framework 4, but also on expanding support beyond the now-traditional Windows client and server operating systems. In this regard, the Redmond giant is hard at work delivering .NET 4 support for its Cloud platform. The promise from the software giant is that customers leveraging Windows Azure will be able to start taking advantage of .NET 4 for their applications in mid-2010.

“As we announced at MIX 2010, .NET 4 will be available in Windows Azure within 90 days of .NET 4 RTM,” a member of the Windows Azure team stated. Microsoft’s next-generation development tools and platform are scheduled for release in the coming week. Visual Studio 2010, .NET Framework 4 and Silverlight 4 will all be officially launched on April 12 in a Las Vegas event.

This places availability of .NET Framework 4 RTW (release to web) support for Windows Azure sometime by mid-July 2010. The software giant could, of course, beat its own deadline, but, so far, it has chosen to give itself a little elbow room to make .NET 4 support on Windows Azure a reality.

The fact is that Windows Azure already features .NET Framework 4, but not the RTW milestone. “As part of our preparation for that, the latest operating system build available in Windows Azure contains the .NET 4 RC. Although you cannot use this build to run .NET 4 applications, please let us know if having .NET 4 RC in the build has any effect on your existing applications. One known effect you may see if you’re consuming generic ASP.NET performance counters is that they will report data only on .NET 4 applications. You can instead use the versioned performance counters as documented in KB article 2022138. As always, you can choose which build of the operating system your application will run on in Windows Azure,” the Windows Azure team member stated.

The Cloud platform version that Microsoft is referring to is Windows Azure Guest OS 1.2 (Release 201003-01). The release went live earlier this week, more precisely on April 5th, 2010, and contains .NET Framework 4.0 RC support. However, as the company stated above, the Windows Azure development environment does not support the .NET 4.0 Framework at this point in time. The purpose of Windows Azure Guest OS 1.2 (Release 201003-01) is to let customers test whether their applications and services will continue to run under normal parameters while using .NET Framework 4.0 libraries. …

As we announced at MIX 2010, .NET 4 will be available in Windows Azure within 90 days of .NET 4 RTM. As part of our preparation for that, the latest operating system build available in Windows Azure contains the .NET 4 RC. Although you cannot use this build to run .NET 4 applications, please let us know if having .NET 4 RC in the build has any effect on your existing applications.

One known effect you may see if you’re consuming generic ASP.NET performance counters is that they will report data only on .NET 4 applications. You can instead use the versioned performance counters as documented in KB article 2022138.

As always, you can choose which build of the operating system your application will run on in Windows Azure. See the MSDN documentation for details.

ISC is pleased to announce that Miami 311, a government transparency solution built with MapDotNet UX, has won first place in the Microsoft Windows US Public Sector Azure Contest. The application was selected from a field of thirteen entries by Internet voters and a panel within Microsoft.

Miami 311 is an online application supported by the Microsoft Azure cloud platform that allows Miami residents to report, monitor and analyze all non-emergency events that occur in the metropolitan area. If a citizen needs to report a non-emergency situation such as a pothole, street light outage or missed trash pickup, he must first call 311 to report the issue. Once the issue is reported, the citizen can log in to the Miami 311 online tool to track the progress of the issue reported and also view the progress of other city-wide projects in the area. The intent of the site is to increase citizen access to city-wide information.

"MapDotNet UX at version 8.0 is 100% managed .net code, which makes the entire server product and its functionality is deployable to Azure, which runs in a 64-bit virtualized environment. This enables map tile rendering (using a high-speed WPF-based map renderer), spatial querying and spatial data editing all in Azure. This is much more than pushpins in the cloud," said Brian Hearn, MapDotNet Lead Architect. …

For a demo I gave today at the Windows Azure Firestarter event, I let anyone on the internet change my wallpaper. You too can set my wallpaper by pointing your browser to http://annoy.smarx.com. I’ll try to continue running on my laptop for the next few days, so any time my laptop’s on and online, you can set my wallpaper. (If my laptop’s not on, you can still see the web page, but you’ll get a strange XML message when you try to change my wallpaper.)

You can even get the code and let people set your wallpaper too (if you have Windows Azure and Service Bus accounts). The entire project took less than eight hours to develop and deploy.

(All videos, slides, and code from the event will be available on Mithun Dhar’s blog in the next few days.) …

Be sure to set all the right configuration settings (in ServiceConfiguration.cscfg as well as in the local listener’s app.config) to point to your own storage, CDN endpoint, and service bus namespace. Then just deploy the app and launch the local listener on your desktop/laptop.

Diego Cardenas, a Solutions Architect at Go Airlines in Brazil, says that they chose Windows Azure to use a Virtual Machine for PS access, thus avoiding the additional costs of maintaining services on-premise. Go Airlines is also very excited about the new Data Sync feature of Windows Azure.

I got interviewed and quoted for an article on Windows Azure cost estimation. One of the key points to remember is that, if your code is deployed to Windows Azure, you’re still getting billed even if it isn’t running.

In K2 Advisory's report, "Cloud Computing: A Step Change for IT Services," which analyses the developing market for cloud services, the report's author, Dr Katy Ring, Director, K2 Advisory, says that the benefits of cloud computing can provide the business flexibility to help companies operate more effectively in the current economic climate. However, the report finds that adoption of public cloud and SaaS services from vendors such as Amazon and Google by smaller organisations will outpace the adoption rate of enterprises by a factor of two. By 2015, for organisations below 1,000 employees, a third to a half of IT spend is likely to be with public cloud providers.

Commenting on the findings, Dr Ring said, "In five years' time the provision of IT to mid-sized and smaller businesses (of less than 1000 employees) will be quite distinct in terms of cloud adoption from enterprises. Indeed, it could be argued that small and mid-sized business use of cloud computing will enhance their agility and their ability to bounce back more quickly from the recession of 2009/10. Many Western enterprises, however, will continue to find that their IT systems are increasingly sclerotic, constrained by client-server ERP systems." …

K2 Advisory’s report states that the biggest challenges for enterprise adoption of cloud computing lie with existing investment in legacy systems, and with the potential impact on the internal IT department. Ultimately, CIOs suspect that the rise of cloud computing heralds the demise of retaining internal technological expertise. IT services will be delivered by external suppliers who will be managed with yet-to-be-established procurement processes. As an increasing amount of an IT group’s effort is spent on external providers delivering systems integration and managed services, this can be seen as evidence that the traditional enterprise IT we’re familiar with is disappearing. In this world, a CIO is a vendor management officer, and most of the technology is taken care of by external suppliers.

K2 Advisory is part of Sift Media, which runs the annual Business Cloud Summit in London. This year's event will be held on November 30th 2010. For more details on the Summit go to www.businesscloud9.com.

The U.S. federal government spends nearly $76 billion each year on information technology, and $20 billion of that is devoted to hardware, software, and file servers (Alford and Morton, 2009). Traditionally, computing services have been delivered through desktops or laptops operated by proprietary software. But new advances in cloud computing have made it possible for public and private sector agencies alike to access software, services, and data storage through remote file servers. With the number of federal data centers having skyrocketed from 493 to 1,200 over the past decade (Federal Communications Commission, 2010), it is time to more seriously consider whether money can be saved through greater reliance on cloud computing.

Cloud computing refers to services, applications, and data storage delivered online through powerful file servers. As pointed out by Jeffrey Rayport and Andrew Heyward (2009), cloud computing has the potential to produce “an explosion in creativity, diversity, and democratization predicated on creating ubiquitous access to high-powered computing resources.” By freeing users from being tied to desktop computers and specific geographic locations, clouds revolutionize the manner in which people, businesses, and governments may undertake basic computational and communication tasks (Benioff, 2009). In addition, clouds enable organizations to scale up or down to the level of needed service so that people can optimize their needed capacity. Fifty-eight percent of private sector information technology executives anticipate that “cloud computing will cause a radical shift in IT and 47 percent say they’re already using it or actively researching it” (Forrest, 2009, p. 5).

To evaluate the possible cost savings a federal agency might expect from migrating to the cloud, in this study I review past studies, undertake case studies of government agencies that have made the move, and discuss the future of cloud computing. I found that the agencies generally saw between 25 and 50 percent savings in moving to the cloud. For the federal government as a whole, this translates into billions in cost savings, depending on the scope of the transition. Many factors go into such assessments, such as the nature of the migration, a reliance on public versus private clouds, the need for privacy and security, the number of file servers before and after migration, the extent of labor savings, and file server storage utilization rates.
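The “billions” figure is easy to bound with the report’s own numbers. Applying the observed 25–50 percent savings range to the $20 billion hardware/software/server slice (an assumed scope; the report’s own estimates vary with how much is migrated) gives:

```python
HARDWARE_SPEND_BILLIONS = 20.0   # federal hardware/software/server spend
SAVINGS_RANGE = (0.25, 0.50)     # agency savings observed in the case studies

# Bound the annual savings by applying the range to the hardware slice.
low = HARDWARE_SPEND_BILLIONS * SAVINGS_RANGE[0]
high = HARDWARE_SPEND_BILLIONS * SAVINGS_RANGE[1]

print(f"Estimated annual savings: ${low:.0f}B to ${high:.0f}B")
# → Estimated annual savings: $5B to $10B
```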

West continues with a description of “five steps be undertaken in order to improve efficiency and operations in the public sector.” See the Cloud Computing Events section for more details on the event.

The Brookings Institution describes itself as follows:

The Brookings Institution is a nonprofit public policy organization based in Washington, DC. Our mission is to conduct high-quality, independent research and, based on that research, to provide innovative, practical recommendations that advance three broad goals:

Strengthen American democracy;

Foster the economic and social welfare, security and opportunity of all Americans; and

Secure a more open, safe, prosperous and cooperative international system.

Brookings is proud to be consistently ranked as the most influential, most quoted and most trusted think tank.

Most enterprises lack three essential ingredients to ensure that sensitive information stored with cloud computing hosts remains secure: procedures, policies and tools. So says a joint survey called “Information Governance in the Cloud: A Study of IT Practitioners” from Symantec Corp. and the Ponemon Institute.

“Cloud computing holds a great deal of promise as a tool for providing many essential business services, but our study reveals a disturbing lack of concern for the security of sensitive corporate and personal information as companies rush to join in on the trend,” said Dr. Larry Ponemon, chairman and founder of the Ponemon Institute.

Where is cloud security training?

Despite the ongoing clamor about cloud security and the anticipated growth of cloud computing, a meager 27 percent of those surveyed said their organizations have developed procedures for approving cloud applications that use sensitive or confidential information. Other surprising statistics from the study include:

Only 20% of information security teams are regularly involved in the decision-making process

25% of information security teams aren’t involved at all

Only 30% evaluate cloud computing vendors before deploying their products

Only 23% require proof of security compliance

A full 75% believe cloud computing migration occurs in a less-than-ideal manner

IT vendors and suppliers, including the survey sponsor, Symantec, are lining up to help fill the evident gaps in enterprise cloud security tools, standards, best practices and culture adaptation. Symantec is making several recommendations for beefing up cloud security, beginning with ensuring that policies and procedures clearly state the importance of protecting sensitive information stored in the cloud.

“There needs to be a healthy, open governance discussion around data and what should be placed into the cloud,” says Justin Somaini, Chief Information Security Officer at Symantec. “Data classification standards can help with a discussion that’s wrapped around compliance as well as security impacts. Beyond that, it’s how to facilitate business in the cloud securely. This cuts across all business units.” …

David Linthicum claims “Tech firms and advocacy groups come together to seek new regulations -- that could turn out to be disastrous” in his Proposed new cloud privacy rules could backfire post of 4/7/2010 to InfoWorld’s Cloud Computing blog:

Privacy advocacy groups and tech vendors -- the Electronic Frontier Foundation, the ACLU, eBay, Google, and Microsoft -- are urging Congress to revise privacy laws to regulate user information on the cloud. The vendors support the changes because they fear that without regulation and privacy guarantees, people could become uncomfortable with the cloud. While reasonable in concept, the ideas may not work.

The fact of the matter is that the United States has not updated its privacy laws since 1986. With the rapid rise of cloud computing and the fact that more and more sensitive data will be stored off-premise, many believe it's high time to revisit those rules to accommodate today's reality.

But I always get a bit nervous when software specialists, now involved with the cloud, work with the government to create new laws. Here are a few of my issues.

First, regulations have a tendency to stultify innovation as providers make sure they adhere to these new and typically confusing rules. We've seen this issue with the financial reporting guidelines that began to appear earlier this decade, and the proposed cloud privacy laws will initially have similar results.

Second, any regulations that dictate privacy requirements and mechanisms will be outdated pretty much by the time they pass Congress. Other issues will arise, and unless there is a dedicated agency constantly updating the regulation, matters will quickly become dysfunctional -- but please don't create another dedicated agency for this!

Finally, it's a new world order in the cloud. These regulations won't extend to other countries. However, other countries will follow with their own regulations, which will make the situation even more onerous.

So what should be done? The real work needs to be carried out by industry, meaning cloud providers, IT pros, and users -- you and me. We need to come together around detailed requirements regarding privacy and security, and we have to stop writing conceptual white papers. This means setting lines in the sand around how data is encrypted at rest and in flight, what access controls need to be in place, and detailed enabling standards to make all of this work together.

It's pretty simple, unless you get the government involved -- then expenses increase and productivity decreases.

The guys from SearchCloudComputing gave me a ring and we chatted about CloudAudit. The interview that follows is a distillation of that discussion and goes a long way toward answering many of the common questions surrounding CloudAudit/A6. You can find the original here.

What are the biggest challenges when auditing cloud-based services, particularly for the solution providers?

Christofer Hoff: One of the biggest issues is their lack of understanding of how the cloud differs from traditional enterprise IT. They’re learning as quickly as their customers are. Once they figure out what to ask and potentially how to ask it, there is the issue surrounding, in many cases, the lack of transparency on the part of the provider to be able to actually provide consistent answers across different cloud providers, given the various delivery and deployment models in the cloud.

How does the cloud change the way a traditional audit would be carried out?

Hoff: For the most part, a good amount of the questions that one would ask specifically surrounding the infrastructure are abstracted and obfuscated. In many cases, a lot of the moving parts, especially as they relate to potential competitive differentiators for that particular provider, are simply a black box into which, operationally, you’re not really given a lot of visibility or transparency. If you were to host with a colocation provider, where you would typically take a box, the operating system and the apps on top of it, you’d expect, given who controls what and who administers what, to potentially see a lot more, and to see a lot more standardization of those deployed solutions, given the maturity of that space.

How did CloudAudit come about?

Hoff: I organized CloudAudit. We originally called it A6, which stands for Automated Audit Assertion Assessment and Assurance API. And as it stands now, it’s less in its first iteration about an API, and more specifically just about a common namespace and interface by which you can use simple protocols with good authentication to provide access to a lot of information that essentially can be automated in ways that you can do all sorts of interesting things with.
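The “common namespace” idea amounts to predictable URLs: each compliance assertion lives at a well-known path that a client can fetch with plain, authenticated HTTP GETs and then process automatically. A hypothetical sketch (the path layout below is illustrative only, not the actual CloudAudit specification):

```python
# Hypothetical namespace sketch: one well-known root, one path per
# compliance namespace and control. Illustrative only.
WELL_KNOWN_ROOT = "/.well-known/cloudaudit"


def assertion_path(namespace, control):
    """e.g., assertion_path('org.example.pci', '1.1') for a PCI control."""
    return f"{WELL_KNOWN_ROOT}/{namespace}/{control}"


def walk(namespaces, controls):
    """A client enumerates the namespace with simple, authenticated GETs;
    here we just build the URL list such a client would fetch."""
    return [assertion_path(ns, c) for ns in namespaces for c in controls]
```

The point of the convention is that nothing provider-specific is needed to locate an assertion, which is what makes the automation Hoff describes possible.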

David Kearns asserts “Yale University and Canadian Privacy Commissioner offer negative -- and misinformed -- views on cloud computing” in his Clouded views on privacy post of 4/2/2010 to NetworkWorld’s Security blog:

Privacy and cloud computing have recently been in the news, with stories coming out of academia (Yale University) and government oversight agencies (Canadian Privacy Commissioner). Both, in my view, got it wrong.

First up, and easiest to deal with, is Yale. George Bush's alma mater recently decided to adopt Google Applications for Education, which would include changing from Horde e-mail to Gmail. (See the Yale Daily News story here.) This IT decision has been roundly denounced by some faculty members, who screamed loud enough to at least postpone the switchover.

Just what were their objections?

"Google stores every piece of data in three centers randomly chosen from the many it operates worldwide in order to guard the company's ability to recover lost information -- but that also makes the data subject to the vagaries of foreign laws and governments," according to one faculty member. I'd imagine, of course, that the faculty and students currently have no idea where their data is stored, though. Hopefully the IT department has at least a disaster-recovery plan, which includes off-site storage of data. …

Dave concludes:

Privacy and security are best arrived at through well-negotiated contracts between informed parties, not through the agenda-wielding of ivory tower proselytizers. Well, usually. But, as we've learned over and over again, it isn't the technology that's the problem -- it's the people and the politics.

Next issue we'll venture 700 km north of Yale to see how Canada's Privacy Commissioner tackles the cloud.

Christine Jacobs, Communications Officer, Governance Studies for the Brookings Institution announced in an e-mail this morning:

Earlier today at Brookings, the federal government’s chief information officer, Vivek Kundra, spoke about how the government is leveraging cloud computing to deliver results for the American people.

Mr. Kundra also announced that the National Institute of Standards and Technology will host a “Cloud Summit” on May 20, with government agencies and the private sector. The Summit will introduce NIST efforts to lead the definition of the Federal Government’s requirements for cloud computing, key technical research, and United States standards development. Furthermore, Mr. Kundra stated that the government will engage with industry to collaboratively develop standards and solutions for cloud interoperability, data portability, and security. [Emphasis added.]

You can read his full remarks here and his accompanying presentation here.

The Interop 2010 conference to be held 4/25 through 4/29/2010 in Las Vegas, NV will feature an Enterprise Cloud Summit chaired by Alistair Croll on 4/26/2010 from 8:30 AM to 4:30 PM PDT:

In just a few years, cloud computing has gone from a fringe idea for startups to a mainstream tool in every IT toolbox. The Enterprise Cloud Summit will show you how to move from theory to implementation. We'll cover practical cloud computing designs, as well as the standards, infrastructure decisions, and economics you need to understand as you transform your organization's IT. We'll also debunk some common myths about private clouds, security risks, costs, and lock-in.

On-demand computing resources are the most disruptive change in IT of the last decade. Whether you're deciding how to embrace them or want to learn from what others are doing, Enterprise Cloud Summit is the place to do it.

Invariably when new technology is introduced it causes an upheaval. When that technology has the power to change the way in which we architect networks and application infrastructure, it can be disruptive but beneficial. When that technology simultaneously requires that you abandon advances and best practices in architecture in order to realize those benefits, that’s not acceptable.

Virtualization at the server level is disruptive, but in a good way. It forces organizations to reconsider the applications deployed in their data center and turn a critical eye toward the resources available and how they’re partitioned across applications, projects, and departments. It creates an environment in which the very make-up of the data center can be re-examined, with the goal of making the network, storage, and application network infrastructure over which those applications are delivered more efficient.

Virtualization at the network layer is even more disruptive. From a network infrastructure perspective there are few changes required in the underlying infrastructure to support server virtualization, because the application and its behavior don’t really change when moving from a physical deployment to a virtual one. But the network, ah, the network does require changes when it moves from a physical to a virtual form factor. Scale, fault-tolerance, and availability of the network infrastructure – from storage to the application delivery network – are all impacted by the simple change from physical to virtual. In some cases this impact is a positive one, in others, it’s not so positive. Understanding how to take advantage of virtual network appliances such that core network characteristics such as fault-tolerance, reliability, and security are not negatively impacted is one of the key factors in the successful adoption of virtual network technology.

Combining virtualization of “the data center network” with the deployment of applications in a public cloud computing environment brings to the fore the core issues of lack of control and visibility in externalized environments. While the benefits of public cloud computing are undeniable (though perhaps not nearly as world-shaking as some would have us believe) the inclusion of externally controlled environments in the organization’s data center strategy will prove to have its challenges.

Many of these challenges can be addressed thanks to the virtualization of the network (despite the lack of choice and dearth of services available in today’s cloud computing offerings). …

UBM TechWeb presents Top [Media] Coverage Highlights from Cloud Connect with abstracts and links to articles related to the Cloud Connect 2010 conference held at the Santa Clara Convention Center on 3/15 to 3/18/2010. Following are a few recent examples:

IT Spending On Cloud Ratcheting Up, by Charles Babcock, April 5, 2010, InformationWeek: Market research for the venture capital firm, the Sand Hill Group, has concluded that cloud computing represents one of the largest new investment opportunities on the horizon. Rangaswami aired the report's conclusion at last month's Cloud Connect Conference and asked IBM's VP of Cloud Services Ric Telford what he thought: "I have no problem with those numbers (40% in three years; 70% in five) as long as you include the caveat, it could be any one of five delivery models."

Geo Tagged Cloud Zombies, by Oliver Marks, March 28, 2010, ZDNet: Rodney Joffe, Senior Vice President and Senior Technologist at Neustar, Inc. (which offers directory and clearinghouse services to large and small telecommunications service providers), spelled out some amazing realities in his talk 'Cloud Computing for Criminals' at the recent Cloud Connect conference in Santa Clara, California:

Agility, Not Savings, May Be The True Value Of The Cloud, by Robert Mullins, March 19, 2010, Network Computing: There are ways to calculate the Return On Investment (ROI) when moving IT from the data center to the cloud, but experts say the savings to the IT budget is only a fraction of the reason to do so. Analysts and proponents of cloud computing discussed calculating the total cost of ownership (TCO) and the ROI of moving to cloud computing at Cloud Connect, a three-day conference this week in Santa Clara, Calif.

The cloud's three key issues come into focus, by David Linthicum, March 19, 2010, InfoWorld: I'm writing this blog on the way back from Cloud Connect held this week in Santa Clara. It was a good show, all in all, and there was a who's-who in the world of cloud computing. I've really never seen anything like the hype around cloud computing, possibly because you can pretty much "cloudwash" anything, from disk storage to social networking. Thus, traditional software vendors are scrambling to move to the cloud, at least from a messaging perspective, to remain relevant. If I was going to name a theme of the conference, it would be "Ready or not, we're in the cloud."

We want to make it even easier for developers to build highly functional and architecturally complex applications on AWS. It turns out that applications of this type can often benefit from a publish/subscribe messaging paradigm. In such a system, publishers and receivers of messages are decoupled and unaware of each other's existence. The receivers (also known as subscribers) express interest in certain topics. The senders (publishers) can send a message to a topic. The message will then be immediately delivered or pushed to all of the subscribers to the topic.

The Amazon Simple Notification Service (SNS) makes it easy for you to build an application in this way. You'll need to know the following terms in order to understand how SNS works:

Topics are named groups of events or access points, each identifying a specific subject, content, or event type. Each topic has a unique identifier (URI) that identifies the SNS endpoint for publishing and subscribing.

Owners create topics and control all access to the topic. The owner can define the permissions for all of the topics that they own.

Subscribers are clients (applications, end-users, servers, or other devices) that want to receive notifications on specific topics of interest to them.

Publishers send messages to topics. SNS matches the topic with the list of subscribers interested in the topic, and delivers the message to each and every one of them. Here's how it all fits together:
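As an illustration of the roles just described – topics, owners, subscribers, and publishers – here is a minimal in-memory sketch of the publish/subscribe pattern. This is a conceptual model only, not the SNS API itself; the class and function names are hypothetical:

```python
class Topic:
    """A named access point; the owner controls it, subscribers register with it."""
    def __init__(self, name, owner):
        self.name = name          # unique identifier for the topic
        self.owner = owner        # the account that created and controls the topic
        self.subscribers = []     # delivery callbacks registered by subscribers

def subscribe(topic, callback):
    """A subscriber expresses interest in a topic by registering a callback."""
    topic.subscribers.append(callback)

def publish(topic, message):
    """A publisher sends a message; it is pushed to every subscriber of the topic.
    Publisher and subscribers are decoupled: neither knows of the other's existence."""
    for deliver in topic.subscribers:
        deliver(message)

# Example: two decoupled subscribers each receive the same published message.
received = []
orders = Topic("orders", owner="alice")
subscribe(orders, lambda m: received.append(("billing", m)))
subscribe(orders, lambda m: received.append(("shipping", m)))
publish(orders, "order #42 placed")
```

In the real service the callback would be an HTTP, email, or SQS endpoint rather than an in-process function, but the decoupling is the same.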

Jeff continues with a brief description of “what it takes to get started” with SNS.

Applications running in the Windows Azure cloud can use existing NTFS APIs to access a network attached durable drive. The durable drive is actually a Page Blob formatted as a single volume NTFS Virtual Hard Drive (VHD).

A post by Brad Calder, Windows Azure Drive Demo at MIX 2010 on the Windows Azure Storage Team Blog, shows how you create a virtual drive in Windows 7. You can then upload the drive for access by your cloud application. Your application specifies the storage account name and its secret key in the ServiceConfiguration.cscfg in the same way as it does on your development computer.

Then you can access the drive from code in your application by:

Initializing the drive cache so all processes and threads running under that instance can mount and manipulate drives.

Creating an object that refers to the drive.

Mounting the drive.

Using the drive in your applications to read from or write to a drive letter (e.g., X:\) that represents a durable NTFS volume for storing and accessing data.
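The four steps above can be modeled as a short conceptual sketch. Note that the class and method names below are hypothetical stand-ins for illustration only – the actual Windows Azure Drive API is a .NET library, and the blob URI shown is a made-up example:

```python
class DriveCache:
    """Step 1: initialize the drive cache once per instance so all processes
    and threads running under that instance can mount and manipulate drives."""
    initialized = False

    @classmethod
    def initialize(cls, size_mb):
        cls.initialized = True

class CloudDriveModel:
    """Steps 2 and 3: an object referring to the drive's Page Blob, then a mount."""
    def __init__(self, page_blob_uri):
        self.uri = page_blob_uri   # step 2: refer to the VHD Page Blob
        self.drive_letter = None

    def mount(self):
        if not DriveCache.initialized:
            raise RuntimeError("initialize the drive cache before mounting")
        self.drive_letter = "X:\\"  # step 3: mounting yields a drive letter
        return self.drive_letter

# Step 4: read from or write to the returned drive letter with ordinary NTFS file APIs.
DriveCache.initialize(size_mb=128)
drive = CloudDriveModel("http://myaccount.blob.core.windows.net/vhds/mydrive.vhd")
letter = drive.mount()
```

The key point the sketch captures is the ordering: the cache must be initialized before any mount, and only after mounting does the durable NTFS volume appear as a drive letter.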

You can use the Windows Azure Drive APIs in your Windows Azure application to:

James Hamilton’s Stonebraker on CAP Theorem and Databases post of 4/7/2010 continues his analysis of the NoSQL movement’s premises about the Consistency, Availability and Partitioning (CAP) theorem and eventual consistency:

Mike challenges this assertion, pointing out that some common database errors are not avoided by eventual consistency and CAP really doesn’t apply in these cases. If you have an application error, administrative error, or database implementation bug that loses data, then it is simply gone unless you have an offline copy. This, by the way, is why I’m a big fan of deferred delete. This is a technique where deleted items are marked as deleted but not garbage collected until some days or preferably weeks later. Deferred delete is not full protection, but it has saved my butt more than once and I’m a believer. See On Designing and Deploying Internet-Scale Services for more detail.

CAP and the application of eventual consistency doesn’t directly protect us against application or database implementation errors. And, in the case of a large scale disaster where the cluster is lost entirely, again, neither eventual consistency nor CAP offer a solution. Mike also notes that network partitions are fairly rare. I could quibble a bit on this one. Network partitions should be rare but net gear continues to cause more issues than it should. Networking configuration errors, black holes, dropped packets, and brownouts, remain a popular discussion point in post mortems industry-wide. I see this improving over the next 5 years but we have a long way to go. In Networking: the Last Bastion of Mainframe Computing, I argue that net gear is still operating on the mainframe business model: large, vertically integrated and expensive equipment, deployed in pairs. When it comes to redundancy at scale, 2 is a poor choice.

Mike’s article questions whether eventual consistency is really the right answer for these workloads. I made some similar points in “I love eventual consistency but…” In that posting, I argued that many applications are much easier to implement with full consistency and that full consistency can be practically implemented at high scale. In fact, Amazon SimpleDB recently announced support for full consistency. Apps needing full consistency are now easier to write and, where only eventual consistency is needed, it’s available as well.

Don’t throw full consistency out too early. For many applications, it is both affordable and helps reduce application implementation errors.
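The deferred-delete technique Hamilton advocates is straightforward to sketch: deletes only mark items with a tombstone, and a separate garbage-collection pass purges them once a retention window expires. A minimal sketch (the two-week window is an arbitrary choice; pick one to suit your recovery needs):

```python
import time

RETENTION_SECONDS = 14 * 24 * 3600   # e.g., two weeks before permanent removal

store = {}          # key -> value
tombstones = {}     # key -> time at which the item was "deleted"

def delete(key, now=None):
    """Mark the item deleted instead of removing it immediately."""
    tombstones[key] = now if now is not None else time.time()

def undelete(key):
    """Recovery path: a mistaken delete can simply be reversed."""
    tombstones.pop(key, None)

def garbage_collect(now=None):
    """Permanently remove items whose retention window has expired."""
    now = now if now is not None else time.time()
    for key, deleted_at in list(tombstones.items()):
        if now - deleted_at >= RETENTION_SECONDS:
            store.pop(key, None)
            del tombstones[key]

# A mistaken delete is recoverable right up until GC runs after the window.
store["row"] = "important data"
delete("row", now=0)
undelete("row")                         # saved: the data is still in the store
delete("row", now=0)
garbage_collect(now=RETENTION_SECONDS)  # now it is really gone
```

As Hamilton notes, this is not full protection (a bug that overwrites data bypasses it), but it turns the most common catastrophic operation, an erroneous delete, into a recoverable one.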

Mike Kelly shares his notes on David Robinson’s SQL Azure FireStarter session in a Windows Azure SQL Notes post of 4/6/2010, which begins:

Goal is to convince you that there is no difference between SQL Server and SQL Azure.

Irrespective of where your application lies, i.e., in the cloud or locally, you can simply connect to the SQL Azure database by replacing your local DB connection string with the SQL Azure connection string. The connection string for any SQL Azure database can be obtained in the "Server Administration" screen in your SQL Azure account by selecting the database and clicking the Connection String button under the Databases tab.
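For illustration, a typical swap looks like the following. The server, database, user, and password values here are placeholders; the real values come from the Server Administration screen described above:

```
-- Local SQL Server connection string
Server=localhost;Database=MyAppDb;Integrated Security=True;

-- SQL Azure connection string (note the user@server login form and required encryption)
Server=tcp:myserver.database.windows.net;Database=MyAppDb;User ID=myuser@myserver;Password=...;Trusted_Connection=False;Encrypt=True;
```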

How to resolve some of the common connectivity error messages that you would see while connecting to SQL Azure:

A transport-level error has occurred when receiving results from the server. (Provider: TCP Provider, error: 0 - An existing connection was forcibly closed by the remote host.)

System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. The statement has been terminated.

An error has occurred while establishing a connection to the server. When connecting to SQL Server 2005, this failure may be caused by the fact that under the default settings SQL Server does not allow remote connections

I've always been a data guy. I think data maintenance, sharing and analysis is the inspiration for almost all line-of-business software, and technology that makes any or all of it easier is key to platform success. That's why I've been interested in WCF Data Services (previously ADO.NET Data Services) since it first appeared as the technology code-named "Astoria." Astoria was based on a crisp idea: representing data in AtomPub and JSON formats, as REST Web services, with simple URI and HTTP verb conventions for querying and updating the data.

Astoria, by any name, has been very popular, and for good reason: It provides refreshingly simple access to data, using modern, well-established Web standards. Astoria provides a versatile abstraction layer over data access, but does so without the over-engineering or tight environmental coupling to which most data-access technologies fall prey. This elegance has enabled Microsoft to do something equally unusual: separate Astoria's protocol from its implementation and publish that protocol as an open standard. We learned that Microsoft did this at its Professional Developers Conference (PDC) this past November in Los Angeles, when Redmond officially unveiled the technology as Open Data Protocol (OData). This may have been one of Microsoft's smartest data-access plays, ever. …

I’ve recently started using Microsoft’s WCF Data Services which supports OData Services. What this means is that we can access resources by simply specifying a URI. This concept greatly simplified building an ORM layer on a web site, as well as creating the linkage between the server side data and the client side application, which in my case is usually a browser.

So, the issue this blog addresses is that if you form a URI with the parameter $top={anything}, your data will automatically be sorted. The OData documentation for $top basically says that, but it could be clearer.
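As a concrete illustration of the behavior, consider these example URIs against the MIX Speakers feed mentioned below (the LastName property is a hypothetical example; check the feed's metadata for actual property names):

```
# $top with no explicit $orderby: the service sorts by the entity key so
# that "top 5" is deterministic -- this is the automatic sorting described above
http://api.visitmix.com/OData.svc/Speakers?$top=5

# Supplying an explicit $orderby overrides the implicit key ordering
http://api.visitmix.com/OData.svc/Speakers?$top=5&$orderby=LastName
```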

We're writing a new Data Provider for Data Sync that can consume an OData feed. For example, below we're pulling in the Speakers data from the recent MIX event using the published OData feed http://api.visitmix.com/OData.svc/.

T-10 Media claims “the Service Bus can be used to build hybrid apps which span both on-premise and cloud services” in a The Hybrid Cloud and Azure [Blog] post of 4/7/2010:

There’s been a lot of buzz about the ‘hybrid’ cloud – the blending of on-premise services with cloud-based services. CloudKick recently launched CloudKick Hybrid, a tool for monitoring cloud and on-premise servers from a single console (see story here); Nimsoft, which has a similar monitoring tool, was recently acquired by CA for $350m; and hosting provider VoxTel recently announced unified admin/monitoring tools for its cloud and server offerings.

There is an undoubted need for a hybrid architecture for many larger corporations, since migrating existing apps to the cloud is not as simple as a lot of demos show and there is a perception (whether real or not) that data is less secure in the cloud. Enter hybrid apps – maintain the data on premise, or consume on-premise apps from a cloud service.

Of course it is possible to communicate between on-premise data sources or apps and cloud-based apps using SOAP/REST communication protocols; however, there are two major obstacles – discovering the service endpoints (since these may change due to dynamically assigned IPs) and navigating through firewalls. These problems can be overcome by allowing apps to selectively open ports, which is inherently insecure, or by using relay systems that sit between the firewall and the apps and act as a bridge; these systems tend to be very complicated and hard to implement.

The Azure Service Bus attempts to solve this issue by providing a service with which applications that need to communicate with each other can register. The requesting app is given a Service Bus endpoint to communicate with the data source/service app. Essentially the services are provided by service apps running behind the firewall, and the connection endpoints are provided by the Azure Service Bus. It should be noted that the Service Bus allows communication with non-.NET services, so Linux/UNIX-hosted apps can register with the Service Bus and be consumed by .NET apps.
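The registration-and-relay pattern just described can be modeled in a few lines. This is a conceptual sketch only – the real Service Bus uses its own relay protocol and .NET/REST APIs, and the namespace and service names here are made up:

```python
class ServiceBusModel:
    """Conceptual relay: services behind a firewall register outbound;
    clients are handed a public endpoint and never reach the firewall directly."""
    def __init__(self, namespace):
        self.namespace = namespace
        self.registry = {}   # service name -> handler callable

    def register(self, name, handler):
        """A service app behind the firewall registers and receives a public endpoint.
        Because registration is an outbound connection, no inbound port is opened."""
        self.registry[name] = handler
        return f"sb://{self.namespace}.servicebus.windows.net/{name}"

    def relay(self, name, request):
        """The bus forwards a client's request to the registered handler."""
        return self.registry[name](request)

# A hypothetical on-premise "orders" service registers; a cloud app calls it via the bus.
bus = ServiceBusModel("contoso")
endpoint = bus.register("orders", lambda req: f"processed {req}")
reply = bus.relay("orders", "order #7")
```

The sketch shows why the two obstacles above disappear: the endpoint is stable even if the service's IP changes, and the firewall only ever sees an outbound connection.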

Security is provided by Azure AppFabric Access Control, which applies user-defined rules to ensure security when an app obtains claims-based tokens via the STS provided by Access Control.

Thus the Service Bus can be used to build hybrid apps which span both on-premise and cloud services.

Microsoft is working not only on the imminent release of .NET Framework 4, but also on expanding support beyond the now-traditional Windows client and server operating system. In this regard, the Redmond giant is hard at work delivering .NET 4 support for its Cloud platform. The promise from the software giant is that customers leveraging Windows Azure will be able to start taking advantage of .NET 4 for their applications in mid-2010.

“As we announced at MIX 2010, .NET 4 will be available in Windows Azure within 90 days of .NET 4 RTM,” a member of the Windows Azure team stated. Microsoft’s next-generation development tools and platform are scheduled for release in the coming week. Visual Studio 2010, .NET Framework 4 and Silverlight 4 will all be officially launched on April 12 in a Las Vegas event.

This places availability of .NET Framework 4 RTW (release to web) support for Windows Azure sometime by mid-July 2010. The software giant could, of course, beat its own deadline, but, so far, it has chosen to give itself a little elbow room in order to make .NET 4 support on Windows Azure a reality.

Fact is that Windows Azure already features .NET Framework 4, but not the RTW milestone. “As part of our preparation for that, the latest operating system build available in Windows Azure contains the .NET 4 RC. Although you cannot use this build to run .NET 4 applications, please let us know if having .NET 4 RC in the build has any effect on your existing applications. One known effect you may see if you’re consuming generic ASP.NET performance counters is that they will report data only on .NET 4 applications. You can instead use the versioned performance counters as documented in KB article 2022138. As always, you can choose which build of the operating system your application will run on in Windows Azure,” the Windows Azure team member stated.

The Cloud platform version that Microsoft is referring to is Windows Azure Guest OS 1.2 (Release 201003-01). The Release went live earlier this week, more precisely on April 5th, 2010, and contains .NET Framework 4.0 RC support. However, as the company stated above, the Windows Azure development environment does not support the .NET 4.0 Framework at this point in time. The purpose of Windows Azure Guest OS 1.2 (Release 201003-01) is that customers test to see whether their applications and services will continue to run under normal parameters while using .NET Framework 4.0 libraries. …

As we announced at MIX 2010, .NET 4 will be available in Windows Azure within 90 days of .NET 4 RTM. As part of our preparation for that, the latest operating system build available in Windows Azure contains the .NET 4 RC. Although you cannot use this build to run .NET 4 applications, please let us know if having .NET 4 RC in the build has any effect on your existing applications.

One known effect you may see if you’re consuming generic ASP.NET performance counters is that they will report data only on .NET 4 applications. You can instead use the versioned performance counters as documented in KB article 2022138.

As always, you can choose which build of the operating system your application will run on in Windows Azure. See the MSDN documentation for details.

ISC is pleased to announce that Miami 311, a government transparency solution built with MapDotNet UX, has won first place in the Microsoft Windows US Public Sector Azure Contest. The application was selected from a field of thirteen entries by Internet voters and a panel within Microsoft.

Miami 311 is an online application supported by the Microsoft Azure cloud platform that allows Miami residents to report, monitor and analyze all non-emergency events that occur in the metropolitan area. If a citizen needs to report a non-emergency situation such as a pothole, street light outage or missed trash pickup, he must first call 311 to report the issue. Once the issue is reported, the citizen can login to the Miami 311 online tool to track the progress of the issue reported and also view the progress of other city-wide projects in the area. The intent of the site is to increase citizen access to city-wide information.

"MapDotNet UX at version 8.0 is 100% managed .NET code, which makes the entire server product and its functionality deployable to Azure, which runs in a 64-bit virtualized environment. This enables map tile rendering (using a high-speed WPF-based map renderer), spatial querying and spatial data editing all in Azure. This is much more than pushpins in the cloud," said Brian Hearn, MapDotNet Lead Architect. …

For a demo I gave today at the Windows Azure Firestarter event, I let anyone on the internet change my wallpaper. You too can set my wallpaper by pointing your browser to http://annoy.smarx.com. I’ll try to continue running on my laptop for the next few days, so any time my laptop’s on and online, you can set my wallpaper. (If my laptop’s not on, you can still see the web page, but you’ll get a strange XML message when you try to change my wallpaper.)

You can even get the code and let people set your wallpaper too (if you have Windows Azure and Service Bus accounts). The entire project took less than eight hours to develop and deploy.

(All videos, slides, and code from the event will be available on Mithun Dhar’s blog in the next few days.) …

Be sure to set all the right configuration settings (in ServiceConfiguration.cscfg as well as in the local listener’s app.config) to point to your own storage, CDN endpoint, and service bus namespace. Then just deploy the app and launch the local listener on your desktop/laptop.

Diego Cardenas, a Solutions Architect at Go Airlines in Brazil, says that they chose Windows Azure to use Virtual Machine for PS access, thus avoiding the additional costs of maintaining services on premise. Go Airlines is also very excited about the new Data Sync feature of Windows Azure.

I got interviewed and quoted for an article on Windows Azure cost estimation. One of the key points to remember is that, if your code is deployed to Windows Azure, you’re still getting billed even if it isn’t running.
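To see why this matters, a back-of-the-envelope estimate helps. The $0.12/hour figure below is assumed to be the published small-instance compute rate of the time; substitute current pricing for a real estimate:

```python
# Compute hours are billed per deployed instance, whether or not the code is running.
instances = 2                 # e.g., two small role instances left deployed but idle
rate_per_hour = 0.12          # USD per instance-hour (assumed circa-2010 rate)
hours_in_month = 24 * 30

monthly_cost = instances * rate_per_hour * hours_in_month
print(f"${monthly_cost:.2f}")   # what an idle deployment still costs per month
```

The practical takeaway is to delete (not merely stop) deployments you aren't using.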

In K2 Advisory's report "Cloud Computing: A Step Change for IT Services," which analyses the developing market for cloud services, the report's author Dr Katy Ring, Director, K2 Advisory, says that the benefits of Cloud Computing can provide the business flexibility to help companies operate more effectively in the current economic climate. However, the report finds that adoption rates by smaller organisations of public cloud and SaaS services from vendors such as Amazon and Google will outpace the adoption rate of enterprises by a factor of two. By 2015 for organisations below 1,000 employees, a third to half of IT spend is likely to be with public cloud providers.

Commenting on the findings, Dr Ring said, "In five years' time the provision of IT to mid-sized and smaller businesses (of less than 1000 employees) will be quite distinct in terms of cloud adoption from enterprises. Indeed, it could be argued that small and mid-sized business use of cloud computing will enhance their agility and their ability to bounce back more quickly from the recession of 2009/10. Many Western enterprises, however, will continue to find that their IT systems are increasingly sclerotic, constrained by client-server ERP systems." …

K2 Advisory’s report states that the biggest challenges for enterprise adoption of cloud computing lie with existing investment in legacy systems, and with the potential impact on the internal IT department. Ultimately CIOs suspect that the rise of cloud computing heralds the demise of retaining internal technological expertise. IT services will be delivered by external suppliers who will be managed with (yet to be) established procurement processes. As an increasing amount of an IT group’s effort is spent on external providers delivering systems integration and managed services, this can be seen as evidence that the traditional enterprise IT we’re familiar with is disappearing. In this world, a CIO is a vendor management officer, and most of the technology is taken care of by external suppliers.

K2 Advisory is part of Sift Media, which runs the annual Business Cloud Summit in London. This year's event will be held on November 30th 2010. For more details on the Summit go to www.businesscloud9.com.

The U.S. federal government spends nearly $76 billion each year on information technology, and $20 billion of that is devoted to hardware, software, and file servers (Alford and Morton, 2009). Traditionally, computing services have been delivered through desktops or laptops operated by proprietary software. But new advances in cloud computing have made it possible for public and private sector agencies alike to access software, services, and data storage through remote file servers. With the number of federal data centers having skyrocketed from 493 to 1,200 over the past decade (Federal Communications Commission, 2010), it is time to more seriously consider whether money can be saved through greater reliance on cloud computing.

Cloud computing refers to services, applications, and data storage delivered online through powerful file servers. As pointed out by Jeffrey Rayport and Andrew Heyward (2009), cloud computing has the potential to produce “an explosion in creativity, diversity, and democratization predicated on creating ubiquitous access to high-powered computing resources.” By freeing users from being tied to desktop computers and specific geographic locations, clouds revolutionize the manner in which people, businesses, and governments may undertake basic computational and communication tasks (Benioff, 2009). In addition, clouds enable organizations to scale up or down to the level of needed service so that people can optimize their needed capacity. Fifty-eight percent of private sector information technology executives anticipate that “cloud computing will cause a radical shift in IT and 47 percent say they’re already using it or actively researching it” (Forrest, 2009, p. 5).

To evaluate the possible cost savings a federal agency might expect from migrating to the cloud, in this study I review past studies, undertake case studies of government agencies that have made the move, and discuss the future of cloud computing. I found that the agencies generally saw between 25 and 50 percent savings in moving to the cloud. For the federal government as a whole, this translates into billions in cost savings, depending on the scope of the transition. Many factors go into such assessments, such as the nature of the migration, a reliance on public versus private clouds, the need for privacy and security, the number of file servers before and after migration, the extent of labor savings, and file server storage utilization rates.

West continues with a description of “five steps [to] be undertaken in order to improve efficiency and operations in the public sector.” See the Cloud Computing Events section for more details on the event.

The Brookings Institution describes itself as follows:

The Brookings Institution is a nonprofit public policy organization based in Washington, DC. Our mission is to conduct high-quality, independent research and, based on that research, to provide innovative, practical recommendations that advance three broad goals:

Strengthen American democracy;

Foster the economic and social welfare, security and opportunity of all Americans; and

Secure a more open, safe, prosperous and cooperative international system.

Brookings is proud to be consistently ranked as the most influential, most quoted and most trusted think tank.

Most enterprises lack three essential ingredients to ensure that sensitive information stored with cloud computing hosts remains secure: procedures, policies and tools. So says a joint survey called “Information Governance in the Cloud: A Study of IT Practitioners” from Symantec Corp. and Ponemon Institute.

“Cloud computing holds a great deal of promise as a tool for providing many essential business services, but our study reveals a disturbing lack of concern for the security of sensitive corporate and personal information as companies rush to join in on the trend,” said Dr. Larry Ponemon, chairman and founder of the Ponemon Institute.

Where is cloud security training?

Despite the ongoing clamor about cloud security and the anticipated growth of cloud computing, a meager 27 percent of those surveyed said their organizations have developed procedures for approving cloud applications that use sensitive or confidential information. Other surprising statistics from the study include:

Only 20% of information security teams are regularly involved in the decision-making process

25% of information security teams aren’t involved at all

Only 30% evaluate cloud computing vendors before deploying their products

Only 23% require proof of security compliance

A full 75% believe cloud computing migration occurs in a less-than-ideal manner

IT vendors and suppliers, including the survey sponsor, Symantec, are lining up to help fill the evident gaps in enterprise cloud security tools, standards, best practices and culture adaptation. Symantec is making several recommendations for beefing up cloud security, beginning with ensuring that policies and procedures clearly state the importance of protecting sensitive information stored in the cloud.

“There needs to be a healthy, open governance discussion around data and what should be placed into the cloud,” says Justin Somaini, Chief Information Security Officer at Symantec. “Data classification standards can help with a discussion that’s wrapped around compliance as well as security impacts. Beyond that, it’s how to facilitate business in the cloud securely. This cuts across all business units.” …

David Linthicum claims “Tech firms and advocacy groups come together to seek new regulations -- that could turn out to be disastrous” in his Proposed new cloud privacy rules could backfire post of 4/7/2010 to InfoWorld’s Cloud Computing blog:

Privacy advocacy groups and tech vendors -- the Electronic Frontier Foundation, the ACLU, eBay, Google, and Microsoft -- are urging Congress to revise privacy laws to regulate user information on the cloud. The vendors support the changes because they fear that without regulation and privacy guarantees, people could become uncomfortable with the cloud. While reasonable in concept, the ideas may not work.

The fact of the matter is that the United States has not updated its privacy laws since 1986. With the rapid rise of cloud computing and the fact that more and more sensitive data will be stored off-premise, many believe it's high time to revisit those rules to accommodate today's reality.

But I always get a bit nervous when software specialists, now involved with the cloud, work with the government to create new laws. Here are a few of my issues.

First, regulations have a tendency to stultify innovation as providers make sure they adhere to these new and typically confusing rules. We've seen this issue with the financial reporting guidelines that began to appear earlier this decade, and the proposed cloud privacy laws will initially have similar results.

Second, any regulations that dictate privacy requirements and mechanisms will be outdated pretty much by the time they pass Congress. Other issues will arise, and unless there is a dedicated agency constantly updating the regulation, matters will quickly become dysfunctional -- but please don't create another dedicated agency for this!

Finally, it's a new world order in the cloud. These regulations won't extend to other countries. However, other countries will follow with their own regulations, which will make the situation even more onerous.

So what should be done? The real work needs to be carried out by industry, meaning cloud providers, IT pros, and users -- you and me. We need to come together around detailed requirements regarding privacy and security, and we have to stop writing conceptual white papers. This means setting lines in the sand around how data is encrypted at rest and in flight, what access controls need to be in place, and detailed enabling standards to make all of this work together.

It's pretty simple, unless you get the government involved -- then expenses increase and productivity decreases.

The guys from SearchCloudComputing gave me a ring and we chatted about CloudAudit. The interview that follows is a distillation of that discussion and goes a long way toward answering many of the common questions surrounding CloudAudit/A6. You can find the original here.

What are the biggest challenges when auditing cloud-based services, particularly for the solution providers?

Christofer Hoff: One of the biggest issues is their lack of understanding of how the cloud differs from traditional enterprise IT. They’re learning as quickly as their customers are. Once they figure out what to ask and potentially how to ask it, there is the issue surrounding, in many cases, the lack of transparency on the part of the provider to be able to actually provide consistent answers across different cloud providers, given the various delivery and deployment models in the cloud.

How does the cloud change the way a traditional audit would be carried out?

Hoff: For the most part, a good amount of the questions that one would ask specifically surrounding the infrastructure is abstracted and obfuscated. In many cases, a lot of the moving parts, especially as they relate to the potential to be competitive differentiators for that particular provider, are simply a black box into which operationally you’re not really given a lot of visibility or transparency. If you were to host in a colocation provider, where you would typically take a box, the operating system and the apps on top of it, you’d expect, given who controls what and who administers what, to potentially see a lot more, as well as a lot more standardization of those deployed solutions, given the maturity of that space.

How did CloudAudit come about?

Hoff: I organized CloudAudit. We originally called it A6, which stands for Automated Audit Assertion Assessment and Assurance API. And as it stands now, it’s less in its first iteration about an API, and more specifically just about a common namespace and interface by which you can use simple protocols with good authentication to provide access to a lot of information that essentially can be automated in ways that you can do all sorts of interesting things with.
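The "common namespace and interface" idea Hoff describes can be sketched in miniature: a provider exposes audit assertions at hierarchical, well-known names so that auditors' tools can retrieve them automatically rather than via questionnaires. The namespace keys and values below are purely illustrative, not the actual A6/CloudAudit namespace:

```python
# Toy model of a CloudAudit-style assertion namespace. In practice the
# provider would serve these over HTTP with proper authentication; here
# a dict stands in for that endpoint. All keys/values are hypothetical.

ASSERTIONS = {
    "org/cloudaudit/compliance/iso27001/scope": "All production regions",
    "org/cloudaudit/compliance/iso27001/cert": "cert-2010-0042.pdf",
    "org/cloudaudit/security/encryption-at-rest": "AES-256",
}

def query(namespace_prefix):
    """Return every assertion under a namespace prefix, as an automated
    audit tool might after fetching the provider's published namespace."""
    return {key: value for key, value in ASSERTIONS.items()
            if key.startswith(namespace_prefix)}

# An auditor asks one structured question instead of emailing a PDF:
print(query("org/cloudaudit/compliance/iso27001/"))
```

The point of the design is that the *names* are standardized across providers, so the same query works against any of them; only the answers differ.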

David Kearns asserts “Yale University and Canadian Privacy Commissioner offer negative -- and misinformed -- views on cloud computing” in his Clouded views on privacy post of 4/2/2010 to NetworkWorld’s Security blog:

Privacy and cloud computing have recently been in the news, with stories coming out of academia (Yale University) and government oversight agencies (Canadian Privacy Commissioner). Both, in my view, got it wrong.

First up, and easiest to deal with, is Yale. George Bush's alma mater recently decided to adopt Google Applications for Education, which would include changing from Horde e-mail to Gmail. (See the Yale Daily News story here.) This IT decision has been roundly denounced by some faculty members, who screamed loud enough to at least postpone the switchover.

Just what were their objections?

"Google stores every piece of data in three centers randomly chosen from the many it operates worldwide in order to guard the company's ability to recover lost information -- but that also makes the data subject to the vagaries of foreign laws and governments," according to one faculty member. I'd imagine, of course, that the faculty and students currently have no idea where their data is stored. Hopefully the IT department at least has a disaster-recovery plan that includes off-site storage of data. …

Dave concludes:

Privacy and security are best arrived at through well-negotiated contracts between informed parties, not through the agenda-wielding of ivory tower proselytizers. Well, usually. But, as we've learned over and over again, it isn't the technology that's the problem -- it's the people and the politics.

Next issue we'll venture 700 km north of Yale to see how Canada's Privacy Commissioner tackles the cloud.

Christine Jacobs, Communications Officer, Governance Studies for the Brookings Institution announced in an e-mail this morning:

Earlier today at Brookings, the federal government’s chief information officer, Vivek Kundra, spoke about how the government is leveraging cloud computing to deliver results for the American people.

Mr. Kundra also announced that the National Institute of Standards and Technology will host a “Cloud Summit” on May 20, with government agencies and the private sector. The Summit will introduce NIST efforts to lead the definition of the Federal Government’s requirements for cloud computing, key technical research, and United States standards development. Furthermore, Mr. Kundra stated that the government will engage with industry to collaboratively develop standards and solutions for cloud interoperability, data portability, and security. [Emphasis added.]

You can read his full remarks here and his accompanying presentation here.

The Interop 2010 conference to be held 4/25 through 4/29/2010 in Las Vegas, NV will feature an Enterprise Cloud Summit chaired by Alistair Croll on 4/26/2010 from 8:30 AM to 4:30 PM PDT:

In just a few years, cloud computing has gone from a fringe idea for startups to a mainstream tool in every IT toolbox. The Enterprise Cloud Summit will show you how to move from theory to implementation. We'll cover practical cloud computing designs, as well as the standards, infrastructure decisions, and economics you need to understand as you transform your organization's IT. We'll also debunk some common myths about private clouds, security risks, costs, and lock-in.

On-demand computing resources are the most disruptive change in IT of the last decade. Whether you're deciding how to embrace them or want to learn from what others are doing, Enterprise Cloud Summit is the place to do it.

Invariably when new technology is introduced it causes an upheaval. When that technology has the power to change the way in which we architect networks and application infrastructure, it can be disruptive but beneficial. When that technology simultaneously requires that you abandon advances and best practices in architecture in order to realize those benefits, that’s not acceptable.

Virtualization at the server level is disruptive, but in a good way. It forces organizations to reconsider the applications deployed in their data center, turn a critical eye toward the resources available and how they’re partitioned across applications, projects, and departments. It creates an environment in which the very make-up of the data center can be re-examined with the goal of making more efficient the network, storage, and application network infrastructure over which those applications are delivered.

Virtualization at the network layer is even more disruptive. From a network infrastructure perspective, few changes are required in the underlying infrastructure to support server virtualization, because the application and its behavior don’t really change when moving from a physical deployment to a virtual one. But the network, ah, the network does require changes when it moves from a physical to a virtual form factor. The scale, fault-tolerance, and availability of the network infrastructure – from storage to the application delivery network – are impacted by the simple change from physical to virtual. In some cases this impact is positive; in others, it’s not. Understanding how to take advantage of virtual network appliances such that core network characteristics such as fault-tolerance, reliability, and security are not negatively impacted is one of the key factors in the successful adoption of virtual network technology.

Combining virtualization of “the data center network” with the deployment of applications in a public cloud computing environment brings to the fore the core issues of lack of control and visibility in externalized environments. While the benefits of public cloud computing are undeniable (though perhaps not nearly as world-shaking as some would have us believe) the inclusion of externally controlled environments in the organization’s data center strategy will prove to have its challenges.

Many of these challenges can be addressed thanks to the virtualization of the network (despite the lack of choice and dearth of services available in today’s cloud computing offerings). …

UBM TechWeb presents Top [Media] Coverage Highlights from Cloud Connect with abstracts and links to articles related to the Cloud Connect 2010 conference held at the Santa Clara Convention Center on 3/15 to 3/18/2010. Following are a few recent examples:

IT Spending On Cloud Ratcheting Up by Charles Babcock, April 5, 2010, InformationWeek: Market research for the venture capital firm, the Sand Hill Group, has concluded that cloud computing represents one of the largest new investment opportunities on the horizon. Rangaswami aired the report's conclusion at last month's Cloud Connect Conference and asked IBM's VP of Cloud Services Ric Telford what he thought: "I have no problem with those numbers (40% in three years; 70% in five) as long as you include the caveat, it could be any one of five delivery models."

Geo Tagged Cloud Zombies by Oliver Marks, March 28, 2010, ZDNet: Rodney Joffe, Senior Vice President and Senior Technologist at Neustar, Inc. (which offers directory and clearinghouse services to large and small telecommunications service providers), spelled out some amazing realities in his talk 'Cloud Computing for Criminals' at the recent Cloud Connect conference in Santa Clara, California.

Agility, Not Savings, May Be The True Value Of The Cloud by Robert Mullins, March 19, 2010, Network Computing: There are ways to calculate the return on investment (ROI) when moving IT from the data center to the cloud, but experts say the savings to the IT budget is only a fraction of the reason to do so. Analysts and proponents of cloud computing discussed calculating the total cost of ownership (TCO) and the ROI of moving to cloud computing at Cloud Connect, a three-day conference this week in Santa Clara, Calif.

The cloud's three key issues come into focus by David Linthicum, March 19, 2010, InfoWorld: I'm writing this blog on the way back from Cloud Connect, held this week in Santa Clara. It was a good show, all in all, and there was a who's-who in the world of cloud computing. I've really never seen anything like the hype around cloud computing, possibly because you can pretty much "cloudwash" anything, from disk storage to social networking. Thus, traditional software vendors are scrambling to move to the cloud, at least from a messaging perspective, to remain relevant. If I were going to name a theme of the conference, it would be "Ready or not, we're in the cloud."

We want to make it even easier for developers to build highly functional and architecturally complex applications on AWS. It turns out that applications of this type can often benefit from a publish/subscribe messaging paradigm. In such a system, publishers and receivers of messages are decoupled and unaware of each other's existence. The receivers (also known as subscribers) express interest in certain topics. The senders (publishers) can send a message to a topic. The message will then be immediately delivered or pushed to all of the subscribers to the topic.

The Amazon Simple Notification Service (SNS) makes it easy for you to build an application in this way. You'll need to know the following terms in order to understand how SNS works:

Topics are named groups of events or access points, each identifying a specific subject, content, or event type. Each topic has a unique identifier (URI) that identifies the SNS endpoint for publishing and subscribing.

Owners create topics and control all access to the topic. The owner can define the permissions for all of the topics that they own.

Subscribers are clients (applications, end-users, servers, or other devices) that want to receive notifications on specific topics of interest to them.

Publishers send messages to topics. SNS matches the topic with the list of subscribers interested in the topic, and delivers the message to each and every one of them. Here's how it all fits together:
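The decoupling that SNS provides can be illustrated with a minimal in-process sketch of the publish/subscribe model. The class and method names below are illustrative only, not the real SNS API; in production you would call the SNS web service rather than a local object:

```python
# Minimal in-process sketch of the pub/sub paradigm behind SNS.
# Publishers and subscribers share only the topic, never each other.

class Topic:
    """A named channel to which subscribers attach and publishers send."""
    def __init__(self, name):
        self.name = name
        self._subscribers = []          # callables invoked on each message

    def subscribe(self, callback):
        """Register a subscriber; it receives every future message."""
        self._subscribers.append(callback)

    def publish(self, message):
        """Push the message to all current subscribers immediately."""
        for deliver in self._subscribers:
            deliver(message)

# Usage: two independent subscribers on one topic, one publish call.
received = []
orders = Topic("orders")
orders.subscribe(lambda m: received.append(("email", m)))
orders.subscribe(lambda m: received.append(("sms", m)))
orders.publish("order-123 shipped")
print(received)
```

Note that the publisher never learns who the subscribers are or how many there are; adding a third delivery channel requires no change to the publishing code, which is the property that makes the paradigm attractive for loosely coupled cloud applications.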

Jeff continues with a brief description of “what it takes to get started” with SNS.

The dual Web role application has been running in Microsoft's South Central US (San Antonio) data center since September 2009. I believe it is the oldest continuously running Windows Azure application.

About Me

I'm a Windows Azure Insider, a retired Windows Azure MVP, the principal developer for OakLeaf Systems and the author of 30+ books on Microsoft software. The books have more than 1.25 million English copies in print and have been translated into 20+ languages.

Full disclosure: I make part of my livelihood by writing about Microsoft products in books and for magazines. I regularly receive free evaluation software from Microsoft and press credentials for Microsoft Tech•Ed and PDC. I'm also a member of the Microsoft Partner Network.