HTTP downloads of the two chapters are available from the book's Code Download page; these chapters will be updated for the November CTP in January 2010. Content for managing DataHubs will be added as Microsoft releases more details on data synchronization services for SQL Azure and Windows Azure.

As always, every time we add a new storage provider integration, we compare it to existing ones, such as Amazon S3.

The test is simple: I take a 27 MB zip file, drag and drop it into a folder under Amazon S3, and watch the upload progress in the Gladinet Task Manager, timing it from beginning to end. Then I repeat the same thing with Windows Azure Blob Storage.

It is very interesting that the results are so close, given that the two cloud storage providers are completely different: the servers hosting the data are different and the IP routes are different. There must be some server-side mechanism that throttles the speed of a single connection to a certain limit. Perhaps an expert on the cloud server side can explain this. …

A speed test shows my upload speed to a random server in the US is 3.97 Mbit/s (~490 KB/s). …
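
To put those numbers in perspective: at ~490 KB/s, a 27 MB file should take roughly 27,648 KB ÷ 490 KB/s ≈ 56 seconds if the local uplink, rather than either provider, is the bottleneck. Here is a minimal sketch of timing such an upload from Python, assuming a hypothetical blob URL that already carries a Shared Access Signature (the account, container, and file names are invented):

```python
import time
import urllib.request

# Hypothetical blob URL; a real one would carry a Shared Access Signature
# (or the request would be signed with the storage account key).
URL = "http://myaccount.blob.core.windows.net/uploads/test.zip?<sas-token>"

def time_upload(url, path):
    data = open(path, "rb").read()
    req = urllib.request.Request(url, data=data, method="PUT",
                                 headers={"x-ms-blob-type": "BlockBlob"})
    start = time.time()
    urllib.request.urlopen(req)            # blocks until the PUT completes
    elapsed = time.time() - start
    print("%.1f s, %.0f KB/s" % (elapsed, len(data) / 1024.0 / elapsed))

time_upload(URL, "test.zip")
```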

Jerry is a founder of Gladinet.

• Jerry Huang asserts “I haven't seen a Azure Storage Client on Mac yet but I guess the PHP and Ruby clients can cover those on Mac and Linux” in his Windows Azure Storage Windows Clients post of 1/19/2010:

As the Windows Azure Platform makes the transition from public preview to full production, here is a list of Windows Azure Storage clients to help you leverage Azure Storage:

Azure Storage Explorer is a useful GUI tool for inspecting and altering the data in your Azure cloud storage projects, including the logs of your cloud-hosted applications. All three types of cloud storage can be viewed: blobs, queues, and tables. Free, Windows (.NET/WPF), open source.

These tools overlap and also complement each other in functionality. For example, you can use Gladinet Cloud Desktop to map a drive letter for drag-and-drop and in-place editing, use Cloud Storage Studio to fine-tune the properties of each blob, and use other tools to cover the functionality gaps in between. …
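
All of these clients wrap the same REST interface, which is why PHP and Ruby libraries can cover the Mac and Linux gap. As a minimal illustration of that interface, listing the blobs in a container that permits public (anonymous) read access takes a single unauthenticated GET; the account and container names below are hypothetical, and a private container would require a signed request:

```python
import urllib.request

# List Blobs operation against a hypothetical account/container that allows
# public (anonymous) read access; private containers need a signed request.
url = ("http://myaccount.blob.core.windows.net/mycontainer"
       "?restype=container&comp=list")
print(urllib.request.urlopen(url).read().decode("utf-8"))  # XML list of blobs
```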

Today, SQL Server 2008 R2 received an official release date. It will be listed on Microsoft’s May price list, and will be available by May 2010.

SQL Server 2008 R2 showcases Microsoft’s continued commitment to business intelligence and mission-critical workloads. Since we made this release available as a Community Technology Preview (CTP) in August 2009, it has been well-received by the community with more than 150,000 downloads. …

Customers with Software Assurance can upgrade to SQL Server 2008 R2 and take advantage of the new features without incurring additional licensing costs.

Also, Vinod Kumar Jagannathan, who I believe is a Program Manager at Microsoft India, reports in a comment to this post that the problem is an issue with the 11/2009 CTP and will be fixed in later SSMS 2008 R2 CTPs.

What if you could get all the benefits of distributed, on-premises application databases AND hosted cloud-based databases for a real Software + Services implementation? In this video, Hilton Giesenow, host of The MOSS Show SharePoint podcast (http://www.TheMossShow.com/) shows us how to set up a powerful and easy-to-use synchronisation model between a local Microsoft SQL Server database and a Microsoft SQL Azure database with the Microsoft Sync Framework tools for SQL Azure.

About two weeks ago Chris Messina wrote a post titled OpenID Connect in which he argued for a Facebook Connect-style technology built on OpenID. He describes the technology as follows:

So, to summarize:

for the non-tech, uninitiated audiences: OpenID Connect is a technology that lets you use an account that you already have to sign up, sign in, and bring your profile, contacts, data, and activities with you to any compatible site on the web.

for techies: OpenID Connect is OpenID rewritten on top of OAuth WRAP using service discovery to advertise Portable Contacts, Activity Streams, and any other well known API endpoints, and a means to automatically bootstrap consumer registration and token issuance.

This is something I brought up over a year ago in my post Some Thoughts on OpenID vs. Facebook Connect. The fact is that OpenID by itself is simply not as useful as Facebook Connect. The former lets me sign in to participating sites with my existing credentials, while the latter lets me sign in, share content with my social network, personalize, and find my friends on participating sites using my Facebook identity.

As I mentioned in my previous post, there are many pieces of different “Open brand” technologies that can be pieced together to create something similar to Facebook Connect, such as OpenID + OpenID Attribute Exchange + Portable Contacts + OAuth WRAP + Activity Streams. However, no one has put together a coherent package that ties all of these together as a complete end-to-end solution. This isn’t helped by the fact that these specs are at varying levels of maturity and completion.
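
To make the "pieced together" point concrete, here is a hedged sketch of the last step in such a stack: once discovery and OAuth WRAP token issuance have happened, pulling a user's contacts is just an authorized GET against a Portable Contacts endpoint. The endpoint URL and token below are hypothetical:

```python
import json
import urllib.request

# Hypothetical Portable Contacts endpoint and OAuth WRAP access token;
# in practice both would come from discovery and the token-issuance step.
ENDPOINT = "https://provider.example/portablecontacts/@me/@all"
TOKEN = "wrap-access-token"

req = urllib.request.Request(
    ENDPOINT, headers={"Authorization": 'WRAP access_token="%s"' % TOKEN})
feed = json.load(urllib.request.urlopen(req))
for entry in feed.get("entry", []):   # Portable Contacts JSON response format
    print(entry.get("displayName"))
```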

An additional area of challenge for establishing an eventual “plug-and-play” national health information system pertains to laboratory data. Incorporating clinical lab-test results into EHRs as structured data is one of the Meaningful Use criteria (criterion 10) – physicians must be able to demonstrate that >50% of clinical lab tests that were ordered (whose results were numeric or +/-) are incorporated as structured data.

In order to achieve this, a common “language” must be implemented by all laboratories, so that EHRs can import these results in a systematic fashion. The vocabulary specified in the Certification documents is a coding system called LOINC. There is a standardized LOINC code for each lab test type, specimen type, and specific methodology – about 40,000 different LOINC codes are currently defined for lab tests. …
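
To illustrate what "structured data" means here, a lab result keyed by its LOINC code might be represented roughly as follows (2345-7 is the LOINC code for serum/plasma glucose; the patient values are invented):

```python
# A sketch of a structured lab result keyed by its LOINC code.
# 2345-7 = "Glucose [Mass/volume] in Serum or Plasma"; patient values invented.
lab_result = {
    "loinc_code": "2345-7",
    "test_name": "Glucose [Mass/volume] in Serum or Plasma",
    "value": 92,
    "units": "mg/dL",
    "reference_range": "70-99",
    "collected": "2010-01-18",
}

# Because the code is standardized, any EHR can file this result under
# "glucose" without parsing free text from the sending laboratory.
print(lab_result["loinc_code"], lab_result["value"], lab_result["units"])
```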

Dr. Rowley continues with a description of “two issues [that] emerged when trying to implement standardization around LOINC codes.” LOINC codes should be a candidate for inclusion in the Microsoft Codename “Dallas” databases.

In this virtual lab, you will create a fully functional Windows Azure application from scratch. In the process, you will become familiar with several important components of the Windows Azure architecture including Web Roles, Worker Roles, and Windows Azure storage. You will also learn how to integrate a Windows Azure application with Windows Live ID authentication, and you’ll learn how to store relational data in the cloud using SQL Azure.

Update: If you are returning to continue the Windows Azure virtual lab and you started it before January 6, 2010, please note that the lab has been updated to reflect recent API changes. The exercises as shown here reflect the latest SDK (November 2009).

To reflect the commercial availability of Windows Azure and the Windows Azure Offers, we went ahead and update[d] a couple of simple step-by-step documents to help developers get their feet wet with Windows Azure.

The much ballyhooed Health Information Exchange (HIE) in the state of California, CalRHIO, has raised the white flag, dismissing its troops and sending home its arms supplier (Medicity). Despite its founding five years ago, the support of some significant organizations (e.g., United Health Group, Cisco, HP, and the California Hospital Assoc.), spending some $7M to date, and launching a major roadshow in March last year that included the go-live of 23 institutions in Orange County in October, CalRHIO did not get the support of California’s Health and Human Services (CHHS) to be the state-designated entity for overseeing ARRA funding for state HIEs.

Based on an article in California Healthline, a number of other organizations had serious enough concerns with CalRHIO to start their own organization, the California eHealth Collaborative (CAeHC). What is surprising here is that one of those who called CalRHIO’s operating model into question was its former CEO, Lori Hack, who is now a board member of the competing CAeHC. …

John concludes:

[P]robably the clearest message here is that the governance issue of HIEs is extremely political, especially when a boat-load of federal Stimulus dollars are at stake. The CalRHIO fiasco is unlikely to be the last one we’ll hear of over the next 3-9 months.

I consider “extremely political” an understatement, but otherwise I agree with his conclusion.

You might have noticed how hard it is to obtain your own health care history. Most medical records are written on paper. This is changing, but slowly. Even forward-looking doctors, hospitals, and pharmacies who are converting to electronic records have a hard time integrating with each other, since each IT system is largely independent of other systems. What you, the patient, would like to see is all of your history together in one place, regardless of who the provider was or where the care was given.

One of the tools that show promise in moving us towards that goal is the PHR, or Personal Health Record. While the concept of PHR has been around a long time, a relatively new idea, the online PHR, gained a big boost in popularity when Google and Microsoft each announced plans to provide PHR applications, aka Patient Portals, in 2007.

In a series of articles, we will explore online Personal Health Records and see how they might benefit both you and your care providers. We’ll take a look at how the two major business players architected their PHR applications, and finally dive deeper into how each of these solutions can be integrated into an existing health care IT enterprise. …

Here are the topics of this and the future articles:

This introduction to PHR

Google Health - what it offers, how it works, approaches to integration

Google Health in an SOA as orchestrated with WebSphere Process Server

Microsoft HealthVault - what it offers, how it works, approaches to integration

Microsoft HealthVault in an SOA as orchestrated with Microsoft BizTalk

Dom Green explains how to run a Memcached cache in worker roles on either the Developer Fabric or the Cloud Fabric in his Windows Azure Memcached-ed post of 1/17/2010:

Memcached is a distributed cache used to help speed up large-scale web applications by taking pressure off the database. Memcached is used by many of the internet’s biggest sites, including Twitter, Wikipedia, and YouTube, to name just a few.

A distributed cache is one of the things that I’ve been hoping to see released for Windows Azure for quite a while, and I am hoping that AppFabric Caching will make the move to the cloud in the coming year. However, until that happens I was determined to find a way to get a distributed cache and this great Windows Azure Memcached sample showed me how. …

After installing and setting up Memcached, you will be able to cache any data, including data retrieved from your database, so that the next time you need it you can get it from the cache without re-querying the database, reducing the pressure on the database and earning the love of your DBA.
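
The Azure sample itself is .NET-based, but the cache-aside pattern Dom describes looks the same in any language. Here is a sketch using the classic Python memcached client, with a hypothetical key scheme and database query:

```python
import memcache  # python-memcached client; assumes a memcached node on :11211

mc = memcache.Client(["127.0.0.1:11211"])

def get_product(product_id):
    key = "product:%d" % product_id
    product = mc.get(key)                     # 1. try the cache first
    if product is None:
        product = query_database(product_id)  # 2. cache miss: hit the database
        mc.set(key, product, time=300)        # 3. cache it for five minutes
    return product

def query_database(product_id):
    # Placeholder for the real database query.
    return {"id": product_id, "name": "widget"}

print(get_product(42))  # second call for 42 would be served from the cache
```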

You can download the sample code for Windows Azure from the CodePlex download page; you then need to download a Windows-friendly version of Memcached (here is where I got mine). With the sample code from CodePlex, just add the memcached exe to your worker roles. You will now be able to run the sample code either on the dev fabric or in the cloud. …

When Windows Azure was first released, only Web Roles could have an externally facing endpoint. Since PDC 2009, Worker Roles can also have an externally facing endpoint, allowing a custom application server to be hosted in a Worker Role. Another option is to run your own WCF service hosted in a Worker Role. Features like load balancing and multiple Worker instances are all available. Let’s see how you can create a simple TCP service that displays the current date and time.

Here’s what I want to see when I connect to my Azure Worker Role using telnet (“telnet efwr.cloudapp.net 1234”):

Just a quick note: the approach described here can also be used to run a custom WCF host with bindings other than, for example, basicHttpBinding.
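
The walkthrough implements this in .NET against the Worker Role APIs; as a language-neutral sketch of the service itself, a TCP listener that writes the current date and time to each connection looks roughly like this (in a real Worker Role, the port would come from the configured input endpoint rather than being hard-coded):

```python
import socketserver
from datetime import datetime

class DateTimeHandler(socketserver.BaseRequestHandler):
    """Write the current date and time to each client, then disconnect."""
    def handle(self):
        now = datetime.now().strftime("%Y-%m-%d %H:%M:%S\r\n")
        self.request.sendall(now.encode("ascii"))

if __name__ == "__main__":
    # Port 1234 matches the telnet example above; a Worker Role would read
    # its externally facing endpoint from the role environment instead.
    with socketserver.TCPServer(("0.0.0.0", 1234), DateTimeHandler) as server:
        server.serve_forever()
```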

Lucas Mearian reports “Incentives in the stimulus bill are also expected to foster a rapid spread of e-health records” in his Kaiser, VA join to give e-health a boost ComputerWorld article of 1/18/2010:

The e-health revolution got a shot in the arm earlier this month when health care network giant Kaiser Permanente and the U.S. Department of Veterans Affairs declared a months-long pilot program of sharing patient electronic health records (EHR) a success.

Patients must agree to be part of the shared system, which uses the Nationwide Health Information Network, a set of government protocols and standards designed to allow the secure exchange of health information between physicians at both private practices and hospitals.

During a press briefing in San Diego, officials from the VA and Kaiser Permanente also outlined plans to add the EHRs of Department of Defense personnel to the eHealth system and to expand the program geographically in the coming months.

Want to know how much your organization could save with the Windows Azure Platform? Check out the Windows Azure Platform TCO and ROI Calculator (http://www.microsoft.com/windowsazure/tco/), now available in both online and offline versions! Designed to help measure the potential savings of product development on, or migration to, Windows Azure platform services for organizations of all sizes, the calculator asks for detailed information about your organization's needs and size. Once you've entered all your data, the Calculator provides a customized report detailing the estimated line-item costs for a TCO and a one- to three-year ROI analysis for your organization.

Once you've registered, you'll have the option of downloading the offline version right to your desktop. In addition to the "Reporting" and "Collaborate" features of the online version, the offline version has a "synchronize" option to keep the latest version of the tool, analysis models, templates, etc. in sync with the online version.

While this Calculator is informational only, we hope that it gives you a better sense of the savings you could experience by building on the Windows Azure Platform. Let us know what you think by posting a comment. We look forward to hearing from you.
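
The arithmetic behind the report is standard TCO/ROI math. A toy sketch with invented figures (the real calculator derives its inputs from the questionnaire):

```python
# Toy TCO/ROI arithmetic with invented figures; the real calculator
# derives these inputs from a much more detailed questionnaire.
current_annual_cost = 120_000   # on-premises hardware, licenses, operations
azure_annual_cost = 80_000      # estimated Windows Azure running cost
migration_cost = 50_000         # one-time development/migration cost

for years in (1, 2, 3):
    savings = (current_annual_cost - azure_annual_cost) * years
    roi = (savings - migration_cost) / migration_cost * 100
    print("Year %d: net savings $%d, ROI %.0f%%"
          % (years, savings - migration_cost, roi))
```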

Virtualization has benefits, there is no arguing that. But let’s not get carried away and attribute all the benefits associated with cloud computing and automation to one member of the “game changing” team: virtualization. I recently read one of the all-too-common end-of-year prediction blogs on virtualization and 2010 that managed to say with what I think was a straight face that virtualization of the network is what makes it “fluid”.

Virtualizing the network provides the similar benefits as server virtualization through abstraction and automation … The bottom line: In 2010, the network is going to become as fluid and dynamic as the data center is today.

The first problem with these statements is the separation of the network from the data center. The last time I checked, the network was the core foundation upon which data centers are built, making it not merely a part of the data center but an integral part of it. The second is the implication that the automation from which a fluid network is derived is somehow achieved through virtualization. No. No, it isn’t. Both virtual and physical infrastructure require more than just being shoved into a virtual machine or automatically provisioned to deliver the kind of benefits expected. …

Once upon a time, a network engineer scrawled an amorphous shape upon a whiteboard and wrote “Internet” thereon. The amorphous circle, a ‘cloud’, soon became the de facto way that we represent “not my problem”, or outsourcing. Hence, the “cloud” in cloud computing means that cloud is predominantly an outsourcing business model. Only large scale ‘utilities’ can provide the cost savings benefits associated with cloud computing. — The Private Cloud Myth

This myth is misguided because it assumes that all cloud computing is a financial model rather than a technology or service model. Information Technology is rapidly changing from the older client/server and mainframe computing models to the cloud computing model. This computing model has been pioneered by Amazon and Google, both of whom offer non-utility ‘cloud’ services. It is a model that embraces automation and on-demand self-service. Providing a public utility service requires cloud computing, but cloud computing does not have to be delivered with a predetermined financial model.

An aside: a ‘model’ is a way of doing things. Technology models are ways of putting technology together. Financial models are ways to arrange finances. Service models are ways of providing a service that is consumed by someone else. …

Randy continues with a detailed analysis that supports his contention.

I was taken aback a bit by this recent article talking about some big predictions from Gartner around the adoption of cloud computing:

“Cloud computing will become so pervasive that by 2012, one out of five businesses will own no IT assets at all, the analyst firm Gartner is predicting.

“The shift toward cloud services hosted outside the enterprise's firewall will necessitate a major shift in the IT hardware markets, and shrink IT staff, Gartner said.”

This is very interesting to me, considering that many new and small businesses are finding a great deal of value in moving to cloud computing. However, I'm not sure I agree with Gartner over the amount of movement that will occur by 2012. Sorry to once again be the buzzkill, but a sure way to bury a space is to overhype and underdeliver.

Don't get me wrong: Cloud computing will have an impact. I suspect that most midsize and small businesses will use e-mail and document management systems that are outside their firewalls. We've seen a lot of movement in this direction in 2009, and with the rapid expansion of Google Enterprise services and the emerging online version of Microsoft Office, this trend will only accelerate. …

Last month I launched my show, The Cloud9 Show. The first series is called Demystifying The Cloud, where I covered the key concepts of cloud computing along with some of its commercial implementations. The unique thing about The Cloud9 Show is that you can download the video, audio, article, and slides for each episode. Within the first few days of launching the show, it crossed 1,000 views! I want to thank all the viewers.

I am now working on the content for the next series. Having covered the fundamentals of cloud computing, I am going to cover hands-on tutorials. Going forward, you can watch a complete demo and download a step-by-step hands-on-lab guide along with links to all the required components.

Jani works for Alcatel-Lucent as Deputy General Manager for Bell Labs, India.

I spoke about Microsoft’s 3 Screens and the Cloud vision at a local Technology Council panel in New York Thursday night. I was on the panel with Alfred Spector, Google’s Vice President of Research, and a councilwoman from New York City.

3 Screens & the Cloud is Microsoft’s vision that embraces the convergence of content and protocols across all devices: the PC, the mobile phone, and television, including not only the ubiquitous television set but also game consoles and other related devices.

They videoed the session and a link to it will be posted here as soon as they make it available.

Bill links to the following two articles. I’ll update this post when the video becomes available.

Tonight was the inaugural audience event of the newly formed New York Technology Council, and I must say the organization is off to an excellent start. The event was a panel discussion focusing on technology trends for 2010, and included Alfred Spector, who heads Google's research and special initiatives (and is based in New York City, not Silicon Valley), Bill Zack, an Architect Evangelist for Microsoft focusing on Azure, and New York City Councilmember Gale Brewer, who is the Chair of the Council's Committee on Technology in Government. The panel was moderated by BusinessWeek's Arik Hesseldahl. This was a strong panel, with an excellent moderator and an impressive turnout; a good omen for the future success of NYTECH.

I won't relate the blow-by-blow of the discussion (if you're interested in that, you can read the tweets from the event), but I think a summary of the discourse merits some discussion.

Questions and statements concerning health IT, government open data, mobile devices and broadband were raised. With each question, patterns emerged amongst each panelist's answers. Google's Spector said his company believes all data will, and should, be interconnected. Google also feels that mobile devices, fetching data from the cloud, will continue to grow in popularity and disrupt. Microsoft offers, not surprisingly, a differing, though not opposing, view. Redmond's take is that on-premise and cloud-based architectures are very different, that each offers distinct advantages and that in many cases, a combination of the two is the most sensible choice. Contrast this with Spector's comment that "it's only incidental whether data is stored on-premise or in the cloud" as long as it's not in a "walled garden." …

At the inaugural meeting of the New York Technology Council Thursday night, Google Vice President of Research Alfred Spector and Microsoft architect evangelist Bill Zack debated their views on how data will be stored and shared in the future.

The two were part of a panel discussion moderated by BusinessWeek technology reporter Arik Hesseldahl. Held at the New York headquarters of PricewaterhouseCoopers in front of an audience of 200 influential venture capitalists, IT executives and vendors, the debate underscored the rivals' competing but overlapping strategies for how datacenter architectures and personal information access will evolve using cloud services.

Zack explained Microsoft's mantra of "three screens and the cloud," which is focused on making data universally accessible on PCs, mobile devices and consumer systems, including televisions and gaming consoles. "We see in terms of content and in terms of protocols the convergence of those," Zack said.

"There is some information you can put in the cloud and there is some information that you'd be crazy to put in the cloud," Zack added. "We believe in the online stack and we believe in our cloud stack. And we believe in hybrid applications so you don't have to put information out in the cloud or all information on-premise. You can build an application that leverages the best of both."

Spector, meanwhile, championed Google's vision of having all data residing in the cloud. "I think it's clear to all of us now that information sharing is an essential part of running our society, we cannot have a walled enterprise," Spector said.

"Google and Microsoft each clearly espouse views that correlate to their own agendas," wrote attendee Andrew Brust, chief of new technology at twentysix New York, in a blog post. "Google wants everything to be published and interconnected, so that it can all be indexed, searched and AdWord-ized," Brust noted. "Microsoft, on the other hand, wishes both to promote its new cloud platform (Azure) and protect its legacy PC and server software franchise. Software + Services." …

There’s been increasing interest in Infrastructure 2.0 of late, which is encouraging to those of us who’ve been, well, pushing it uphill against the focus on cloud computing and virtualization for quite some time now. What’s been most frustrating about bringing this concept to awareness is that cloud computing is one of the most tangible examples of both what Infrastructure 2.0 is and what it can do, and virtualization is certainly one of the larger technological drivers of Infrastructure 2.0-capable solutions today. So despite the frustration of cloud computing and virtualization stealing the stage, as it were, the spotlight is certainly helping to bring the issues Infrastructure 2.0 is attempting to address to the fore. As it gains traction, one of the first challenges that must be addressed is to define what we mean when we say “Infrastructure 2.0.”

Like Web 2.0 – go ahead and try to define it simply – Infrastructure 2.0 remains, as James Urquhart put it recently, a “squishy term.” …

What complicates Infrastructure 2.0 is that not only is the term “squishy” but so is the very concept. After all, Infrastructure 2.0 is mostly about collaboration, integration, and intelligence. These are not off-the-shelf “solutions” but rather enabling technologies designed to drive the flexibility and agility of enterprise networks forward in such a way as to alleviate the pain points associated with the brittle, fragile network architectures of the past.

Greg Ness summed up the concept, at least, very well more than a year ago in “The beginning of the end of static infrastructure” when he said, “The issue comes down to static infrastructure incapable of keeping up with all of the new IP addresses and devices and initiatives and movement/change already taking place in large enterprises,” and then noted that “the notion of application, endpoint and network intelligence thus far has been hamstrung by the lack of dynamic connectivity, or connectivity intelligence.”

What Greg noticed is missing is context, and perhaps even more importantly the ability to share that context across the entire infrastructure. I could, and have, gone on and on and on about this subject so for now I’ll just stop and offer up a few links to some of the insightful posts that shed more light on Infrastructure 2.0 – its drivers, its requirements, its breadth of applicability, and its goals. …

Listen to my conversation with Jeff Kaplan, managing director of strategic consulting firm THINKstrategies, and one of the foremost analysts tracking software-as-a-service, on-demand and cloud computing.

In this podcast, learn why it’s not enough to simply port an existing software package to the cloud without rearchitecting it, and hear about some of the ways enterprises will deal with hybrid environments that mix on-premise and cloud assets.

Tony Iams stratifies clouds into high, middle, low and vertical categories in his 1/15/2010 The Four Clouds of the Datacenter analysis of the HP/Microsoft agreement of last week:

In the real world, meteorologists classify clouds into four categories according to their base height: "high" clouds, "middle" clouds, "low" clouds, and "vertical" clouds, which can form at many heights. After this week's announcement that HP and Microsoft would jointly invest $250 million in developing and selling integrated cloud computing technology, it seems that a similar selection will emerge in datacenters, as vendors seek to offer clouds targeting different strata of customers. HP and Microsoft announced that they would jointly develop systems that are highly optimized for hosting Microsoft Exchange Server and Microsoft SQL Server. These systems will provide pre-integrated server, storage, networking and application packages for deploying and managing Microsoft's database and e-mail services with "push-button" simplicity.

While the prospect of tapping into third-party computing infrastructures remains a goal for many organizations, the most pressing concern for most is to virtualize as much as possible of their internal infrastructure into "secure" or "private" clouds. Indeed, for many users "cloud" currently implies nothing more than converging virtualized server, storage, and network resources into a single pool that workloads can draw upon as needed. However, these users are finding that the complexity of deploying virtual infrastructure can be overwhelming, especially in mid-sized organizations, which have the need for datacenter capabilities, but lack the depth of personnel to manage complex new datacenter functions such as virtual infrastructure.

In response, there has been a movement among the major systems vendors to provide integrated stacks that combine multiple layers of IT infrastructure, including servers, storage, networking, and software, into a single package that can be managed as a unit. Cisco is taking advantage of the need for such solutions to break into the server market, collaborating with VMware and EMC as part of the Virtual Computing Environment (VCE) alliance to deliver integrated solutions that are optimized for deploying cloud infrastructure. Oracle has integrated its database software with advanced storage functions based on Solid State Disks (SSD) in its Exadata appliance, and if its acquisition of Sun succeeds, it will be able to fold much of Sun's server, storage and software technology into future systems. IBM has strengthened the integration between its various server platforms by unifying the management experience for administrators across all of them. …

Tony continues with a table that describes the Integrated stacks for deploying datacenter clouds from HP/Microsoft (low), Oracle (middle), IBM (high), and VMWare/EMC (vertical). Tony is an analyst with Ideas International.

Cisco Security Intelligence Operations announces the Cisco 2009 Annual Security Report. The updated report includes information about 2009 global threats and trends, as well as security recommendations for 2010.

Managing and securing today's distributed and agile network is increasingly challenging, with cloud computing and sharing of data threatening security norms. Online criminals continue to exploit users' trust in consumer applications and devices, increasing the risk to organizations and employees.

Report Highlights

Online criminals have taken advantage of the large social media following, exploiting users' willingness to respond to messages that are supposedly from people they know and trust.

Politically-motivated threats are increasing, while governments are teaming up and promoting online security.

Up to 90 percent of spam is untargeted. That includes spam delivered by botnets that floods inboxes with messages from supposed banks, educational institutions, and service providers.

More than 80 percent of the web can be classified as "uncategorized" or "unknown", making it challenging for traditional URL filtering technology. The new Cisco Cybercrime Return on Investment Matrix tracks the performance of the underground online criminal marketplace, helping organizations understand the latest targets.

The 1/18/2010 meetup appears to be overbooked, but I’ll try to make the next one.

INPUT and SIIA will present SaaS/Gov 2010 on 2/11/2010 at The Westin, Washington, DC:

SaaS/Gov is the most comprehensive conference bringing together federal IT purchasers and software industry executives to address the government's movement towards Software as a Service (SaaS) and Cloud Computing.

The Microsoft BizSparkCamp for Windows Azure will be held at the Microsoft Technology Center, New York, NY from Thu 1/28/2010 to Fri 1/29/2010. This event consists of ½ day of training, 1 day of active prototype/development time, and ½ day for packaging/finishing and reporting out to a panel of judges for various prizes.

This event is a no-fee event (plan your own travel expenses), and each team can bring 3 participants (1 business and 1–2 developers). It is required to have at least 1 developer as part of your team. To nominate your team, please submit the following details to Sanjay Jain (preferably via your BizSpark Sponsor) no later than Mon 18 January 2010. Nominations will be judged according to the strength of the founding team, originality and creativity of the idea, and ability to leverage Windows Azure scenarios.

Presenting for the evening will be Chris Rolon. Chris is an Architectural Consultant for Neudesic, a solutions company focused on the delivery of products and services based on Microsoft .NET technologies. Chris brings an extensive technology background with more than 25 years of industry experience in custom application development and implementation.

Chris now heads the Windows Azure User Groups in New York and Malvern, PA, where he has been speaking on Azure since January of 2009. Chris has just completed a national tour where he spoke to other Microsoft partners on the merits of Windows Azure.

The meeting will be held at Google, 76 Ninth Ave, 4th Floor, New York, NY 10011. The presentation is free for NYTC members and sponsors, $15 for non-members.

John Willis adds his support for CloudCampHaiti in this 1/18/2010 post:

I would love to use this opportunity to inform you of something CloudCamp.org has set up. CloudCampHaiti is a virtual unconference we are running this Wednesday afternoon (http://www.cloudcamp.org/haiti). Our primary goal is to raise money for the Red Cross; one hundred percent of the proceeds will go to the Haiti earthquake victims. However, we also have a theme, “How The Cloud Can Help”: we want to see how cloud computing and expert resources can be used to help in disasters like this. Our registration process is simple: $25 to attend the virtual conference, $50 to be listed as a special donor, and $250 to have your company logo.

CloudCampHaiti is going to be a great event. We are going to have some of the biggest names in cloud computing present, and we will also have a panel session and open discussion based on the “How The Cloud Can Help” theme. This is a great opportunity to learn, participate, and help.

I will be popping along to CloudCamp this Thursday. I’ve been to CloudCamp once before, when it coincided with QCon London, where I was a speaker in 2009. But TBH I really didn’t pay much attention last time around as it was a) the end of a long day and b) I was only dabbling with cloud at the time.

On January 1st I switched full time to the Windows Azure Platform which means this time I will be paying attention :-) …

Jim Swartz, a longtime veteran of the CIO ranks, explains his view of the cloud’s pros and cons, as well as what he sees in store for IT leaders in 2010.

Sybase has made a major transition to become a mobility software provider. But beyond boosting the company's core offerings, CIO Jim Swartz is looking heavily at cloud computing, SaaS and maximizing the potential of the Millennial generation.

Swartz, formerly an IT leader at several companies, including SRI International and SAIC, spoke recently with CIO Insight Editor in Chief Brian P. Watson about his priorities for 2010. …

… Apple’s recent acquisition of digital music startup Lala rekindled speculation of an iTunes subscription service. There’s no shortage of subscription offerings (Napster, Rhapsody, Spotify, Pandora, etc.), but none have attracted the millions of subscribers necessary to make the high royalty structures work. Experts have speculated that Apple’s design expertise and hardware integration could make subscriptions work, and that leveraging Lala’s digital library, licenses from the major labels, and a management team that cycled through several business models, including the ten-cent web song rental, could make it a reality. It’s a logical assumption, but after talking to a wide variety of insider sources it’s clear there is no upcoming Apple subscription service and Apple has far different plans. …

Lala will play a critical role in Apple’s music future, but not for the reasons cited above. Lala’s licenses with major labels are non-transferable, so they’re not usable for any new iTunes service. The 10-cent song rental model never gained traction and does not cover mobile devices, and thus is of little value to Apple. What is of value is the personal music storage service, an often overlooked component of Lala’s business. As Apple did with the original iPods, Lala realized that any music solution must include music already possessed by the user. The Lala setup process provides software to store a personal music library online and then play it from any web browser alongside the web songs they vend. This technology, plus the engineering and management team, is the true value of Lala to Apple. …

Michael is the founder and former CEO of digital music pioneer MP3.com. He is currently the CEO of music locker company MP3tunes. Robertson is also an adviser to Google Voice.

IBM on Monday announced the technology and business expansion of its LotusLive cloud collaboration platform through a new R&D pipeline from IBM Research and plans to open the LotusLive suite to new partners.

I am here at Lotusphere 2010 in Orlando, sitting in the press room. John Fontana from Network World just walked in and asked me what I thought of the event so far. My reply:

“Well I am not saying Lotus has all its ducks in a row, but at least it has plenty of them now.”

The Lotus portfolio is now looking both increasingly broad, and compelling at scale. …

See the Windows Azure Infrastructure section for details of the New York Technology Council’s inaugural meeting, which featured Bill Zack’s 3 Screens & the Cloud presentation, as well as the views of Alfred Spector, Google’s Vice President of Research.

HTTP downloads of the two chapters are available from the book's Code Download page; these chapters will be updated for the November CTP in January 2010. * Content for managing DataHubs will be added as Microsoft releases more details on data synchronization services for SQL Azure and Windows Azure.

As always, every time we add a new storage provider integration, we would compare it to the existing ones, such as Amazon S3.

The test is simple, I have a 27M zip file that I first drag and drop into a folder under Amazon S3 and watch the upload progress in the Gladinet Task Manager, timing it from begin to end. Then I repeat the same thing with Windows Azure Blob Storage.

It is very interesting to see the results are so close when knowing the two cloud storage providers are completely different. The servers hosting the data are completely different and the IP routes are different. There must be some kind of server side tool to throttle the speed of a single connection to a certain number. Maybe an expert of the cloud server end can explain this. …

A speed test shows my upload speed to a random server in US is 3.97Mbit/s ( ~ 490KBytes/sec). …

Jerry is a founder of Gladinet.

• Jerry Huang asserts “I haven't seen a Azure Storage Client on Mac yet but I guess the PHP and Ruby clients can cover those on Mac and Linux” in his Windows Azure Storage Windows Clients post of 1/19/2010:

As the Windows Azure Platform making the transition from public preview to full production now, here is a list of Windows Azure Storage clients to help you leverage the Azure Storage:

Azure Storage Explorer is a useful GUI tool for inspecting and altering the data in your Azure cloud storage projects including the logs of your cloud-hosted applications. All three types of cloud storage can be viewed: blobs, queues, and tables. Free, Windows(NET/WPF), Open Source.

These tools overlap and also compliment each other on functionality. For example, you can use Gladinet Cloud Desktop to map a drive letter, do drag and drop and in place editing, and use Cloud Storage Studio to fine tune the properties of each blob and use other tools to cover functionality gap in between. …

Today, SQL Server 2008 R2 received an official release date. It will be listed on Microsoft’s May price list, and will be available by May 2010.

SQL Server 2008 R2 showcases Microsoft’s continued commitment to business intelligence and mission-critical workloads. Since we made this release available as a Community Technology Preview (CTP) in August 2009, it has been well-received by the community with more than 150,000 downloads. …

Customers with Software Assurance can upgrade to SQL Server 2008 R2 and take advantage of the new features without incurring additional licensing costs.

Also, Vinod Kumar Jagannathan, who I believe is a Program Manager at Microsoft India, reports in a comment to this post that the problem is an issue with the 11/2009 CTP and is fixed for later (future) SSMS 2008 R2 CTPs.

What if you could get all the benefits of distributed, on-premises application databases AND hosted cloud-based databases for a real Software + Services implementation? In this video, Hilton Giesenow, host of The MOSS Show SharePoint podcast (http://www.TheMossShow.com/) shows us how to set up a powerful and easy-to-use synchronisation model between a local Microsoft SQL Server database and a Microsoft SQL Azure database with the Microsoft Sync Framework tools for SQL Azure.

About two weeks ago Chris Messina wrote a post titled OpenID Connect where he argued for the existence of a Facebook Connect style technology build on OpenID. He describes the technology as follows

So, to summarize:

for the non-tech, uninitiated audiences: OpenID Connect is a technology that lets you use an account that you already have to sign up, sign in, and bring your profile, contacts, data, and activities with you to any compatible site on the web.

for techies: OpenID Connect is OpenID rewritten on top of OAuth WRAP using service discovery to advertise Portable Contacts, Activity Streams, and any other well known API endpoints, and a means to automatically bootstrap consumer registration and token issuance.

This is something I brought up over a year ago in my post Some Thoughts on OpenID vs. Facebook Connect. The fact is that OpenID by itself is simply not as useful as Facebook Connect. The former allows me to sign-in to participating sites with my existing credentials while the latter lets me sign-in, share content with my social network, personalize and find my friends on participating sites using my Facebook identity.

As I mentioned in my previous post there are many pieces of different “Open brand” technologies that can be pieced together to create something similar to Facebook Connect such as OpenID + OpenID Attribute Exchange + Portable Contacts + OAuth WRAP + Activity Streams. However no one has put together a coherent package that ties all of these together as a complete end-to-end solution. This isn’t helped by the fact that these specs are at varying levels of maturity and completion.

An additional area of challenge for establishing an eventual “plug-and-play” national health information system pertains to laboratory data. Incorporating clinical lab-test results into EHRs as structured data is one of the Meaningful Use criteria (criterion 10) – physicians must be able to demonstrate that >50% of clinical lab tests that were ordered (whose results were numeric or +/-) are incorporated as structured data.

In order to achieve this, a common “language” must be implemented by all laboratories, so that EHRs can import these results in a systematic fashion. The specific vocabulary specified in the Certification documents is to use a coding system called LOINC. There is a standardized LOINC code for each different lab test type, specimen type and specific methodology – there are about 40,000 different LOINC codes currently defined for lab tests. …

Dr. Rowley continues with a description of “two issues [that] emerged when trying to implement standardization around LOINC codes.” LOINC codes should be a candidate for inclusion in the Microsoft Codename “Dallas” databases.

In this virtual lab, you will create a fully functional Windows Azure application from scratch. In the process, you will become familiar with several important components of the Windows Azure architecture including Web Roles, Worker Roles, and Windows Azure storage. You will also learn how to integrate a Windows Azure application with Windows Live ID authentication, and you’ll learn how to store relational data in the cloud using SQL Azure.

Update: If you are returning to continue the Windows Azure virtual lab and you started it before January 6, 2010, please note that the lab has been updated to reflect recent API changes. The exercises as shown here reflect the latest SDK (November 2009.)

To reflect the commercial availability of Windows Azure and the Windows Azure Offers – we went ahead and update[d] a couple of simple step-by-step documents to help developers get their feet wet with Windows Azure.

The much ballyhooed Health Information Exchange (HIE) in the state of California, CalRHIO, has raised the white flag, dismissing its troops and sending home its arms supplier (Medicity). Despite its founding five years ago, support of some significant organizations (e.g., United Health Group, Cisco, HP, California Hospital Assoc., etc.) spending some $7M to date and launching a major roadshow in March last year that included the go live of 23 institutions in Orange County in October, CalRHIO did not get the support of California’s Health and Human Services (CHHS) to be the state designated entity for overseeing ARRA funding for state HIEs.

Based on an article in California Healthline, a number of other organizations had some serious concerns with CalRHIO, enough concerns to start their own organization, the California eHealth Collaborative (CAeHC). What is surprising here is that one of those that called into question CalRHIO’s operating model was its former CEO Lori Hack, who is now a board member of the competing CAeHC. …

John concludes:

[P]robably the clearest message here is that the governance issue of HIEs is extremely political, especially when a boat-load of federal Stimulus dollars are at stake. The CalRHIO fiasco is unlikely to be the last one we’ll hear of over the next 3-9 months.

I consider “extremely political” an understatement, but otherwise I agree with his conclusion.

You might have noticed how hard it is to obtain your own health care history. Most medical records are written on paper. This is changing, but slowly. Even forward-looking doctors, hospitals, and pharmacies who are converting to electronic records have a hard time integrating with each other, since each IT system is largely independent of other systems. What you, the patient, would like to see is all of your history together in one place, regardless of who the provider was or where the care was given.

One of the tools that show promise in moving us towards that goal is the PHR, or Personal Health Record. While the concept of PHR has been around a long time, a relatively new idea, the online PHR, gained a big boost in popularity when Google and Microsoft each announced plans to provide PHR applications, aka Patient Portals, in 2007.

In a series of articles, we will explore online Personal Health Records and see how they might benefit both you and your care providers. We’ll take a look at how the two major business players architected their PHR applications, and finally dive deeper into how each of these solutions can be integrated into an existing health care IT enterprise. …

Here are the topics of this and the future articles:

This introduction to PHR

Google Health - what it offers, how it works, approaches to integration

Google Health in an SOA as orchestrated with WebSphere Process Server

Microsoft HealthVault - what it offers, how it works, approaches to integration

Microsoft HealthVault in an SOA as orchestrated with Microsoft BizTalk

Dom Green explains how to cache worker roles on either the Developer or Cloud Fabric in his Windows Azure Memcached-ed post of 1/17/2010:

Memcached is a distributed cache used to help speeding up large scale web applications by taking pressure off the database. Memcached is used by many of the internets biggest sites, including Twitter, Wikipedia, and YouTube to name just a few.

A distributed cache is one of the things that I’ve been hoping to see released for Windows Azure for quite a while, and I am hoping that AppFabric Caching will make the move to the cloud in the coming year. However, until that happens I was determined to find a way to get a distributed cache and this great Windows Azure Memcached sample showed me how. …

After installing and setting up Memcached you will be able to cache any data, including data that is retrieved from your database so that the next time you need it you can get it from cache and not need to re-query your database. Therefore, reduce the pressure on the database and earning the love of our DBA.

You can download the sample code for Windows Azure from the codeplex download page you then need to download a Windows friendly version of Memcached (here is where I got mine). With the sample code from codeplex just add the memcached exe to your worker roles. You will now be able to run the sample code either on the dev fabric or in the cloud. …

When Windows Azure was first released, only Web Roles were able to have an externally facing endpoint. Since PDC 2009, Worker Roles can now also have an external facing endpoint, allowing for a custom application server to be hosted in a Worker Role. Another option would be to run your own WCF service and have it hosted in a Worker Role. Features like load balancing, multiple instances of the Worker are all available. Let’s see how you can create a simple TCP service that can display the current date and time.

Here’s what I want to see when I connect to my Azure Worker Role using telnet (“telnet efwr.cloudapp.net 1234”):

Just a quick note: the approach described here can also be used to run a custom WCF host that has other bindings than for example basicHttpBinding.

Lucas Mearian reports “Incentives in the stimulus bill are also expected to foster a rapid spread of e-health records” in his Kaiser, VA join to give e-health a boost article of 1/18/2001 ComputerWorld article:

The e-health revolution got a shot in the arm earlier this month when health care network giant Kaiser Permanente and the U.S. Department of Veterans Affairs declared a months-long pilot program of sharing patient electronic health records (EHR) a success.

Patients must agree to be part of the shared system, which uses the Nationwide Health Information Network, a set of government protocols and standards designed to allow the secure exchange of health information between physicians at both private practices and hospitals.

During a press briefing in San Diego, officials from the VA and Kaiser Permanente also outlined plans to add the EHRs of Department of Defense personnel to the eHealth system and to expand the program geographically in the coming months.

Want to know how much your organization could save with the Windows Azure Platform? Check out the Windows Azure Platform TCO and ROI Calculator: http://www.microsoft.com/windowsazure/tco/, now available in both online and offline versions! Designed to help measure the potential savings of product development or migration to Windows Azure platform services for organizations of all sizes, the calculator will ask you for detailed information about the needs of your organization, its size and other information. Once you've entered in all your data, the Calculator will provide you with a customized report that will detail the estimated line item costs for a TCO and a 1 to 3 year ROI analysis for your organization.

Once you've registered, you'll have the option of downloading the off-line version right to your desktop. In addition to the "Reporting" and the "Collaborate" features in the on-line version, the offline version has "synchronize" option to keep the latest version of the tool, analysis models, templates etc. in sync with the online version.

While this Calculator is informational only, we hope that it provides you with a better sense of the savings you could experience by building on the Windows Azure Platform. Let us know what you think by posting a comment. We look forward to hearing from you

Virtualization has benefits, there is no arguing that. But let’s not get carried away and attribute all the benefits associated with cloud computing and automation to one member of the “game changing” team: virtualization. I recently read one of the all-too-common end-of-year prediction blogs on virtualization and 2010 that managed to say with what I think was a straight face that virtualization of the network is what makes it “fluid”.

Virtualizing the network provides the similar benefits as server virtualization through abstraction and automation … The bottom line: In 2010, the network is going to become as fluid and dynamic as the data center is today.

The first problem with these statements is the separation of the network from the data center. The last time I checked the network was the core foundation upon which data centers are built, making them not only a part of the data center but an integral part of it. The second is implying that the automation from which a fluid network is derived is somehow achieved through virtualization. No. No, it isn’t. Both virtual and physical infrastructure require more than just being shoved into a virtual machine or automatically provisioned to enable the kind of benefits expected. …

Once upon a time, a network engineer scrawled an amorphous shape upon a whiteboard and wrote “Internet” thereon. The amorphous circle, a ‘cloud’, soon became the de facto way that we represent “not my problem”, or outsourcing. Hence, the “cloud” in cloud computing means that cloud is predominantly an outsourcing business model. Only large scale ‘utilities’ can provide the cost savings benefits associated with cloud computing. — The Private Cloud Myth

This myth is misguided because it assumes that all cloud computing is a financial model rather than a technology or service model. Information Technology is rapidly changing from the older client/server and mainframe computing models to the cloud computing model. This computing model has been pioneered by Amazon and Google, both of whom offer non-utility ‘cloud’ services. It is a model that embraces automation and on-demand self-service. Providing a public utility service requires cloud computing, but cloud computing does not have to be delivered with a predetermined financial model.

An aside: a ‘model’ is a way of doing things. Technology models are ways of putting technology together. Financial models are ways to arrange finances. Service models are ways of providing a service that is consumed by someone else. …

Randy continues with a detailed analysis that supports his contention.

I was taken back a bit by this recent article talking about some big predictions from Gartner around the adoption of cloud computing:

“Cloud computing will become so pervasive that by 2012, one out of five businesses will own no IT assets at all, the analyst firm Gartner is predicting.

“The shift toward cloud services hosted outside the enterprise's firewall will necessitate a major shift in the IT hardware markets, and shrink IT staff, Gartner said.”

This is very interesting to me, considering that many new and small businesses are finding a great deal of value in moving to cloud computing. However, I'm not sure I agree with Gartner over the amount of movement that will occur by 2012. Sorry to once again be the buzzkill, but a sure way to bury a space is to overhype and underdeliver.

Don't get me wrong: Cloud computing will have an impact. I suspect that most midsize and small businesses will use e-mail and document management systems that are outside their firewalls. We've seen a lot of movement in this direction in 2009, and with the rapid expansion of Google Enterprise services and the emerging online version of Microsoft Office, this trend will only accelerate. …

Last month I launched my show called The Cloud9 Show. The first series of this show is called Demystifying The Cloud where I covered the key concepts of the Cloud Computing along with some of the commercial implementations of the Cloud. The unique thing about The Cloud9 Show is that you can download the video, audio, article and slides for each episode. Within the first few days of launching the show, it crossed 1000 views! I want to thank all the viewers.

I am working on the next series and currently working on the content. Now that I have covered the fundamentals of Cloud Computing, I am going to cover the hands-on tutorials. Going forward, you can watch a complete demo, download a step-by-step hands-on-lab guide along with links to all the required components.

Jani works for Alcatel-Lucent as Deputy General Manager for Bell Labs, India.

I spoke about Microsoft’s 3 Screens and the Cloud vision at a local Technology Council panel in New York Thursday night. I was on the panel with Alfred Spector, Google’s Vice President of Research and a councilwoman from New York City.

3 Screens & the Cloud is Microsoft’s vision that embraces the convergence of content and protocols across all devices: the PC, the mobile phone, and the television, including not only the ubiquitous television set but also game consoles and other related devices.

They videoed the session and a link to it will be posted here as soon as they make it available.

Bill links to the following two articles. I’ll update this post when the video becomes available.

Tonight was the inaugural audience event of the newly formed New York Technology Council, and I must say the organization is off to an excellent start. The event was a panel discussion focusing on technology trends for 2010, and included Alfred Spector, who heads Google's research and special initiatives (and is based in New York City, not Silicon Valley), Bill Zack, an Architect Evangelist for Microsoft focusing on Azure, and New York City Councilmember Gale Brewer, who is the Chair of the Council's Committee on Technology in Government. The panel was moderated by BusinessWeek's Arik Hesseldahl. This was a strong panel, with an excellent moderator and an impressive turnout; a good omen for the future success of NYTECH.

I won't relate the blow-by-blow of the discussion (if you're interested in that, you can read the tweets from the event), but I think a summary of the discourse merits some discussion.

Questions and statements concerning health IT, government open data, mobile devices, and broadband were raised. With each question, consistent patterns emerged in each panelist's answers. Google's Spector said his company believes all data will, and should, be interconnected. Google also feels that mobile devices, fetching data from the cloud, will continue to grow in popularity and disrupt. Microsoft offers, not surprisingly, a differing, though not opposing, view. Redmond's take is that on-premise and cloud-based architectures are very different, that each offers distinct advantages and that in many cases, a combination of the two is the most sensible choice. Contrast this with Spector's comment that "it's only incidental whether data is stored on-premise or in the cloud" as long as it's not in a "walled garden." …

At the inaugural meeting of the New York Technology Council Thursday night, Google Vice President of Research Alfred Spector and Microsoft architect evangelist Bill Zack debated their views on how data will be stored and shared in the future.

The two were part of a panel discussion moderated by BusinessWeek technology reporter Arik Hesseldahl. Held at the New York headquarters of PricewaterhouseCoopers in front of an audience of 200 influential venture capitalists, IT executives and vendors, the debate underscored the rivals' competing but overlapping strategies for how datacenter architectures and personal information access will evolve using cloud services.

Zack explained Microsoft's mantra of "three screens and the cloud," which is focused on making data universally accessible on PCs, mobile devices and consumer systems, including televisions and gaming consoles. "We see in terms of content and in terms of protocols the convergence of those," Zack said.

"There is some information you can put in the cloud and there is some information that you'd be crazy to put in the cloud," Zack added. "We believe in the online stack and we believe in our cloud stack. And we believe in hybrid applications so you don't have to put information out in the cloud or all information on-premise. You can build an application that leverages the best of both."

Spector, meanwhile, championed Google's vision of having all data residing in the cloud. "I think it's clear to all of us now that information sharing is an essential part of running our society, we cannot have a walled enterprise," Spector said.

"Google and Microsoft each clearly espouse views that correlate to their own agendas," wrote attendee Andrew Brust, chief of new technology at twentysix New York, in a blog post. "Google wants everything to be published and interconnected, so that it can all be indexed, searched and AdWord-ized," Brust noted. "Microsoft, on the other hand, wishes both to promote its new cloud platform (Azure) and protect its legacy PC and server software franchise. Software + Services." …
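As a rough sketch of the hybrid pattern Zack describes, note that the same ADO.NET code can target an on-premise SQL Server or SQL Azure simply by swapping connection strings; the server, database, and credential values below are invented for illustration:

// Hedged illustration of a hybrid data layer: identical code, two targets.
// All names and credentials are placeholders, not a real deployment.
using System;
using System.Data.SqlClient;

class HybridDataAccess
{
    // On-premise SQL Server for data you'd "be crazy to put in the cloud."
    const string OnPremises =
        @"Data Source=.\SQLEXPRESS;Initial Catalog=Payroll;Integrated Security=True";

    // SQL Azure for data that benefits from cloud reach; note the
    // user@server login form and the required encrypted connection.
    const string SqlAzure =
        "Server=tcp:myserver.database.windows.net;Database=Catalog;" +
        "User ID=myadmin@myserver;Password=...;Encrypt=True";

    // The table name is assumed trusted here; this is not production code.
    static int CountRows(string connectionString, string table)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT COUNT(*) FROM " + table, connection))
        {
            connection.Open();
            return (int)command.ExecuteScalar();
        }
    }

    static void Main()
    {
        // Route by sensitivity: payroll stays on-premise, catalog goes cloud.
        Console.WriteLine(CountRows(OnPremises, "dbo.Employees"));
        Console.WriteLine(CountRows(SqlAzure, "dbo.Products"));
    }
}

An application "that leverages the best of both" then becomes a routing decision: sensitive tables stay behind the firewall, reach-sensitive tables go to the cloud, and the data-access code doesn't change.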

There’s been increasing interest in Infrastructure 2.0 of late that’s encouraging to those of us who’ve been, well, pushing it uphill against the focus on cloud computing and virtualization for quite some time now. What’s been most frustrating about raising awareness of this concept is that cloud computing is one of the most tangible examples of both what Infrastructure 2.0 is and what it can do, and virtualization is certainly one of the larger technological drivers of Infrastructure 2.0-capable solutions today. So despite the frustration associated with cloud computing and virtualization stealing the stage, as it were, the spotlight is certainly helping to bring the issues that Infrastructure 2.0 is attempting to address to the fore. As it gains traction, one of the first challenges that must be addressed is to define what we mean when we say “Infrastructure 2.0.”

Like Web 2.0 – go ahead and try to define it simply – Infrastructure 2.0 remains, as James Urquhart put it recently, a “squishy term.” …

What complicates Infrastructure 2.0 is that not only is the term “squishy” but so is the very concept. After all, Infrastructure 2.0 is mostly about collaboration, about integration, about intelligence. These are not off-the-shelf “solutions” but rather enabling technologies designed to drive the flexibility and agility of enterprise networks forward in such a way as to alleviate the pain points associated with the brittle, fragile network architectures of the past.

Greg Ness summed up the concept, at least, very well more than a year ago in “The beginning of the end of static infrastructure” when he said, “The issue comes down to static infrastructure incapable of keeping up with all of the new IP addresses and devices and initiatives and movement/change already taking place in large enterprises” and then noted that “the notion of application, endpoint and network intelligence thus far has been hamstrung by the lack of dynamic connectivity, or connectivity intelligence.”

What Greg noticed was missing is context, and perhaps even more importantly the ability to share that context across the entire infrastructure. I could, and have, gone on and on and on about this subject, so for now I’ll just stop and offer up a few links to some of the insightful posts that shed more light on Infrastructure 2.0 – its drivers, its requirements, its breadth of applicability, and its goals. …

Listen to my conversation with Jeff Kaplan, managing director of strategic consulting firm THINKstrategies, and one of the foremost analysts tracking software-as-a-service, on-demand and cloud computing.

In this podcast, learn why it’s not enough to simply port an existing software package to the cloud without rearchitecting it, and hear about some of the ways enterprises will deal with hybrid environments that mix on-premise and cloud assets.

Tony Iams stratifies clouds into high, middle, low and vertical categories in his 1/15/2010 The Four Clouds of the Datacenter analysis of the HP/Microsoft agreement of last week:

In the real world, meteorologists classify clouds into four categories according to their base height: "high" clouds, "middle" clouds, "low" clouds, and "vertical" clouds, which can form at many heights. After this week's announcement that HP and Microsoft would jointly invest $250 million in developing and selling integrated cloud computing technology, it seems that a similar selection will emerge in datacenters, as vendors seek to offer clouds targeting different strata of customers. HP and Microsoft announced that they would jointly develop systems that are highly optimized for hosting Microsoft Exchange Server and Microsoft SQL Server. These systems will provide pre-integrated server, storage, networking and application packages for deploying and managing Microsoft's database and e-mail services with "push-button" simplicity.

While the prospect of tapping into third-party computing infrastructures remains a goal for many organizations, the most pressing concern for most is to virtualize as much of their internal infrastructure as possible into "secure" or "private" clouds. Indeed, for many users "cloud" currently implies nothing more than converging virtualized server, storage, and network resources into a single pool that workloads can draw upon as needed. However, these users are finding that the complexity of deploying virtual infrastructure can be overwhelming, especially in mid-sized organizations, which have the need for datacenter capabilities, but lack the depth of personnel to manage complex new datacenter functions such as virtual infrastructure.

In response, there has been a movement among the major systems vendors to provide integrated stacks that combine multiple layers of IT infrastructure, including servers, storage, networking, and software, into a single package that can be managed as a unit. Cisco is taking advantage of the need for such solutions to break into the server market, collaborating with VMware and EMC as part of the Virtual Computing Environment (VCE) alliance to deliver integrated solutions that are optimized for deploying cloud infrastructure. Oracle has integrated its database software with advanced storage functions based on Solid State Disks (SSD) in its Exadata appliance, and if its acquisition of Sun succeeds, it will be able to fold much of Sun's server, storage and software technology into future systems. IBM has strengthened the integration between its various server platforms by unifying the management experience for administrators across all of them. …

Tony continues with a table that describes the integrated stacks for deploying datacenter clouds from HP/Microsoft (low), Oracle (middle), IBM (high), and VMware/EMC (vertical). Tony is an analyst with Ideas International.

Cisco Security Intelligence Operations announces the Cisco 2009 Annual Security Report. The updated report includes information about 2009 global threats and trends, as well as security recommendations for 2010.

Managing and securing today's distributed and agile network is increasingly challenging, with cloud computing and the sharing of data threatening security norms. Online criminals continue to exploit users' trust in consumer applications and devices, increasing the risk to organizations and employees.

Report Highlights

Online criminals have taken advantage of the large social media user base, exploiting users' willingness to respond to messages that are supposedly from people they know and trust.

Politically motivated threats are increasing, while governments are teaming up and promoting online security.

Up to 90 percent of spam is untargeted. That includes botnet-delivered spam that floods inboxes with messages from supposed banks, educational institutions, and service providers.

More than 80 percent of the web can be classified as "uncategorized" or "unknown," making it challenging for traditional URL filtering technology.

The new Cisco Cybercrime Return on Investment Matrix tracks the performance of the underground online criminal marketplace, helping organizations understand the latest targets.

The 1/18/2010 meetup appears to be overbooked, but I’ll try to make the next one.

INPUT and SIIA will present SaaS/Gov 2010 on 2/11/2010 at The Westin, Washington, DC:

SaaS/Gov is the most comprehensive conference bringing together federal IT purchasers and software industry executives to address the government's movement towards Software as a Service (SaaS) and Cloud Computing.

The Microsoft BizSparkCamp for Windows Azure will be held at the Microsoft Technology Center, New York, NY from Thu 1/28/2010 to Fri 1/29/2010. This event consists of ½ day of training, 1 day of active prototype/development time, and ½ day for packaging/finishing and reporting out to a panel of judges for various prizes.

This event is a no-fee event (plan your own travel expenses), and each team can bring 3 participants (1 business person and 1–2 developers). Each team must include at least 1 developer. To nominate your team, please submit the following details to Sanjay Jain (preferably via your BizSpark Sponsor) no later than Mon 1/18/2010. Nominations will be judged according to the strength of the founding team, originality and creativity of the idea, and ability to leverage Windows Azure scenarios.

Presenting for the evening will be Chris Rolon. Chris is an Architectural Consultant for Neudesic, a solutions company focused on the delivery of products and services based on Microsoft .NET technologies. Chris brings an extensive technology background with more than 25 years of industry experience in custom application development and implementation.

Chris now heads the Windows Azure User Groups in New York and Malvern, PA, where he has been speaking on Azure since January of 2009. Chris has just completed a national tour where he spoke to other Microsoft partners on the merits of Windows Azure.

The meeting will be held at Google, 76 Ninth Ave, 4th Floor, New York, NY 10011. The presentation is free for NYTC members and sponsors, $15 for non-members.

John Willis adds his support for CloudCampHaiti in this 1/18/2010 post:

I would love to use this opportunity to inform you of something “Cloudcamp.org” has set up. CloudCampHaiti is a virtual unconference we are running this Wednesday afternoon (http://www.cloudcamp.org/haiti). Our primary goal is to raise money for the Red Cross. One hundred percent of the proceeds will go to the Haiti earthquake victims. However, we also have a theme: “How The Cloud Can Help.” We want to see how cloud computing and expert resources can be used to help in disasters like this. Our registration process is simple: $25 to attend the virtual conference, $50 to be listed as a special donor, and $250 to have your company logo displayed.

CloudCampHaiti is going to be a great event. We are going to have some of the biggest names in cloud computing present, and we will also have a panel session and open discussion based on the “How The Cloud Can Help” theme. This is a great opportunity to learn, participate and help.

I will be popping along to CloudCamp this Thursday. I’ve been to CloudCamp once before, when it coincided with QCon London, where I was a speaker in 2009. But TBH I really didn’t pay much attention last time around as it was a) the end of a long day and b) I was only dabbling with cloud at the time.

On January 1st I switched full time to the Windows Azure Platform, which means this time I will be paying attention :-) …

Jim Swartz, a longtime veteran of the CIO ranks, explains his view of the cloud’s pros and cons, as well as what he sees in store for IT leaders in 2010.

Sybase has made a major transition to become a mobility software provider. But beyond boosting the company's core offerings, CIO Jim Swartz is looking heavily at cloud computing, SaaS and maximizing the potential of the Millennial generation.

Swartz, formerly an IT leader at several companies, including SRI International and SAIC, spoke recently with CIO Insight Editor in Chief Brian P. Watson about his priorities for 2010. …

… Apple’s recent acquisition of digital music startup Lala rekindled speculation of an iTunes subscription service. There’s no shortage of subscription offerings (Napster, Rhapsody, Spotify, Pandora, etc.), but none have attracted the millions of subscribers necessary to make the high royalty structures work. Experts have speculated that Apple’s design expertise and hardware integration could make subscriptions work, and that leveraging Lala’s digital library, licenses from the major labels, and a management team that cycled through several business models, including the ten-cent web song rental, could make it a reality. It’s a logical assumption, but after talking to a wide variety of insider sources it’s clear there is no upcoming Apple subscription service and Apple has far different plans. …

Lala will play a critical role in Apple’s music future, but not for the reasons cited above. Lala’s licenses with major labels are non-transferable, so they’re not usable for any new iTunes service. The ten-cent song rental model never gained traction and does not cover mobile devices, and thus is of little value to Apple. What is of value is the personal music storage service, an often overlooked component of Lala’s business. As Apple did with the original iPods, Lala realized that any music solution must include the music a user already possesses. The Lala setup process provides software to store a personal music library online and then play it from any web browser alongside the web songs Lala vends. This technology, plus the engineering and management team, is the true value of Lala to Apple. …

Michael is the founder and former CEO of digital music pioneer MP3.com. He is currently the CEO of music locker company MP3tunes. Robertson is also an adviser to Google Voice.

IBM on Monday announced the technology and business expansion of its LotusLive cloud collaboration platform through a new R&D pipeline from IBM Research and plans to open the LotusLive suite to new partners.

I am here at Lotusphere 2010 in Orlando, sitting in the press room. John Fontana from Network World just walked in and asked me what I thought of the event so far. My reply:

“Well I am not saying Lotus has all its ducks in a row, but at least it has plenty of them now.”

The Lotus portfolio is now looking both increasingly broad and compelling at scale. …

See the Windows Azure Infrastructure section for details of the New York Technology Council’s inaugural meeting, which featured Bill Zack’s 3 Screens & the Cloud presentation, as well as the views of Alfred Spector, Google’s Vice President of Research.

The dual Web role application has been running in Microsoft's South Central US (San Antonio) data center since September 2009. I believe it is the oldest continuously running Windows Azure application.
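For readers who ask what such a deployment looks like, the instance count lives in the project's ServiceConfiguration.cscfg file. Here's a hypothetical sketch, assuming "dual" means two instances of a single web role; the service, role, and setting values are invented:

<?xml version="1.0"?>
<!-- Hypothetical ServiceConfiguration.cscfg; names and values are
     invented for illustration. -->
<ServiceConfiguration serviceName="SampleWebApp"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="WebRole1">
    <!-- Two running instances; Azure's compute SLA assumes at least two. -->
    <Instances count="2" />
    <ConfigurationSettings>
      <!-- Points the role at cloud storage rather than development storage. -->
      <Setting name="DataConnectionString"
               value="DefaultEndpointsProtocol=https;AccountName=youraccount;AccountKey=yourkey" />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>

Changing the count and redeploying (or editing the configuration in the portal) rescales the application without touching code, which is much of what keeps a demo app running continuously for months.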

About Me

I'm a Windows Azure Insider, a retired Windows Azure MVP, the principal developer for OakLeaf Systems and the author of 30+ books on Microsoft software. The books have more than 1.25 million English copies in print and have been translated into 20+ languages.

Full disclosure: I make part of my livelihood by writing about Microsoft products in books and for magazines. I regularly receive free evaluation software from Microsoft and press credentials for Microsoft Tech•Ed and PDC. I'm also a member of the Microsoft Partner Network.