When the computer industry buys into a buzzword, it's like getting a pop song stuck in your head. It's all you hear. Worse, the same half-dozen questions about the hyped trend are incessantly paraded out, with responses that succeed mainly in revealing how poorly understood the buzzword actually is.

These days, the hottest buzzphrase is "cloud computing," and for John Willis, a systems management consultant and author of an IT management and cloud blog, the most annoying question is this: Will enterprises embrace this style of computing?

"It's not a binary question," he asserts. "There will be things for the enterprise that will completely make sense and things that won't."

The better question, he says, is whether you understand the various offerings and architectures that fit under that umbrella term, the scenarios where one or more of those offerings would work, and the benefits and downsides of using them.

Even cloud users and proponents don't always recognize the downsides and thus don't prepare for what could go wrong, says Dave Methvin, chief technology officer at PC Pitstop LLC, which uses Amazon.com Inc.'s S3 cloud-based storage system and Google Apps. "They're trusting in the cloud too much and don't realize what the implications are," he says.

With that as prologue, here are seven turbulent areas where current and potential users of cloud computing need to be particularly wary.

Costs, Part I: Cloud Infrastructure Providers

When Brad Jefferson founded Animoto Productions, a Web service that enables people to turn images and music into high-production video, he chose a Web hosting provider for the company's processing needs. Looking out over the horizon, however, Jefferson could see that the provider wouldn't be able to meet anticipated peak processing requirements.

But rather than investing in in-house servers and staff, Jefferson turned to Amazon's Elastic Compute Cloud, a Web service known as EC2 that provides resizable computing capacity in the cloud, and RightScale Inc., which provides system management for users of Web-based services such as EC2. With EC2, companies pay only for the server capacity they use, and they obtain and configure capacity over the Web.

"This is a capital-intensive business," Jefferson said in a podcast interview with Willis. "We could either go the venture capital route and give away a lot of equity or go to Amazon and pay by the drink."

His decision was validated in April, when usage spiked from 50 EC2 servers to 5,000 in one week. Jefferson says he never could have anticipated such needs. Even if he had, it would have cost millions to build the type of infrastructure that could have handled that spike. And investing in that infrastructure would have been overkill, since that capacity isn't needed all the time, he says.

But paying by the drink might make less economic sense once an application is used at a consistent level, Willis says. In fact, Jefferson says he might consider a hybrid approach when he gets a better sense of Animoto's usage patterns. In-house servers could take care of Animoto's ongoing, persistent requirements, and anything over that could be handled by the cloud.
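The pay-by-the-drink versus fixed-capacity trade-off Willis and Jefferson describe can be sketched in a few lines. The rates below are purely illustrative assumptions, not Amazon's actual pricing or Animoto's numbers:

```python
# Illustrative rates only; real EC2 and hardware costs differ.
HOURS_PER_MONTH = 730

def cloud_cost(avg_servers, rate_per_hour=0.10):
    """Pay by the drink: billed only for server-hours actually used."""
    return avg_servers * rate_per_hour * HOURS_PER_MONTH

def inhouse_cost(servers, amortized_per_server=55.0):
    """Owned capacity: every server costs money, busy or idle."""
    return servers * amortized_per_server

avg_load, peak_load = 60, 500          # steady usage vs. occasional spike
all_cloud = cloud_cost(avg_load)
all_inhouse = inhouse_cost(peak_load)  # must be sized for the spike

# Hybrid: own servers cover the persistent baseline, cloud takes overflow.
baseline = 50
hybrid = inhouse_cost(baseline) + cloud_cost(avg_load - baseline)

print(f"all-cloud:   ${all_cloud:,.0f}/month")
print(f"all-inhouse: ${all_inhouse:,.0f}/month")
print(f"hybrid:      ${hybrid:,.0f}/month")
```

With these made-up numbers the hybrid comes out cheapest, which is the pattern Jefferson anticipates; where the crossover actually falls depends entirely on how spiky the load is.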

Costs, Part II: Cloud Storage Providers

Storage in the cloud is another hot topic, but it's important to closely evaluate the costs, says George Crump, founder of Storage Switzerland LLC, an analyst firm that focuses on the virtualization and storage marketplaces.

At about 25 cents per gigabyte per month, cloud-based storage systems look like a huge bargain, Crump says. But although Crump is a proponent of cloud storage, the current cost models don't reflect how storage really works, he says. That's because traditional internal storage systems are designed to reduce storage costs over the life of the data by moving older and less-accessed data to less-expensive media, such as slower disk, tape or optical systems. But today, cloud companies essentially charge the same amount "from Day One to Day 700," Crump says.

Amazon's formula for calculating monthly rates for its S3 cloud storage service is based on the amount of data being stored, the number of access requests made and the number of data transfers, according to Methvin. The more you do, the more you pay.
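Under that formula, a month's bill is just the sum of three metered components. Here is a sketch with made-up rates, not Amazon's actual published prices:

```python
def monthly_storage_bill(gb_stored, requests, gb_transferred,
                         storage_rate=0.25,    # $/GB-month, illustrative
                         request_rate=1e-5,    # $/request, illustrative
                         transfer_rate=0.10):  # $/GB moved, illustrative
    """Sum the three metered components of an S3-style bill:
    data stored, access requests made, and data transferred."""
    return (gb_stored * storage_rate
            + requests * request_rate
            + gb_transferred * transfer_rate)

# 100 GB stored, a million requests, 50 GB downloaded in one month:
bill = monthly_storage_bill(100, 1_000_000, 50)
print(f"${bill:.2f}")
```

The point of the structure is Methvin's: the more you do on any of the three axes, the more you pay.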

Crump says that with the constant decline of storage media costs, it's not economical to store data in the cloud over a long period of time.

Cloud storage vendors need to create a different pricing model, he says. One idea is to move data that hasn't been accessed in, say, six months to a slower form of media and charge less for this storage. Users would also need to agree to lower service levels on the older data. "They might charge you $200 for 64G the first year; and the next year, instead of your having to buy more storage, they'd ask permission to archive 32G of the data and charge maybe 4 cents per gigabyte," Crump explains.
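Crump's hypothetical can be worked through numerically. One assumption is needed that his quote leaves open: that the 4-cent archive rate is per gigabyte per month.

```python
# Crump's hypothetical numbers, with the 4-cent archive rate assumed
# to be per gigabyte per month.
gb_total, year1_price = 64, 200.0
flat_rate = year1_price / gb_total / 12          # ~$0.26/GB-month

# Year 2: half the data moves to slower media at the lower rate.
gb_active, gb_archived, archive_rate = 32, 32, 0.04
year2_price = (gb_active * flat_rate + gb_archived * archive_rate) * 12

print(f"year 1: ${year1_price:.2f}")
print(f"year 2: ${year2_price:.2f} instead of ${year1_price:.2f} at the flat rate")
```

Under those assumptions the second-year bill drops by roughly 40 percent, which is the incentive Crump argues users should be offered in exchange for lower service levels on older data.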

To further drive down their own costs and users' monthly fees, providers could store older data on systems that can power down or off when not in use, Crump says.

Sudden Code Changes

With cloud computing, companies have little to no control over when an application service provider decides to make a code change. This can wreak havoc when the code isn't thoroughly tested and doesn't work with all browsers.

That's what happened to users of Los Angeles-based SiteMeter Inc.'s Web traffic analysis system this summer. SiteMeter is a software-as-a-service (SaaS) operation whose application works by injecting scripts into the HTML code of Web pages that users want tracked.

In July, the company released code that caused some problems. Any visitor using Internet Explorer to view Web pages with embedded SiteMeter code got an error message. When users began to complain, Web site owners weren't immediately sure where the problem was.

"If it were your own company pushing out live code and a problem occurred, you'd make the connection," Methvin explains. "But in this situation, the people using the cloud service started having users complaining, and it was a couple of hours later when they said, 'Maybe it's SiteMeter.' And sure enough, when they took the code out, it stopped happening."

The problem with the new code was greatly magnified because something had changed in the cloud without the users' knowledge. "There was no clear audit trail that the average user of SiteMeter could see and say, 'Ah, they updated the code,' " Methvin says.

Soon after, SiteMeter unexpectedly upgraded its system, quickly drawing the ire of users such as Michael van der Galien, editor of PoliGazette, a Web-based news and opinion site. The new version was "frustratingly slow and impractical," van der Galien says on his blog.

In addition, he says, current users had to provide a special code to reactivate their accounts, which caused additional frustration. Negative reaction was so immediate and intense that SiteMeter quickly retreated to its old system, much to the relief of van der Galien and hundreds of other users.

"Imagine Microsoft saying, 'As of this date, Word 2003 will cease to exist, and we'll be switching to 2007,' " Methvin says. "Users would all get confused and swamp the help desk, and that's kind of what happened."

Over time, he says, companies such as SiteMeter will learn to use beta programs, announce changes in advance, run systems in parallel and take other measures when making changes. Meanwhile, let the buyer beware.

Service Disruptions

Given the much-discussed outages of Amazon S3, Google's Gmail and Apple's MobileMe, it's clear that cloud users need to prepare for service disruptions. For starters, they should demand that service providers notify them of current and even potential outages.

"You don't want to be caught by surprise," says Methvin, who uses both S3 and Gmail. Some vendors have relied on passive notification approaches, such as their own blogs, he says, but they're becoming more proactive.

For example, some vendors are providing a status page where users can monitor problems or subscribe to RSS feeds or cell phone alerts that notify them when there's trouble. "If there's a problem, the cloud service should give you feedback as to what's wrong and how to fix it," Methvin says.
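Watching such a status feed can itself be automated. The sketch below parses a generic RSS 2.0 feed for incident titles; the feed content is invented, and real providers' feed layouts vary.

```python
import xml.etree.ElementTree as ET

# Invented feed content; real providers' status feeds differ in layout.
SAMPLE_FEED = """<rss version="2.0"><channel>
  <title>Example Cloud Service Status</title>
  <item>
    <title>Elevated error rates</title>
    <description>Investigating increased request errors.</description>
  </item>
</channel></rss>"""

def current_incidents(feed_xml):
    """Return the titles of open incidents from an RSS 2.0 status feed."""
    root = ET.fromstring(feed_xml)
    return [item.findtext("title") for item in root.iter("item")]

incidents = current_incidents(SAMPLE_FEED)
```

In practice this would be wired to the provider's real feed URL and polled on a schedule, feeding whatever alerting a shop already uses.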

Users should also create contingency plans with outages in mind. At PC Pitstop, for instance, an S3 outage would mean users couldn't purchase products on its site, since it relies on cloud storage for downloads. That's why Methvin created a fallback option. If S3 goes down, products can be downloaded from the company's own servers.
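The fallback Methvin describes boils down to a try-the-cloud-first pattern. Here is a minimal sketch of that shape, with simulated fetch functions standing in for real network calls; none of this is PC Pitstop's actual code.

```python
def fetch_with_fallback(fetch_primary, fetch_fallback):
    """Serve a download from cloud storage, falling back to in-house
    servers when the primary raises a network-style error."""
    try:
        return fetch_primary()
    except OSError:
        return fetch_fallback()

# Simulated S3 outage: the primary fetch fails, the local copy is served.
def s3_fetch():
    raise OSError("S3 unreachable")

def local_fetch():
    return b"installer bytes from the in-house server"

data = fetch_with_fallback(s3_fetch, local_fetch)
```

The design choice worth noting is that the fallback path must be exercised regularly, or it will quietly rot until the day it is needed.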

PC Pitstop doesn't have a backup plan for Google Apps, but Methvin reasons that with all of its resources, Google would be able to get a system such as e-mail up and running more quickly than his own staffers could if they had to manage a complex system like Microsoft Exchange. "You lose a little bit of control, but it's not necessarily the kind of control you want to have," he says.

Overall, it's important to understand your vendor's fail-over strategy and develop one for yourself. For instance, Palo Alto Software Inc. offers a cloud-based e-mail system that uses a caching strategy to enable continuous use during an outage. Called Email Center Pro, the system relies on S3 for primary storage, but it's designed so that if S3 goes down, users can still view locally cached copies of recent e-mails.
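That caching strategy amounts to a read-through cache that keeps serving during an outage. The following is a sketch in that spirit, not Palo Alto Software's implementation:

```python
class CachedMailStore:
    """Read-through cache: successful reads from primary storage refresh
    a local cache; an outage serves the cached copy instead of failing."""

    def __init__(self, fetch_remote):
        self._fetch_remote = fetch_remote  # e.g. a call into S3
        self._cache = {}                   # message_id -> body

    def get_message(self, message_id):
        try:
            body = self._fetch_remote(message_id)
        except OSError:                    # primary store is down
            return self._cache[message_id]
        self._cache[message_id] = body
        return body

# Normal operation populates the cache; a later outage still serves mail.
outage = False
def remote(message_id):
    if outage:
        raise OSError("S3 unreachable")
    return f"body of {message_id}"

store = CachedMailStore(remote)
first = store.get_message("msg-1")   # fetched from primary and cached
outage = True
cached = store.get_message("msg-1")  # served from the local cache
```

The limitation is built in: only messages read before the outage are available during it, which matches the article's "recent e-mails" caveat.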

Forrester Research Inc. advises customers to ask whether the cloud service provider has geographically dispersed redundancy built into its architecture and how long it would take to get service running on backup. Others advise prospective users to discuss service-level agreements with vendors and arrange for outage compensation.

Many vendors reimburse customers for lost service. Amazon.com, for example, applies a 10% credit if S3 availability dips below 99.9% in a month.
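That kind of credit rule is simple to express directly. This is a simplified sketch of a flat single-threshold credit; Amazon's actual S3 SLA has additional tiers and fine print.

```python
def outage_credit(minutes_down, monthly_bill,
                  threshold=99.9, credit_pct=10):
    """Flat service credit when monthly availability falls below the
    SLA threshold. Simplified; real SLAs typically have several tiers."""
    minutes_in_month = 30 * 24 * 60            # 43,200
    availability = 100.0 * (1 - minutes_down / minutes_in_month)
    return monthly_bill * credit_pct / 100 if availability < threshold else 0.0

# 99.9% over a 30-day month allows roughly 43 minutes of downtime.
no_credit = outage_credit(minutes_down=30, monthly_bill=500.0)
credit = outage_credit(minutes_down=120, monthly_bill=500.0)
```

Running the numbers makes the limits of such credits obvious: a two-hour outage on a $500 monthly bill returns only $50, which rarely covers the business cost of the downtime.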

Vendor Expertise

One of the biggest enticements of cloud computing is the promise of IT without the IT staff. However, veteran cloud users are adamant that this is not what you get. In fact, since many cloud vendors are new companies, their expertise -- especially with enterprise-level needs -- can be thin, says Rene Bonvanie, senior vice president at Serena Software Inc. It's essential to supplement providers' skills with those of your own in-house staff, he adds.

"The reality is that most of the companies operating these services are not nearly as experienced as we hoped they would be," Bonvanie says.

The inexperience shows up in application stability, especially when users need to integrate applications for functions like cross-application customer reporting, he says.

Serena itself provides a cloud-based application life-cycle management system, and it has decided to run most of its own business in the cloud as well. It uses a suite of office productivity applications from Google, a marketing automation application from MarketBright Inc. and an on-demand billing system from Aria Systems Inc.

So far, it has pushed its sales and marketing automation, payroll, intranet management, collaboration software and content management systems to the cloud. The only noncloud application is SAP, for which Serena outsourced management to an offshore firm.

According to Bonvanie, "the elimination of labor associated with cloud computing is greatly exaggerated."

The onus is still on the cloud consumer when it comes to integration. "Not only are you dealing with more moving parts, but they're not always as stable as you might think," he says.

"Today, there's no complete suite of SaaS applications, no equivalent of Oracle or R/3, and I don't think there ever will be," Bonvanie says. "Therefore, we in IT get a few more things pushed to us that are, quite honestly, not trivial."

Global Concerns

Cloud vendors today have a U.S.-centric view of providing services, and they need to adjust to the response-time needs of users around the world, says Reuven Cohen, founder and chief technologist at Enomaly Inc., a cloud infrastructure provider. This means ensuring that the application performs as well for users in, say, London as it does for those in Cincinnati.

Bonvanie agrees. Some cloud vendors "forget that we're more distributed than they are," he says.

For instance, San Bruno, Calif.-based MarketBright's cloud-based marketing application works great for Serena's marketing department in Redwood City, Calif., but performance diminished when personnel in Australia and India began using it. "People should investigate whether the vendor has optimized the application to work well around the world," Bonvanie says. "Don't just do an evaluation a few miles from where the hardware sits."

Worldwide optimization can be accomplished either by situating servers globally or by relying on a Web application acceleration service, also called a content delivery network, such as that of Akamai Technologies Inc. These systems work across the Internet to improve performance, scalability and cost efficiency for users.

Of course, situating servers globally can raise thorny geopolitical issues, Willis points out. Although it would be great to be able to load-balance application servers on demand in the Pacific Rim, Russia, China or Australia, the industry "isn't even close to that yet," he says. "We haven't even started that whole geopolitical discussion."