Top 12 Cloud Trends Of 2012

Five years into the cloud computing phenomenon, we're much more aware of the limitations and consequences. Here are 12 trends to watch in the coming year, starting with numbers 7 to 12.

As we descend through increasingly nested layers of virtualization, how do we know when we've reached the real, physical, bare-metal machine? Philosopher René Descartes started his famous "I think, therefore I am" reasoning with a thought experiment: What if a deceiving demon were feeding us a perfect set of false sensory information, tricking us into thinking we're in the real world? Modern philosophers restate the demon as a disembodied brain in a vat.

Decades of science fiction, from The Matrix to Inception to Vanilla Sky, have tackled the notion that we might not be in the world we experience but instead are living within a simulation.

This has important consequences for hardware makers. When we don't know which virtualization layer we're at, the jar is what matters most, because it's the only thing that knows what's real. The bare metal has an important role to play: it establishes trust, it thwarts trickery, and it can accelerate security functions and dedicate physical resources.

New hardware takes time to find its way into the wild. But the latest chipsets have features that distinguish the physical machine from the hypervisors it runs, and in a virtual world where no machine knows if it's just a brain in a jar, the jar is critical. In 2012, we'll start to expect more of the bare metal, because it's the only thing we can really trust.

Cloud Trend No. 9: The Rise Of Real Brokerages.

Enterprises use dozens of clouds already. Those bills are adding up, not just in terms of cost, but in terms of complexity. Some providers bill by machine; others by CPU cycle; others by user, megabyte, or request.
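To see how tangled those models get, here's a hypothetical sketch that normalizes each provider's bill to a single monthly figure. The provider categories, units, and rates below are invented for illustration; they aren't any real vendor's price sheet.

```python
# Hypothetical sketch: reducing heterogeneous cloud billing models
# to one comparable monthly number. All rates are invented.

def monthly_cost(pricing: dict, usage: dict) -> float:
    """Multiply each billed unit by its rate and sum the result."""
    return sum(rate * usage.get(unit, 0) for unit, rate in pricing.items())

# Three invented providers, each metering a different unit.
providers = {
    "per-machine": {"machine_months": 72.0},     # flat fee per VM per month
    "per-request": {"million_requests": 0.40},   # metered by API call volume
    "per-storage": {"gigabyte_months": 0.12},    # metered by data stored
}

# One month of illustrative usage across all three.
usage = {"machine_months": 10, "million_requests": 500, "gigabyte_months": 2000}

for name, pricing in providers.items():
    print(f"{name}: ${monthly_cost(pricing, usage):,.2f}")
```

A brokerage would perform exactly this kind of normalization, at scale and across real price sheets, so a buyer can compare offers on one axis.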

Having several providers is useful, because it offers the customer some degree of independence and negotiating leverage. But managing myriad cloud offerings will soon turn enterprise IT professionals into procurement officers and contract negotiators, handling varying terms and conditions, payment schemes, and disputes.

In a market with many buyers and sellers, brokerages inevitably emerge. They simplify and standardize transactions. They perform "bulk breaking"--the sharing of a good across many buyers--and assortment. And they find pricing efficiencies.

We're already seeing the start of cloud brokerages. Spot markets are an early indicator of the market liquidity necessary for a brokerage. Cross-cloud platform-as-a-service offerings like OpenShift and Cloud Foundry encourage workloads to move from cloud to cloud. And brokers like Cloudability aim to streamline billing and management of multiple contracts.

In 2012, expect to see the first real cloud brokerage offerings, as enterprise IT organizations look to team with other companies to procure and manage commodity cloud capacity.

Cloud Trend No. 8: An SLA Detente.

One of the biggest enterprise IT complaints is that the cloud offers bad service level agreements. Here's why that complaint doesn't hold up.

I drive a Volkswagen, but I don't get my insurance from that company. I get it from one that specializes in amortizing risk across clients. My insurance company knows the chances I'll get into an accident, as well as how safe my car is. It spends a lot of time reviewing safety features of cars and understanding regulations and quality checks by governments.

Cloud providers are no more in the business of amortizing risk than carmakers are in the business of selling insurance. If you want the risk amortized, find an insurer or certifier of some kind that can inspect the cloud provider on your behalf and vouch for its reliability.

Now consider hardware. We don't ask a hardware maker to guarantee its equipment. We ask how likely it is to fail--the mean time between failures--and use that baseline to create an architecture that gives us the reliability we need. We build this architecture out of resilient tiers that can fail gracefully: DNS, load balancers, and so on.
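That reasoning can be made concrete with back-of-the-envelope availability math. The MTBF and repair-time numbers below are illustrative, not vendor figures.

```python
# Sketch of the availability arithmetic behind "architect for the SLA
# you need": unreliable parts, combined redundantly, beat one good part.

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability from mean time between failures
    and mean time to repair."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def redundant(single: float, n: int) -> float:
    """Availability of n independent replicas in parallel: the
    system is down only when all n are down simultaneously."""
    return 1 - (1 - single) ** n

one = availability(mtbf_hours=1000, mttr_hours=10)  # ~99.0% for one machine
print(f"single machine:    {one:.4f}")
print(f"three in parallel: {redundant(one, 3):.6f}")  # roughly six nines short of 1
```

The independence assumption is the catch: replicas sharing a rack, a zone, or a bug fail together, which is why clouds expose availability zones in the first place.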

Clouds offer availability zones, CDN front-ends, shared storage, message queues, and dozens of other building blocks with which an architect can create applications of unprecedented scale and resiliency.

And that's the second problem with the complaints about cloud SLAs: The best SLA is the one you architect for yourself.

A combination of certifications and amortization from insurance companies will assuage some of the enterprise SLA concerns, by giving risk a price. The remaining concerns will be addressed by better architecture. In 2012, we'll realize that the providers have been trying to tell us something: You can have any SLA you want, as long as you code it yourself and find a way to turn risk into economic value.

Cloud Trend No. 7: Disaster Recovery And Scaling Are The New Drivers.

The first thing we virtualized was the print server. When virtualization first emerged, IT used it as a way to cut costs by consolidating otherwise idle machines running mundane tasks: print, email, and intranet servers--things that weren't mission critical but were taking up space.

After a few years, virtualization found its way into test and development, where the rate of change was high enough that ease of deployment was paramount. Consolidation helped here too, but what really mattered was the ability to quickly clone, copy, spin up, and tear down machines as QA needed.

Today, we're using virtualization for production applications, and we know that many virtual machines, running on commodity hardware, properly clustered and architected, can actually be more reliable than standalone high-end servers.

That's an important shift--from non-mission-critical applications to really critical ones. Cloud computing is undergoing a similar shift. Early cloud use was for experimentation, throwaway applications, and spiky, batch computing jobs. But now companies are realizing that highly available, cross-geography deployments can help them survive outages better than machines they own. On-demand computing changes the economics of disaster recovery significantly.

Moreover, the ability to scale up and down according to the user experience we want to deliver makes cloud computing attractive for time-sensitive applications, and as we learn to code elastic applications, clouds look like the right place to run them.
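As a minimal sketch of what "coding an elastic application" means, here's a hypothetical sizing function. The per-instance capacity, headroom target, and redundancy floor are assumptions for illustration, not figures from the article.

```python
# Minimal autoscaling sketch: choose an instance count from observed
# load so the user experience stays constant. All thresholds invented.

import math

def desired_instances(requests_per_sec: float,
                      capacity_per_instance: float = 100.0,
                      headroom: float = 0.7,
                      min_instances: int = 2) -> int:
    """Size the fleet so each instance runs at ~70% of its capacity,
    never dropping below a small floor kept for redundancy."""
    needed = math.ceil(requests_per_sec / (capacity_per_instance * headroom))
    return max(needed, min_instances)

print(desired_instances(50))     # quiet night -> the floor of 2
print(desired_instances(5000))   # traffic spike -> 72
```

Real autoscalers add smoothing and cooldown periods so the fleet doesn't thrash, but the core loop is this simple: measure, size, converge.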

This means that in 2012, disaster recovery and elastic scaling will replace cost savings and convenience as the big reasons for enterprises to adopt the cloud.

There are six more predictions to go to round out the top 12.

Alistair Croll, founder of analyst firm Bitcurrent, is conference chair of the Cloud Connect events. Cloud Connect will take place in Santa Clara, Calif., from Feb. 13 to 16.



This is a great article on cloud trends. I particularly like No. 7, which covers the growing use of the cloud for DR. I also found another relevant article on DR in the cloud here: http://www.fracrack.com/blog/2...

Cloud = mainframe. We're going backwards with new buzzwords. All these years coding for distributed servers, and now we're going back to consolidated operating systems with multiple cores. Mainframes--yay--I used to work on them.

At least with mainframes you had the option to keep your data in-house, behind your own protection. With the cloud, anyone can take a crack at it. Today, companies like VeriSign are being attacked daily, and they're supposed to be trust authorities. How safe do you think your data will be on a mainframe (sorry, cloud) that's accessible by millions of people? When Anonymous gets hold of an administrator password, it won't matter how compartmentalized your data is. At least behind hardline firewalls you're not subject to random "I choose you, Pikachu" attacks: the hackers have to be in the same geographical location, have access to your hard lines, and have the ability to hack both your comms and your encryption (a rare combination).

For any company where every transaction means money (banks, stock exchanges, clearing houses, etc.), doing business in the cloud is clear insanity. For music stores and book sales, go for it--it saves a lot on infrastructure costs. Shared mainframe time. If a music store goes down, it's 10,000 people out of a job, not an entire bank and all of its investors out of their homes.

And don't trot out the adage that the data is encrypted; everyone knows the data can't be attacked directly. Attackers go after the people with the passwords--key loggers and the like. It takes one little mistake in the wee hours of the morning, when you're half awake, to accidentally load a key logger. If one person in your trust chain is compromised, you may as well not have encryption. Why? Because encryption keys should be cycled, but they aren't. They're normally hard-coded, because they're a pain to change. All it takes is one annoyed ex-employee and you may as well be sending clear text.

DES has been compromised. MD5 lookup tables are prevalent: thanks to weaknesses in the MD5 algorithm, the attacker doesn't need your actual password, only a string whose hash matches it, and to a computer the two are the same. Maintaining solid, rapidly changing security is beyond our current programming models. The hackers are ahead of us on this one, and until we catch up there will be a lot more hacking to come.

Back on topic--I realize I deviated, but I'll leave it because it seriously applies to the whole "cloud"/"mainframe" concept. If you're a little startup, by all means go cloud. If you deal in billions, I'd stay away and hire a good CIO.
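The lookup-table weakness the commenter describes can be demonstrated with the standard library alone. The password below is a stand-in, and this is a sketch of the exposure, not production guidance.

```python
# Sketch: why unsalted hashes fall to precomputed lookup tables,
# and why a per-user salt defeats them. Password is illustrative.

import hashlib
import os

password = b"letmein"  # stand-in for a common, weak password

# Unsalted: the same input always yields the same digest, so one
# precomputed table of common passwords unlocks every site using it.
unsalted = hashlib.md5(password).hexdigest()

# Salted: a random per-user value makes each stored hash unique,
# defeating precomputed tables (though MD5 itself remains a poor choice).
salt = os.urandom(16)
salted = hashlib.md5(salt + password).hexdigest()

print(unsalted == hashlib.md5(password).hexdigest())  # digest is deterministic
print(salted != unsalted)                             # salt changes the digest
```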

With nearly all of the cloud providers I've worked with as a consultant, it's striking that their SLAs center only on uptime and availability. That's ironic: you wouldn't buy a PC simply because it turns on and stays on if its performance is atrocious!

It's here that the cloud will need to mature as more and more critical applications are considered for it. SLAs need to be refined around performance metrics, not just uptime and availability.

At the company I work for, Virtual Instruments, we measure the infrastructure performance of critical applications deployed in the cloud by looking across the SAN fabric. What we've found incredibly successful is enabling our cloud-provider clients to mature their SLAs based on performance metrics such as response times.

We've also helped these cloud providers establish SLAs for end users who didn't necessarily have any in place for their lower-tier apps. This has been a clear differentiator in winning new customers and convincing them to move more key applications into the cloud.

From what I'm seeing, 2012 will be the year performance takes precedence in the drive toward the cloud, and that means a better grasp of SLA distinction and definition.

One of the issues with cloud computing, addressed in the article, is scale--and sharding of data is increasingly used as a way to deal with it. We've got a general blog post on what database sharding is and how it can be implemented. (http://www.scalebase.com/datab...
