(Formerly "Service Level Automation in the Datacenter") Cloud Computing and Utility Computing for the Enterprise and the Individual.

Monday, January 28, 2008

It's the labor, baby...

I'm getting ready to go back to work on Wednesday, so I decided today (while Owen is at school and Mia has Emery) to get caught up on some of the blog chatter out there. First, read Nick Carr's interview with GRIDToday. Damn it, I wish this were the sentiment he communicated in "The Big Switch", not the "it's all going to hell" tone the book actually conveyed.

Second, Google Alerts, as always, is an excellent source, and I found an interesting contrarian viewpoint about cloud computing from Robin Harris (apparently a storage marketing consultant). Robin argues there are two myths that are propelling "cloud computing" as a buzz phrase, but that private data centers will never go away in any real quantity.

Daniel Lemire responds with a short-but-sweet post that points out the main problem with Robin's thinking: he assumes that hardware is the issue, and ignores the cost of labor required to support that hardware. (Daniel also makes a point about latency being the real issue in making cloud computing work, not bandwidth, but I won't address that argument here, especially with Cisco's announcement today.)

The cost of labor, combined with real economies of scale, is the core of the economics of cloud computing. Take this quote from Nick Carr's GRIDToday interview:

If you look at the big trends in big-company IT right now, you see this move toward a much more consolidated, networked, virtualized infrastructure; a fairly rapid shift of compressing the number of datacenters you run, the number of computers you run. Ultimately … if you can virtualize your own IT infrastructure and make it much more efficient by consolidating it, at some point it becomes natural to start to think about how you can gain even more advantages and more cost savings by beginning to consolidate across companies rather than just within companies.

Where does labor come into play in that quote? Well, consider "compressing the number of datacenters you run", and add to that the announcement that the Google datacenter in Lenoir, North Carolina will hire a mere 200 workers (up to 4 times as many as announced Microsoft and Yahoo data centers). This is a datacenter that will handle traffic for millions of people and organizations worldwide. If, as Robin implies, corporations will take advantage of the same clustering, storage and network technologies that the Googles and Microsofts of the world leverage, then certainly the labor required to support those data centers will go down.
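To make the labor argument concrete, here is a hypothetical back-of-envelope comparison. Every number below (server counts, staff counts, salary) is an illustrative assumption, not a figure reported by Google or anyone else; the point is only how support labor amortizes across servers at utility scale.

```python
# Hypothetical back-of-envelope: annual support-labor cost per server.
# All figures are illustrative assumptions, not reported numbers.

def labor_cost_per_server(staff: int, servers: int, salary: float) -> float:
    """Annual support-labor cost attributed to each server."""
    return staff * salary / servers

# A typical enterprise shop: assume 10 admins for ~1,000 servers.
enterprise = labor_cost_per_server(staff=10, servers=1_000, salary=80_000)

# A utility-scale datacenter: assume 200 workers over ~50,000 servers.
utility = labor_cost_per_server(staff=200, servers=50_000, salary=80_000)

print(f"enterprise: ${enterprise:,.0f}/server/year")  # $800/server/year
print(f"utility:    ${utility:,.0f}/server/year")     # $320/server/year
print(f"ratio: {enterprise / utility:.1f}x")          # 2.5x
```

Even with these deliberately conservative assumptions, the per-server labor cost drops sharply at scale, and that gap is what the "consolidation across companies" savings are made of.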

The rub here is that, once corporations experience these new economies of scale, they will begin to look for ways to push the savings as far as possible. Then the "consolidat[ion] across companies rather than just within companies" takes hold, and companies begin to shut down their own datacenters and rely on the compute utility grid. It's already happening with small business, as Nick, I, and many others have pointed out. Check out Don MacAskill's SmugMug blog if you don't believe me. Or GigaOM's coverage of Standout Jobs. It may take decades, as Nick notes, but big business will eventually catch on. (Certainly those startups that turn into big businesses using the cloud will drive some of these economics.)

One more objection to Robin's post. To argue that "Networks are cheap" is a fallacy, he notes that networks still lag in speed behind processors, memory, bus speeds, etc. Unfortunately, that misses the point entirely. All that is needed are network speeds that allow functions to complete in a time that is acceptable for human users and economically viable for system communications. That threshold is independent of the network's speed relative to other components. For example, my choice of Google Analytics to monitor blog traffic depends solely on my satisfaction with the speed of the conversation. I don't care how fast Google's hardware is, and all evidence seems to point to the fact that their individual systems and storage aren't exceptionally fast at all.
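The "fast enough" argument above can be sketched in a few lines. The threshold and timing numbers here are assumptions chosen for illustration; the point is that viability is an absolute test against user tolerance, not a relative comparison of network speed to CPU or bus speed.

```python
# Sketch of the "fast enough" test: a cloud service is viable when the
# total response time clears an acceptability threshold, regardless of
# how the network compares to processors or buses. Numbers are assumed.

ACCEPTABLE_MS = 1000  # assumed tolerance for an interactive web request

def is_acceptable(network_ms: float, server_ms: float,
                  threshold_ms: float = ACCEPTABLE_MS) -> bool:
    """Viable if network time plus server time beats the threshold."""
    return network_ms + server_ms <= threshold_ms

# Slow individual servers behind a fast-enough network still pass:
print(is_acceptable(network_ms=150, server_ms=400))  # True
# Only when the absolute total blows past user tolerance does it fail:
print(is_acceptable(network_ms=900, server_ms=400))  # False
```

Note that nothing in the test mentions how fast the local bus or CPU is; that relative comparison simply never enters the user's experience.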

The highway/ramp/feeder analogy is certainly a good one, but surely you agree that the three combine to establish traffic throughput; adding feeder and onramp capacity does nothing if the freeway is the bottleneck. Greatly increasing freeway capacity allows (and, I would argue, motivates) those that are building ramps and feeders to increase capacity to match.

I appreciate you pointing this out, however. It really is a great analogy.

About Me

James Urquhart is a widely experienced enterprise software field technologist. James started his career programming a manufacturing job tracking system on the Macintosh (circa 1991), and slowly expanded his experience to include distributed systems architectures, online community and identity systems, and most recently utility computing and cloud computing architectures. He has held positions in pre- and post-sales services, software engineering, product marketing, and program management for the online developer communities of one of the largest developer sites in the world. His admittedly schizophrenic background is driven by a desire to work with technologies that are disruptive, but that simplify computing overall.

James is also an avid blogger. His primary blog, recently renamed "The Wisdom of Clouds" (http://blog.jamesurquhart.com), is focused on utility computing, cloud computing and their effect on enterprises and individuals.

In addition to his online work, James is the father of two children, a son, Owen, and a daughter, Emery, and the husband of the perfect friend and wife, Mia. James lives in Alameda, CA, and plays rock and bluegrass guitar.