Monday, February 26, 2007

Much to my delight, the last few weeks have been filled with customer activity, ranging from helping deliver a Service Level Automation-enabled appliance for a major software company, to helping the financial wing of one of the world's largest manufacturers experience first hand the benefits of utility computing.

The latter runs an application that is highly dependent on Windows Terminal Services to deliver a client-server UI to thousands of retail outlets world-wide. Uptime is critical to this application, as customers will go elsewhere for financing if this application doesn't confirm credit within minutes of a purchase decision.

Unfortunately, WTS is also a very inefficient consumer of server capacity. It is a session-based infrastructure, which means that a user is attached to a specific server for their desktop access until they either log out or are kicked off. If 15 user sessions share a physical server, there is no way to predict the load on the system. All 15 sessions could be idle, or all could quickly start consuming cycles simultaneously.

This gives me my first really good example of when CPU and/or memory utilization are not good Service Level metrics on their own. Imagine an environment using WTS to support hundreds of users. These users use their Windows sessions to run a variety of tasks, much like any Windows user. Some tasks use a high level of CPU and memory, others very little. Quite often, the session will sit idle for several minutes.

Now, if you create lots of sessions because the CPU is idle, you could end up with problems if they all become active at the same time (say, right after lunch). If you stop creating sessions on a server because CPU utilization is high, you may end up with a highly underutilized server when one user's game of Quake wraps up.
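To make the trap concrete, here's a toy sketch of why instantaneous CPU is a poor placement signal for session-based infrastructure. The per-session numbers are purely assumed for illustration, not measurements from any real WTS deployment:

```python
# Assumed, illustrative figures: an idle session costs almost nothing,
# while an active one can demand a full share of the server.
IDLE_CPU_PER_SESSION = 0.01   # ~1% of a server while idle
PEAK_CPU_PER_SESSION = 0.10   # ~10% of a server while active

def observed_cpu(sessions):
    """What a CPU-only monitor sees when every session is idle."""
    return sessions * IDLE_CPU_PER_SESSION

def worst_case_cpu(sessions):
    """What the same server faces if every session wakes up at once."""
    return sessions * PEAK_CPU_PER_SESSION

# A server hosting 15 idle sessions looks nearly empty...
now = observed_cpu(15)        # ~15% utilization: "plenty of headroom"
# ...but is already oversubscribed if everyone returns from lunch.
later = worst_case_cpu(15)    # 150% of the server's capacity
```

A placement policy that admits sessions whenever `observed_cpu` is low would keep packing this server right up until the post-lunch spike.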

That's not to say that CPU or memory utilization aren't an important part of the Service Level "equation". The truth is, there are several metrics that apply to WTS capacity: sessions, CPU utilization, memory utilization, licenses, etc. Since determining Service Level compliance probably involves evaluating the relationships between several of these metrics at once, there will most likely be one or two compound metrics based on mathematical equations combining these "root" metrics in a way that allows reasonable thresholds to be set.
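One way such a compound metric might look is a weighted combination of the "root" metrics, normalized to a single 0.0-1.0 score. The weights, capacities, and sample values below are all assumptions for the sake of illustration; a real deployment would tune them empirically:

```python
def wts_load_score(sessions, max_sessions, cpu_util, mem_util,
                   licenses_used, licenses_total,
                   w_sessions=0.4, w_cpu=0.3, w_mem=0.2, w_lic=0.1):
    """Combine several root metrics into one score in [0.0, 1.0]
    against which a single capacity threshold can be set.
    Weights are illustrative assumptions, not recommended values."""
    session_ratio = sessions / max_sessions
    license_ratio = licenses_used / licenses_total
    return (w_sessions * session_ratio
            + w_cpu * cpu_util
            + w_mem * mem_util
            + w_lic * license_ratio)

# A server with 14 of 15 session slots filled scores as "nearly full"
# even though its sessions are currently idle (CPU at 5%).
score = wts_load_score(sessions=14, max_sessions=15,
                       cpu_util=0.05, mem_util=0.60,
                       licenses_used=140, licenses_total=200)
```

Because the session ratio carries the largest weight here, the score stays high for a heavily subscribed server regardless of its momentary CPU reading, which is exactly the behavior the Quake example argues for.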

Another interesting observation is that this is a lot like the Java EE Service Level Automation problem. (Thanks to Luis Cuyun for pointing this out.) While most horizontally scalable application tiers can be scaled up and down as a unit (i.e. "add a node/remove any node"), app servers, hypervisors and (now) WTS all must be monitored as a unit, but managed on a per-server basis (in this case, "add a node/remove this specific node"). This is because the "instances" that each of these software resources is hosting are "sticky" to a server (VMotion notwithstanding), and you don't want to shut down just any server when capacity is no longer needed; you want to remove only the specific servers with no live sessions remaining on them.
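The "remove this specific node" selection might be sketched as follows. The server names and session counts are hypothetical; in practice the pool state would come from whatever monitoring interface the WTS farm exposes:

```python
def servers_safe_to_remove(pool):
    """Given {server_name: live_session_count}, return only the
    servers that can be reclaimed without killing sticky sessions."""
    return [name for name, sessions in pool.items() if sessions == 0]

# Hypothetical farm state: two servers still host live sessions,
# two have fully drained.
pool = {"wts-01": 12, "wts-02": 0, "wts-03": 7, "wts-04": 0}

# "Remove any node" would risk cutting off wts-01's 12 users;
# per-server management reclaims only the drained servers.
candidates = servers_safe_to_remove(pool)
```

The contrast with a stateless web tier is that there the list of candidates is simply *every* node, so the automation layer never needs per-server session awareness.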

(Speaking of VMotion, one of the things that both Java EE and WTS will require to be really optimizable is the ability to move live services/sessions from one server to another in real-time. Anyone know of a technology addressing either of these?)

The good news for a good Service Level Automation environment (*ahem*) is that if one of these problems (Java EE, virtual servers or WTS) is solved correctly, the same basic technology can be applied to all of them. That's not to say that anyone is doing this for WTS today (to my knowledge, no one is), but I like the idea that the use cases that apply to Java EE SLA also apply to WTS SLA.

I'm hoping to have more to write about this as this pilot continues. In the meantime, anyone with Windows experience is welcome and encouraged to contribute their two cents to this discussion. In particular, are there any tried and true service level metrics for WTS that are being used out there? In general, there are so many moving parts here, that I am sure there are many critical factors to Service Level Automation of WTS that I have not covered, or even considered.

Thursday, February 01, 2007

Vinay Pai, Cassatt's truly talented Director of Product Development, has joined the blogging fray. He starts with an excellent post on his recent experiments with completely provisioning 400+ servers in just a few hours without having an extensive system administration background. Vinay's blog should be a fun read for those of you who are considering Cassatt (or, at least, service level automation and/or utility computing infrastructure). His day to day job has him getting "down and dirty" with both live customer installations and extreme test scenarios.

What's that you say? You want a single URL where all of the best minds blogging about service level automation, dynamic datacenters and utility computing come together in a single, easy to read package? Well, your wish is granted! Most of the blogs referenced in my "Links" section are gathered there in real time via RSS, with the latest post always at the top, so you always get the freshest insight into next generation IT architecture, technology and culture...

About Me

James Urquhart is a widely experienced enterprise software field technologist. James started his career programming a manufacturing job tracking system on the Macintosh (circa 1991), and slowly expanded his experience to include distributed systems architectures, online community and identity systems, and most recently utility computing and cloud computing architectures. He has held positions in pre and post sales services, software engineering, product marketing, and program management for the online developer communities of one of the largest developer sites in the world. His admittedly schizophrenic background is driven by a desire to work with technologies that are disruptive, but that simplify computing overall.

James is also an avid blogger. His primary blog, recently renamed "The Wisdom of Clouds" (http://blog.jamesurquhart.com), is focused on utility computing, cloud computing and their effect on enterprises and individuals.

In addition to his online work, James is the father of two children, a son, Owen, and a daughter, Emery, and the husband of the perfect friend and wife, Mia. James lives in Alameda, CA, and plays rock and bluegrass guitar.