*nix scripts for reference, and thoughts on technology.

Monthly Archives: February 2011


This is just a quick hack to find which uid owns how many processes on your system. Obviously, this can be adapted to do just about anything, but it's a good example of how Perl can use Unix commands inside a script.

One of the big buzzwords in IT in the last couple of years has been virtualization. It saves money in various ways, right?

Well, let’s take a look, albeit a subjective one*:

First let's look at the hardware. Virtualization on this level usually means fewer physical servers in your datacenter. That saves hardware, which is good, right? Sure, but (and that's a huge BUT) it also means bigger machines with more components inside: more CPUs, more RAM, more disks, and so on. The upfront cost of these machines will very likely be lower than the cost of the servers you're replacing, but it won't be proportionally lower. Chances are you will still spend a considerable amount of money on them. But still…cheaper, yep.

Let's have a look at your headcount, which is always a great angle in a presentation to execs: as an IT manager you're probably cost-driven, and headcount can be a huge factor in your budget plans. Fewer machines must equal less manpower, right? But is this really the case? Chances are your admins and engineers are busy keeping your current infrastructure afloat, leaving them almost no time to get acquainted with new technology, because, let's face it, most companies do not see IT as their core business. So you will have to give them time to familiarize themselves with various virtualization technologies. Various? Yes, because it's never a good idea to have only one vendor supplying you, and thereby lock yourself in. Training is expensive, much like hiring new people (which really doesn't help your headcount) to broaden the brainpool of your company. But still…cheaper, yep.

Now we'll turn to licensing your virtual platforms. Obviously this will depend heavily on the vendor(s) you choose, but let's say you decide to go with VMware and KVM. Those licenses and support contracts do not come cheap, and neither do the licenses for commercially using the OS. You will also have to invest in training your personnel on the new platform of your choice. But still…cheaper, maybe.

Dig into OS and application licensing: each instance of your virtual OS will run applications, and most of them will have licensing costs attached. The more instances you run on a virtual platform, the lower the performance of each instance will be; see the law of diminishing returns for reference. This will force you to buy more virtual platforms, because an application that you load-balanced across 30 physical hosts may not do so well on just 30 virtual hosts, adding to your costs. But still…cheaper, could be.

Finally, let's look at the issue of managing these new machines: your brand new virtual datacenter needs software, and people, to manage it. Very likely this is software you haven't used before…more licenses for the hypervisors, the performance monitoring, and so on. It's also almost certain that your current operations team has no experience with this software. This will increase your incident count, and therefore the time these people spend managing and monitoring your virtual infrastructure, which is going to be very different from the landscape you had. But still…cheaper, not so much.

In closing, I'd also like to remind you that there are a ton of applications you may not be able to virtualize, for reasons such as the need to recode an entire application for the new platform and its (new?) application server, or the limits most virtualization platforms put on what they can offer their guests, e.g. in memory. See the VMware Wiki for examples. But still…who are we kidding?

Virtualization does make sense in a lot of fields if you're…Google or the like. Not so much if you have a very diverse infrastructure of custom-built systems and applications to suit the specific needs of a large organization, at least not to the degree some vendors would like you to think.

* Subjective mainly because I look at this issue from the perspective of a Systems Engineer within a large organization that does not have many “standardized” systems. Your mileage may vary.

Every now and then ZFS, or a zpool, can run into problems at boot time; here's a way to solve them on a Solaris system.

ok boot -m milestone=none
# Remount / as writable
/sbin/mount -o rw,remount /
# Remove or move the zpool cache, so ZFS "forgets" that zpools exist on this system
rm /etc/zfs/zpool.cache
# Determine which pools may have problems
fmdump -eV
# Bring the system up to the "all" milestone
svcadm milestone all
# List the importable pools, then import them one by one (zpool import <poolname>),
# skipping the ones fmdump reported problems for
zpool import