Technical Article

OS Virtualization: Diminishing Returns are Still Returns

I got an e-mail newsletter yesterday with a link to BEA's Virtualization TCO calculator. Since my team is engaged in a lively debate about virtualization and its alleged benefits (you can tell which side of the fence I'm on at the moment), I visited the calculator to see what it would say. Then I sent the following results to the team with a smug smile on my face, because the virtualized OS environment turned out to be more expensive than the non-virtualized one.

TCO Summary

| | Non-Virtualized | Virtualized OS | Virtualized LVM |
| --- | ---: | ---: | ---: |
| Server Hardware | $125,000 | $125,000 | $100,000 |
| Software License/Support | $1,483,000 | $2,058,313 | $2,241,250 |
| Software Administration | $270,000 | $1,215,000 | $378,000 |
| Unplanned Downtime | $1,500,000 | $375,000 | $300,000 |
| Data Center Real Estate | $27,977 | $27,977 | $18,651 |
| Power and Cooling | $56,365 | $56,365 | $45,092 |
| Total | $3,462,342 | $3,857,655 | $3,082,994 |

Someone, of course, pointed out that the power and cooling costs didn't quite jibe: they're identical for the non-virtualized and virtualized OS environments. My thought was that maybe they do. After all, while you reduce the number of servers in a virtualized architecture, you also increase the average utilization of those servers, so the two effects ought to even out in the end. But I didn't want to leave it at mere theory; I wanted some concrete proof.

So I did some research and found some real data¹ with which I thought I could back up my theory, then plugged that data into a spreadsheet. I thought, "Ha! I'll show you that this isn't the rosy picture so many virtualization supporters like to paint."

So here's where I get to admit I was wrong. Yeah, I know - hard to believe, but it's true. And here's the data behind that startling epiphany. Each grouping of data is based on consolidation of 10 servers with average CPU utilization of 40%, 65%, and 80% respectively. The resulting number of servers is the number required to maintain the same load overall, but at 95% utilization per server.
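As a quick sketch of that consolidation math (my own reconstruction, not BEA's calculator; the function name and the round-to-the-nearest-whole-server assumption are mine, though they do reproduce the server counts in the tables below):

```python
# Consolidating N servers at a given average CPU utilization onto the
# fewest servers that carry the same aggregate load at 95% utilization.
# Assumption: the result is rounded to the nearest whole server, which
# matches the 4 / 7 / 8 figures in the tables below.

def consolidated_server_count(servers, avg_util, target_util=0.95):
    """Servers needed to carry the same aggregate CPU load at target_util."""
    return round(servers * avg_util / target_util)

for avg in (0.40, 0.65, 0.80):
    print(f"{avg:.0%} avg -> {consolidated_server_count(10, avg)} servers at 95%")
# prints: 40% avg -> 4 servers at 95%
#         65% avg -> 7 servers at 95%
#         80% avg -> 8 servers at 95%
```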

40% AVG -> 95% AVG

| | Number of Servers | Amps per Server | Watts per Server | BTU/Hour per Server | Total Amps | Total Watts | Total BTU/Hour |
| --- | ---: | ---: | ---: | ---: | ---: | ---: | ---: |
| Non-Virtualized | 10 | 1.5 | 325 | 1,107 | 15 | 3,250 | 11,070 |
| Virtualized | 4 | 1.9 | 415 | 1,414 | 7.6 | 1,660 | 5,656 |

65% AVG -> 95% AVG

| | Number of Servers | Amps per Server | Watts per Server | BTU/Hour per Server | Total Amps | Total Watts | Total BTU/Hour |
| --- | ---: | ---: | ---: | ---: | ---: | ---: | ---: |
| Non-Virtualized | 10 | 1.7 | 366 | 1,249 | 17 | 3,660 | 12,490 |
| Virtualized | 7 | 1.9 | 415 | 1,414 | 13.3 | 2,905 | 9,898 |

80% AVG -> 95% AVG

| | Number of Servers | Amps per Server | Watts per Server | BTU/Hour per Server | Total Amps | Total Watts | Total BTU/Hour |
| --- | ---: | ---: | ---: | ---: | ---: | ---: | ---: |
| Non-Virtualized | 10 | 1.8 | 391 | 1,107 | 18 | 3,910 | 11,070 |
| Virtualized | 8 | 1.9 | 415 | 1,414 | 15.2 | 3,320 | 11,312 |
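The diminishing-returns trend is easy to pull out of the three tables; a few lines of Python (the total-wattage figures are copied straight from the tables above) show the savings shrinking as the starting utilization climbs:

```python
# (scenario, non-virtualized total watts, virtualized total watts)
# -- figures taken directly from the tables above
scenarios = [
    ("40% -> 95%", 3250, 1660),
    ("65% -> 95%", 3660, 2905),
    ("80% -> 95%", 3910, 3320),
]

for name, before, after in scenarios:
    print(f"{name}: {1 - after / before:.0%} of total wattage saved")
# prints: 40% -> 95%: 49% of total wattage saved
#         65% -> 95%: 21% of total wattage saved
#         80% -> 95%: 15% of total wattage saved
```

Diminishing, yes, but every scenario still comes out ahead on power.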

It appears, then, that although there is a diminishing return on the amount of energy saved by consolidating physical servers and virtualizing the environment, there is still a return. Fewer servers == less power, even given that modern servers draw variable power based on utilization. It turns out that OS virtualization may indeed be more energy efficient in the long run.

Hmmmph. Well, there's still the issue of increased licensing and administrative costs.