It seems like every day somebody is complaining that VMware ESX is too expensive. Blog after blog and article after article say that VMware should drop its price to stay competitive. I don’t get it. I don’t hear anyone making the same argument about Windows 7. I can get Ubuntu for free, so why should I have to pay for Windows 7? Why does VMware get the high-price rap? Why should the leader change its pricing model to stay competitive? My dad taught me when I was very young that you get what you pay for. If you buy a Ford, don’t expect a Ferrari. Now, I don’t have anything against Fords. I’ve owned several, but none of them would outperform a Ferrari.

And that is what we are getting at here. I’m not going to do price comparisons with XenServer and Hyper-V; that’s been done to death by all three camps. I just want to put my views down where I can see them, to help me better present my thoughts. First, sticking with the performance theme: VMware pays a lot of smart people to develop things that, let’s face it, their competitors don’t have. Lockstep Fault Tolerance, Memory Overcommit, Transparent Page Sharing, Storage VMotion, the Distributed vSwitch, Host Profiles, and, coming soon, Memory Compression. I’m not going to argue the merits of these and why I think they are invaluable; I’m just going to reiterate: you get what you pay for.

But let’s look at how expensive VMware really is. One thing I have seen is that most cost calculations leave out the fact that the primary motivator for the move to a virtualized platform is retiring old hardware. Warranties on typical servers are three years, even though I’ve seen many companies stretch the life cycle to five years or more. I’m going to use a three-year refresh rate in my computations.

So, let’s say you are a small to mid-sized business and this year you have 20 servers that are going end of life. You could purchase 20 new servers to replace them, or you could purchase three new servers plus VMware licenses. I’m going to use vSphere Enterprise for my example, since an SMB wouldn’t need Enterprise Plus. The table below shows the “cost” of VMware. The only thing that isn’t included is storage, because there are too many variables: some people already have the free space on the SAN, some only need to buy disks for an existing SAN, and some need a new SAN. There are too many different scenarios for me to include them all in this short blog entry.

So the “cost” of a VMware solution is a savings of $80,066. Even if you don’t have the time and resources for the upgrade, you can hire a Solutions Architect (like me, for example) for less than that $80k and still come out ahead. Now, I’m a forward-thinking person; I don’t just look at the right now, I like to look at, let’s say, five years down the road. Let’s assume you have a modest 20% growth in your server farm. What happens over the next five years?
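If you want to sanity-check this kind of comparison for your own environment, the shape of the year-one math is simple. Every price in this sketch is a placeholder assumption (the real figures live in the table above), so the result here will not match the $80,066 exactly; plug in your own quotes.

```python
# Back-of-the-envelope sketch of the year-one comparison.
# All prices below are illustrative assumptions, NOT vendor list prices.
PHYSICAL_SERVER_COST = 6000      # assumed cost of one commodity replacement server
HOST_SERVER_COST = 12000         # assumed cost of one larger virtualization host
VSPHERE_LICENSE_PER_CPU = 2875   # assumed per-CPU license price for vSphere Enterprise
CPUS_PER_HOST = 2

def year_one_savings(retiring_servers=20, hosts=3):
    """Cost of replacing every server one-for-one, minus the virtualized option."""
    buy_physical = retiring_servers * PHYSICAL_SERVER_COST
    buy_virtual = hosts * (HOST_SERVER_COST + CPUS_PER_HOST * VSPHERE_LICENSE_PER_CPU)
    return buy_physical - buy_virtual

print(year_one_savings())  # savings with the assumed prices above
```

With these made-up prices the virtualized option still comes out tens of thousands ahead; the point is the structure of the comparison, not the exact dollar figure.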

Now year three gets even better because we have plenty of room for the 20% growth with the existing infrastructure.

Year four: time for a server refresh, and time to purchase SnS (Support and Subscription) for vSphere.

Year five: time to refresh the year-two servers and purchase SnS again. Once more we have room for the 20% growth in the existing infrastructure and do not need to buy new servers.

Granted, this all assumes that the price of vSphere doesn’t increase and the prices of servers don’t increase. I’m sure some of you noticed I left the 20% static from the first year instead of compounding it every year. I just found the charts easier to read and understand if the numbers stayed the same. The difference is only six servers over five years, which I think is fairly negligible for this exercise. I feel this is a more real-world exercise than what most analysts show when they compare purchasing VMware with doing nothing. Most analysts, when explaining why VMware is too expensive, will show you $60,000 vs. $0 as in the first example. They forget that time marches on.

So, where does this leave us over a five-year period? Just in case you weren’t keeping score, we now have thirty-six servers, all virtual, running on five vSphere hosts: a modest ratio of roughly 7:1, which allows for a one-host failover solution, giving you a 9:1 ratio during a host failure or maintenance.
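The head count above falls straight out of the growth model described earlier: 20 virtualized servers in year one, plus a static 20% of that first year (four servers) in each of the four following years.

```python
# Modeling the five-year growth: 20 servers virtualized in year one,
# plus a static 20%-of-year-one (4 servers) added each year after that.
initial_vms = 20
yearly_growth = int(initial_vms * 0.20)  # held static at 4, as described in the text

vms = initial_vms
for year in range(2, 6):                 # years two through five
    vms += yearly_growth
print(vms)                               # 36 virtual servers by year five

hosts = 5
print(round(vms / hosts, 1))             # ~7.2:1 consolidation ratio with all hosts up
print(vms / (hosts - 1))                 # 9.0:1 during a host failure or maintenance
```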

Wait a minute, Tony, are you saying that I will save almost $300,000 over five years just by virtualizing 20 servers and virtualizing all my new servers? Yes, that’s what I’m saying. And not only do you save almost $300,000, you now have increased uptime through vSphere High Availability. Your System Administrators can now do hardware maintenance and repairs during the day and spend nights and weekends with their families. This may not seem like much to a manager, since the System Administrators are on salary and get paid the same for a ninety-hour week as for a forty-hour week, but trust me, as an ex-System Administrator, this will increase morale and decrease turnover.

But wait, there’s more! Not only does it cut and dice, chop and slice, it’s also GREEN! Let’s look at the power consumption over the same five years. I’m going to use nice round numbers and assume each server is running continuously at 300 watts. Of course this will vary, but it’s a good average mark. The smart folks at Dell use a 2.8 Power Usage Effectiveness (PUE) rating, so I won’t second-guess them and will use the same. I will also use the same $0.10 per kWh that the fine folks at Dell use in their calculations.
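Those three numbers are all you need to work out what one always-on server costs per year in electricity:

```python
# Annual electricity cost of one always-on server, using the figures above:
# 300 W draw, a PUE of 2.8, and $0.10 per kWh.
watts = 300
pue = 2.8
price_per_kwh = 0.10
hours_per_year = 24 * 365

kwh_at_server = watts * hours_per_year / 1000   # 2628 kWh drawn by the server itself
kwh_total = kwh_at_server * pue                 # ~7358 kWh including cooling and overhead
annual_cost = kwh_total * price_per_kwh
print(round(annual_cost, 2))                    # ~735.84 dollars per server, per year
```

Multiply roughly $736 a year by the dozens of physical boxes the consolidated design never buys, across the five-year window, and you get to the operating-expense savings quoted below.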

That’s a savings of $87,565 over five years of operating expenses, bringing the grand total to a savings of $376,721. If you have a larger environment, the savings just increase. As I said, this doesn’t include storage costs, but you have to purchase storage no matter which solution you choose.

So let’s recap. The high cost of vSphere for twenty servers is a savings of $376,721 over a five-year period. In the first year alone you get a savings of $80,000 on a $60,000 purchase. Sounds like vSphere pays for itself in less than twelve months, a fact that has been proven by firms like Forrester, who are better trained than I am at making these kinds of calculations. If VMware drops their price, will innovation stop? Would VMware be able to stay so far ahead of their competition? And if the cost of vSphere is a savings of almost $400k for 20 servers, how low do you want them to go? How greedy are we? Have we become spoiled by the so-called free cost of open source software? Don’t just look at the license cost; do an in-depth analysis, then decide if you can afford not to virtualize.

Ok, I’m going to break form here and write something besides a hint, tip, or preview of technology to come. This is about my new hero. As a young boy, I loved sports, still do actually, but as a young boy I really loved sports. Football, basketball, baseball, softball, kickball, I didn’t care. If someone was playing, I was in. Unfortunately, at 5’11” I was too short for basketball, too slow for football, and not good enough a hitter for baseball. And then there was the Olympics. For two weeks every few years I was enthralled by the world’s best athletes. Summer Olympics, Winter Olympics, it didn’t matter. Seeing those athletes stand on the podium with their national anthem playing was one of the most awesome experiences I could imagine. I was so in awe that I even had the summer and winter Olympic video games on my computer, an old IBM 8086. But, of course, there was no way I was athletic enough for the Olympics, so it remained a dream.

As the years passed I discovered that there was something that I was good at: computers. Working with computers I was a natural. PCs, servers, networks, it didn’t matter, I could handle it. The years passed by and I took on the look of a Systems Administrator. You know the ones: goatee, what hair I have left cut short, overweight from all the Mountain Dews I drank trying to stay awake doing after-hours system upgrades while the users were home living their lives. Now don’t get me wrong, I know we don’t all look like this; some of my peers actually do things like …gulp… run, and without chasing someone or being chased. I’m just saying that if you were building the stereotypical Systems Administrator, he would look a lot like this….

Imagine my surprise when I saw Steven Holcomb…

I turned to my wife and said, “He missed his calling, he should have been in IT.” Well, as I watched him over the next several days, he placed sixth in the two-man bobsled, and then guided his team to the gold medal in the four-man bobsled. Steven Holcomb definitely did not miss his calling. But still, I couldn’t help being curious, so I did some research. I was amazed at what I found. Holcomb majored in Computer Science at the University of Utah and the University of Phoenix’s online program. He is an A+ Certified Computer Technician, a Network+ Certified Network Technician, and a Microsoft Certified Professional. What? He’s one of us!!!! You mean the dream of being an Olympic champion is not out of reach, even for us IT guys? I find this out now, when I’m 40 years old?! I could’ve been a contender. I think Holkie said it best on Twitter: “Find a dream, work for it, then live it!”

Steven Holcomb, thanks for showing us that anyone, even an IT geek, can realize the Olympic dream. Congratulations on your Gold Medal.

Last week I was in Vegas with around two thousand other partners for VMware’s Partner Exchange. Quite a bit of the talk there was centered around VMware View. At one point I looked at my session list and thought I was at a View conference. One of these sessions was billed as a View install-and-config lab, but was actually changed to a View 4.x preview. I use 4.x because VMware has yet to commit to a version number, even though most of the world is already calling it 4.5.

All of this View bombardment got me thinking about the way I view View. When I talk to people about the advantages of View, the talk invariably turns to things like security, high availability, cost, and ROI. What ultimately gets lost is the secret ingredient: ThinApp. Well, VMware is changing that. ThinApp is a secret no more, and the timing couldn’t be better, not just for VMware, who is looking for an edge to keep them at the head of the virtualization pack, but also for the customer.

Gone are the days of five or ten applications. I have seen and heard cases of two to four hundred applications. Wouldn’t it be great if you could put out a desktop that had zero applications? How easy would that be to maintain? How small a footprint would that be? How easy would it be to manage that single image? But how would the end users do their jobs? That’s where ThinApp comes in.

So many of today’s administrators are having to deal with applications that are changing too fast or not changing at all. How do you deal with two applications when one only works with Internet Explorer 6 and the other only works with Internet Explorer 8? Or what happens when you need two or more versions of Java running at the same time? Even worse, what if you have moved to Windows 7 and your application only runs on Windows XP? ThinApp takes care of that by separating the application from the operating system. Even better, it will let you publish applications to groups of people. Need everyone in accounting to get a new application ASAP? There’s a ThinApp for that (sorry, couldn’t resist).

This is such a powerful tool that you have to experience it to appreciate it. VMware must think so as well. VMware wants you to see View and ThinApp, and they are bringing them to you with VMware Express. They are also giving away ThinApp with your purchase of vSphere to let you try it out for yourself. VMware is banking on the fact that once you see it, you will have to have it. Yes, ThinApp is a secret no more.

Well, it does in vSphere ESX 4.0 at least. What am I talking about? Many of you may have tried to mount an ISO image on an ESX host in the past. You probably used a command similar to mount -o loop -t iso9660 /<path to iso>/file.iso /mnt/iso, and you probably got this back: ioctl: LOOP_SET_FD: Invalid argument. After doing some online research, like I did, you will find that you can’t mount an ISO that resides on a VMFS volume from an ESX 3.x host. Just for kicks, I tried it on an ESX 4.0 host and it worked. No more having to upload ISOs to the local file system of every ESX server. Now I can keep all my ISOs on one VMFS LUN and use them for the hosts or the guests. Thanks, VMware!

It’s a new year and I’ve decided to finally start the blog I’ve been threatening to write for over a year. Some of the things that I write here will be helpful hints that I stumble upon along the way. Some, like this one, will be in response to things that I have heard or read. In this case, this post on myvirtualcloud.net got me thinking. Quite a few people are asking about using local storage for VDI implementations. Personally, I disagree with using local storage for VDI. Maybe in a few instances it could be justified, but to me the risks aren’t worth the rewards.

Somewhere along the way, the VDI message has been misconstrued. I’ve heard cost cited as the deciding factor in whether VDI is implemented or not. Just that: cost, and nothing else. Maybe we are to blame. In today’s economy (don’t you hate that phrase), cost is the driving factor, and vendors have used cost to try to sell VDI. The problem is that, at first blush, VDI isn’t cheaper than traditional PCs. You have to figure in the cost of the VMware View license (I’ll look at this from a VMware perspective, but the same applies to other solutions) and the cost of the Microsoft license for XP or Windows 7, assuming you aren’t using a Linux distro for your desktop. Just these two licenses alone can be expensive; then add in the cost of the thin terminal, the host servers, and the shared storage, and the price starts to mount quickly. Pretty soon the cost is about the same as a traditional desktop. Because of this, people have started looking for ways to cut costs. Some just dismiss VDI and say that VMware must cut the cost of View. Others look at the other pieces of the puzzle, and in many cases storage is the target.

I believe that shared storage is practical for two reasons. First, competition is making storage cheaper and more efficient all the time. Second, thanks to linked clones, the amount of storage needed is no longer astronomical. With linked clones, a hundred desktops can take up a few gigabytes versus the terabyte they would previously have taken.
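The linked-clone math is easy to sketch. The image and delta sizes below are assumptions for illustration, not View defaults; the idea is simply that full clones copy the whole base image per desktop, while linked clones share one replica and store only per-desktop delta disks.

```python
# Rough linked-clone storage arithmetic. Sizes are illustrative assumptions.
base_image_mb = 10 * 1024        # assumed 10 GB parent desktop image
delta_per_clone_mb = 200         # assumed per-desktop delta disk
desktops = 100

# Full clones: every desktop carries a complete copy of the base image.
full_clone_storage_mb = desktops * base_image_mb                      # ~1 TB

# Linked clones: one shared replica plus a small delta per desktop.
linked_clone_storage_mb = base_image_mb + desktops * delta_per_clone_mb  # ~30 GB

print(full_clone_storage_mb, linked_clone_storage_mb)
```

Even if your deltas grow well past these assumed sizes, the gap between copying the image a hundred times and sharing it once is what makes shared storage affordable for VDI.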

Not only do I believe that shared storage for VDI is practical, I believe it’s critical. The most important element in the success of a VDI implementation is user experience. No matter how great the design looks on paper, or what price you were able to negotiate, the bottom line is “Is the user happy?” Today’s users are accustomed to having their desktops respond as soon as they move the mouse. With shared storage and HA, you can guarantee that will happen. Your users will never know that you have a server offline and are frantically working to bring it back online. All they know is that they can continue to do their jobs. This is what will make or break your VDI implementation, and this is what makes shared storage worth the cost. With local storage the story is different: with all the tools available in products like View, you can spin up a new image for your users in minutes, provided you have another host lying around with enough local storage to handle the images. But unlike the days of traditional PCs, when broken hardware meant one user call, you now have 60 or more. That’s 60 phone calls that someone has to answer, 60 tickets that some helpdesk person will probably generate, and 60 irate users. Even if it is for only an hour, that’s 60 users who are upset, and that’s not a good situation to be in.

A last word on cost. Sure, VDI can seem costly; it’s a large upfront expense, and I only recommend doing a wholesale conversion to VDI when you need to upgrade your hardware and are already planning on a big purchase. So where are the cost savings? Why move to VDI? The cost savings come in a couple of areas. One is the longevity of the systems. A thin terminal is going to have a much longer life span than a traditional PC. You also have the ability to do things like memory upgrades by allocating more memory to the VMs without having to buy more physical memory, which will extend the life of the virtual desktops. Another savings point that IT has to be mindful to champion is the savings in power and cooling. Since PCs reside in the offices and not in IT, these savings will be seen by facilities and will not impact the IT budget, so be sure to let management know where those facility savings are coming from. The savings that impresses me the most, though, is manageability. Simply put, it takes fewer people and less time to manage a VDI infrastructure than it does a traditional PC infrastructure. I’m not advocating people losing their jobs, but it does allow people to be used on projects and proactive activities rather than continually fighting fires. According to research, all of these savings add up to a 12-to-18-month return on investment. So, would I implement VDI if I were doing a massive upgrade project? Yes. Would I implement VDI in small phases where there were use cases to support it? Yes. Would I replace outdated hardware with thin terminals as they break or come up for retirement? Yes. Would I implement VDI with shared storage? YES!