Many IT organizations are starting to consider automated provisioning of virtual machines (VMs). In fact, all IT organizations would benefit from examining automated virtualization management in general. So, let's look at how this capability will soon become a must-have, driving vendors' product development over the next few years, and at the products available today.

This tip is part of my virtualization series, Addressing all phases of virtualization adoption. In the last installment, we examined some of the challenges virtualization poses, focusing on disaster recovery, cluster configuration and failover structures.

From server sprawl to VM sprawl

At first, server virtualization's ability to consolidate tens of physical servers onto just one host was considered a real solution for reducing the uncontrolled proliferation of new servers. Unfortunately, early adopters experienced just the opposite result. What happened?

The good news is that the cost of implementing a new server in a virtual data center is dramatically lower, because provisioning now takes hours, sometimes minutes, instead of weeks or months. The only real limitations to deployment are the availability of physical resources to assign to new VMs and, if Windows is used, license prices. (The latter has less impact when a large corporation has a volume licensing agreement with Microsoft.)

Suddenly, IT managers could move quickly from planning to live implementation. Yet this ease often created a false perception of the infrastructure's limits.

Since multi-tier architectures seemed less complex to build, IT directors contemplated new scenarios, such as isolating applications for security, performance or compatibility reasons. New applications could also be deployed for testing without hesitation.

What often happened in this scenario was that companies did not enforce strict policies. Virtual infrastructures, depending on their size, presented different challenges that weren't considered.

Bigger corporations, for instance, are still trying to understand how to account for virtualization in their cost centers. They handed new resources to departments, but infrastructure administrators later could not determine which VMs were actually in use, or how.

Smaller companies without an authorization process have granted provisioning capabilities to several individuals -- even people without deep virtualization knowledge -- in order to execute projects faster. So, within a short period, almost anybody wanting a new VM could simply assemble one and power it on.

In such uncontrolled provisioning environments, three things typically happen:

Many who create and deploy new VMs have no understanding of the big picture: how many VMs a physical host can really handle, how many are planned for a single physical server, and which kinds of workloads are best suited to a certain location.

Every new VM deployment compromises the big picture itself, leading to performance issues and continuous rebuilding of consolidation plans.

Every new VM brings a set of operating system and application licenses, which require special attention before being assigned but rarely receive it.

So, without really realizing it, companies have planted a virtual machine jungle: no documentation, no management of the related licenses, no precise roles, often not even an owner. Obviously, this ad hoc approach impacts the overall health of a virtual data center.
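A lightweight inventory is the first defense against such a jungle. As a minimal sketch (the record fields and function names below are illustrative assumptions, not any vendor's schema), every VM could be refused provisioning until it has a documented owner, role and license list:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class VMRecord:
    """Minimal provisioning record: every VM gets an owner, a role and its licenses."""
    name: str
    owner: str                                    # person or cost center accountable for the VM
    role: str                                     # e.g. "web tier", "test", "database"
    licenses: list = field(default_factory=list)  # OS and application licenses consumed
    created: date = field(default_factory=date.today)

inventory = {}

def register_vm(record: VMRecord):
    # Refuse undocumented VMs: no owner, no provisioning.
    if not record.owner:
        raise ValueError(f"VM {record.name} has no owner; provisioning denied")
    inventory[record.name] = record

register_vm(VMRecord("web01", owner="marketing", role="web tier",
                     licenses=["Windows Server 2003", "IIS"]))
```

Even a simple gate like this gives administrators the data they need later: who owns each VM, why it exists and which licenses it consumes.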

The need for automation

As the virtual data center grows, IT managers need new ways to perform everyday operations, and tools that help them scale up when needed.

When handling a large number of VMs, the biggest problem is placement. As I've said many times during this series, correct distribution of workloads is mandatory to achieve good performance with the given physical resources.

Choosing the best host for a virtual machine is not easy once you take into account each host's free resources and already-hosted workloads. That's where capacity planning tools become highly desirable.

Managing capacity manually during everyday data center life is simply overwhelming. Deciding placement takes a lot of time, and the whole environment is almost liquid, with machines moving from one host to another to balance resource usage, to free a host for maintenance, or for other reasons. In this scenario, the best placement becomes a relative concept.
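Capacity planning tools automate exactly this decision. As a rough illustration of the idea, not any vendor's algorithm, a greedy first-fit-decreasing placement over hosts' free memory might look like this (host and VM names are hypothetical):

```python
def place_vms(hosts, vms):
    """Greedy first-fit-decreasing placement by free memory (MB).

    hosts: dict of host name -> free memory; vms: dict of VM name -> required memory.
    Returns a mapping of VM -> host, or raises if a VM cannot be placed.
    A real capacity planner would also weigh CPU, I/O, affinity rules and headroom.
    """
    free = dict(hosts)
    placement = {}
    # Place the largest workloads first, so smaller VMs fill the remaining gaps.
    for vm, need in sorted(vms.items(), key=lambda kv: kv[1], reverse=True):
        # Prefer the host with the most free memory that can still fit this VM.
        candidates = sorted(free.items(), key=lambda kv: kv[1], reverse=True)
        host = next((h for h, f in candidates if f >= need), None)
        if host is None:
            raise RuntimeError(f"no host has {need} MB free for {vm}")
        placement[vm] = host
        free[host] -= need
    return placement

print(place_vms({"esx1": 8192, "esx2": 4096},
                {"db": 6144, "web": 2048, "test": 1024}))
```

The sketch also shows why manual placement breaks down: every new VM changes the "free" picture, so the whole calculation has to be redone continuously, which is precisely what these tools do in the background.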

Another remarkable problem in large virtual infrastructures is customizing VM deployment.

While virtualization technologies used in conjunction with tools like Microsoft Sysprep make it easy to create clones and distribute them with new parameters, current deployment processes don't scale well and consider only single operating systems.

In large infrastructures, business units rarely require single virtual machines; more often they ask for multi-tier configurations. Every time one of these mini virtual infrastructures has to be deployed, IT administrators must manually put in place specific network topologies, access permissions, service-level agreement policies and so on.

In such scenarios, it is improbable that the required VMs will need only the simple customization Sysprep offers. They typically also require installation of specific applications, interconnection with existing remote services, execution of scripts before and after deployment, and so on. All of these operations must be repeated for each virtual infrastructure: a huge loss of time.
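To see the shape of the problem, here is a hedged sketch of what a declarative multi-tier template might look like, with tier dependencies resolved into a deployment order. Every name here (the template fields, tiers and applications) is hypothetical and does not map to any real product's format:

```python
# Illustrative multi-tier "template": tiers, network settings and dependencies.
template = {
    "name": "three-tier-crm",
    "network": {"vlan": 210, "isolated": True},
    "tiers": [
        {"vm": "crm-web", "apps": ["IIS"],        "depends_on": ["crm-app"]},
        {"vm": "crm-app", "apps": ["CRM server"], "depends_on": ["crm-db"]},
        {"vm": "crm-db",  "apps": ["SQL Server"], "depends_on": []},
    ],
}

def deploy_order(tiers):
    """Resolve dependencies so each tier is deployed after the tiers it needs."""
    done, order = set(), []
    pending = list(tiers)
    while pending:
        progress = False
        for tier in list(pending):
            if all(dep in done for dep in tier["depends_on"]):
                order.append(tier["vm"])
                done.add(tier["vm"])
                pending.remove(tier)
                progress = True
        if not progress:
            raise ValueError("circular tier dependency")
    return order

print(deploy_order(template["tiers"]))  # database first, web tier last
```

Capturing the configuration once, as data, is what lets an automated provisioning tool destroy and recreate the whole environment on demand instead of relying on administrators' memory.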

Finally, most virtual infrastructure deployments follow a typical pattern: to test several stand-alone projects from several departments, the original environment has to be destroyed and recreated on demand. On each new provisioning, both requestors and administrators have to remember the correct settings and customizations for every tier.

An emerging market

Considering such big risks and needs in today's virtual data centers, it's not surprising that vendors are working hard to offer reliable, scalable automated provisioning tools.

Young start-ups -- Dunes, from Switzerland; Surgient, from Austin, Texas; and VMLogix, from Boston -- have to compete against current virtualization market leader VMware. That's a tall order, because VMware acquired know-how and an already available product from another young company, Akimbi, in the summer of 2006.

Akimbi Slingshot proved to be an interesting product before the acquisition, and VMware has spent a lot of time improving it further and integrating it into its ESX Server and VirtualCenter flagship solutions. This integration will be an important selling point, since it leverages the already-acquired skills of VMware customers in a familiar management environment.

On the other hand, every day more IT managers look at agnostic products able to automate VM provisioning in mixed environments, where the virtualization platform doesn't matter. Here, Surgient's products (VQMS/VTMS/VMDS) and VMLogix LabManager have much more appeal, since they support VMware platforms as well as Microsoft's and, in the near future, Xen.

Apart from Dunes, all the mentioned vendors are now focusing their products on the first practical application of automated provisioning: virtual lab management. Their clear priority is basic provisioning capabilities, such as multi-tier deployments, enhanced customization of deployed clones and physical resource scheduling. This is probably all that customers feel they need at the moment, while virtual data centers have yet to reach critical mass.

In the near future, IT organizations will look for harder-to-find features, like provisioning authorization workflow management or license management.

In any case, the autonomic data center is still far off. So far, only Dunes, with its Virtual Service Orchestrator (VS-O), offers a true framework for fully automating today's virtual data centers.

About the author: Alessandro Perilli is a recognized IT security and virtualization technology analyst. He is CISSP certified and is also certified in Check Point, Cisco, Citrix, CompTIA, Microsoft, and Prosoft. In 2006 he received the Microsoft Most Valuable Professional (MVP) award for security technologies. Perilli pioneered modern virtualization evangelism, and is the founder of the well-known blog virtualization.info. He is also the founder of the False Negatives project, a high-quality IT security consulting and training business in Italy.
