Virtualization news from VMware and the community of virtualization users, including the VMware Communities and VMTN, the VMware Technology Network.

Category Archives: data center

I am one of the vSphere product marketing managers at VMware, and I wanted to give you a summary of all the upcoming content related to the vSphere launch. We have developed resources for all audiences and technical levels to help you understand what VMware vSphere is, how it works, and how to upgrade to it from VI3 or deploy it for the first time.

Here is a quick summary of vSphere resources, with more details on each provided below.

Visit the VMware vSphere web page to learn more about the features and benefits of VMware vSphere based on your company size. You will find datasheets, demos, and solution briefs on the vSphere product pages.

WEBCASTS / PODCASTS:

We have developed three webcast series:

On-demand
webcasts are pre-recorded
and cover what’s new in vSphere while providing technical deep dives into
vSphere features.

We also have a series of podcasts for
those of you who would like to learn about vSphere on the go! The podcasts will
cover vSphere editions and provide technical deep dives into the new features.

UPGRADE AND EVALUATION CENTERS:

We have launched the VMware vSphere Upgrade Center, a site that contains all the information you need to upgrade from VI3 to vSphere 4. The site points you to upgrade preparation checklists, vSphere upgrade communities, vSphere entitlement paths for VI customers with active subscription contracts, and details on the new and improved licensing mechanism. We will add more resources to this page, including upgrade best practices, when vSphere becomes generally available. I highly recommend that you bookmark this page if you are an existing VI customer.

[coming soon] We are
creating a vSphere Evaluation Center
packed with demos and technical documentation to provide a guided evaluation
experience for both new and existing customers. The site will go live when
vSphere becomes generally available.

QUICKSTART SERIES:

Note: This resource is NOT designed for existing VI
customers

One last resource I want to point out to new customers is the VMware vSphere QuickStart Series. This is a new FREE course that will be taught live over the web in four 2-hour modules. It is designed to teach VMware vSphere and VMware ESXi evaluators how to perform basic installation, configuration, and management of either ESXi or vSphere. The course primarily consists of live product demonstrations to ensure new users gain practical experience they can leverage for a basic vSphere POC or small deployment. If you are already familiar with VMware Infrastructure (VI3), you should not attend this class unless you want an 8-hour review of what you already know.

Nice video of a day in the life of some VI servers using DRS with DPM (distributed power management) enabled. As the workforce comes into the office, utilization increases, ESX servers come out of standby mode, VMs get VMotioned, and everybody's happy. The process reverses itself after 5pm. (Who leaves work at 5pm?) And the servers happily sleep overnight or until you need them again. And the sys admins? Feet up, watching YouTube, never touching a power switch or a mouse. It's all automatic.

We started the test with 13 tiles worth of VMs (108 VMs in all) on the
DRS cluster. With all of these VMs idle, DPM consolidated them to a
single host and turned off three servers. As the load was applied to
the VMs at 9:00 AM and driven through an eight-hour workday, DRS and
DPM powered on servers and balanced load, as needed. When the day ended
at 5:00 PM, the load was again consolidated and servers were powered
down. The video we shot includes power meters of the systems under test
and screenshots of activity induced by DRS and DPM.

Now you may say, “Yes, Microsoft is late, but I’m OK waiting.” But waiting costs your company real dollars. Look at this simple example: by using VMware VMotion for planned server maintenance in a 150-VM environment, you can save almost $60,000 a year in operational costs. Scaled to a 1000-VM environment, that's almost $400,000 in cost savings a year. If you use VMotion for more than planned server maintenance -- dynamic load balancing, distributed power management, and so on -- you'll save even more!
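As a sanity check, the savings scale roughly linearly per VM; a back-of-the-envelope sketch using only the figures quoted above:

```python
# Back-of-the-envelope check: derive a per-VM annual saving from the
# quoted 150-VM figure, then scale it to a 1000-VM environment.
savings_150_vms = 60_000                  # ~$60,000/year in a 150-VM environment
per_vm_saving = savings_150_vms / 150     # $400 per VM per year

savings_1000_vms = per_vm_saving * 1000   # ~$400,000 per year
print(per_vm_saving, savings_1000_vms)
```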

Live Migration is a Core Virtualization Requirement

To pre-announce live migration – twice – shows that even Microsoft has realized the foundational role that live migration plays in a virtual datacenter. It’s not a “nice-to-have” but a “must-have” capability.

This VMware VMbook focuses on business continuity and disaster recovery (BCDR). It guides the reader step by step through setting up a multisite VMware Infrastructure capable of supporting BCDR services for designated virtual machines, whether at time of test or during an actual event that necessitates a disaster declaration and the activation of services at a designated BCDR site.

A VMworld Expert Session lets you interact with an industry speaker. The expert records a presentation and then sticks around for two weeks on a discussion forum to answer questions and keep the discussion going. The latest expert session is from Kevin Epstein of Scalent Systems, which helps keep your data center running smoothly with business continuity and automation solutions. Link: VMworld.com: Scalent Expert Session.

Are you responsible for ensuring the uptime and availability of hundreds of server systems?

Are you wrestling with the tradeoffs between budget and reliability, availability, performance, and utilization?

Have you ever wondered how long it would really take to bring back your business if your data center were hit by a disaster?

Join this presentation and discussion on the big three challenges
facing server failover—software configuration, network connectivity,
and storage access. Contrast several different approaches—from
traditional backup—to the use of virtual machines—to the next
generation of workflow engines (such as VMware Site Recovery Manager)
and complementary real-time data center automation (such as Scalent
V/OE). This presentation features case studies from several Scalent
clients, as well as a discussion of real cost savings based on actual
reduction of disaster recovery hardware, even as reliability was
improved.

IBM's Massimo Re Ferre' with another long thought-piece on the philosophical differences between traditional clustering (application- and OS-dependent, complicated) and approaches like VI3's High Availability (HA) (treats the workload as a virtual appliance, simpler). Massimo works directly with customers, so although he recognizes that paradigms are changing, he looks at the strengths and weaknesses of both approaches, and alludes to some of the organizational and operational changes you'll have to make to get there.

If you stop for a minute and think about what is happening in the x86 virtualization industry, you'll notice that many infrastructure services that were typically loaded within the standard Windows OS are now being provided at the virtual infrastructure layer. An easy example would be network interface fault tolerance: nowadays in virtual environments you typically configure a virtual switch at the hypervisor level, comprised of a bond of two or more Ethernet adapters, and you associate virtual machines with the switch through a single virtual network connection. What you have done in this case is basically delegate the handling of Ethernet connectivity problems to the virtual infrastructure. This is a very basic example, and there are many others like it, such as storage configuration/redundancy/connectivity. ...
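On classic ESX, for instance, that bonded virtual switch can be set up from the service console in a few commands. A sketch only: vSwitch1, vmnic1/vmnic2, and the port group name are placeholders for your own environment.

```shell
# Create a virtual switch, team two physical NICs as its uplinks,
# and add a port group for virtual machines to attach to.
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic1 vSwitch1           # first uplink
esxcfg-vswitch -L vmnic2 vSwitch1           # second uplink -- the bond
esxcfg-vswitch -A "VM Network 2" vSwitch1   # port group the VMs connect to
```

With both uplinks in place, a failed NIC or cable is handled beneath the guest: the VM keeps its single virtual connection and never notices.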

We are clearly at an inflection point now where many customers that used to do standard cluster deployments on physical servers (which was once the only option to provide high availability) are now debating how to do that. They now have the choice to either continue to do so on virtual servers instead of physical servers (thus applying the same rules and practices, with little disruption as far as their IT organization policies are concerned) or to turn to a brand new strategy to provide the same (or similar) high availability scenarios (at the cost of heavily changing the established rules and standards). The reason I say we are at an inflection point is that I really believe the second scenario is the future of x86 application deployments, but obviously, as we stand today, there are things that you cannot technically do or achieve with it. Plus, there is a cultural problem in moving from an established scenario to the other.

Needless to say, I was able to power through with minimal cursing and
no thrown or kicked components. What ended up being the most
challenging aspect of the entire process was digging through my boxes
of old computer junk that I refuse to throw away to find my null modem
cable. I'm glad I was able to find it, because I truly question the ability to walk into a retail store and buy one nowadays.

My operating system of choice was Ubuntu 7.10 Server. Let me start by saying it would have absolutely been 10X easier to build this server had I used Ubuntu 6.06 Server. The iSCSI Enterprise Target software is not available in the universe repositories, so it had to be compiled from source, and even then only after modifying the makefile.
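For reference, the build boiled down to something like the following. The version number and filenames are illustrative, not exact, and the makefile edits you need will vary:

```shell
# Build the iSCSI Enterprise Target from source on Ubuntu 7.10
sudo apt-get install build-essential linux-headers-$(uname -r)
tar xzf iscsitarget-0.4.15.tar.gz
cd iscsitarget-0.4.15
make                # expect to patch the makefile first, as noted
sudo make install
```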

One thing some people may notice about the configuration is that it is unique in that I have assigned a ScsiSN value to each LUN. While poking around and trying to get the stupid thing to build properly, I saw that there was a README.vmware file in the build directory. I figured it might actually apply to what I was doing, so I decided to open it up. As I expected, it absolutely applied and made sense of some weird issues I had seen in the past.
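For illustration, a ScsiSN assignment in /etc/ietd.conf looks roughly like this; the target IQN, device paths, and serial numbers here are made up:

```
Target iqn.2008-01.local.mylab:storage.disk1
    Lun 0 Path=/dev/vg0/lun0,Type=fileio,ScsiSN=0000000001
    Lun 1 Path=/dev/vg0/lun1,Type=fileio,ScsiSN=0000000002
```

Giving each LUN a stable, unique serial number keeps initiators (ESX included) from mistaking distinct LUNs for the same device.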

Make sure you check out the comments from VMware Communities regular Jason Boche, who has his own home lab. [via]

Rich Brambley at VM /ETC has also been posting about white box and on-the-cheap VI setups. See this post: Cheap ESX solutions for testing, where he points to some great threads at VMware Communities -- this has been a rolling discussion for years. See also ESX home lab hardware shopping list. (Actually, take a look at VM /ETC for the whole month of March -- resource pools, VDM, small business P2V, monitoring, and more. Rich is kicking @$$ over there.)

Over at VMworld.com they have just started the second "Ask the Expert" session, this time featuring Larry Aszmann, CTO of Compellent Technologies. You can view Larry's presentation online, and then Larry has promised to stick around for a few weeks to answer your questions.

[Update: just finished watching Larry's presentation and it's very interesting. It's really much more about the Green Data Center and how to reduce your spend than an advertisement for Compellent's products. Some factoids: 80% of data center energy is wasted; data center energy consumption is going to double from 2006 to 2011. Here's the kicker: 2/3 of data center energy goes to supporting your IT devices -- servers, storage, networking. Data center buildout is extremely capital intensive (and is a gift to your landlord when your lease is up). So every time you increase the energy usage of your servers & storage, your total energy spend goes up 3x as much. Thus, virtualize your servers and look at your storage. Only 25% of disk space is actually used -- so use thin provisioning. 80% of your data is inactive and rarely accessed, so use ILM -- information lifecycle management -- which puts inactive data on slower, less power-hungry devices. Literally cool stuff.]
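That 3x multiplier falls straight out of the 2/3 overhead factoid; a quick sanity check (the 100-watt figure is just a hypothetical added load):

```python
# If 2/3 of total data center energy goes to supporting infrastructure
# (cooling, power distribution), IT devices draw the remaining 1/3,
# so each extra IT watt costs about 3 watts in total.
support_fraction = 2 / 3
it_fraction = 1 - support_fraction
multiplier = 1 / it_fraction          # ~3x

extra_it_watts = 100                  # hypothetical new server load
total_extra_watts = extra_it_watts * multiplier
print(round(multiplier, 2), round(total_extra_watts, 1))
```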

Larry Aszmann, CTO of Compellent Technologies
Lawrence E. Aszmann has served as CTO and Secretary since co-founding
Compellent in March 2002. From July 1995 to August 2001, Mr. Aszmann
served as CTO of Xiotech, which Mr. Aszmann co-founded in July 1995.

Expert Session Overview
Compellent Storage Center is one of the most powerful and easy-to-use SANs in the marketplace. Compellent offers technology independence that allows enterprise customers to mix and match iSCSI and Fibre Channel connectivity and manage multiple tiers of Fibre Channel and SATA disk technologies from one pool of virtual storage. The powerful GUI manages native thin provisioning, hardware snapshots, snapshot replication, and automated tiered storage, all from a web browser with no server-side code or agents.

Dan Kusnetzky, who has a blog here: Virtually Speaking on ZDNet, has written a number of thought pieces with his consulting/analyst hat on over here: Recent Publications from the Kusnetzky Group at his website. He's usually exploring the interface between virtualization technology and its operationalization in business processes.

I like this recent one: Virtualization: Evolution not Revolution (pdf link). In this short 3-pager, his basic point is that things move slowly in the enterprise data center, because IT managers must be risk averse.

The Golden Rules of IT

1) If it's not broken, don't fix it. Most organizations simply don't have the time, the resources, or the funds to re-implement things that are currently working.

I think paradoxically this has been one driver for VMware's successful adoption. It is so easy to get started with VMware -- download VMware Server or a VI3 eval, then convert [warning: sound] some necessary but little-used old servers that are just sucking up electricity, and go. You don't need a special paravirtualized kernel, just whatever you were running (Windows, Linux, Solaris, etc.); don't need to recompile your app; don't need to get special hardware; and you don't even really need a SAN or other fancy enterprise storage to get started -- just virtualize, no re-implementation needed. The key point you need to realize at this level is that you treat a virtual machine just like its physical counterpart -- although try not to have every antivirus and backup job in every virtual machine on an ESX Server fire off at the same time.

Now when that works great and you do want to see how to take more advantage of the opportunities afforded by virtual infrastructure, then you do have to do some more planning -- maybe get more storage, certainly get some expertise and evaluation of your current infrastructure, and start to figure out how this affects your processes when a new server can be provisioned in a few minutes and your DR plan is finally something more than just a fantasy.

Ultimately you do end up with a data center that looks, acts, and is managed quite differently than what you started with. So was that evolution or revolution?

(Anyway, Dan has a lot of great stuff there; read up, then go forth and virtualize carefully but with great ultimate success.)

I was working on my Facebook account last night (look for more VMware-related activity there; come on by!) when I heard "virtualization" and "data center" on the TV. That's unusual enough that my ears perked up and I reached for the TiVo remote to catch the whole thing. Turns out PG&E has launched a new site, wecandothis.com, about energy efficiency. Part of the effort is promoting their virtualization rebates for data centers that reduce their hardware footprint. Unfortunately, it's a crappy Flash site, so I can't point you directly to the virtualization video, but go there and click on the computer that labels itself "Server Virtualization" when your mouse hovers over it. The spot has some nice visuals and ends with a lonely rack in the data center you see to your right. Good stuff.

Most companies want to foster collaboration among workers, but VMware also wants to make the most of the hilly Palo Alto site, where hundreds of trees already were growing. Kevin Burke, a partner with William McDonough, said that Greene was clear that VMware's campus should enhance connections between people and with nature.

"We placed a great deal of emphasis on integration of the building with
the landscape," he said, noting that 80 foot-tall redwoods and
eucalyptus trees were saved, and even a heritage oak was boxed for two
years during construction and then replanted. It is thriving. ...

As far as Greene is concerned, it's worth every penny. She wanted the
campus to be as sustainable as possible, down to the cafeteria floor
composed of recycled beer bottles and hardwood floors elsewhere on the
campus that were saved from a Wisconsin barn once owned by Thomas
Edison.

Among her favorite features, however, are the bridges, which allow
employees to walk from building to building. She said she got the idea
for bridges from one of Apple's Cupertino campuses.

"It's the feature that has gotten the most feedback as to why people
enjoy it," Burke said. "People can get up from their desk and go for a
walk. It's a marvelous stroll. Plus, there's something fun about
walking across a bridge."

Windows, too, were given high priority, not only for their ability to let in light, but also fresh air. At VMware, 750 windows open and close.

Love the new campus, the bridges, and our windows that actually open. Ah, fresh air. It can get pretty bright, so I'm seeing umbrellas, tarps, and other light-blocking strategies crop up.

The interior layout is also good, but a little twisty when you're trying to give directions. Although there's a lobby for visitors with a cool waterfall, the rest of the place has no 'front' and no long corridors or other thoroughfares through the buildings. Every building mixes engineering with other groups, so there's a great mix of people as you zig-zag across the campus.