VMware takes EMC 'beyond virtual servers'

Here's the no-brainer statement of the week: EMC says that it is not going to sell off its stake in virtualization juggernaut VMware.

The disk storage maker and parent company of VMware is hosting its strategic forum for institutional investors - what we might call an analyst conference - in Boston this morning, and EMC's president, chief executive officer, and chairman, Joe Tucci, came right to the point at the beginning of his presentation.

"We have no intention, as I said before, in separating these two companies and separating these two strategies from each other," Tucci said emphatically. "We will get more traction in the market and more opportunity doing what we're doing."

That means keeping VMware relatively independent but still under firm control, as VMware founders Diane Greene (formerly president and chief executive officer) and Mendel Rosenblum (formerly chief technology officer and Greene's husband) found out the hard way last year when Greene was fired and replaced by Microsoft hot-shot Paul Maritz. (Two months later, Rosenblum left the company.)

It also means leveraging EMC's substantial expertise in storage (one of the messier technology issues that has to be coped with in virtualized environments), using EMC's cred inside data centers, and keeping VMware just independent enough to be the de facto, "open" standard for virtualization for servers, PCs, and someday cloud-style private and public pools of infrastructure.

At the conference, Maritz spent an hour walking Wall Street through the company's product plans and strategies at a fairly high level, and with a little more detail than was provided last month at the VMworld Europe conference. But he also made the case that what VMware was trying to accomplish was something both "important" and "enduring" to the long-term future of information technology, not just a matter of condensing 2,000 physical servers down to 2,000 virtual machines running on 300 or 400 servers, which is the level of compression VMware is delivering today.

"I am told by customers that virtualization is one of the few technologies in the past few decades that has underpromised and overdelivered," Maritz explained. "They wanted to reduce capex [capital expenditures], but they got more. The environment not only got cheaper, but it got more flexible."

The sales pitch for the new vSphere lineup of products coming out this year - which will replace the current ESX Server 3.5 hypervisor for x64 servers and the Virtual Infrastructure 3 lineup of related management tools and add-ons for that hypervisor - is going to be a lot broader than arguing why ESX and its stack are better than XenServer and Hyper-V and their now-shared Essentials toolset.

"When I talk to people over 45, I say we're building the software mainframe, the mainframe of the 21st century," Maritz said. "And they get all nostalgic and say that they always knew that this was the right thing to do instead of client/server. And when I talk to people who are under 45, I say that we're building the internal cloud."

The joke is, of course, that it is all the same Virtual Data Center-Operating System (VDC-OS) software that is driving this cloudy x64-based mainframish thingamabob. And if this works, then the joke will be on any vendor that is either trying to sell a non-x64 platform or that thinks they can build a cloudy operating system that can cope with legacy Windows and Linux client/server style applications and their software stacks or new Web 2.0 style apps based on frameworks such as Spring or Ruby on Rails better than VMware can.

"We are going to be the layer that orchestrates resources," Maritz said. "We are not alone, and others are trying to compete with us and to catch up with us. But this is not an easy thing to do. The vSphere development team has nearly 2,000 people. That's bigger than any team I used to ship Windows in the 1990s."

He paused for effect, to let the number and significance of that remark sink in. "There are a handful of companies that can marshal the size and depth of resources to pull this off." And later in his talk, just to give a dig at the open source community when talking about compute clouds as a product, not an architecture, Maritz said that this is not "something that you can cobble together with some open source libraries."

You can see now why Tucci wanted Maritz running VMware. He has dealt with software projects of a size, scope, and import much greater than VMware's previous top brass had ever imagined in their fast rise to glory.

Beyond virtual servers

The four key elements of the vSphere stack will include vCompute, vNetwork, and vStorage - the three virtualization features for the three parts of a modern computer system - plus vCenter Suite, the management tools for the stack.

vCenter Suite, says Maritz, will move the "management layer up," letting administrators set up software stacks, set service level targets, and define policies that determine what happens when service levels are not being met (add more machines to the cluster, shut down other workloads, and so forth).

The vCenter interface is a simple dashboard - and it is meant to be - but if admins want to drill down and muck about in the code that sets policies, or do something manually, they can. The whole point, though, is to let vCenter do the work of managing the compute, storage, and network pools.

vCompute is where the server hypervisor lives, and as we previously reported, the future hypervisor will double the VirtualSMP capability of ESX Server 3.5. So vCompute 4 will be able to have a single VM span as many as eight physical processor cores in a machine. Each virtual machine will also be able to have as much as 256 GB of main memory allocated to it, up from 64 GB with ESX Server 3.5, and as many as ten network interface cards per VM as well.

In Maritz' presentation, it is clear that VMware is thinking about managing a large pool of servers and masking this from administrators and applications; he flashed some data across the screen very quickly, showing that vCompute will span 4,096 cores in a single cluster, up to 64 TB of main memory, and I could have sworn I saw 25 million I/O operations per second (IOPS) for aggregate disk bandwidth. (The numbers went by very fast, but that is twice what people are expecting on the memory and IOPS capacities.)

"This has gone way beyond server virtualization," Maritz boasted. "This is about building a single, giant computer." And, by the way, one that can support any workload - bar none - according to VMware.

Someone attending the VMworld event last month published these feeds and speeds at the VMwaretips blog, but Maritz did not go into such detail today. That posting claims the future vCompute hypervisor will allow as many as 20 VMs per core, up to a maximum of 256 VMs per host, with physical servers having as many as 64 cores. A cluster of machines in one pool, managed by one instance of vCenter, will span up to 64 hosts, for a total maximum of 16,384 VMs across those 64 machines, which would have a maximum of 4,096 processor cores.

The posting said that main memory per host will top out at 512 GB and total memory per cluster will max out at 32 TB. Maximum network bandwidth (four 10 Gigabit Ethernet cards per host machine) was specified as 40 Gb/sec. VMware has said previously that it would be able to deliver 200,000 IOPS per host on the kicker to ESX Server 3.5, doubling up disk bandwidth with vCompute.
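For what it's worth, the per-host and per-cluster figures hang together arithmetically. A quick back-of-the-envelope check (variable names are ours; the numbers come from the blog posting):

```python
# Sanity check of the VMwaretips figures quoted above.
# The constants are the claimed per-host limits; the derived values
# should match the claimed per-cluster maxima.

VMS_PER_CORE = 20        # claimed scheduling limit per core
VMS_PER_HOST = 256       # hard cap on VMs per host
CORES_PER_HOST = 64      # largest supported server
HOSTS_PER_CLUSTER = 64   # one pool under one vCenter instance
MEM_PER_HOST_GB = 512    # max main memory per host

cores_per_cluster = CORES_PER_HOST * HOSTS_PER_CLUSTER           # 4,096 cores
vms_per_cluster = VMS_PER_HOST * HOSTS_PER_CLUSTER               # 16,384 VMs
mem_per_cluster_tb = MEM_PER_HOST_GB * HOSTS_PER_CLUSTER / 1024  # 32 TB

print(cores_per_cluster, vms_per_cluster, mem_per_cluster_tb)
# On a 64-core box, the 256-VM host cap binds first: the 20-VMs-per-core
# limit alone would otherwise allow 20 * 64 = 1,280 VMs per host.
```

Note that the 32 TB cluster memory figure from the posting is half the 64 TB Maritz flashed on screen, which is the discrepancy flagged above.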

The vNetwork part of the vSphere software stack essentially means putting a big virtual switch - written in software and running in a virtual machine itself - between server VMs and the physical switches those VMs actually need to talk to. This way, when a virtual machine moves from one physical server to another, it is still talking to the virtual switch and doesn't have to be reconfigured to talk to a new physical switch attached to the physical server that is now hosting the VM.

It's like a hypervisor layer for switching, embodied in the Nexus 1000V switch that VMware co-developed with Cisco Systems and that will be a key element of next week's "California" blade server launch.

The vNetwork layer is not being written by Cisco to give it some kind of monopoly; rather, according to Maritz, it is being virtualized so that customers who like using Cisco switches and Cisco administration tools can keep on using them and see the Nexus 1000V just as they would a real switch. Presumably, other switch vendors have been invited to create their own virtual switches and play in the vNetwork layer.

The vStorage layer, which virtualizes storage and meshes with the various high availability, snapshotting, thin provisioning and other features of modern disk arrays, doesn't just work with EMC products, but those of other key storage providers in the data center - NetApp, IBM, Hitachi, Hewlett-Packard, Sun Microsystems, and myriad niche but clever players.

And all of this openness is important to VMware - and thus EMC - because if vSphere isn't open, if VMware doesn't let server, storage, and networking gear suppliers stay in the game somehow, they won't help sell it. They will then try to block it or thwart it with other point products. And considering the scope of what VMware is shooting for, they might just as well try to kill the company now before it sucks much of the profit out of the data center.

It might be cheaper at around $10bn or so (with VMware having a market capitalization of around $7.8bn today as we go to press) to just buy VMware now. But EMC is not, as Tucci said, interested in selling. Only those with the stomach for a hostile takeover and great big bags of cash need apply. ®