
doubt that IT managers are ready for that level of consolidation.

Last week, Intel Corp. and IBM announced that they were working with VMware Inc. to help the company double the amount of memory its ESX 3 platform can address, from 64 GB up to 128 GB, matching the maximum memory capacity of IBM's System x 3950.

Some say ESX's next stop on the memory road will probably be even higher. With true quad core coming out, "128 GB seems too small," said Tony Kay, Sun Microsystems Inc. systems virtualization manager. "I would expect [VMware] to jump right over 128 [GB] right to 256 [GB]," Kay said.

Memory, arguably more than CPU, is the limiting factor in how many virtual machines you can run on a single physical host. "If you are virtualizing, you don't need more processing power almost by definition," said Gordon Haff, principal IT adviser with Illuminata Inc. in Nashua, N.H. "If you did, you wouldn't be virtualizing."

Indeed, according to a virtualization sizing document used by IBM field engineers, simply doubling the memory on an 8-CPU System x 3950 from 32 GB to 64 GB increases the number of recommended virtual machines per system from 33 to 67. The VMs were running workloads limited to one virtual CPU. The document shows a similar linear increase in the number of VMs for larger workloads as well.
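Those numbers imply roughly 1 GB of memory per single-vCPU VM plus a small fixed hypervisor overhead. A back-of-the-envelope sketch of that linear relationship (the overhead and per-VM figures below are illustrative assumptions chosen to reproduce the cited 33- and 67-VM recommendations, not IBM's actual sizing formula):

```python
# Hypothetical linear sizing model: VM count grows linearly with host memory.
# The constants are illustrative assumptions, not IBM's published formula.
HYPERVISOR_OVERHEAD_GB = 1.0   # assumed memory reserved for ESX itself
MEM_PER_VM_GB = 0.93           # assumed footprint of one single-vCPU VM

def recommended_vms(host_mem_gb: float) -> int:
    """Estimate how many single-vCPU VMs fit on a host of the given size."""
    return int((host_mem_gb - HYPERVISOR_OVERHEAD_GB) / MEM_PER_VM_GB)

print(recommended_vms(32))   # 33, matching the IBM sizing document
print(recommended_vms(64))   # 67 -- doubling memory roughly doubles VM count
```

Under the same assumptions, a 128 GB host would support well over 130 such VMs, which is why memory, not CPU, sets the consolidation ceiling.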

That's good news for IT shops that use virtualization to consolidate their systems. For them, "he who has the fewest number of servers wins," said Jay Bretzmann, IBM marketing director for System x machines, its Intel x86 server line.

But adding massive amounts of memory to a box has numerous drawbacks, not least of which is price.

4 GB DIMM sticker shock

Given the memory limitations of the Advanced Micro Devices Inc. and Intel processors, achieving 128 GB memory capacities requires the use of 4 GB DIMMs (dual inline memory modules), which currently cost a small fortune. For example, IBM sells a kit for its x3950 System x comprising two 2 GB memory sticks for $799. An 8 GB kit comprises two 4 GB memory sticks and costs $5,569 -- almost seven times as much!
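The premium is even starker on a per-gigabyte basis. Using the IBM kit prices quoted above:

```python
# IBM System x3950 memory kit prices cited in the article.
kit_4gb_price, kit_4gb_size = 799.0, 4      # two 2 GB DIMMs
kit_8gb_price, kit_8gb_size = 5569.0, 8     # two 4 GB DIMMs

ratio = kit_8gb_price / kit_4gb_price
print(f"8 GB kit costs {ratio:.1f}x the 4 GB kit")               # ~7.0x
print(f"${kit_4gb_price / kit_4gb_size:.2f} per GB (2 GB DIMMs)")  # $199.75/GB
print(f"${kit_8gb_price / kit_8gb_size:.2f} per GB (4 GB DIMMs)")  # ~$696/GB
```

So while the kit costs almost seven times as much, each gigabyte on a 4 GB DIMM costs roughly three and a half times what it does on a 2 GB DIMM.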

That will change soon enough, said Martin Reynolds, a Gartner Inc. fellow. "DRAM drops, what, 30% to 40% per year," and soon enough, "it will be cheaper to buy one 4 GB DIMM than two at 2 GB."

A larger problem for x86 systems is getting good scalability out of larger systems, Reynolds said.

With large "big iron" systems such as mainframes and Sun's E15K, "there's a large amount of silicon dedicated to managing [memory] coherence issues across large systems," Reynolds said. But with x86 systems, "you get into trouble with more than eight processors," because they don't have the necessary silicon to efficiently keep memory coherent between processors.

Thus, Reynolds said, you probably won't get the same efficiency from one server with 128 GB of RAM as you would from four systems with 32 GB of RAM.

According to Sun's Kay, "very few environments are willing to go beyond 20 to 30 VMs on a system," even though a Sun Fire X4600 can easily host 80 VMs. "Regardless of who you buy from, you're going to see hardware failures," he said, and the larger the number of VMs on a system, the more dramatic the impact of a hardware failure becomes.

One exception Kay has seen is an ISV using virtualization in its test lab environment and aiming for 64 VMs per host. "It's not that their work isn't important, but because they're a developer they can afford an outage [of that magnitude]."

Licensing, too, could emerge as an inhibitor to larger x86 systems, wrote John Enck, a Gartner research vice president, in an email, since "many software contracts are licensed based on the physical cores in the server and not the virtual processors used by the application."

All these issues won't stop server vendors from trying, though, as they attempt to offset the slower server sales that Gartner attributes to virtualization.

But, for the time being, "I don't see my clients buying [these larger systems]," Enck wrote. "There is a line you cross when having so many virtual machines on a host causes you to invest in fault-tolerant solutions (e.g., clustering or failover) and then the cost benefits of virtualization disappear."
