Posts Tagged ‘vsphere 5’

If you’ve recently upgraded your ESXi from 5.0 build 456551 and were logging to syslog, it’s possible that your events are no longer being received by your syslog server. It seems that there was a “feature” in ESXi 5.0 build 456551 that allowed syslog traffic to escape the ESXi firewall regardless of the firewall settings. This could be especially problematic if you upgraded from ESXi 4.x, where no firewall configuration was needed for syslog traffic.

VMware notes that syslog traffic was not affected by the ESXi firewall in v5 build 456551. See KB2003322 for details.

However, in ESXi 5.0 Update 1 the firewall rules definitely apply, and if you were “grandfathered-in” during the upgrade to build 456551, check the syslog output of your ESXi 5 servers. If you’re no longer getting syslog entries, either set the policy in the host’s Configuration->Security Profile->Properties… control panel:

Enabling syslog traffic in the ESXi firewall within the vSphere Client interface.

Or use ESXCLI to do the work (especially with multiple hosts):

esxcli network firewall ruleset set --ruleset-id=syslog --enabled=true

esxcli network firewall refresh
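To verify the change, “esxcli network firewall ruleset list” should now show the syslog ruleset as enabled. For a batch of hosts, a quick loop from a management station does the same work – a minimal sketch, assuming SSH (Tech Support Mode) is enabled on each host, with hypothetical host names:

# enable the syslog ruleset on several hosts in one pass (host names are examples)
for host in esx01 esx02 esx03; do
  ssh root@${host} 'esxcli network firewall ruleset set --ruleset-id=syslog --enabled=true; esxcli network firewall refresh'
done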

That will take care of the “absent” syslog entries.

SOLORI’s Take: Gotcha! As ESXi becomes more like ESX in terms of provisioning, old-school ESXiers (like me) need to make sure they’re up-to-speed on the latest changes in ESXi. Ashamed to admit it, but this exact scenario got me in my home lab… Until I stumbled onto KB2003322 I didn’t think to go back and check the ESXi firewall settings – after all, it was previously working 😉

In my last post, I mentioned a much-complained-about “idle” CPU utilization quirk with NexentaStor when running as a virtual machine. After reading many supposed remedies in forum postings (some referenced in the last blog, none of which worked) I went pit-bull on the problem… and got lucky.

For me, the problem cropped-up while running storage benchmarks on some virtual storage appliances for a client. These VSAs are bound to a dedicated LSI 9211-8i SAS/6G controller using VMware’s PCI pass-through (Host Configuration, Hardware, Advanced Settings). The VSA uses the LSI controller to access SAS/6G disks and SSDs in a connected JBOD – this approach allows for many permutations on storage HA and avoids physical RDMs and VMDKs. Using a JBOD allows for attachments to PCIe-equipped blades, dense rack servers, etc. and has little impact on VM CPU utilization (in theory).

So I was very surprised to find idle CPU utilization (according to ESXi’s performance charting) hovering around 50% from a fresh installation. This runs contrary to my experience with NexentaStor, but I’ve seen reports of such issues on the forums and even on my own blog. I’ve never been able to reproduce more than a 15-20% per vCPU bias between what’s reported in the VM and what ESXi/vCenter sees. I’ve always attributed this difference to vSMP and virtual hardware (disk activity) which is not seen by the OS but is handled by the VMM.

CPU record of idle and IOzone testing of SAS-attached VSA

During the testing phase I was primarily looking at disk throughput, but I noticed a persistent CPU utilization of 50% – even when idle. Regardless, the 4 vCPU VSA appeared to perform well (about 725MB/sec 2-process throughput on initial write) despite the CPU deficit (3 vCPU test pictured above, about 600MB/sec write). However, after writing my last blog entry, the 50% CPU leech just kept bothering me.
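As an aside, the throughput figures above came from IOzone’s multi-process throughput mode. A minimal sketch of that kind of run – the file size and record length here are illustrative, not the exact parameters used in the client tests:

# 2-process sequential write throughput test, 8GB per process, 128KB records
iozone -i 0 -t 2 -s 8g -r 128k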

After wasting several hours researching and tweaking with very little (positive) effect, a client e-mail prompted an NMV walk-through that resulted in an unexpected consequence: the act of powering-off the VSA from the web GUI (NMV) resulted in significantly reduced idle CPU utilization.

Getting lucky: noticing a trend after using NMV to reboot for a client walk-through of the GUI.

Working with the 3 vCPU VSA over the previous several hours, I had consistently used the NMC (CLI) to reboot and power-off the VM. Surely simply using the NMV to shut down the VSA couldn’t have anything to do with idle CPU consumption – could it? Remembering that these were fresh installations, I wondered whether this was specific to a fresh install or whether it could show up in an upgrade as well. According to the forums, this only hampered VMs, not hardware.

I grabbed a NexentaStor 3.1.0 VM out of the library (one that had been progressively upgraded from 3.0.1) and set about the upgrade process. The result was unexpected: no difference in idle CPU from the previous version. The problem was NOT specific to 3.1.2 but to the installation/setup process itself (at least, that was the prevailing hypothesis).

Regression into my legacy VSA library, upgrading from 3.1.1 to 3.1.2 to check if the problem follows the NexentaStor version.

If anything, the upgraded VSA exhibited slightly less idle CPU utilization than its previous version. Noteworthy, however, was the extremely high CPU utilization as the VSA sat waiting for a yes/no response (NMC/CLI) to the “would you like to reboot now” question at the end of the upgrade process (see chart above). Once “no” was selected, CPU dropped immediately to normal levels.

Now it seemed that some vestige of the web-based setup process (completed by a series of “wizard” pages) must be lingering around (much like the yes/no CPU glutton). Fortunately, I had another freshly installed VSA to test with – configured and processed exactly like the first one. I fired-up the NMV and shut down the VSA…

Confirming the impact of the "fix" on a second freshly installed NexentaStor VSA

After powering-on the VM from the vSphere Client it was obvious. This VSA had been running idle for some time, so its idle performance baseline – established across several prior reboots from the CLI – was well recorded by the ESXi host (see above). The resulting drop in idle CPU was nothing short of astounding: the 3 vCPU configuration dropped from 50% average utilization to 23% idle utilization. Naturally, these findings (still anecdotal) have been forwarded to engineers at Nexenta. Unfortunately, now I have to go back and re-run my storage benchmarks; hopefully clearing the underlying bug has reduced the needed vCPU count…

I was recently involved in migrating an enterprise client leapfrogging from vSphere 4.0 to vSphere 5.0. Their platform is an AMD server farm with modern, socket G34 CPU blades and 10G Ethernet connectivity – all moving parts on VMware’s Hardware Compatibility List for all versions of vSphere involved in the process.

Supermicro AS-2022TG Platform Compatibility

Intel 10G Ethernet, i82599EB Chipset based NIC

Although VMware lists the 2022TG-HIBQRF as ESXi 5.0 compatible and not the 2022TG-HTRF, it is worth noting that the only difference between the two is the presence of an on-board Mellanox ConnectX-2 QDR InfiniBand controller: the motherboards and BIOS are exactly the same, the Mellanox SMT components are simply missing on the HTRF version.

It is key to note that VMware also distinguishes the ESXi-compatible platform by supported BIOS version: 2.0a (Supermicro’s current release) versus 1.0b for the HTRF version. The current BIOS is also required for AMD Opteron 6200-series CPUs, which are not a factor in this upgrade (only 6100-series CPUs are in use). For this client, the hardware support level of the installed BIOS (1.0c) was sufficient.

Safe Assumptions

So is it safe to assume that a BIOS update is not necessary when migrating to a newer version of vSphere? In the past, it’s been feature-driven: proper use of new hardware features like Intel EPT, AMD RVI or VMDirectPath (PCI pass-through) has required BIOS updates. All of these features were supported by the “legacy” version of vSphere and the existing BIOS – so it sounds safe to assume a direct import into vCenter 5 will work, and then we can let vCenter manage the ESXi update, right?

Well, not entirely: when importing the host into vCenter5, the process gets all the way through inventory import and then fails abruptly with the terse message “A general system error occurred: internal error.” Looking at the error details in vCenter5 is of no real help.

Import of ESXi 4 host fails in vCenter5 for unknown reason.

A search of the term in VMware Communities is of no help either (returns non-relevant issues). However, digging down to the vCenter5 VPXD log (typically found in the hidden directory structure “C:\ProgramData\VMware\VMware VirtualCenter\Logs\”) does return a nugget that is both helpful and obscure.

Reviewing the vCenter VPXD log for evidence of the import problem.

If you’ve read through these logs before, you’ll note that the SSL certificate check has been disabled. This was defeated in vCenter Server Settings to rule-out potentially stale SSL certificates on the “legacy” ESXi nodes – it was not helpful in mitigating the error. The section highlighted was, however, helpful in uncovering a relevant VMware Knowledgebase article – the key language, “Alert:false@ D:/build/ob/bora-455964/bora/vim/lib/vdb/vdb.cpp:3253” turns up only one KB article – and it’s a winner.

Knowledge Base article search for cryptic VPXD error code.

It is important – if not helpful – to note that searching KB for “import fail internal error” does return nine different (and unrelated) articles, but it does NOT return this KB (we’ve made a request to VMware to make this KB easier to find in a simpler search). VMware’s KB2008366 illuminates the real reason why the host import fails: non-Y2K compliant BIOS date is rejected as NULL data by vCenter5.

Y2K Date Requirement, Really?

Yes, the spectre of Y2K strikes 12 years later and stands as the sole roadblock to importing your perfectly functioning ESXi 4 host into vCenter5. According to the KB article, you can tell if you’re on the hook for a BIOS update by checking the “Hardware/Processors” information pane in the “Host Configuration” tab inside vCenter4.

ESXi 4.x host BIOS version/date exposed in vCenter4

According to vCenter date policy, this platform was minted in 1910. The KB makes it clear that any two-digit year will be imported as 19XX, where XX is the two-digit year. Seeing as how not even a precursor of ESX existed in 1999, this choice is just dead stupid. Moreover, the x86 PC wasn’t even invented until 1978, so a simple “date check” inequality (i.e. if “two_digit_date” < 78 then “four_digit_date” = 2000 + “two_digit_date”) would have resolved the problem for the next 65 years.
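For illustration only, here is that windowing inequality in a few lines of shell – the variable name and sample year are mine, not VMware’s:

# pivot a two-digit BIOS year on 78, the birth year of the x86 PC
yy=10                    # two-digit year as read from the BIOS date
if [ "$yy" -lt 78 ]; then
  year=$((2000 + yy))    # 00-77 can only mean the 2000s
else
  year=$((1900 + yy))    # 78-99 falls in the PC era proper
fi
echo "$year"             # prints 2010 - not 1910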

Instead, VMware will have you go through the process of upgrading and testing a new (and, as 6200 Opterons are just now available to the upgrade market, a likely unnecessary) BIOS version on your otherwise “trusty” platform.

Non-Y2K compliant BIOS date

Y2K-compliant BIOS date, post upgrade

Just to add insult to injury, the BIOS upgrade for this platform comes with an added frustration: the IPMI/BMC firmware must also be updated to accommodate the new hardware monitoring capabilities of the new BIOS. Until the BMC firmware is updated, vCenter will complain of Northbridge chipset overheat warnings from the platform.

So, after the BIOS update, the BMC update and painstaking hours (to days) of “new” product testing, we arrive at the following benefit: vCenter reads the BIOS release date correctly.

Bar Unnecessarily High

VMware actually says, “if the BIOS release date of the host is in the MM/DD/YY format, contact the hardware vendor to obtain the current MM/DD/YYYY format.” Really? So my platform is not vCenter5 worthy unless the BIOS date is four-digit year formatted? Put another way, VMware’s coders can create the premier cloud platform but they can’t handle a simple Y2K date inequality. #FAIL

Forget “the vRAM tax”: this obstacle is just dead stupid and unnecessary, and it will stand in the way of many more vSphere 5 upgrades. Relying on a BIOS update for a platform that was previously supported (remember the 1.0b BIOS above?) just to account for the BIOS date is arbitrary at best, and it does not pose a compelling argument to your vendor’s support wing when dealing with an otherwise flawless BIOS.

SOLORI’s Take:

We’ve submitted a vCenter feature request to remove this exclusion for hundreds of vSphere 4.x hosts, maybe you should too…

On the way to VMworld this morning, I started out by listening to @Scott_lowe, @mike_laverick and @duncanyp about stretched clusters and some esoteric storage considerations. Then I was off reading @sakacc blogging about his take on stretched clusters and the black hole of node failure when I stumbled on a retweet from @bgracely (via @andreliebovici) about the spectre of change in our industry. Suddenly these things seemed very well related within the context of my destination: VMworld 2011.

Back about a month ago when vSphere 5 was announced the buzz about the “upgrade” was consumed by discussions about licensing and vRAM. Naturally, this was not the focus VMware was hoping for, especially considering how much of a step forward vSphere 5 is over VS4. Rather, VMware – by all deserved rights – wanted to hear “excited” conversations about how VS5 was closing the gap on vCloud architecture problems and pain-points.

Personally, I managed to keep the vRAM licensing issue out of SOLORI’s blog for two reasons: 1) the initial vRAM targets were so off that VMware had to make a change, and 2) significant avenues for the discussion were available elsewhere. That does not mean I wasn’t outspoken about my thoughts on vRAM – made obvious by contributions to some community discussions on the topic – or VMware’s reasoning for moving to vRAM. Suffice to say VMware did “the right thing” – as I had confidence they would – and the current vRAM targets capture 100% of my clients without additional licenses.

I hinted that VS5 answers a lot of the hanging questions from VS4 in terms of facilitating how cloud confederations are architected, but the question is: in the distraction, did VS5’s “goodness” get lost in the scuffle? If so, can they get back the mind share they may have lost to Chicken Little reactionaries?

First, if VMware’s lost ground to anyone, it’s VMware. The vast majority of cool-headed admins I talked to were either not affected by vRAM or were willing to take a wait-and-see outlook on vSphere 5 with continued use of vSphere 4.1. Some did evaluate Hyper-V’s “readiness” but most didn’t blink. By comparison, vSphere 4.1 still had more to offer private cloud than anything else.

Secondly, vSphere 5 “goodness” did get lost in the scuffle, and that’s okay! It may be somewhat counterintuitive, but I believe VMware will actually come out well ahead of their “would-be” position in the market, and it is precisely because of these things, not just in spite of them. Here’s my reasoning:

1) In the way the vSphere 5 launch announcement and vRAM licensing debacle unfolded, a lot of the “hot air” about vRAM was vented along the way. Subsequently, VMware gained some service cred by actually listening to their client base and making a significant change to their platform pricing model. VMware got more bang for their buck out of that move; the effect on stock price may never be known, given the timing of the S&P ratings splash, but I would have expected to see a slight hit. Fortunately, 20-30% sector slides trump vRAM, and only Microsoft is talking about vRAM now (that is, until they adopt something similar).

On that topic, anytime you can get your competitor talking about your product instead of theirs, it usually turns out to be a good thing. Even in this case, where the topic has nothing to do with the needs of most businesses, negative marketing against vRAM will ultimately do more to establish VMware as an innovator than an “already too expensive alternative to XYZ.”

2) SOLORI’s law of conservation of marketing momentum: goodness preserved, not destroyed. VMworld 2011 turns out to be perfectly timed to generate excitement in all of the “goodness” that vSphere 5 has to offer. More importantly, it can now do so with increased vigor and without a lot of energy siphoned-off discussing vRAM, utilization models and what have you: been there done that, on to the meat and away with the garnish.

3) Again it’s odd timing, but the market slide has more folks looking at cloud than ever before. Confidence in cloud offerings has been a deterrent for private cloud users, partly because of the “no clear choices” scenario and partly because of concerns about data migration in and around the public cloud. Instability and weak growth in the world economy have people reevaluating CAPEX-heavy initiatives as well as priorities. The bar for cloud offerings has never been lower.

In vSphere 5, VMware hints at the ability for more cloud providers to be transparent to the subscriber: if they adopt vSphere. Ultimately, this will facilitate vendor agnosticism much like the early days of the Internet. Back then, operators discovered that common protocols allowed for dial-up vendors to share resources in a reciprocal and transparent manner. This allowed the resources of provider A to be utilized by a subscriber of provider B: the end user was completely unaware of the difference. For those that don’t have strict requirements on where their data “lives” and/or are more interested in adherence to availability and SLA requirements, this can actually induce a broader market instead of a narrower one.

If you’ve looked past vRAM, you may have noticed for yourself that vSphere has more to deliver cloud offerings than ever before. VMware will try to convince you that whether cloud bursting, migrating to cloud or expanding hybrid cloud options, having a common underlying architecture promotes better flexibility and reduces overall cost and complexity. They want you to conclude that vSphere 5 is the basis for that architecture. Many will come away from Las Vegas – having seen it – believing it too.

So, as I – and an estimated 20K+ other virtualization junkies – head off to Las Vegas for a week of geek overload, parties and social networking, my thoughts turn to @duncanyp‘s 140+ improvements, enhancements and advances waiting back home in my vSphere 5 lab. Last week he challenged his “followers” to be the first to post examples of all of them; with the myriad of hands-on labs and expert sessions just over the horizon, I hope to do it one better and actually experience them first hand.

These things all add up to a win-win for VMware and a strong showing for VMworld. It’s going to be an exciting and – tip of the hat to @bgracely – industry changing week! Now off to the fray…

Nexenta Systems Inc released version 3.1 of its open storage software yesterday with a couple of VMware vSphere-specific feature enhancements. These enhancements are specifically targeted at VMware’s vStorage API for Array Integration (VAAI), which promises to accelerate certain “costly” storage operations by pushing their implementation to the storage array instead of the ESX host.

SCSI Block Copy: Supported in vSphere 4.1 and later. Avoids reading and writing block data “through” the ESX host during a block copy operation by allowing VMware to instruct the SAN to perform the copy directly.

SCSI Unmap: Supported in vSphere 5 and later. Enables freed blocks to be returned to the zpool for new allocation when no longer used for VM storage.
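You can check for yourself whether a given LUN reports these primitives as supported from the ESXi shell – this command only reads status, so it is safe to run anywhere:

# list VAAI primitive status (ATS, Clone, Zero, Delete) for all attached devices
esxcli storage core device vaai status get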

Additional “optimizations” and improvements from Nexenta in 3.1 include:

Multiple VIPs per service for HA Cluster, fail-over of local users, and elimination of the separate heartbeat device

JBOD management for select devices from within the NMV

Given the addition of VAAI features, the upgrade offers some compelling reasons to make the move to NexentaStor 3.1, and at the same time removes an obstacle to choosing NexentaStor as a VMware iSCSI platform for SMB/SME (versus the low-end EMC VNXe, which at last look was still waiting on VAAI support). However, for existing vSphere 4.1+ environments, a word of caution: you will want to “test, test, test” before upgrading to (or enabling) VAAI (fortunately, there’s a NexentaStor VSA available); see the sketch below for toggling the primitives during testing.
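On the ESXi side, the VAAI primitives can be switched off host-wide while you test – these are standard ESXi advanced settings (set each back to 1 to re-enable):

# disable the block copy, block zero and ATS primitives during testing
esxcli system settings advanced set -o /DataMover/HardwareAcceleratedMove -i 0
esxcli system settings advanced set -o /DataMover/HardwareAcceleratedInit -i 0
esxcli system settings advanced set -o /VMFS3/HardwareAcceleratedLocking -i 0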

Auto-Sync Resume

In the past, NexentaStor’s auto-sync plug-in has been the only integrated means of block replication from one storage pool (or array) to another. The plug-in allows periodic replication events to be scheduled, each drawing from a marker snapshot until the replication is complete. On extended error (where the replication fails), everything rolls back to the marker point, eliminating any data that has transferred between the pools. For WAN replication this can be costly, as no check-points are created along the way.

More problematically, there has been no way to recreate a replication service in the event it has been deleted or gone missing (i.e. the zpool moved to a new host), forcing the replication to start over from scratch – a problem for very large datasets. With Auto-Sync 3.1, the latter problem is resolved, provided NexentaStor can find at least one pair of identical snapshots for the file system.

Where I find this new “feature” particularly helpful is in seed replications to external storage devices (i.e. USB2.0 arrays, JBODs, etc.). It allows a replication to external, removable storage to (1) be completed locally, (2) be shipped to a central repository, and (3) be picked up by a newly created remote replication service that continues the updates over the WAN.

Additionally, consider the case where the above local-to-WAN replication seeding takes place over the course of several months and the hardware at the central repository fails, requiring the replication pool to be moved to another NexentaStor instance. In the past, the limitation on auto-sync would have required a brand new replication set, regardless of the consistency of the replicated data on the relocated pool. Now, a new (replacement) service can be created pointing to the new destination, and auto-sync promises to find the data – intact – and resume the replication updates starting with the last identical marker snapshot.
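Conceptually – and strictly as an illustration, since auto-sync manages this internally – resuming from a common marker works like an incremental ZFS send/receive, where only the delta between the shared snapshot and the current one crosses the wire; the pool, snapshot and host names here are hypothetical:

# ship only the changes between the common snapshot 'mark1' and the newer 'mark2'
zfs send -i tank/vol01@mark1 tank/vol01@mark2 | ssh new-repo zfs recv -F backup/vol01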

NexentaStor Native Transport

The default transport for replication in NexentaStor 3.1 is now NexentaStor’s TCP-based Remote Replication protocol (RR). While SSH is still an option for non-NexentaStor destinations, netcat is no longer supported for auto-sync replications. While no indication of performance benefits is available, two tunable parameters are exposed per RR auto-sync service: TCP connection count (-n) and TCP package size (-P). Defaults are 4 and 1024, respectively, meaning 4 connections and a 1024KB PDU size for the replication session.

Conclusions

For VMware vSphere deployments in SMB, SME and ROBO environments, NexentaStor 3.1 looks to be a good fit, offering high-performance CIFS, NFS, iSCSI and Fibre Channel options in a unified storage environment complete with VAAI support to accelerate vStorage applications. For VMware View installations using NexentaStor, the VAAI/ATS feature should resolve some iSCSI locking behavior issues that have made NFS more attractive despite NFS forgoing the SCSI-based VAAI features. That said, with the storage provisioning changes in View 4.5 and the upcoming View 5, the ability to pick from FC, iSCSI or NFS (especially at 10G) within the same storage platform has definite advantages (if not complexity implications). Suffice to say, NexentaStor’s update adds more open storage tools to the VMware virtualization architect’s bag of tricks.

Update, 8/12/2011:

Nexenta has found some problems with 3.1 post-Q/A. They’ve released this statement on the matter:

Nexenta places the highest importance on maintaining access to and integrity of customer data. The purpose of this Technical Bulletin is to make you aware of an issue with the process of upgrading to 3.1. Nexenta has discovered an issue with the software delivery mechanism we use. This issue can result in errors during the upgrade process and some functionality not being installed properly. Please postpone upgrading to v3.1 until our next Technical Bulletin update. We are actively working to get this corrected and get it back to 100 % service as fast as possible. Until the issue is resolved we have removed 3.1 from the website and suspended upgrades. Thanks for your patience.

According to sources from within Nexenta, the problems appear to be more related to APT repository/distribution issues “rather than the 3.1 codebase.” All ISO and repository distribution for 3.1 has been pulled until further notice and links to information about 3.1 on the corporate Nexenta site are no longer working…

Update, 8/17/2011:

Today, while working on a follow-up post, the lab systems (virtual storage appliances) were updated to NexentaStor 3.1.1 (both Enterprise and Community editions). Since a question was raised about the applicability of the VAAI enhancements to Community Edition (NexentaStor CE), I’ve got a teaser for you: see the following image of two identical LUNs mounted to an ESXi host from NexentaStor Enterprise Edition (NSEE) and NexentaStor Community Edition (NSCE). If you look closely, you’ll notice they BOTH show “supported” status.
