Entries in vmware (19)

An interesting article from the Lone Sysadmin about virtual appliances and his love/hate relationship with them. I agree with him pretty much across the board, but I've added a fourth point regarding localization (which he has graciously noted in a comment).

Pretty much all of the virtual appliances out there ship with the US QWERTY layout by default, but almost none of them include the Linux kbd package that would let me select an alternate layout. For those of us who work in the rest of the world and have corporate password policies requiring complex passwords, this is a real problem.
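
When the appliance is built on a distribution with a package manager, the workaround is usually quick. A sketch, assuming a Debian-based appliance where you can get a root console and reach the package repositories (the package names and the `fr` keymap here are examples, not something any particular appliance ships):

```shell
# Hypothetical fix on a Debian-based appliance console, run as root.
# Package names vary by distribution; on Debian, loadkeys lives in kbd
# and the keymaps in console-data.
apt-get update && apt-get install -y kbd console-data

# Switch this console session to an alternate layout, e.g. French AZERTY:
loadkeys fr
```

This only changes the current console session; making it persistent is distribution-specific (e.g. `dpkg-reconfigure console-data` on Debian).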

When the article first crossed my RSS feed, I didn’t read all the way to the bottom, figuring that I’d wait for the beta program to start and then really sink my teeth into it, since I’m already good and busy with vSphere 4.1.

But one little line would have jumped out at me if I’d paid attention: support for Snow Leopard Server (SLS). This has been on my wish list for quite a while now, since it will let me deploy virtual instances of SLS. Technically, this doesn’t present much of a challenge, as there are a number of hacks that let you do it right now. But the kicker will be support from Apple and VMware, as this certainly wouldn’t be announced without Apple’s approval.

All of a sudden I can run a CAL-free server with lots of really nice features, not the least of which is excellent support for iOS clients via the integrated calendaring, wiki, blog, podcasting and so on, and it can also leverage my existing Active Directory environment.

And of course, as a VM, I’ll be able to benefit from all of the advantages of vSphere.

Up until now, there hasn’t been a single feature in vSphere 5 sufficiently compelling to make me look forward to the new version (1 TB of RAM and 32 vCPUs? Cool, but way outside the scope of my projects).

Now I’m champing at the bit…

Not to mention imagining a few interesting things about eventual support for Lion. One of the things I’ve seen is that remote screen sharing is no longer a direct attach to the physical screen à la classic VNC, but a session-by-session screen instance à la Terminal Server.

I ran across an interesting problem recently with the (relatively) new USB passthrough feature on some ESXi 4.1 and 4.1 Update 1 machines.

For some reason, I would get into a situation where I couldn’t add any USB devices to any VMs. After digging around a bit, it seems you can reproduce this behavior by removing from a VM a USB device that is attached through an external hub. I was using an 8-port Trust hub that has two hubs internally. I’ve used the map/unmap function before on other systems without any difficulty, but in those cases the USB peripheral was connected directly to the ESX server rather than going through an external hub. As far as I can tell, the hub is a contributing factor.

I would assign a USB device to a VM and then remove it. Immediately thereafter, the client would claim that there were no USB devices available, even other devices that had never been assigned to a VM. Plugging in a new device made no difference; it wasn’t made available either.

When I went onto the command line, “lsusb” showed me that all of the devices were in fact still seen by the ESX server. Disconnecting, reconnecting, restarting the server and restarting the usbarbitrator service all made no difference. Once the host got into this state it was impossible to map a USB device: every time I went into Edit Settings > Add, the USB Device option was greyed out.

Finally, a call to VMware support and some digging on their part turned up an internal tech note: restarting the hostd service resets the state so that you can assign devices again. There is a caveat, though: ALL devices become available in the Add USB Device wizard, even devices that are currently mapped to VMs, so be careful not to double-map a device.
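
For reference, the sequence from the Tech Support Mode console looks roughly like this (init script paths as I understand them on ESXi 4.1; double-check on your build before relying on them):

```shell
# From the ESXi Tech Support Mode console (ESXi 4.1-era paths; verify on your build):
lsusb                               # confirm the host itself still sees the USB devices
/etc/init.d/usbarbitrator restart   # worth trying first, but didn't help in my case
/etc/init.d/hostd restart           # resets the stuck state so devices can be mapped again
                                    # NB: vSphere Client sessions to the host will be dropped
```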

There’s something about VMware Data Recovery that has me puzzled. If you’ve ever watched the performance counters on the backup datastores assigned to VDR, one thing you notice right away is the very high level of read IOPS. At a glance that doesn’t seem to make any sense, since the datastore is the destination for your backups, not the source.

Looking closer, it appears there is a high level of 4 KB read operations. Again, more than a little strange. The only thing I can think of is that it’s reading out the hash values of the dedup signatures, but even then it shouldn’t need 4 KB per read, unless that’s the minimum block size of the filesystem.

If it’s the hash values of deduplicated blocks, then either something is very inefficient in the VDR engine or my back-of-the-envelope calculations are missing something.

Assuming two completely full 1 TB backup destinations, that works out to 2,147,483,648 KB (2*1024*1024*1024). Assuming 4 KB blocks from the IO profile, there are 536,870,912 possible hash signatures to keep track of. At a standard hash size of 128 bits (16 bytes), the entire hash table works out to 8 GB, which is well beyond the 2 GB of memory VDR is configured with by default.

So keeping the whole table in memory is out, and going to disk to pull out hash values to compare against incoming new data starts to look plausible.
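
The back-of-the-envelope numbers are easy to redo in shell arithmetic (assuming 2 TB of data, 4 KB blocks and a 128-bit, i.e. 16-byte, hash per block):

```shell
# Back-of-the-envelope: hash table size for 2 TB of data at 4 KB per block,
# assuming a 128-bit (16-byte) hash signature per block.
data_bytes=$(( 2 * 1024 * 1024 * 1024 * 1024 ))   # 2 TB
blocks=$(( data_bytes / 4096 ))                   # number of 4 KB blocks
table_gb=$(( blocks * 16 / 1024 / 1024 / 1024 ))  # hash table size in GB
echo "$blocks blocks, ${table_gb} GB hash table"  # 536870912 blocks, 8 GB hash table
```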

Which brings me back to my original question: why in the world is VDR hitting the destination disks so hard with small-block random read IO? To the best of my knowledge, VDR uses a fixed block size for deduplication, unlike appliances like Data Domain that use variable block sizes.

The upshot is that overall VDR performance depends on some fairly high-performance disk storage, which seems strange. This profile is completely understandable when it’s doing integrity checks and scanning the entire contents of the destination, but it’s very strange activity for performing backups.

As long as you’re aware of this, you can plan appropriately and use reasonably high-performance storage with VDR. But the reflex is often to go with high-capacity, relatively slow disks for disk-to-disk backups, which results in pretty horrible VDR performance.