
February 2011

So after spending many hours migrating all my VMs back to their über-performing datastores, I went to power on my secondary domain controller (DC02) only to find it would not start up.

Something to do with a missing file.

“The system cannot find the file specified.
Cannot open the disk ‘DC02.vmdk’ or one of the snapshot disks it depends on.
VMware ESX cannot find the virtual disk ‘DC02.vmdk’. Verify the path is valid and try again.”

I immediately checked the configuration settings in vCenter and all appeared correct. The datastore browser confirmed that it could see the 10GB vmdk file – so what could it be?

I never trust a GUI, so I ssh’d over to the TSM and did a quick directory listing, only to find that whilst the -flat.vmdk file was there, the .vmdk descriptor file wasn’t! Somewhere in the migration back, the VM had lost the small file that describes its disk geometry, controller type and provisioned format (thin/thick).
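The distinction matters: the -flat.vmdk holds the raw disk data, while the .vmdk descriptor is just a small text file pointing at it. A minimal sketch of the broken state (the /tmp path is illustrative; on a real host this would be the VM's folder under /vmfs/volumes/):

```shell
# Simulate what the directory listing revealed: the data extent survived
# the migration but its descriptor did not. (Illustrative /tmp path.)
mkdir -p /tmp/demo-DC02 && cd /tmp/demo-DC02
touch DC02-flat.vmdk                                   # raw data extent is present
ls DC02.vmdk 2>/dev/null || echo "descriptor missing"  # the small .vmdk file is gone
```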

Knowing I had the -flat file was reassuring; had the shoe been on the other foot and all I was left with was the .vmdk descriptor file, I would have been a lot more concerned.

The first step to resolution was to create a new virtual disk identical in size to the -flat file I had been left with. In turn, this would create a new .vmdk descriptor file that I could borrow.
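On ESX the new disk would be created with vmkfstools -c; the descriptor it produces is only a handful of text lines, and its extent line can then be pointed at the orphaned -flat file. Here is a sketch of what such a descriptor looks like for a 10GB disk (the sector count, geometry and adapter type are illustrative values for a 10GB LSI Logic disk, not taken from the original VM):

```shell
# Hand-build an illustrative descriptor: 20971520 sectors x 512 bytes = 10GB.
cat > /tmp/DC02.vmdk <<'EOF'
# Disk DescriptorFile
version=1
CID=fffffffe
parentCID=ffffffff
createType="vmfs"

# Extent description - must reference the surviving -flat file
RW 20971520 VMFS "DC02-flat.vmdk"

# The Disk Data Base
ddb.adapterType = "lsilogic"
ddb.geometry.cylinders = "1305"
ddb.geometry.heads = "255"
ddb.geometry.sectors = "63"
EOF
grep "DC02-flat.vmdk" /tmp/DC02.vmdk   # confirm the extent line points at the flat file
```

With a descriptor like this sitting alongside DC02-flat.vmdk in the VM's folder, the disk becomes usable again.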


My existing iSCSI setup wasn’t delivering the I/O I expected, so I set about upgrading. I replaced my eSATA array controller so I could RAID 10 across the 8 drives in my external drive enclosure (rather than the 4 the previous controller would allow), and built a new physical Windows Server 2008 box to drive the I/O (rather than running it off my old Windows XP instance). To do this I had to offload all my existing VMFS data to another location temporarily so I could recreate the RAID. This was done using a number of external USB HDDs attached to the old iSCSI target server and passed through to VMware as iSCSI targets. The VMs were then Storage vMotioned between iSCSI datastores until the external enclosure was free!

I installed StarWind (my iSCSI target software of choice) on the new server and hooked up the USB HDDs. I then reconfigured it to present these iSCSI HDDs to VMware.

I rescanned the iSCSI adapter but, to my surprise, couldn’t see the VMFS volume – only the LUN itself. Having worked with resignaturing in the past, I realised the volume must still be there, lurking in the background: VMware was masking it because it believed it was a snapshot, the LUN having previously been presented to the host under a different iSCSI IQN.

So without further ado, I ssh’d over to the TSM and ran the following command to confirm my thoughts:
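The command itself isn’t quoted above, but on ESX the usual check is esxcfg-volume -l, which lists VMFS volumes the host has detected as snapshots and whether they can be mounted or resignatured. The output looks along these lines (the UUID is illustrative and truncated, and the label matches the volume mounted below):

```
esxcfg-volume -l
VMFS3 UUID/label: 4d3a.../USB_VMFS_01
Can mount: Yes
Can resignature: Yes
```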

So, to re-add this back into the Storage view so that I could Storage vMotion the VMs back to my new 8-disk RAID setup, I ran the following command:

esxcfg-volume -M USB_VMFS_01

(You can specify -m if you only wish to mount it once; -M mounts the volume persistently.)

Tada! VMFS volumes all present and correct.

I’m now seeing a HUGE performance gain from the 8 disks, and I’m going to try my hardest to push the limits of the 1Gb iSCSI connection before I consider adding a second NIC for Round Robin multipathing on both the VMware hosts and the iSCSI target server.