A while back I had the opportunity to be a reviewer for a new book from Packt Publishing, https://www.packtpub.com/ , the VMware vSphere Security Cookbook by Mike Greer. This was my first time as a book reviewer, and I had thought it would be a few minutes in the evening reading the sections the publisher emailed me and adding some short comments. Wrong! As I read the chapters, I found myself thinking about how I would write or explain each topic, then researching on the web or in the VMware KBs to verify whether what I thought, or what was in the chapter, was accurate and the best way to explain it. I can tell you from past experience writing procedure documents for customers, you really need the steps to be defined correctly and accurately. What I really like is the way the book details the steps to configure the components of vSphere security, whether you are doing it for the first time or you have done it several times. The book is based on vSphere and vCNS 5.5; however, I have been working with NSX since its release, and I can see that many of the interfaces are identical. This is especially true of most of the Edge Services Gateway configurations.

The book also covers additional security areas that you always need to interface with, such as Microsoft Active Directory and SSL certificates. The use of the SSL Certificate Automation Tool is covered as well, with real-life examples! You can find the book at: https://www.packtpub.com/virtualization-and-cloud/vsphere-security-cookbook I hope you find the book helpful in your daily vSphere administration!

I recently gave a presentation at several Lunch and Learns that covers getting started with VSAN. I was happy to see that a third of the people who attended had already deployed VSAN to some extent, mostly as a test to get familiar with the configuration and the technology. You can download the presentation from my website: http://www.virtualsouthwest.com/presentations.html Since I gave the presentation, a patch has been released that solves a couple of serious issues:

Virtual machine operations on the Virtual SAN datastore might fail with an error message similar to the following: "create directory <server-detail>-<vm-name> (Cannot Create File)". The clomd service might also stop responding.

The Virtual SAN cluster might report that the Virtual SAN datastore is running out of space even though space is available in the datastore. An error message similar to the following is displayed: "There is no more space for virtual disk <vm-name>.vmdk. You might be able to continue this session by freeing disk space on the relevant volume, and clicking Retry. Click Cancel to terminate this session."

Under the host's Performance tab, when you change the chart option to display the CPU Usage and Usage in MHz metrics and compare the values with the CPU Demand metric on the performance chart, you will notice a huge difference between demand and usage in the CPU performance data. The difference is due to incorrect calculation of the values for Usage and Usage in MHz. Full details are in the KB at: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2102046

Being a former Windows admin, I can never remember the steps to install VMware Tools on a Linux VM, so I end up searching Google each time to find the specific steps. So I am finally posting this, mostly for me, but I hope it will help out others as well! Here are the steps I use:

1. From the vSphere Client, right-click the VM, select Guest, then Install/Upgrade VMware Tools. This mounts the ISO file to the VM.
2. In the Install/Upgrade Tools window, select the Interactive Tools Upgrade button and click OK.
3. Log on to your Linux VM as root.
4. Create a mount point and mount the CD-ROM:
   mkdir /mnt/cdrom
   mount /dev/cdrom /mnt/cdrom
5. Unpack the Tools tar.gz file into a working directory:
   cd /tmp
   tar -xzvf /mnt/cdrom/VMwareTools-version-build.tar.gz
6. Run vmware-install.pl to install VMware Tools (-d uses the default answers when installing Tools):
   cd vmware-tools-distrib
   ./vmware-install.pl -d
7. Unmount the CD:
   umount /mnt/cdrom

And you're done! The last few I have installed on CentOS did not need a reboot.
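If you want to sanity-check the tar flags before running them against the real mounted ISO, you can rehearse the extract step with a throwaway archive. Everything below (the /tmp/tools-demo path, the demo archive name, the placeholder installer file) is made up for the demo; only the tar flags match the real step:

```shell
# Build a dummy tarball standing in for the real VMwareTools-version-build.tar.gz
mkdir -p /tmp/tools-demo/vmware-tools-distrib
echo "installer placeholder" > /tmp/tools-demo/vmware-tools-distrib/vmware-install.pl
tar -czf /tmp/tools-demo/VMwareTools-demo.tar.gz -C /tmp/tools-demo vmware-tools-distrib
rm -r /tmp/tools-demo/vmware-tools-distrib

# Same flags as the real step: x = extract, z = gunzip, v = verbose, f = file
cd /tmp/tools-demo
tar -xzvf VMwareTools-demo.tar.gz

# The installer script is now unpacked under vmware-tools-distrib/
ls vmware-tools-distrib/vmware-install.pl
```

The real archive unpacks to the same vmware-tools-distrib directory, which is why step 6 changes into that directory before running the installer.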

With the release of vSphere 5.5, the vCenter Server Appliance now supports 100 hosts and 3,000 virtual machines. If you are looking to reduce the number of Windows servers used for your management, the vCenter appliance is a great option. And if you have stateless hosts, or are looking to use the new VSAN, you can use Auto Deploy to boot your hosts, thus freeing up your local disk drives. I have done both Auto Deploy and boot from SAN, and I feel Auto Deploy is much easier to use, and it does not use up your SAN storage. To use Auto Deploy you need the vCenter Server, an Auto Deploy server, a TFTP server, PowerCLI, and a DHCP server with reservations for your hosts. There is an Auto Deploy service integrated in the vCenter appliance, shown below, but you need to start the service.

The vCenter appliance also has a TFTP service embedded in it; however, it is not started by default, or when you start the Auto Deploy service. If you SSH to your appliance and check the "atftp" service, it shows as unused.

So you will need to start the atftp service, then update chkconfig to make sure it starts after reboots. I use: chkconfig --level 2345 atftp on. Once the atftp service is running, you can use the vCenter appliance as both your Auto Deploy server and your TFTP server!
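Putting the whole sequence together over SSH on the appliance (run as root; the atftp service name is taken from the appliance's init scripts, so verify it with chkconfig --list on your particular build), this is roughly what it looks like:

```shell
# Start the embedded TFTP server now (it is off by default)
service atftp start

# Make it persistent across appliance reboots (runlevels 2 through 5)
chkconfig --level 2345 atftp on

# Verify: atftp should now show "on" for runlevels 2-5
chkconfig --list atftp
```

These are service-configuration commands for the appliance itself, so there is nothing to install; a reboot afterward is a good way to confirm the service comes back on its own.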

In the next post, I will cover creating the image profiles and DHCP reservations for deploying your servers!

The 2013 vExperts were announced in May, and I am honored to be one of them! The complete list can be found here. One perk of being selected as a vExpert for 2013 is that Train Signal has offered a free one-year subscription to all of their online training videos! You other vExperts can find the info here. I would recommend the Train Signal video training to everyone working on a new or updated certification, whether for VMware, Cisco, Microsoft, or many others. I want to thank all my co-workers who nominated me and told me to apply! Hope to be on the vExpert list in the coming years!

Here is an interesting issue I ran into recently:

- Alarms showing loss of path redundancy to storage
- Several hosts disconnect from a cluster
- Cannot access the host via the vSphere Client or SSH
- One or more datastores shows dead and cannot be accessed
- CPU on several hosts is at or near 100 percent

I saw these issues just after several hosts reported the loss-of-redundant-path-to-storage alarms. The storage is managed by a separate team, so I had them check the fabric and the storage presented to the cluster; they didn't see any issues except an alarm around the same time as the first loss-of-redundant-path alarms. So what is the next step? Try a rescan of the storage. I did that, and the rescan ran for several minutes and timed out; then that host disconnected from the cluster! Going back to the storage team, I had them check the LUN ID of the datastore that showed dead; they said it showed online and they didn't see a problem. Finally they removed and re-presented the LUN to the cluster's hosts. I tried another rescan, and again it took forever and failed. So the next step: reboot a host? I had one that only had one VM on it; I rebooted it, and the previously dead datastore was back. A few minutes later the hosts that had disconnected from the cluster reconnected and appeared fine..?? I remember back in a 4.0 environment, when someone powered off an iSCSI array, the hosts disconnected from the cluster, so I assumed that having the storage pulled out from under the hosts is still an issue in vSphere 5.0. After doing some research and opening a case with VMware, I found this can still be an issue. The link below is to a KB that explains a Permanent Device Loss (PDL) and All Paths Down (APD) error. One note from the KB:

"As the ESXi host is not able to determine if the device loss is permanent (PDL) or transient (APD), it indefinitely retries SCSI I/O, including:

Userworld I/O (hostd management agent)

Virtual machine guest I/O”

That explains why the hosts disconnected and why the CPU on some showed 100 percent. The hostd process just peaks trying to retry I/O; that slows the management agents so you can't connect directly, and of course running a rescan of the storage just compounds the problem. Click here for a link to the KB article. The KB also notes that the only way to recover is to resolve the storage access issue and reboot the hosts. Nice... It turns out there are some settings that can be added to keep this issue from happening in 5.1 and in 5.0 Update 2. For more details, see Cormac Hogan's great info on the storage features in 5.1, starting here- (Hope he doesn't mind me sharing this link.) Another KB states that if Storage I/O Control is enabled, a host cannot unmount the datastore. In my case SIOC was enabled on all of the datastores. The KB details steps to stop the SIOC service on a host to allow the removal of the datastore. Access this KB here- In my case I think rebooting the hosts was the only option to clear the I/O to the lost datastore. Of course, what caused the issue on the storage side is still a mystery. I have since added the settings to each of the hosts and to the cluster; if there is another issue like this one, I am hoping it makes a difference. If you have experienced this or a similar issue, please share your experiences.
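For reference, the settings I am describing look roughly like the following. The names come from the 5.0 Update 1 / 5.1 era KBs as I understand them, so treat this as a sketch and confirm the exact option names against the KB for your build before applying anything:

```shell
# ESXi 5.1: have the host fail VM I/O to a device that reports PDL
# instead of retrying it indefinitely (host advanced setting)
esxcli system settings advanced set -o /Disk/terminateVMOnPDLDefault -i 1

# ESXi 5.0 Update 1/2: the equivalent behavior is enabled with a line
# in /etc/vmware/settings instead of an esxcli option:
#   disk.terminateVMOnPDLDefault = "TRUE"

# On the vSphere HA cluster, add the advanced option
#   das.maskCleanShutdownEnabled = true
# so HA will restart VMs that were terminated by the PDL response.
```

The idea is that instead of hostd retrying I/O forever (the 100 percent CPU symptom above), the host gives up on the dead device and HA can restart the affected VMs elsewhere.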

Well, I have recovered from attending my first VMware Partner Exchange! I thought it was great, and the breakouts were full of valuable technical information. I also attended a boot camp, which meant being in class from Saturday through Monday; not the most fun way to spend a weekend in Las Vegas, but definitely worthwhile. I attended several breakouts that focused on virtualizing business-critical applications, such as Microsoft SQL Server. One demo showed the use of a second, or standby, VM for patches and upgrades. The demo can be seen here- All of the presenters in the breakouts made plenty of time to answer questions during and after their presentations. It was great to be able to ask questions of one of the actual developers of an area or product. The hands-on labs were another great way to see and learn new technologies! For anyone else who was able to attend PEX this year, let me know your thoughts.