Friday, May 31, 2013

Just to keep you updated (no, this blog is not a flash in the pan!):
A colleague & I have meanwhile set up an IPv6 test lab on the same hardware I use in my home lab. And this means a complete setup: DHCPv6, RA, static IPv6, tunnels, firewalling, a broad range of client OSes - the whole nine yards.
It's going to take some time to write a series of blog posts describing the setup, and I'm still tempted to use IPv6 for the vSphere infrastructure as well. Maybe even for iSCSI, although it's no longer officially supported...

So stay tuned, there's a lot of stuff coming soon! I just have to finish the setup, the writing - and my holidays. :-)

The update will take only a few minutes, in my case less than 10. The appliance needs to be rebooted and runs fine afterwards. Don't worry about the .hmac files described below, they will be deleted during the update anyway.

The .hmac files contain hashes of /usr/lib64/libcrypto.so.0.9.8 and /usr/lib64/libssl.so.0.9.8 used for FIPS compliance. When the corresponding packages are updated, these files are not deleted immediately.
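If you want to check for leftovers yourself, a quick search should reveal them (a sketch; the exact file names are an assumption from my appliance):

find /usr/lib64 -name '*.hmac'

Delete whatever it finds once you have confirmed these are the stale hash files.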

During the appliance update the vami-sfcb service fails to start, delaying the whole update process until the maximum retry limit for this service is reached. If the appliance is rebooted before this timeout, the postinstall phase has not been executed and the vCenter will not start anymore - either because of said OpenSSL error or because vpxd fails with the error message

Database version id '510' is incompatible with this release of VirtualCenter.
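To see which of the two failures you are hitting, check the vpxd log (the path below is from my appliance, verify it on yours):

grep -i 'Database version' /var/log/vmware/vpx/vpxd.log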

I was able to revive the appliance in my lab, but this is of course neither supported nor recommended. It runs fine again, but its state is not consistent, so I would recommend booting it just one more time to perform a migration to a fresh installation and save the configuration & data. Depending on when the update was interrupted, your results may vary.

If the appliance itself does not start properly anymore, boot it from a Linux live CD (GParted or Parted Magic are sufficient), mount the filesystem and delete the .hmac files. Perform a normal boot afterwards.
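A minimal sketch of that rescue, assuming the root filesystem sits on /dev/sda3 (check with fdisk -l, your partition layout may differ):

mkdir -p /mnt/vcsa
mount /dev/sda3 /mnt/vcsa
find /mnt/vcsa/usr/lib64 -name '*.hmac' -delete
umount /mnt/vcsa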
If the web UI then allows you to run a normal update, do so, and you should be fine.

Otherwise, try it manually (the following steps assume you're familiar with Linux, and you should check the prerequisites first):

Log in to the appliance via SSH as root

cd /opt/vmware/var/lib/vami/update/data/job

cd to the latest subdirectory, which should have the highest number

Check if the update belongs to 5.1 U1: head manifest.xml should show build 5.1.0.10000

Attach the updaterepo ISO to the VM

mount /dev/sr0 /media/cdrom (create the mount point with mkdir -p /media/cdrom if necessary)

cd /opt/vmware/var/lib/vami/update/data (move an already existing package-pool directory aside first)

ln -s /media/cdrom/update/package-pool package-pool

cd back to the job subdirectory

./pre_install '5.1.0.5300' '5.1.0.10000'

./test_command (may report "failed dependencies")

cp -p run_command run_repair

vi run_repair and change the first command from "rpm -Uv" to "rpm -Uv --nodeps --replacepkgs"
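Finally execute the edited script to apply the packages with the relaxed flags (I assume here that letting it run to completion and then resuming the update from the web UI - or rebooting - is all that is left; verify its output before you reboot):

./run_repair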

In my previous post I described how to reduce the vCenter memory requirements on Windows. Basically the same is true for the vCenter appliance, but the files are a bit harder to find. Besides that, all disclaimers apply - this is in no way supported by VMware.
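Locating the relevant files is easiest with a simple search (a sketch; the results depend on your appliance version, mine is 5.1):

find / -name wrapper.conf 2>/dev/null

On my appliance this turns up the wrapper configurations of the Java-based services; the keys to lower are the same wrapper.java.initmemory and wrapper.java.maxmemory values as in the Windows post.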

After these adjustments the VM memory can safely be reduced to 4-5 GB. But beware that - sadly enough - the Tomcat JVM still tends to eat up memory over time. Therefore I prefer to stick with 5 GB RAM.

With vSphere 5.1 the memory requirements of the vCenter server have dramatically increased. If all components reside on a single Windows server [VM], even the smallest inventory size will require 10 GB of memory, according to the VMware Installation and Setup guide. Although this document states a minimum of 4 GB memory for the vCenter Appliance, it is in fact configured with 8 GB RAM after deployment. This will most likely exceed or significantly strain the resources of small home labs or all-in-one setups with VMware Workstation.

Is this necessary? Nope. But due to the default JVM memory settings, a simple reduction of the VM's RAM could lead to swapping and obviously have a negative impact on the overall performance. The following adjustments to the application settings will allow you to reduce the VM memory to 4-5 GB. This post covers a Windows-based vCenter server; the following post will cover the appliance.

No need to mention that all of this is absolutely not supported by VMware, right?

Prerequisites:

The vCenter server is installed on a Windows 2008 R2 server VM with SQL Server 2008 R2 Express and no noteworthy additional software or roles. The SQL Server setting “Maximum server memory” has been configured for a low value – 256 MB should be fine.
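If you prefer the command line over SQL Server Management Studio, a sqlcmd one-liner should do (the instance name is an assumption - the vCenter installer typically names the bundled instance VIM_SQLEXP; adjust to yours):

sqlcmd -S .\VIM_SQLEXP -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE; EXEC sp_configure 'max server memory', 256; RECONFIGURE;"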

After installation of the vCenter Server components edit the following files and change the settings:
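To give you an idea of what such a change looks like, here is one example (path and numbers are illustrations from my setup, not official recommendations - the Inventory Service is one of the biggest consumers):

C:\Program Files\VMware\Infrastructure\Inventory Service\conf\wrapper.conf

wrapper.java.initmemory=256
wrapper.java.maxmemory=1024

The other wrapper-based services use the same two keys; values are in MB.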

These settings are the lowest values I have personally used without experiencing any problems in an environment of two ESXi hosts and about three dozen VMs, half of them up & running, the others powered off or templates. To be perfectly honest, I did not try to find the absolute lowest possible settings – the results of the first attempt were satisfying enough, cutting the RAM requirements in half and thus roughly back to pre-5.1 levels.

If you do run into problems, regarding either performance or functionality, please post a comment with the parameter & value you changed to resolve it.

Monday, April 22, 2013

I suppose most virtualization blogs include a description of the author’s test & lab gear, so I’ll start with that. :-)

I decided not to virtualize the lab itself, but to use real equipment. Yep, it’s possible to build an all-in-one setup with a standard PC and VMware Workstation. But then you’re not able to try out the pros and cons of different network setups and configurations, or to reproduce problems from customer environments. A high-performance PC with lots of RAM would even have been more expensive at that time - I built my home lab in early 2011, so please keep in mind that it is 2-year-old stuff. So, here’s the list.

Two ESXi hosts:
AMD Phenom II X6 1055T E0 (6 x 2.8 GHz) on an Asus M4A88T-M mainboard with 24 GB of DDR3-1333 RAM each. One HP NC360T Intel-based dual port NIC and one Intel Gigabit CT Desktop NIC, which together with the onboard Realtek makes a total of 4 NICs. I got the HP NICs from eBay, where you can still find them (or even genuine Intel dual port NICs) for around 50 Euro.

Network:
LevelOne GSW-1676 16-port Gigabit “smart” switch. Which basically means it’s friggin’ complicated to properly configure the VLANs, trunks and port settings using the web UI. I’d rather suggest looking for a Cisco SG200 series switch or the like.

The cost was around 1000 Euro for the whole lab, which is not that much considering that you have two physical boxes and a real network.

I chose AMD since, in my opinion, they (still!) offer the best ratio of cores to cost. The single-thread performance of Intel CPU cores is superior, but with AMD you’ll get more cores, and that usually suits virtualization needs better. The ASUS mainboard officially supports only 4 GB DIMMs, and I started with 16 GB in each system. Last year, when RAM got amazingly cheap, I tried a set of four 8 GB DIMMs and found out that the board supports them without any problem, so the total memory went up to 48 GB. When the vCenter memory requirements dramatically increased with vSphere 5.1, I was quite glad to have expanded the resources at the right time. BTW: a guide on how to reduce the vCenter memory requirements down to a more home-lab-friendly 5 GB will follow soon.

The latest addition was a Juniper NetScreen-50 firewall. Used ones go for around 40 Euro on eBay. They have only 4 Fast Ethernet ports, but they add another “real life” complexity (like the switches) that you’ll have to deal with when building real vSphere environments. If you have the chance to grab one of these fine devices, I recommend doing so.

This blog just came to life, and I will use it to post my thoughts, findings, hints, tips & tricks around all virtualization aspects I will come across (and some other stuff maybe), with the main focus on VMware products.

My journey to Virtualization and Cloud Computing started in late 2005 with Solaris 10 Zones / Containers. Later on I started to focus on x86 technologies and VMware products. In early 2008 I took my first certification and became VCP3 #25734. I kept my certification current as VCP4 and VCP5-DCV and became VCAP4-DCD #483 in February 2011.

I’m working for a consulting company and am usually available for challenging projects.