
2010-07-28

VMworld 2010 is going to be a whirlwind of technological and social activity. Let's keep in touch! If you are going to VMworld 2010 in San Francisco and use social media - like blogs or Twitter - please add yourself to the list below. We're going to use this in a few ways at the event.

We'll pull blog and Twitter feeds from this list to aggregate and feature on VMworld.com

If you do have a blog, you'll qualify to be an 'official blogger' at VMworld 2010, with access to the blogger lounge and blogger and press briefings

Other people will know to expect you and to watch out for you at the lounge

If you are a blogger and will be at VMworld in San Francisco - go and add yourself to the list.

There will be a separate list for Copenhagen released soon.

T minus 31 days till it starts.

And no, I did not look at my calendar - I used PowerShell to give me the answer.
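The same countdown arithmetic, sketched here in Python for illustration (the post itself used PowerShell; the start date of 2010-08-28 is inferred from the "T minus 31 days" remark and this entry's date, so verify it against the actual agenda):

```python
from datetime import date

# Dates are assumptions: post date from this entry, start date inferred
# from the "T minus 31 days" remark.
post_date = date(2010, 7, 28)
event_start = date(2010, 8, 28)

days_left = (event_start - post_date).days
print(days_left)  # 31
```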

My question is as follows. When changing the virtual machine swapfile location, you see (in both screenshots) a nice warning from VMware that this could degrade vMotion performance.

I take this to be true if you move the swapfile location to a local VMFS volume on the ESX host. When initiating a vMotion, the vswp file will have to be copied to the other host's vswp location for the migration to complete.

But what about a different Datastore that is shared storage - but not the location of the virtual machine files?

Now why would you do this? If you look on the right, you will see one reason - no snapshots, which will save precious disk space. A second reason - no replication is needed (unless of course this is a requirement).

Some say this is a risk: if the datastore with the vswp files goes down, then all your VMs go down. That is true in essence, but... since in most cases the datastore is just another volume on the same storage array the VMs run from, the chance of the vswp datastore failing is equal to the chance that the VMs' datastore will fail.

So, my question at the end of the day: is the warning still valid when you define a shared datastore - one which all the ESX hosts in that cluster can access?

There is of course a certain overhead needed to set this option on each ESX host in the cluster - but that is not (IMHO) so much of an issue.

First and foremost, I would like to commend the PowerCLI team for becoming active again. Since my post PowerCLI - What Will the Future Hold? - I have noticed that they have returned to Twitter and the blogosphere, and this makes me very happy!

I do want to point out, though, that until now we did not have a full list of exactly what had changed regarding the namespaces.

The data type of the SearchRoot parameter has been changed to support a search within two or more containers at a time. This parameter now accepts an array of objects rather than a single object as it was with earlier versions. This makes it possible for the cmdlet to search multiple containers identified by the SearchRoot parameter value. For example, you can supply an array of strings each of which represents the canonical name of a certain container, to retrieve objects from all of the containers specified.

2010-07-15

Well, I am happy to announce that this has changed - again! Have you tried to find the VMware vSphere Host Update Utility? It was never downloadable on its own - it was part of the vSphere Client, which was bundled with your ESXi installation. (By the way, the vSphere Client has also been removed from the installation and is now a separate download as well.)

What once was

Is now..

Why the download is actually almost 160MB larger with less inside is a mystery to me (I guess the multiple languages now built in make it a lot larger). But there is no Host Update Utility any more.

The general consensus is that VMware is pushing for central management - which means you need to use Update Manager - which is only part of the vCenter package.

If you still want to update your hosts you will have to revert to a bit of command-line work.

You must configure a scratch partition and reboot the host before proceeding with the upgrade. You can configure a scratch partition for a host under the Software Advanced Settings in the Configuration tab of the vSphere Client.

What is the Scratch Partition?

During the autoconfiguration phase in the Host installation process, a 4GB VFAT scratch partition is created if the partition is not present on another disk. When ESXi boots, the system tries to find a suitable partition on a local disk to create a scratch partition. The scratch partition is not required. It is used to store vm-support output, which you need when you create a support bundle. If the scratch partition is not present, vm-support output is stored in a ramdisk. This might be problematic in low-memory situations, but is not critical.

For ESXi Installable, the partition is created during installation and is thus selected. VMware recommends that you leave it unchanged.

The previous post about what was new in vSphere 4.1 was a general overview with some slide shots. For all ye of little faith thinking that I was only going to post those screenshots with no details - nuh-uh! I prefer to lay down the basics with screenshots, and then go into the details. I mean, you do have to cater to all spectrums of the public, from basic to advanced.

So without further ado - let's go into how you can add your ESX/i server to the domain.

But why would you?

Well, actually there is a very simple reason - security. One of the biggest problems is providing a single mechanism to authenticate yourself with the same credentials to all components of your infrastructure. With vCenter it is easy - since it is a domain member, all authentication is done through Active Directory. But going directly into the ESXi host - that is a different story altogether: you will have to either authenticate with Linux credentials, or configure the authentication to be done by Active Directory - but for that you need a valid Linux user on the ESXi box.

(** Small note - since the future version of ESX will only be ESXi, I have decided that I will be using ESXi exclusively in my posts - unless the issue is directly related to the full ESX version)

There are four ways of doing this:

ESXi Host directly

Host Profiles

CLI

Script

Before starting, you need to make sure of a few things:

You have correct time synchronization between your ESX host and the domain controllers - this is a must. Kerberos is extremely picky when the time is off.

You have proper DNS resolution from the ESX Host, and that the name servers are correct.

Also, your ESX host has to have an FQDN - for example:

Hostname: esx1
Domain: maishsk.local
FQDN: esx1.maishsk.local

On the ESXi Host

Log into your host directly - NOT through the vCenter. The documentation says

I have found that if you do this on the vCenter server, the Properties option is grayed out and you cannot make the change.

Configuration tab -> Authentication Services -> Properties

Enter the domain name in one of two ways: maishsk.local (default computer location) or maishsk.local/Computers/ESX (to put the computer account in the ESX OU under the Computers container).

Click Join Domain, and you will be asked for domain credentials - this user has to have permission to add computers to the domain. The format is either administrator@maishsk.local, MAISHSK\administrator, or just plain administrator.

Once that is done, you can see in the Active Directory Users and Computers console that you now have a new computer account.

To allow the user/group access to the ESXi host, you will have to define the permissions at the appropriate level.

In this case, I gave the Domain Admins group full access to the host:

Permissions -> Add Permission -> Administrators -> Add

From the Server field choose your domain and search for your user/group (reminds anyone of vCenter?)

The user can now log in with their domain credentials.

*** Update ***

I would also like to point out what Raphael Schitz posted on his blog regarding the ESX Admins group, and how this group automatically gets access to a host just added to the domain. Thanks for pointing this out!

By default, the ESX host assigns the Administrator role to the "ESX Admins" group. If the group does not exist when the host joins the domain, the host will not assign the role. In this case, you must create the "ESX Admins" group in Active Directory. The host will periodically check the domain controller for the group and will assign the role when the group exists.

2010-07-13

Now that vSphere 4.1 is released, here is one of the new, not-so-exciting developments that is definitely worth mentioning: VMware has finally made the change.

Up until this release, the live migration of virtual machines between hosts was known as VMotion (with a capital V). From here on it will be known as vMotion (with a small v). This brings everything into sync with all the other VMware branding.

I hope this finally puts this long-lasting debate to rest.

Network Traffic Management - a great new feature allowing you to define shares on the network traffic of your ESX servers.

The diagram at left should be familiar to most. When using 1GigE NICs, ESX hosts are typically deployed with NICs dedicated to particular traffic types. For example you may dedicate 4x 1GigE NICs for VM traffic; one NIC to iSCSI, another NIC to vMotion, and another to the service console. Each traffic type gets a dedicated bandwidth by virtue of the physical NIC allocation.

Moving to the diagram at right … ESX hosts deployed with 10GigE NICs are likely to be deployed (for the time being) with only two 10GigE interfaces. Multiple traffic types will be converged over the two interfaces. So long as the load offered to the 10GE interfaces is less than 10GE, everything is ok—the NIC can service the offered load. But what happens when the offered load from the various traffic types exceeds the capacity of the interface? What happens when you offer say 11Gbps to a 10GigE interface? Something has to suffer. This is where Network IO Control steps in. It addresses the issue of oversubscription by allowing you to set the relative importance of predetermined traffic types.

Network IO Control isolates the traffic types and ensures one traffic type is not dominated by the others. It ensures or guarantees a minimum level of service for each traffic type when those traffic types (or flows) compete for a vmnic (physical NIC).

Note that NetIOC is available only with the vDS. It is not available on the standard switch (vSS).

NetIOC is controlled with two parameters—Limits and Shares.

Limits, as the name suggests, set a limit for that traffic type (e.g. VM traffic) across the NIC team. The value is specified in absolute terms in Mbps. When set, that traffic type will not exceed that limit *outbound* (or egress) of the host.

Shares specify the relative importance of that traffic type when those traffic types compete for a particular vmnic (physical NIC). Shares are specified in abstract units numbered between 1 and 100 and indicate the relative importance of that traffic type. For example, if iSCSI has a shares value of 50, and FT logging has a shares value of 100, then FT traffic will get 2x the bandwidth of iSCSI when they compete. If they were both set at 50, or both set at 100, then they would both get the same level of service (bandwidth).

There are a number of preset values for shares ranging from low to high. You can also set custom values. Note that the limits and shares apply to output or egress from the ESX host, not input.

Remember that shares apply to the vmnics; limits apply across a team.

This slide shows how NetIOC is configured through the vSphere Client to vCenter Server. Select the networking inventory panels; click on the vDS switch and then select the “Resource Allocation” tab. Here you will see the various traffic types supported or identified. NetIOC currently categorizes the traffic types as follows:

- FT

- iSCSI

- vMotion

- Management

- NFS

- VM traffic

For each traffic type you can specify a limit in Mbps and a share value. Right clicking on the traffic type brings up the configuration panel.

In this example, if *every* traffic type tried to send unlimited traffic, VM traffic would get 50/400ths or one-eighth of the bandwidth of the physical interface as a minimum.
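To make the shares arithmetic concrete, here is a small Python sketch (my own illustration, not a VMware API; the share values are chosen so the totals match the 50-out-of-400 example above) computing each traffic type's guaranteed minimum bandwidth on a vmnic under full contention:

```python
def guaranteed_bandwidth(shares, link_mbps):
    """Minimum bandwidth (Mbps) per traffic type when ALL types contend.

    Each type's guarantee is its share count divided by the total shares,
    applied to the physical NIC's capacity.
    """
    total = sum(shares.values())
    return {t: link_mbps * s / total for t, s in shares.items()}

# Illustrative (non-default) share values on a 10GigE vmnic,
# totalling 400 shares with VM traffic at 50.
shares = {"FT": 100, "iSCSI": 50, "vMotion": 100,
          "Management": 50, "NFS": 50, "VM": 50}
mins = guaranteed_bandwidth(shares, 10_000)
# VM traffic is guaranteed 50/400 of 10 Gbps = 1250 Mbps (one-eighth),
# and FT (100 shares) gets twice the bandwidth of iSCSI (50 shares).
```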

After all the videos and posts published regarding the new feature of SIOC (Storage I/O Control), here are some of the problems and solutions that SIOC addresses.

The problem Storage I/O Control addresses is the situation where less important workloads take the majority of I/O bandwidth away from more important applications. In the case of the three applications shown here, the data mining is hogging the majority of the storage I/O resource, and the two that are more important to business operations are getting less performance than they need.

What a Virtualization Admin wants to see is a distribution of I/O that is aligned with the importance of each virtual machine. Where the most important business critical applications are getting the I/O bandwidth needed for them to be responsive and the less critical data mining application is taking less I/O bandwidth.

I/O shares can be set at the virtual machine level, and although this capability has been there for a few previous releases, it was not enforced at a cluster-wide level until release 4.1. Prior to 4.1, the I/O shares and limits were enforced for a VM with more than one virtual disk, or for a number of VMs on a single ESX server. But with 4.1, these I/O shares are now used to distribute I/O bandwidth across all the ESX servers which have access to that shared datastore.

The ability to set shares for I/O is done via edit properties on the virtual machine. This screen shows two virtual disks and the ability to set priority and limits on the I/Os per second.

Once the shares are set on the virtual machines in a VMware cluster, one also needs to enable the "Storage I/O Control" option on the properties screen of each datastore on which you want Storage I/O Control working. The other thing needed for Storage I/O Control to kick in is that congestion, measured in the form of latency, must exist on the datastore for a period of time.

The example which comes to mind is a car pool lane: it is not typically enforced when there is not a lot of traffic on the highway. It would be of limited value if you could travel at the same speed in the non-car-pool lane as in the car pool lane. In much the same way, Storage I/O Control will not be put into action until latency stays above a sustained value of 30 ms.

One can then observe which VMs have what shares and limits set via the Virtual Machines tab for the datastore. As datastores are now objects managed by vCenter, there are several new views in vSphere that enable you to see which ESX servers are connected to a datastore and which VMs are sharing that datastore. Many of these views also allow one to customize which columns are displayed and create specific views to report on usage.

The way in which these I/O shares are used to affect performance is that the queue depth for each ESX server can be assigned and throttled to align with the specific shares assigned to each VM running on the collective pool of storage. In the case of our three VMs displayed earlier, the data mining VM gets the fewest queue slots assigned, while the other two VMs get many more queuing slots enabled for their I/O.
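A rough sketch of the idea in Python (my own simplification, not VMware's actual algorithm, and the VM names are hypothetical): below the sustained-latency threshold nothing is throttled; above it, queue slots are divided among the VMs in proportion to their I/O shares:

```python
def sioc_queue_slots(vm_shares, total_slots, sustained_latency_ms,
                     threshold_ms=30):
    """Toy model of SIOC throttling.

    Returns None while sustained latency is under the congestion
    threshold (no throttling); otherwise splits the available queue
    slots proportionally to each VM's shares.
    """
    if sustained_latency_ms < threshold_ms:
        return None  # no congestion - SIOC does not kick in
    total = sum(vm_shares.values())
    return {vm: total_slots * s // total for vm, s in vm_shares.items()}

# Hypothetical three-VM example: data mining has the fewest shares,
# so it is assigned the fewest queue slots once latency exceeds 30 ms.
vms = {"online-store": 1000, "mail-server": 1000, "data-mining": 500}
slots = sioc_queue_slots(vms, 64, sustained_latency_ms=35)
```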

Boot from SAN will be fully supported in ESXi 4.1. It was only experimentally supported in ESXi 4.0. Boot from SAN will be supported for FC, iSCSI, and FCoE – for the latter two, it will depend upon hardware qualification, so please check the HCL and Release Notes for vSphere 4.1.

Scripted Installation, the equivalent of Kickstart, will be supported on ESXi 4.1. The installer can boot over the network, and at that point you can also do an interactive installation, or else set it up to do a scripted installation. Both the installed image and the config file (called “ks.cfg”) can be obtained over the network using a variety of protocols. There is also an ability to specify preinstall, postinstall, and first-boot scripts. For example, the postinstall script can configure all the host settings, and the first boot script could join the host to vCenter. These three types of scripts run either in the context of the Tech Support Mode shell (which is a highly stripped down version of bash) or in Python.
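As a sketch, a minimal ks.cfg might look like the following. The directive names are from my reading of the vSphere 4.1 scripted-install documentation and the server URL is a placeholder, so verify everything against the official guide for your version before use:

```
# Minimal ESXi 4.1 scripted-install file (illustrative only)
vmaccepteula
rootpw mypassword
autopart --firstdisk --overwritevmfs
install url http://webserver/esxi41/
network --bootproto=dhcp --device=vmnic0

# First-boot script runs in the Tech Support Mode shell
%firstboot --unsupported --interpreter=busybox
# ...configure host settings here, e.g. join the host to vCenter
```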

In ESXi 4.0, Tech Support Mode usage was ambiguous. We stated that you should only use it with guidance from VMware Support, but VMware also issued several KBs telling customers how to use it. Getting into Tech Support Mode was also not very user-friendly.

In ESXi 4.1, Tech Support Mode (TSM) will be fully supported. You can enable and disable it either in the DCUI or in vCenter. TSM over SSH, aka Remote TSM, is also fully supported, and can be enabled and disabled independently of local TSM.

The warning not to use TSM will be removed from the login screen. However, anytime TSM is enabled (either local or remote), a warning banner will appear in the vSphere Client for that host. This is meant to reinforce the recommendation that TSM only be used for fixing problems, not on a routine basis.

By default, after you enable TSM (both local and remote), they will automatically become disabled after 10 minutes. This time is configurable, and the timeout can also be disabled entirely. When TSM times out, running sessions are not terminated, allowing you to continue a debugging session. All commands issued in TSM are logged by hostd and sent to syslog, allowing for an incontrovertible audit trail.

Other new vCLI commands include network troubleshooting and new information exposed in resxtop. Finally, the ability to forcibly kill a VM has been added to vCLI, thus eliminating one of the most common reasons for wanting to use TSM.

There is now an ability to totally lock down a host. Lockdown mode in ESXi 4.1 forces all remote access to go through vCenter. The only local access is for root to access the DCUI – this could be used, for example, to turn off lockdown mode in case vCenter is down. However, there is an option to disable DCUI in vCenter. In this case, with Lockdown mode turned on, there is no possible way to manage the host directly – everything must be done through vCenter. If vCenter is down, the only recourse in this case is to reimage the box.

Of course, Lockdown Mode can be selectively disabled for a host if there is a need to troubleshoot or fix it via TSM, and then enabled again.