Author: Daniel

In my last post about Infinio Accelerator we introduced the product and its basics. Now it is time to go deeper – how does this server-side cache work?

Infinio’s cache inserts server RAM (and optionally, flash devices) transparently into the I/O stream. By dynamically populating server-side media with the hottest data, Infinio’s software reduces storage requirements to a small fraction of the workload size. Infinio is built on VMware’s vSphere APIs for I/O Filtering (VAIO) framework. This enables administrators to use VMware’s Storage Policy Based Management to apply Infinio’s storage acceleration filter to VMs, VMDKs, or groups of VMs transparently.

An Infinio cluster seamlessly supports typical cluster-wide VMware operations, such as vMotion, HA, and DRS. Introduction of Infinio doesn’t require any changes to the environment. Datastore configuration, snapshot and replication setup, backup scripts, and integration with VMware features like VAAI and vMotion all remain the same.

Infinio’s core engine is a content-based memory cache that scales out to accommodate expanding workloads and additional nodes. Deduplication enables the memory-first design, which can be complemented with flash devices for large working sets. In a tiered configuration such as this, the cache is persistent, enabling fast warming after either planned or unplanned downtime.

Let’s move on to the installation – it is easy and entirely non-disruptive, with no reboots or downtime. It can be completed in just a few steps via an automated installation wizard. The wizard collects vCenter credentials and location, and the desired Management Console information, then automatically deploys the console:

Shared storage performance and characteristics (IOPS, latency) are crucial for overall vSphere platform performance and user satisfaction. With the advent of SSD and memory cache solutions we have many options to choose from for storage acceleration (local SSD, array-side SSD, server-side SSD). Let’s discuss server-side caching further – the act of caching data on the server.

Data can be cached anywhere and at any point on the server that makes sense. It is common to cache commonly used data from the DB to prevent hitting the DB every time the data is required. We cache the results from competition scores since the operation is expensive in terms of both processor and database usage. It is also common to cache pages or page fragments so that they don’t need to be generated for every visitor.

In this article I would like to introduce one of the commercial server-side caching solutions from INFINIO – Infinio Accelerator 3.

Infinio Accelerator increases IOPS and decreases latency by caching a copy of the hottest data on server-side resources such as RAM and flash devices. Native inline deduplication ensures that all local storage resources are used as efficiently as possible, reducing the cost of performance. Infinio is built on VMware’s VAIO (vSphere APIs for I/O Filtering) framework, which is the fastest and most secure way to intercept I/O coming from a virtual machine. Its benefits can be realized on any storage that VMware supports; in addition, VMware features like DRS, SDRS, VAAI and vMotion all continue to function the same way once Infinio is installed. Finally, future storage innovations that VMware releases will be available immediately through I/O Filter integration.

The I/O Filter is the most direct path to storage for capabilities like caching and replication that need to intercept the data path. (Image courtesy of VMware)

Licensing

Infinio is licensed per ESXi host in an Infinio cluster. Software may be purchased for perpetual or term use:

A perpetual license allows the use of the licensed software indefinitely with an annual cost for support and maintenance.

A term license allows the use of software for one year, including support and maintenance.

For more information on licensing and pricing, contact sales@infinio.com.

I’m very happy to announce that we received a very friendly response from Infinio support and got an option to download a trial version of the software – the next articles will describe the product in more depth and show “real life” examples of use in our lab environment.

If ESXi is holding the lock, you can restart the management agents as per the above advice, or migrate all VMs and reboot the host, or determine which process is holding the lock – just run one of these commands:

# lsof file

# lsof | grep -i file

For example:

# lsof | grep test02-flat.vmdk

You should see an output similar to:

COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME

71fd60b6- 3661 root 4r REG 0,9 10737418240 23533 Test02-flat.vmdk

Check the process with the PID returned above; in our example:

# ps -ef | grep 3661

To kill the process, run the command with the PID from the example:

# kill 3661
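The steps above can be sketched as a single pipeline. This is a rough sketch, not a definitive procedure: the VMDK name is a placeholder taken from the example above, and it assumes column 2 of the lsof output is the PID, as in the sample output shown earlier.

```shell
# Placeholder disk name from the example above; adjust to your VM's disk.
VMDK="${VMDK:-test02-flat.vmdk}"
# Column 2 of the lsof output is the PID of the lock holder.
PID=$(lsof 2>/dev/null | grep -i -- "$VMDK" | awk '{print $2}' | head -n1)
if [ -n "$PID" ]; then
  ps -ef | grep -- "$PID"   # double-check what the process actually is
  kill "$PID"               # send SIGTERM to release the lock
else
  echo "no process holds $VMDK"
fi
```

Always inspect the process with ps before killing it – on ESXi the lock holder is often a legitimate vmx or backup process.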

All in all, once we solve the “locks” problems we can continue the VM consolidation process:

Connect directly to the ESXi host where the problematic VM resides

Power off the problematic VM

Disable CBT for the virtual machine (very often the ctk files are corrupt, for example when we run a backup job on a VM with an active snapshot – this is an unsupported configuration). For more information, see: http://kb.vmware.com/kb/1031873

Remove any files ending with the *-ctk.vmdk file extension in the virtual machine directory.
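As a rough sketch of the last step (the datastore path is hypothetical – substitute your own VM’s directory), removing the change-tracking files could look like:

```shell
# Hypothetical VM directory; replace with your VM's datastore path.
VMDIR="${VMDIR:-/vmfs/volumes/datastore1/Test02}"
removed=0
# Delete every CBT change-tracking file (*-ctk.vmdk) in the directory.
for f in "$VMDIR"/*-ctk.vmdk; do
  [ -e "$f" ] && rm -f "$f" && removed=$((removed+1))
done
echo "removed $removed ctk file(s)"
```

Only do this with the VM powered off and CBT disabled; the files are recreated cleanly when CBT is re-enabled.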

A sound card in a vSphere virtual machine is an unsupported configuration. This feature is dedicated to virtual machines created in VMware Workstation. However, you can still add an HD Audio device to a vSphere virtual machine by manually editing the .vmx file. I have tested it in our lab environment and it works just fine.

IMPORTANT:
Make a backup copy of the .vmx file. If your edits break the virtual machine, you can roll back to the original version of the file.
For more information about editing files on an ESXi host, refer to KB article: https://kb.vmware.com/kb/1020302

Once you have opened the .vmx file for editing, navigate to the bottom of the file and add the following lines:

sound.present = "true"
sound.allowGuestConnectionControl = "false"
sound.virtualDev = "hdaudio"
sound.fileName = "-1"
sound.autodetect = "true"
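A minimal sketch of applying this from the shell, backup included. The VMX path is hypothetical – it defaults to a temporary file here so the sketch can be dry-run safely; point it at your VM’s real .vmx file on the host.

```shell
# Hypothetical .vmx path; defaults to a temp file for a safe dry run.
VMX="${VMX:-$(mktemp)}"
# Back up the .vmx first so we can roll back if the VM breaks.
cp "$VMX" "$VMX.bak"
# Append the HD Audio device settings.
cat >> "$VMX" <<'EOF'
sound.present = "true"
sound.allowGuestConnectionControl = "false"
sound.virtualDev = "hdaudio"
sound.fileName = "-1"
sound.autodetect = "true"
EOF
grep -c '^sound\.' "$VMX"   # prints 5
```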

Save the file and power on the virtual machine.

Once it has booted and you have enabled the Windows Audio service, sound will work fine.

If you go to “Edit Settings” of the VM, you can see information that the device is unsupported. Please be aware that after adding a sound card to your virtual machine you may experience unexpected behavior (tip: in our lab environment this configuration works without issues).

Content Library was introduced in vSphere 6.0 as a way to centrally store and manage VM templates, ISOs, and even scripts. Content Library operates with a Publisher/Subscriber model where multiple vCenter Servers can subscribe to another vCenter Server’s published Content Library so that the data stored within that Content Library is replicated across for local usage. For example, if there are two data centers each with their own vCenter Server a customer could create a Content Library to store their VM templates, ISOs, and scripts in and then the vCenter Server in the other data center could subscribe and have all of those items replicated to a local datastore or even NAS storage. Any changes made to the files in data center 1 would be replicated down to data center 2.

With vSphere 6.5 VMware has added the ability to mount an ISO directly from the Content Library versus having to copy it out to a local datastore prior to mounting. Customers also now have the ability to run VM customizations against a VM during deployment from a VM template within a Content Library. Previously, customers needed to pull the template out of the Content Library if a customization was required. Customers can now easily import an updated version of a template as opposed to replacing templates, which could disrupt automated processes.

There are now additional optimizations related to the synchronization between vCenter Servers reducing the bandwidth and time required for synchronization to complete.

Customers can also take comfort in knowing that their Content Libraries are also included in the new file-based backup and recovery functionality as well as handled by vCenter HA.

In vSphere 6.5 vCenter has a new native high availability solution that is available exclusively for the vCenter Server Appliance. This solution consists of Active, Passive, and Witness nodes which are cloned from the existing vCenter Server. The vCenter HA cluster can be enabled, disabled, or destroyed at any time. There is also a maintenance mode so planned maintenance does not cause an unwanted failover.

vCenter HA supports both an external PSC as well as an embedded PSC. Note, however, that in vSphere 6.5 at GA an embedded PSC cannot be used to replicate to any other PSC. Thus, if using an embedded PSC the vCenter Server cannot participate in Enhanced Linked Mode.

vCenter HA has some basic network requirements. A vCenter HA network must be established, separate from the subnet currently used by the primary network interface of the vCenter Server Appliance (eth0). If using the Basic workflow, a new interface, eth1, will be added to the appliance automatically prior to the cloning process. eth1 will be attached to the vCenter HA private network. The port group connecting to this network may reside on either a vSphere Standard Switch (VSS) or a vSphere Distributed Switch (VDS). There are no specific TCP/IP requirements for the vCenter HA network other than latency within the prescribed 10 ms RTT. Layer 2 connectivity is not required.

Failover can occur when an entire node is lost (a host failure, for example) or when certain key services fail. For the initial release of vCenter HA an RTO of about 5 minutes is expected, but it may vary slightly depending on load, size, and capabilities of the underlying hardware. During a failover event a temporary web page will be displayed indicating that a failover is in progress. That page will then refresh to the vSphere Web Client login page once vCenter Server is back online. If a user is not active during the failover, they may not be prompted to re-login. When compared to other high availability solutions, vCenter HA has several advantages:

PSC High Availability

After making vCenter Server highly available we also need to consider the availability options for the Platform Services Controller.

As you may remember, in vSphere 6.0 a supported load balancer was required to provide HA for the PSC. If automated failover is not required, there is an option to manually repoint a vCenter Server between PSCs within an SSO site.

In vSphere 6.5 VMware is providing a PSC HA solution that doesn’t require a load balancer, but there is some integration work to be completed with other products in the SDDC portfolio before native PSC HA can be enabled.

I plan to test the new vCenter and PSC HA features in our lab environment and will provide a separate article with my configuration details. At this moment let me point you to the VMware KB as an additional reference:

The new vCenter Server Appliance Management Interface is still accessed via port 5480 for any vCenter Server or Platform Services Controller appliance. This refreshed UI now includes additional resource utilization graphs to provide a simple-to-consume visualization of CPU, Memory, Disk, and Database metrics:

The screenshot above, to the right, shows the new vCenter Database monitoring screen that provides some insight into the PostgreSQL database disk usage to help prevent crashes due to running out of space. There are also new default warnings presented in the vSphere Web Client to alert administrators when the database is getting close to running out of space, and a graceful shutdown mechanism at 95% full to prevent database corruption. Customers can also configure syslog in this improved VAMI.

SUMMARY

New vCenter Server Appliance Management Interface

Built-in monitoring: Network, CPU, and Memory

Visibility to vPostgres DB

Remote syslog configuration

New in vCenter Server 6.5 is native backup and restore for the vCenter Server Appliance. This new out-of-the-box functionality enables customers to back up vCenter Server and Platform Services Controller appliances directly from the VAMI or API. The backup consists of a set of files that will be streamed to a storage device of the customer’s choosing using SCP, HTTP(s), or FTP(s) protocols. This backup fully supports vCenter Server Appliances with embedded and external Platform Services Controllers.
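For the API route, a hedged sketch follows: the hostname, endpoint path, and JSON fields are my assumptions to check against the vSphere 6.5 appliance API documentation; authentication via a session token is omitted, and the request is only printed here, not sent.

```shell
# Hypothetical vCenter host; the endpoint path and body fields are
# assumptions to verify against the 6.5 appliance API reference.
URL="https://vcsa.lab.local/rest/appliance/recovery/backup/job"
BODY='{"piece":{"location_type":"SCP","location":"backuphost:/backups/vc","location_user":"backup","location_password":"***","parts":["common"]}}'
# Dry run: print the request instead of sending it.
echo "POST $URL"
echo "$BODY"
# On a real system (after obtaining a session token):
# curl -k -X POST "$URL" -H "vmware-api-session-id: $TOKEN" \
#      -H "Content-Type: application/json" -d "$BODY"
```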

The Restore workflow is launched from the same ISO from which the vCenter Server Appliance or PSC was originally deployed or upgraded. You can see from the lower screenshot that we have a new option to restore right from the deployment UI. The restore process deploys a new appliance and then uses the desired network protocol to ingest the backup files. It is important to note that the vCenter Server UUID and all configuration settings will be retained.

There is also an option to encrypt the backup files using symmetric key encryption. A simple checkbox and encrypted password is used to create the backup set and then that same password must be used to decrypt the backup set during a restore procedure. If the password is lost there is no way to recover those backup files as we do not store the password and do not use reversible encryption.

The vCenter Server Appliance deployment experience has been enhanced in the vSphere 6.5 release. Installation workflow is now performed in 2 stages. The first stage deploys an appliance with the basic configuration parameters: IP, hostname, and sizing information including storage, memory, and CPU resources.

Stage 2 then completes the configuration by setting up SSO and role-specific settings. Once Stage 1 is complete we can snapshot the VM and roll back if any mistakes are made in Stage 2. This prevents having to start completely over if anything goes wrong during the deployment process.

NOTE!!! There are versions of the deployment application available for Windows, Linux, and macOS.

A new feature in vSphere 6.5 is the ability to migrate a Windows vCenter Server 5.5 or 6.0 to a vCenter Server Appliance 6.5. The migration process starts by running the Migration Assistant, which serves two purposes. The first, pre-checks of the source Windows vCenter Server 5.5 or 6.0 to determine if it meets the criteria to be migrated. Second, it is the data transport mechanism that migrates data from the source Windows vCenter Server 5.5 or 6.0 to the target vCenter Server Appliance 6.5.

The Migration tool will automatically deploy a new vCenter Server Appliance 6.5 and migrate configuration, inventory, and alarm data by default from a Windows vCenter Server 5.5 or 6.0. If you want to keep your historical and performance data (stats, events, tasks) along with configuration, inventory, and alarm data there is the option to also migrate that information. The vSphere 6.5 release of the Migration Tool provides granularity for historical and performance data selection.

Both embedded and external topologies are supported, but the Migration Tool will not allow changing your topology during the migration process. Changing of topologies will need to be done before the migration if consolidation of your vSphere SSO domain is required.

The vCenter Server Appliance 6.5 is the first VMware appliance to run on Photon OS, a Linux OS optimized for virtualization which will in the near future become the standard for all VMware virtual appliances. Photon OS provides many performance benefits to the vCenter Server Appliance, including about a 3x performance gain over its Windows counterpart and significantly reduced boot and restart times. This also means no more dependency on a 3rd party for OS patching, which should greatly reduce the amount of time it takes VMware to deliver security patches and updates to the vCenter Server Appliance.

VCSA – main features:

Native High Availability

VMware Update Manager

Improved Appliance Management

Native Backup / Restore

In vSphere 6.0 we saw performance and scalability parity for the vCenter Server Appliance when compared to its Windows-based counterpart. With vSphere 6.5 we now see feature parity and even new features that are exclusive to the vCenter Server Appliance. Let’s take a quick look at each of these new features before addressing them in more detail later:

Let’s start with vCenter High Availability which is a native HA solution built right into the appliance. Using an Active/Passive/Witness architecture, vCenter is no longer a single point of failure and can provide a 5-minute RTO. This HA capability is available out of the box and has no dependency on shared storage, RDMs or external databases.

Next, we have the integration of VMware Update Manager into the vCenter Server Appliance. Now VMware Update Manager is included by default into the vCenter Server Appliance and makes deployment and configuration a snap.

Another exclusive feature of the vCenter Server Appliance 6.5 is the improved appliance management capabilities. The vCenter Server Appliance Management Interface continues its evolution and exposes additional health and configurations. This simple user interface now shows Network and Database statistics, disk space, and health in addition to CPU and memory statistics which reduces the reliance on using a command line interface for simple monitoring and operational tasks.

Finally, VMware has added a native backup and restore capability to the vCenter Server Appliance in 6.5 to allow for simple out-of-the-box backup options in addition to the traditional supported methods, including VMware Data Protection and VMware vSphere Storage APIs – Data Protection (formerly known as VMware vStorage APIs for Data Protection or VADP). This new backup and restore mechanism allows customers to use a simple user interface and removes reliance on 3rd party backup solutions to protect their vCenter Servers and Platform Services Controllers.

Note !!! All these new features are only available in the vCenter Server Appliance.

Before vSphere 6.5 only one default gateway was allowed for all VMkernel ports in an ESXi host. vSphere features such as DRS, iSCSI, and vMotion that use VMkernel ports are constrained by this limitation: VMkernel ports on a subnet other than the one with the default gateway were not routable without the use of static routes. These static routes had to be manually created and were hard to maintain.

vSphere 6.5 provides the capability to have a separate default gateway for every VMkernel port. This simplifies management of VMkernel ports and eliminates the need for static routes.

Prior to vSphere 6.5, VMware services like DRS, iSCSI, vMotion and provisioning leveraged a single gateway. This has been an impediment, as one needed to add static routes on all hosts to get around the problem. Managing these routes could be a cumbersome process and does not scale.

vSphere 6.5 provides capabilities where different services use different default gateways. This makes it easy for end users to consume these features without the need to add static routes. vSphere 6.5 completely eliminates the need for static routes for all VMkernel-based services, making it simpler and more scalable.
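As a sketch of how this looks from the command line: the interface name and addresses are hypothetical, and the --gateway option on this esxcli command is my assumption of the vSphere 6.5 syntax – verify it against your build. The command is only echoed here, not executed.

```shell
# Hypothetical vmk interface and addresses; the --gateway flag is an
# assumed 6.5 addition to this esxcli command - verify before use.
CMD="esxcli network ip interface ipv4 set --interface-name=vmk2 --type=static --ipv4=192.168.20.10 --netmask=255.255.255.0 --gateway=192.168.20.1"
# Dry run: print the command instead of executing it.
echo "$CMD"
```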

2. SR-IOV provisioning:

Prior to vSphere 6.5, the VM provisioning workflow for SR-IOV devices required the user to manually assign the SR-IOV NIC. This made VM provisioning operations inflexible and not amenable to automation at scale. In vSphere 6.5, SR-IOV devices can be added to virtual machines like any other device, making them easier to manage and automate.

3. Support for ERSPAN:

ERSPAN mirrors traffic on one or more “source” ports and delivers the mirrored traffic to one or more “destination” ports on another switch. vSphere 6.5 includes support for the ERSPAN protocol.

4. Improvements in the data path:

vSphere 6.5 has data path improvements to handle heavy load. In order to process large numbers of packets, the CPU needs to perform optimally; in 6.5, ESXi hosts leverage CPU resources so as to maximize the packet rate of VMs.

Where are the improvements being made?

VMXNET 3 optimization

Using copy TX for small message sizes (<= 256 B)

Optimized usage of pinned memory

Physical NIC improvements

Native driver support for Intel cards (removes overhead of translating from VMkernel to VMKLinux data structures)