Download the required drivers from the latest esxi-650-* folder hierarchy of the HPE depot: http://vibsdepot.hpe.com/hpe/nov2016/ (alternatively you could connect to the depot online, but here we have to build the image without a proper internet connection).

The “esxi-650-devicedrivers” folder contains the right offline bundles for the drivers. Pick the ones you need for your hardware. If you have no idea how to find out which drivers are required, play around a little bit with the “esxcfg-*” commands on the ESXi Shell. List your network and storage adapters on an existing ESXi, ideally one installed with the vendor image, and note down which drivers are used.

The “esxi-650-bundles” folder contains all additional agents and tooling. Just download the hpe-esxi6.5uX-bundle-* file, as it contains the hpe-smx-provider CIM provider integration you need for proper hardware monitoring. Some of the drivers are zipped twice. Just extract the first layer so that you end up with the offline bundle: the second zip file should not contain any further *.zip files, but *.vib files or a vib20 folder.
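The double-zip check above can be automated. A minimal sketch in Python that classifies a downloaded archive by looking at its contents (the heuristics, not any HPE tooling, are my own):

```python
import zipfile

def classify_zip(path):
    """Return 'wrapper' if the zip only wraps another *.zip (a
    double-zipped download: extract this layer first), 'bundle' if it
    looks like an offline bundle (*.vib files or a vib20/ folder),
    else 'unknown'."""
    with zipfile.ZipFile(path) as zf:
        names = zf.namelist()
    if any(n.lower().endswith(".zip") for n in names):
        return "wrapper"   # extract this first layer, then classify again
    if any(n.lower().endswith(".vib") or "vib20/" in n for n in names):
        return "bundle"    # usable with Image Builder / esxcli
    return "unknown"
```

Running this over the whole download folder quickly shows which archives still need one more extraction step.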

This list also contains drivers that are removed here but later added back from the HPE depot in a newer version.

Attention: Not all packages can be removed in the listed order, as there are dependencies between them. If the CLI does not allow you to remove a package because it is required by another package, just remove the other one first and try again.
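Instead of retrying, you can compute a safe removal order up front once you have noted the dependencies. A hypothetical sketch (the package names and dependency map are made up for illustration):

```python
def removal_order(packages, requires):
    """Order packages so that a package is only removed after every
    package that requires it. `requires` maps a package name to the
    set of packages it depends on."""
    # Reverse the edges: which installed packages depend on me?
    dependents = {p: set() for p in packages}
    for pkg in packages:
        for dep in requires.get(pkg, ()):
            if dep in dependents:
                dependents[dep].add(pkg)
    order = []
    remaining = set(packages)
    while remaining:
        # Removable now: no still-installed package depends on it.
        ready = [p for p in remaining if not (dependents[p] & remaining)]
        if not ready:
            raise ValueError("circular dependency between: %s" % remaining)
        for p in sorted(ready):
            order.append(p)
            remaining.discard(p)
    return order
```

For example, if a (hypothetical) hpe-ams agent requires hpe-smx-provider, the function lists hpe-ams before hpe-smx-provider, matching the order the CLI would force on you anyway.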

First part of the series. As mentioned in the overview, VMware provides what is now called the “Security Configuration Guide”, but it does not really cover the first hands-on step of elaborating a hardened hypervisor approach. Everything starts with the image we pick – it is the foundation of security. Imagine you are designing a bank vault for storing all the money. The holy grail – the money – is stored in the basement, and the entrance of the building above ground is highly secured by policemen at the doors and windows, but the basement has several holes for cooling, wastewater, etc., which are not secured at all. That is not what we want for the hypervisor. So what are the possible holes in our ESXi image?

These are services listening on the ESXi to provide data to vCenter or other management services. Some may be wanted, others not. Ok, fair enough, nice to know, but how does this relate to the ESXi image? My main focus is to strip down the ESXi image as far as possible, guaranteeing functionality while not offering a large attack surface. If we can remove unneeded services listening on any port, we reduce the attack surface, so an attacker has fewer possibilities to find a weakness in the system. But before removing anything, we need something to remove things from. Picking the right base image is key. So what choices do we have for a base image:
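To get an impression of what is actually exposed, you can probe the host's ports from a management station. A minimal sketch (the hostname and the selection of ports are illustrative, not a complete ESXi port list):

```python
import socket

def open_ports(host, ports, timeout=1.0):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    result = []
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                result.append(port)
        except OSError:
            pass  # closed, filtered, or unreachable
    return result

# Typical ESXi management ports to probe (illustrative selection):
# 22 = SSH, 80/443 = Host Client, 902 = vCenter agent, 5989 = CIM/SFCB
# print(open_ports("esxi01.example.com", [22, 80, 443, 902, 5989]))
```

Comparing the result before and after stripping the image shows directly which listeners you got rid of.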

VMware ESXi vanilla image This is offered only on the VMware website. It has no relation to a specific hardware vendor. The integrated driver set is capable of supporting most hardware on the HCL. It does not contain any OEM agents or services.

OEM ESXi image For most vendors this is also offered on the VMware website and is marked as a vendor-specific image. This image was built on top of the VMware vanilla image. Additional vendor-specific agents, drivers and tools were added to support all the hardware the vendor has certified for the hypervisor version it was built for, to allow remote management of the hardware by vendor management tools, and to run firmware updates for the underlying hardware from the hypervisor.

It should now be very clear what the candidates for removal are:

OEM management agents Don’t trust any of these agents. Many of them have caused PSODs for my customers and often expose badly secured services to the outside. But be aware that many of these agents are bundled with the CIM integrations provided by the vendor. CIM provider integrations are something we want to keep in the image so as not to lose track of hardware outages. The vendor integrations are mostly much more powerful than what VMware provides via generic interfaces.

Drivers in general (optional) Drivers, regardless of whether they were provided by VMware or the OEM, are not really a security concern, as they are only used if a matching device is present. I like to remove the unneeded ones anyway to keep the image as clean as possible. Most customers have a static bill of material for their hardware, so it is very easy to pick the required drivers and strip out the remaining ones.
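With a static bill of material, working out which VIBs to strip is essentially a set difference against the installed list. A sketch that parses output shaped like `esxcli software vib list` (the column layout and all driver names are assumptions for illustration):

```python
def drivers_to_remove(vib_list_output, required_drivers):
    """Parse tabular VIB-list output (header line, separator line, then
    one 'Name Version Vendor ...' row per VIB) and return the VIB names
    that are not on the hardware bill of material."""
    required = set(required_drivers)
    remove = []
    for line in vib_list_output.strip().splitlines()[2:]:  # skip header rows
        name = line.split()[0]
        if name not in required:
            remove.append(name)
    return sorted(remove)
```

Feed it the required-driver list from your bill of material and you get the removal candidates for Image Builder.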

OEM tooling A lot of hardware vendors provide extra tooling, for example for running firmware upgrades from the ESXi Shell or for reading configuration out of the BMC boards or BIOS. This is nice, but really unwanted. Just as I don’t want to provide capabilities that bridge the isolation between hypervisor and VMs, I don’t want to do the same between hypervisor and hardware.

Unwanted functionality This is the most complicated part of the hardening: choosing which default functionality, i.e. everything not built into the kernel, can be removed. Good candidates are GUIs, like the HTML5 GUI, or the USB 3 drivers.

That should be all for now. A good question comes up: how do I remove all these drivers/agents/tools/functionality from my image? I prefer the VMware Image Builder CLI based on PowerCLI. With 6.5 you also have the option to use a Web Client GUI for it as part of the Auto Deploy feature.

However you alter your image, please do yourself the favor and document it!

To get an idea of what specific steps look like in reality, please check the example for HPE hardware linked below:

Working as an architect in the VMware space, you will sooner or later come across the VMware Validated Designs (VVD). Just a few weeks ago the latest version, 4.0, was released with adjustments for vSphere 6.5. It can be found here:

The designs are a great source for building your own architectures or building architectures for customers. The incorporated component architectures are natively built for availability, reliability and scalability. These are exactly the main goals I try to put into the designs I create for customers. The VVDs demonstrate good practice for a detailed setup that can be used for several use cases, like Private Cloud or VDI deployments. VMware Cloud Foundation also makes use of the VVDs for its implementations.

But apart from this, I also like to treat them as a framework which gives me the chance to keep the setup supported by VMware while adjusting it to the customer’s requirements, making it fit like a second skin.

Across their history they have mainly relied on a two-region concept with one primary and one fail-over region. This is a quite common architecture for U.S. setups. In the European space, and especially in Germany, customers often stick to existing architectures based on a two-datacenter setup working as an active/active pair. Whether you see this as a two-region setup, or aggregate it into one region like me, is up to you. I prefer one region because the datacenters are a short distance apart due to their synchronous replication/mirroring, and so they form a logical domain because of their active/active style. This is why I split the region into two physical availability zones (an AWS term) and one virtual one across the two datacenters. This does not need to be understood now; it will get clearer in a later chapter.

In my understanding the VVD framework needs some extension with regard to Stretched Clusters, and this is why I would like to set up a series which guides you through a forked version of the VVDs that I personally use for customer designs: