
Tag: load balancing

In one of my previous articles I wrote about how the NSX upgrade to 6.2.4 impacts PowerCLI, as it disables TLS 1.0 ciphers on the Edge Load Balancer. The fix for PowerCLI was easy, but what if there are other applications still using TLS 1.0 that cannot be fixed or updated?

An example is vSphere Replication 6.1.1, which does not support TLS 1.2.

There is a workaround: it is possible to create an application rule that explicitly re-enables TLS 1.0. The rule syntax is:

tlsv1 enable

Once the rule is created, it can be added in the Advanced configuration of the Virtual Server.


As of NSX 6.2.3, TLS 1.0 support is deprecated on Edge Services Gateways. So if you are using the load balancer with SSL offload, TLS 1.0 ciphers are no longer supported and clients that rely on them will not work anymore.

The supported ciphers can easily be checked with nmap. Here is the nmap output for a website behind an NSX Edge 6.2.2 and a 6.2.4 load balancer:

[Screenshot: NSX 6.2.2 with TLS 1.0]
[Screenshot: NSX 6.2.4 without TLS 1.0]
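The scan itself can be run with nmap's ssl-enum-ciphers script; the host name below is illustrative, not from the original screenshots:

```shell
# enumerate TLS protocol versions and ciphers offered on port 443
# (replace vcloud.example.com with the VIP behind the Edge)
nmap --script ssl-enum-ciphers -p 443 vcloud.example.com
```

On a 6.2.4 Edge the TLSv1.0 section should simply be missing from the script output.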

In my case PowerCLI stopped working and could no longer connect to the vCloud Director endpoint behind the Edge load balancer. The error was not very descriptive: "The underlying connection was closed: An unexpected error occurred on a send."


I got an interesting question from a colleague: can the vCloud Director portal be accessed over IPv6? I suspected the answer was yes, so I had a little bit of fun and ran a quick test.

With an NSX load balancer in front of my two VCD cells, I created IPv6 VIPs for HTTP, HTTPS and VMware Remote Console (TCP 443) traffic and reused the existing IPv4 pools. I also added these IPv6 addresses to my DNS servers so that name resolution and certificates would work, and I was ready to test.

As I terminate the SSL session on the load balancer and insert the client IP into the request with the Insert X-Forwarded-For HTTP header option, I could observe both IPv6 and IPv4 clients in the vCloud Director logs.
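A quick way to confirm the IPv6 path end to end is to force curl over IPv6 (the host name is illustrative):

```shell
# -6 forces IPv6, -k skips certificate validation, -I fetches headers only
curl -6 -k -I https://vcloud.example.com/cloud/
```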


One of the deployment options for vCenter Single Sign-On 5.5 (SSO) is the high availability mode. It usually consists of two load-balanced SSO nodes deployed in a single-site configuration. It is quite complex to set up and manage, therefore I usually advise customers to avoid such a configuration and instead co-deploy SSO together with vCenter Server on the same virtual machine – this results in the same availability of the vCenter service and SSO.

However, there are cases when you cannot do this and need to deploy a highly available SSO instance. One is when you want to have multiple vCenter Servers in the same SSO domain with single-pane-of-glass Web Client management. Another is a vRealize Automation (vRA) deployment, which also requires SSO.

VMware has published two whitepapers on the topic. The first, VMware vCenter Server 5.5 Deploying a Centralized VMware vCenter Single Sign-On Server with a Network Load Balancer, unfortunately adds unnecessary complexity to the whole process. The paper also describes active-active load balancing of the nodes, which is an unsupported configuration (see here). While active-active load balancing might work with vCenter Server services, it does not work with vRealize Automation (vCAC). This is due to the tokens used for solution authentication – WS-Trust tokens are stateless but WebSSO tokens are not. Also, from what I have heard, vSphere 6 will not work in an active-active configuration at all.

The second whitepaper, Using VMware vCenter SSO 5.5 with VMware vCloud Automation Center 6.1, is more recent, and while you see vCAC/vRA in its title, it still very much applies to pure vSphere environments as well (just skip the vRA-specific chapters); it is the one I would recommend. It also describes an active-passive configuration of the F5 load balancer.

The topic of this article is, however, the use of the NSX load balancer instead of F5. Contrary to the vCNS load balancer, NSX can be configured in active-passive mode, so you can create a supported HA SSO configuration with pure VMware solutions.

I will not go too deep into the SSO-specific parts of the HA setup (did I mention it is complex?) as they are very well described in the second whitepaper mentioned above – instead I will focus on the NSX part of the configuration.

The architecture is as follows: two SSO nodes with a dedicated NSX load balancer in proxy (on-a-stick) mode. This means the LB is not inline with the traffic; instead it has only one interface and SNATs and DNATs the traffic to the nodes. While an inline transparent-mode configuration is also possible, I believe the on-a-stick configuration is simpler and provides better resiliency (a dedicated LB appliance for each application).

Here are the steps for NSX load balancer configuration:

1. Deploy an Edge Services Gateway for the load balancer with one interface, preferably in the same subnet as the SSO nodes.

2. Enable the Load Balancer feature.

3. Upload the CA certificate and the SSO certificate. See the second whitepaper on how to create the SSO certificate.

4. Configure service monitoring. While you could use the default TCP health check, I prefer a custom HTTPS type health check which monitors the /lookupservice URL.

5. Create an Application Profile. During the SSO node configuration, before the custom certificates are exchanged on each node, you would use a simple TCP profile or perhaps an SSL passthrough profile (as the SSL certificate configured in NSX would not match the self-signed certificates on the nodes). Another alternative is to edit /etc/hosts on each SSO node so that the VIP hostname points to the node itself (this is described in the first whitepaper). Once you replace the certificates on the nodes, you can use SSL termination on the load balancer: configure the VIP certificate and the Pool Side certificate, and also enable Insert X-Forwarded-For HTTP header so that in theory we would see where the authentication request is coming from (unfortunately the SSO access log does not display this information).

6. Create an Application Rule. Here we define the logic that performs the active-passive load balancing. Each SSO node will be in a separate pool, with the primary node's pool set as the default. An ACL rule checks whether the primary node is up; if it is not, we switch the backend to the secondary node's pool. The pool names must match the ones we will create in the next step.

# detect if pool "SSO_primary" is still UP
acl SSO_primary_down nbsrv(SSO_primary) eq 0
# use pool "SSO_secondary" if "SSO_primary" is dead
use_backend SSO_secondary if SSO_primary_down

7. Create the SSO_primary and SSO_secondary pools. Each will contain one SSO node with the health check from step 4 and port 7444. Notice that I have defined the pool member as the vCenter VM container object, so NSX retrieves its IP address dynamically via VMware Tools. While I could hardcode the node IP address, this is a nice showcase of the NSX - vCenter integration. In inline mode you would also check the Transparent checkbox for each pool.

8. Create the Virtual Server. Select the Application Profile from step 5, the Default Pool from step 7 and, in the Advanced tab, the Application Rule from step 6. For the VIP I used the LB default IP (from step 1) and HTTPS port 7444.

9. As a last step, do not forget to disable the firewall or create a firewall rule for the IP and port defined in the previous step.
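For reference, the custom health check from step 4 could look like this in the service monitor settings (the interval, timeout and retry values are illustrative, not taken from the text):

```
Type:        HTTPS
Interval:    5
Timeout:     15
Max Retries: 3
Method:      GET
URL:         /lookupservice
```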


About two years ago I wrote an article on how to use vShield Edge to load balance vCloud Director cells. I am revisiting the subject, this time with the NSX Edge load balancer. One of its main improvements is SSL termination.

Why SSL Termination?

All vCloud GUI and API requests are made over HTTPS. When vShield (vCNS) Edge was used for load balancing, it just passed the traffic through untouched. There was no way to inspect the request – the load balancer saw only the source and destination IP addresses and encrypted data. If we want to inspect the HTTP request, we need to terminate the SSL session on the load balancer and then create a new SSL session towards the cell pool.

This way we can filter URLs, modify headers or do even more advanced inspection. I will demonstrate how we can easily block portal access for a given organization, and how to add the X-Forwarded-For header so vCloud Director can log the actual end user's IP address and not only the load balancer's.

Basic Configuration

I am going to use exactly the same setup as in my vShield article: two vCloud Director cells (IP addresses 10.0.1.60-61 and 10.0.1.62-63) behind virtual IPs – 10.0.2.80 (portal/API) and 10.0.2.81 (VMRC).

While the NSX Edge load balancer is very similar to the vShield load balancer, the UI and the configuration workflow have changed quite a bit. I will only briefly describe the steps to set up basic load balancing:

Create Application Profiles for VCD HTTP (port 80), VCD HTTPS (port 443) and VCD VMRC (port 443). We will use the HTTP, HTTPS and TCP types respectively. For HTTPS we will, for now, enable SSL passthrough.

Now we should have load-balanced access to vCloud Director with functionality identical to the vShield Edge case.

Advanced Configuration

Now comes the fun part. To terminate the SSL session at the Edge, we need to create a vCloud HTTP SSL certificate and upload it to the load balancer. Note that it is not possible to terminate the VMRC proxy, as it is a plain socket SSL connection. As I have vCloud Director 5.5, I had to use a certificate identical to the one on the cells, otherwise catalog OVF/ISO uploads would fail with an SSL thumbprint mismatch (see KB 2070908 for more details). The actual private key, certificate signing request, and certificate creation and import were not straightforward, so I am listing the exact commands I used (do not create the CSR on the load balancer, as you would not be able to export the private key to later import it to the cells):
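The original command listing did not survive in this copy; a minimal sketch of the usual sequence follows. The file names and certificate subject are assumptions, not the author's original values:

```shell
# generate the private key outside the load balancer so it can be
# reused later in the cells' keystore
openssl genrsa -out http.key 2048

# create the certificate signing request for the portal/API host name
# (subject values here are placeholders)
openssl req -new -key http.key -out http.csr \
  -subj "/C=US/O=Example/CN=vcloud.example.com"

# submit http.csr to your CA; once the signed certificate (http.crt)
# is back, import http.key + http.crt into the NSX Edge and build the
# cells' keystore from the same pair so the thumbprints match
```

The key point is that the same key/certificate pair ends up both on the Edge and on the cells, which avoids the thumbprint mismatch described above.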

Now we will create a simple Application Rule that blocks vCloud portal access for the organization ACME (by filtering the /cloud/org/acme URL).

acl block_ACME path_beg -i /cloud/org/acme
block if block_ACME

Now we will change the previously created VCD_HTTPS Application Profile: disable SSL Passthrough, check Insert X-Forwarded-For HTTP header (which passes the original client IP address to vCloud Director) and enable Pool Side SSL. Select the previously imported Service Certificate.

And finally we will assign the Application Rule to the VCD_HTTPS Virtual Server.

Now we can test: we should be able to access the vCloud Director portal and see the new certificate, we should not be able to access the portal of the ACME organization, and we should see both the client and the proxy IP addresses in the logs.
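A quick check of the blocking rule, assuming an illustrative host name; HAProxy's block action answers with HTTP 403:

```shell
# the application rule should reject this with HTTP 403 Forbidden
curl -k -I https://vcloud.example.com/cloud/org/acme
```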