
Introduction

In this article we will look at the alpha version of Microsoft PowerShell v6 for both Linux and Microsoft Windows. We will show how to execute PowerShell commands between Linux, Windows, and VMware vRealize Orchestrator (vRO):

Linux to Windows

Windows to Linux

Linux to Linux

vRO to Linux

We will also show how to add a Linux PowerShell (PSHost) in vRO.

Currently, the alpha version of PowerShell v6 does not support the PSCredential object, so we cannot use the Invoke-Command cmdlet to programmatically pass credentials and execute commands from vRO, through a Linux PSHost, to other Linux or Windows machines. Conversely, we cannot execute from vRO, through a Windows PSHost, to Linux machines.

In addition to not supporting the PSCredential object, the alpha version doesn’t support WinRM. WinRM is Microsoft’s implementation of the WS-Management protocol, a standard Simple Object Access Protocol (SOAP)-based, firewall-friendly protocol that enables hardware and operating systems from different vendors to interoperate. Therefore, when adding a Linux machine as a PowerShell host in vRO, we will be using SSH instead of WinRM as the protocol of choice.

The PowerShell v6 RTM version is expected to support WinRM, so we will be able to add the Linux PSHost with WinRM, and not SSH.

You should now see the certificate shown below. The common name can differ, but if you compare the thumbprints, it should match the private key entry in your keystore.

I hope this post was valuable in helping you learn how to change the Package Signing Certificate in a vRealize Orchestrator appliance. Stay tuned for my next post!

Spas Kaloferov is an acting Solutions Architect member of Professional Services Engineering (PSE) for the Software-Defined Datacenter (SDDC) – a part of the Global Technical & Professional Solutions (GTPS) team. Prior to VMware, Kaloferov focused on cloud computing solutions.

Background and General Considerations

In this post we will take a look at some common issues one might experience when using the VMware vRealize Orchestrator (vRO) PowerShell Plug-In, especially when using the HTTPS protocol or Kerberos authentication for the PowerShell Host (PSHost).

Most use cases require that the PowerShell script run with some kind of administrator-level permissions in the target system that vRO integrates with. Here are some of them:

Run a PowerShell script (.ps1) file from within another PowerShell script invoked from vRO.

Access mapped network drives from vRO.

Interact with Windows operating systems that have User Access Control (UAC) enabled.

Execute PowerCLI commands.

Integrate with Azure.

When you add a PowerShell Host, you must specify a user account. That account will be used to execute all PowerShell scripts from vRO. In most use cases, like the ones above, that account must be an administrator account in the corresponding target system the script interacts with. In most cases, this is a domain-level account.

In order to successfully add the PowerShell Host to that account—and use that account when executing scripts from vRO—some prerequisites need to be met. In addition, the use cases mentioned require the PowerShell Host to be prepared for credential delegation (AKA Credential Security Service Provider [CredSSP], double-hop authentication or multi-hop authentication).

After preparing the PSHost, test it to make sure it accepts the execution of remote PowerShell commands.

Start by testing simple commands. I like to use the $env:computername PowerShell command that returns the hostname of the PSHost. You can use the winrs command in Windows for the test. Here’s an example of the syntax:
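A minimal sketch of such a test (the hostname and account are placeholders, not values from the original post):

```shell
winrs -r:pshost.vmware.com -u:VMWARE\svc-vro -p:VMw@re1! powershell.exe -command "$env:computername"
```

If WinRM on the PSHost is configured for HTTPS, point winrs at the HTTPS listener instead, for example -r:https://pshost.vmware.com:5986.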

Continue by testing a command that requires credential delegation. I like to use a simple command, like dir \\<Server_FQDN>\<sharename>, that accesses a share residing on a computer other than the PSHost itself. Here’s an example of the syntax:
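A minimal sketch of such a test (hostnames and account are placeholders); the -allowdelegate switch tells winrs to allow the connection’s credentials to be delegated, which is what lets the PSHost reach the remote share:

```shell
winrs -r:pshost.vmware.com -allowdelegate -u:VMWARE\svc-vro -p:VMw@re1! powershell.exe -command "dir \\fileserver.vmware.com\builds"
```

If this fails with an access-denied error while the simple test succeeds, credential delegation (CredSSP) is usually what is misconfigured.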

If you are planning to add multiple PSHosts and are using domain-level accounts for each PSHost that are from different domains (e.g., vmware.com and support.vmware.com) you need to take this into consideration when preparing vRO for Kerberos authentication.

Note: In order to add the PSHost, the user must be a local administrator on the PSHost.

If you still cannot add the host, make sure your VMware appliance can authenticate successfully using Kerberos against the domains you’ve configured. To do this you can use the ldapsearch command and test Kerberos connectivity to the domain.
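A minimal sketch of such a test from the appliance shell (the realm, domain controller name, and account are placeholders): first obtain a Kerberos ticket with kinit, then bind to the DC using the GSSAPI (Kerberos) mechanism:

```shell
# Obtain a Kerberos ticket for the service account
kinit svc-vro@VMWARE.COM

# Bind to the domain controller via Kerberos and look up the account
ldapsearch -H ldap://dc01.vmware.com -Y GSSAPI -b "dc=vmware,dc=com" "(sAMAccountName=svc-vro)" dn
```

If the bind succeeds and the entry is returned, Kerberos authentication against that domain works from the appliance.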

If your authentication problems continue, most likely there is a general authentication problem that might not be directly connected to the vRO appliance, such as:

A network related issue

Blocked firewall ports

DNS resolution problems

Unresponsive domain controllers

Troubleshooting Issues when Executing Scripts

Once you’ve successfully added the PSHost, it’s time to test PowerShell execution from vRO.

To resolve the most common issues when executing PowerShell scripts from vRO, follow these steps:

While in vRO, go to the Inventory tab and make sure you don’t see the word “unusable” in front of the PSHost name. If you do, remove the PSHost and add it to vRO again.

Use the Invoke an external script workflow that is shipped with vRO to test PowerShell execution commands. Again, start with a simple command, like $env:computername.

Then, proceed with a command that requires credential delegation. Again, as before, you can use a command like dir \\<Server_FQDN>\<sharename>.

Note: This command doesn’t support credential delegation, so a slight workaround is needed to achieve this functionality: wrap the command you want to execute in an Invoke-Command call.

If you try to execute a command that requires credential delegation without using a workaround, you will receive an error similar to the following:

PowerShellInvocationError: Errors found while executing script <script>: Access is denied
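A sketch of the Invoke-Command workaround described above, with hypothetical host, share, and account names; the command that needs delegated credentials runs inside a script block with explicit credentials and CredSSP authentication:

```powershell
# Build a PSCredential for the account allowed to reach the remote share
# (account name and password are placeholders)
$pass = ConvertTo-SecureString "VMw@re1!" -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential ("VMWARE\svc-vro", $pass)

# Re-enter the PSHost with CredSSP so the inner command can hop to the file server
Invoke-Command -ComputerName pshost.vmware.com -Authentication CredSSP -Credential $cred -ScriptBlock {
    dir \\fileserver.vmware.com\builds
}
```

The key design point is that the inner command now runs in a session that carries explicit credentials, so the second hop to the file server is authenticated.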

Use the SilentlyContinue PowerShell error action preference to suppress output from “noisy” commands. Such commands are those that generate some kind of non-standard output, like:

Progress bars showing the progress of the command execution

Hashes and other similar content
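For example, a file copy that would normally render a progress bar can be quieted like this (paths are placeholders):

```powershell
# Suppress progress-bar output for all cmdlets in this session
$ProgressPreference = 'SilentlyContinue'

# Suppress non-terminating error output from a single command
Copy-Item \\fileserver.vmware.com\builds\app.zip C:\Temp -ErrorAction SilentlyContinue
```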

Finally, avoid using code in your commands or scripts that might generate popup messages, open other windows, or open other graphical user interfaces.


In this post we will demonstrate how to configure a highly available (HA) LDAP server to use with the VMware vRealize Orchestrator (vRO) Active Directory Plug-in. We will accomplish this task using F5 BIG-IP, which can also be used to achieve LDAP load balancing.

The Problem

The Configure Active Directory Server workflow, part of the vRO Active Directory Plug-in, allows you to configure a single Active Directory (AD) host via IP or URL. For example:

Q: What if we want to connect to multiple AD domain controller (DC) servers to achieve high availability?
A: One way is to create additional DNS records for those servers with the same name, and use that name when running the workflow to add the AD server. DNS will then return any of the given AD servers in round-robin fashion.
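The round-robin approach can be sketched as two A records sharing one name (the zone and addresses here are hypothetical):

```
dc.vmware.com.    IN  A  10.0.10.11
dc.vmware.com.    IN  A  10.0.10.12
```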

Q: Will this prevent me from hitting a DC server that is down or unreachable?
A: No, no health checks are performed to determine whether a server is down.

Q: How can I implement a health-checking mechanism to determine whether a given AD domain controller is down, so that it is not returned to vRO?
A: By using an F5 BIG-IP Virtual Server configured for LDAP requests.

Q: How can I configure that in F5?
A: This is covered in the next chapter.

The Solution

We can configure an F5 BIG-IP device to listen for and satisfy LDAP requests in the same way we configured it for vIDM in an earlier post.

Additional resources


One of the worst things you can do is buy a great product like VMware NSX Manager and not use its vast number of capabilities. If you are one of those people and want to “do better”, then this article is for you. We will take a look at how to configure the SSL VPN-Plus functionality in VMware NSX. With SSL VPN-Plus, remote users can connect securely to private networks behind an NSX Edge gateway, and by doing so can access servers and applications in those private networks.

Consider a software development company that has made a design decision and is planning to extend its existing network infrastructure to allow remote users access to some segments of its internal network. To accomplish this, the company will utilize its existing VMware NSX network infrastructure platform to create a Virtual Private Network (VPN).

The company has identified the following requirements for their VPN implementation:

The VPN solution should utilize an SSL certificate for communication encryption and be usable with a standard web browser.

The VPN solution should use Windows Active Directory (AD) as the identity source to authenticate users.

Only users within a given AD organizational unit (OU) should be granted access to the VPN.

Users should utilize User Principal Names (UPNs) to authenticate to the VPN.

Only users who have accounts with specific characteristics, like those having an Employee ID associated with their account, should be able to authenticate to the VPN.

Configuring SSL VPN-Plus is a straightforward process, but fine-tuning its configuration to meet your needs can sometimes be a bit tricky, especially when configuring Active Directory for authentication. We will look at a couple of examples of how to use the Login Attribute Name and Search Filter parameters to fine-tune and filter the users who should be granted VPN access.
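As an illustration of that kind of fine-tuning (the attribute names follow the standard AD schema, but the exact values depend on your directory): setting Login Attribute Name to userPrincipalName lets users authenticate with their UPN, and a Search Filter such as the one below matches only user objects that have an Employee ID populated:

```
(&(objectClass=user)(employeeID=*))
```

Restricting access to a given OU is then typically done by pointing the search base at that OU.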


The increasingly global nature of content and migration of multimedia content distribution from typical broadcast channels to the Internet make Geo-Location a requirement for enforcing access restrictions. It also provides the basis for traditional performance-enhancing and disaster recovery solutions.

Also of rising importance is cloud computing, which introduces new challenges to IT in terms of global load balancing configurations. Hybrid architectures that attempt to seamlessly use public and private cloud implementations for scalability, disaster recovery and availability purposes can leverage accurate Geo-Location data to enable a broader spectrum of functionality and options.

Geo-Location improves the performance and availability of your applications by intelligently directing users to the closest or best-performing server running that application, whether it be physical, virtual or in a cloud environment.

VMware vRealize Automation Center (vRA) will be one of the products in this Proof of Concept (PoC) for which use cases for load balancing and Geo-Location traffic management will be presented. This PoC can be used as a test environment for any other product that supports F5 BIG-IP Local Traffic Manager (LTM) and F5 BIG-IP Global Traffic Manager (GTM). After completing this PoC you should have the lab environment needed and feel comfortable enough to set up more advanced configurations on your own, according to your business needs and functional requirements.

One of the typical scenarios involving Geo-Location-based traffic management is the ability to achieve traffic redirection based on the source of the DNS query.

Consider a software development company that is planning to implement vRealize Automation Center to provide private cloud access to its employees where they can develop and test their applications. Later in this article I sometimes refer to the globally available vRA private cloud application as GeoApp. Our GeoApp must provide access to the company’s private cloud infrastructure from multiple cities across the globe.

The company has data centers in two locations: Los Angeles (LA) and New York (NY). Each data center will host instance(s) of the GeoApp (vRealize Automation Center). Development (DEV) and Quality Engineering (QE) teams from both locations will access the GeoApp and use it to develop and test their homegrown software products.

Use Case 1

The company has made design decisions and is planning to implement the following to lay down the foundations for their private cloud infrastructure:

Deploy two GeoApp instances using vRealize Automation Center minimal setup in the LA data center for use by Los Angeles employees.

Deploy two GeoApp instances using vRealize Automation Center minimal setup in the NY data center for use by New York employees.

The company has identified the following requirements for their GeoApp implementation:

The GeoApp must be accessible to all the employees, regardless of whether they are in the Los Angeles or New York data center, under the single common URL geoapp.f5.vmware.com.

To ensure the employees get a responsive experience from the GeoApp (vRA) private cloud portal website, the company requires that LA employees be redirected to the Los Angeles data center and NY employees be redirected to the New York data center.

The workload of the teams must be distributed across their dedicated local GeoApp (vRA) instances.

This is roughly represented by the diagram below:

In case of a failure of a GeoApp instance, the traffic should be load balanced between available instances in the local data center.

This is roughly represented by the diagram below:

Use Case 2

The company has made a design decision and is planning to implement the following to lay down the foundations for their private cloud infrastructure:

Deploy 1x GeoApp instance using VMware vRealize Automation Center (vRA) distributed setup in the Los Angeles datacenter for use by the LA employees. In this case the GeoApp can be seen as a 3-Tier application, containing 2 GeoApp nodes in each tier.

Deploy 1x GeoApp instance using VMware vRealize Automation Center (vRA) distributed setup in the New York datacenter for use by the NY employees. In this case the GeoApp can be seen as a 3-Tier application, containing 2 GeoApp nodes in each tier.

The company has identified the following requirements for their GeoApp implementation:

The GeoApp must be accessible to all the employees, regardless of whether they are in the Los Angeles or the New York datacenter, under a single common URL, geoapp-uc2.f5.vmware.com.

To ensure that the employees get a responsive experience from the GeoApp (vRA) private cloud portal website, the company requires that the Los Angeles employees be redirected to the Los Angeles datacenter and the New York employees be redirected to the New York datacenter.

The workload must be distributed across the Tier nodes of the local GeoApp (vRA) instance.

This is roughly represented by the diagram below:

In case of failure of a single Tier Node in a given GeoApp Tier, the workload should be forwarded to the remaining Tier Node in the local datacenter.

This is roughly represented by the diagram below:

In case of failure of all Tier Nodes in a given GeoApp Tier, the workload of all tiers should be forwarded to the GeoApp instance in the remote datacenter.

This is roughly represented by the diagram below:

Satisfying these requirements involves the implementation of two computing techniques:

Load balancing

Geo-Location-based traffic management

There are other software and hardware products that provide load balancing and/or Geo-Location capabilities, but we will be focusing on two of them to accomplish our goal:

For load balancing: F5 BIG-IP Local Traffic Manager (LTM)

For Geo-Location: F5 BIG-IP Global Traffic Manager (GTM)

Based on which deployment method you choose and what functional requirements you have, you will then need to configure the following aspects of the F5 BIG-IP devices that will manage your traffic:


di·ver·si·ty

“Diversity” was the first word that came to my mind when I joined VMware. I noticed the wide variety of different methods and processes utilized to replace certificates on the different VMware appliance products. For example, with VMware vRealize Orchestrator, users must undergo a manual process to replace the certificate, but with VMware vRealize Automation, administrators have a graphical user interface (GUI) option, and with VMware NSX Manager there is yet another, completely different GUI option to request and change the certificate of the product.

Figure 1. SSL Certificates tab in VMware NSX Manager

This variety of certificate replacement methods and techniques is understandable as all of these VMware products are a result of different acquisitions. Although these products are great in their own unique ways, the lack of a common, smooth and user-friendly certificate replacement methodology has always filled the administrators and consultants with anxiety.

This anxiety often leads to certificate configuration issues among the majority of VMware family members, partners and end users. As a member of this family—and also of the majority—I recently felt this anxiety when I had to replace my VMware NSX Manager and NSX Edge certificates.

pas·sion

I must say that up to the point where I had to replace these certificates, I had pretty awesome experiences installing and configuring VMware NSX Manager, and even developed advanced services like network load balancing. But I hit a minor roadblock with the certificates, and my passion to kick down any road block until it turns to dust wasn’t going to leave me alone.

ex·e·cu·tion

I got in touch with some of my awesome colleagues and NSX experts to get me back on the good experience track of NSX. As expected, they did (not that I have ever doubted them). Now, I was exploring the advanced VMware NSX Manager capabilities with full power – like SSL VPN-Plus where I had to again configure a certificate for my perimeter gateway edge device.

This time I wasn’t anxious because I now had the certificate replacement process under control.

cus·to·mer

As our customers are core to our mission, we want to empower them by freeing them from certificate replacement challenges so they can spend their time and energy on more pressing technological issues. To help empower other passionate enthusiasts, and help keep them on the good experience track of NSX, I’ve decided to describe the certificate replacement processes I’ve been using and share them in a blog post to make them available to everyone.

com·mu·ni·ty

We are all connected. We approach each other with open minds and humble hearts. We serve by dedicating our time, talent, and energy – creating a thriving community together. Please visit Managing NSX Edge and Manager Certificates to learn more about the certificate replacement process.
