VMware App Volumes has a fully featured REST API that can be used to automate any function available in the App Volumes Manager. A REST API uses standard HTTP verbs to retrieve or set data.

The input parameters posted to the REST API and the data it returns are formatted as JSON. Note: A user must be defined as an App Volumes Administrator in order to use the API.

To set and retrieve data via the API you need a client capable of sending and receiving data using HTTP / URL syntax. Many are available for free; I will focus on cURL and Postman in this article. I have also written a few applications in VB.NET using the App Volumes API, and I will post some sample .NET code in a future post. The API isn't really documented, so I decided to put this post together to help others who may want to program against it.

Clients for REST API

As mentioned, there are several ways to connect and retrieve data from the App Volumes API. Here are a couple of the most popular:

cURL is a command line data transfer utility for protocols using the URL syntax. It can be downloaded for free for most operating systems. I will provide some sample syntax for cURL in the next section.

Postman is an app for the Chrome browser, and it is the client I prefer for quickly checking API syntax and viewing returned data. It is easy to use and returns nicely formatted JSON. Postman can be added to any instance of the Chrome browser and accessed from the Chrome App Launcher. You will also need to install the Postman Interceptor extension so that Postman can read and save browser-based cookies, which is where the session for the API connection is saved.

Connecting to the App Volumes API

We will now use the Postman and cURL clients to connect to App Volumes and create a session. App Volumes stores the session data in a cookie. Once we have that session cookie, we can reference it to pull other data from the API. We can log in one time, and as long as our session doesn't time out, we can continue to use the API.

cURL

With cURL we will establish a connection to the API and save the session cookie to a text file. We can then reference the cookie file whenever we want to perform additional actions against the API.
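As a rough illustration of what that login exchange looks like in code, here is a minimal Python sketch. The `/cv_api/sessions` path and the `username`/`password` form fields are my reading of the manager's login route, not official documentation, so treat them as assumptions:

```python
import http.cookiejar
import json
import urllib.parse
import urllib.request

def build_login_payload(username, password):
    # Form-encode the credentials, exactly what curl -d "username=...&password=..." sends.
    return urllib.parse.urlencode({"username": username, "password": password}).encode()

def login(manager_url, username, password):
    # POST the credentials to the (assumed) sessions endpoint; on success the
    # manager answers {"success":"Ok"} and sets the _session_id cookie, which
    # the cookie jar keeps for any later requests made through this opener.
    jar = http.cookiejar.CookieJar()
    opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))
    with opener.open(manager_url + "/cv_api/sessions",
                     data=build_login_payload(username, password)) as resp:
        return json.load(resp), opener  # reuse the opener for subsequent API calls
```

The returned `opener` plays the same role as the cookies.txt file in the cURL workflow.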

When you successfully log into the API you will receive the following JSON message:

{"success":"Ok"}

If you look at the cookie file that was created, you can see that it contains a _session_id value. This is the session cookie.

We can now run additional commands and reference the cookie we saved:

Return Writable Volumes and format the JSON to make it easier to read. Notice we are referencing the cookies.txt file we created when we logged in. Note: The | python -m json.tool is optional; it simply formats the returned JSON data to make it easier to read.
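If you prefer to do the pretty-printing in a script instead of piping through `python -m json.tool`, the same effect is two lines of Python. The response shape below is hypothetical, just to have something to format:

```python
import json

# Hypothetical response body; the real /cv_api writables payload will differ.
raw = '{"datastores":[{"name":"VMFS_1","free_mb":1024}]}'

def pretty(raw_json):
    # Same effect as piping curl output through `python -m json.tool`.
    return json.dumps(json.loads(raw_json), indent=4)

print(pretty(raw))
```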

Postman

In Postman, enter a key called username with the username of an App Volumes Administrator.

Enter a key called password with the password associated with this account.

Hit Send to make the connection – if successfully authenticated you will see "success": "Ok" in the body of the returned data.

You have successfully authenticated to the App Volumes API. If you click the cookies tab, you will notice you have been issued a session cookie. This will be passed back to the App Volumes manager API each time you attempt to connect. As long as the session is still valid, you won’t be prompted for Authentication.

If your session times out, you will receive a message similar to the one below. Simply post back to the API to establish another session, as we did earlier.

Issues Logging In

You may receive the following error messages when attempting to log in to the App Volumes API:

Invalid username or password – make sure the username (domain\username) and the password are correct and try again.

You must be in the Administrators group to Login

If you receive this message, the account that you are attempting to log in with is not a member of the group defined as App Volumes Administrators.

You can check what group is defined as the App Volumes Administrators group by logging into the App Volumes Manager and browsing to “Configuration” | “Administrators”. Make sure the account you are using to access the API is a member of this group and try connecting again.

Let’s start using the App Volumes API. I will break out the API calls as they are arranged in the App Volumes manager today. The same URL paths and verbs will work for either Postman or cURL. I will be using Postman in my examples as I find it easier to use.

General App Volumes API Calls

In this section we will cover API commands to retrieve general data from App Volumes.

Overview

VMware recently released the Access Point solution, a secure gateway for Horizon 6. The solution is deployed as a Linux-based virtual appliance with a REST API for querying and updating the appliance. It is deployed as an OVA, which can be done in a couple of ways: via vCenter with the "Deploy OVF Template" wizard, or with the VMware OVF Tool. The OVF Tool is the preferred method because all of the API parameters to configure View and certificates can be passed into it as a JSON string. The catch is that it is command-line based, and the input string can become very complicated and hard to construct properly. Below is a sample OVF Tool input string to deploy an Access Point appliance:

Sample OVF Tool Input String:

As you can see, it can be a very difficult string to construct properly. In addition, if you are updating the certificates for the appliance, the PEM files must be formatted into a single line string with appropriate embedded newline characters. This can all be very challenging and intimidating when attempting to deploy Access Point.

That is why I wrote this utility. It basically acts as a GUI wrapper for the OVF tool and will construct a proper OVF Tool input string including all of the JSON to be passed to the Access Point API.
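To give a feel for what the utility assembles behind the scenes, here is a minimal Python sketch of building an ovftool argument list. The property names (`ip0`, `viewSettingsJSON`, the `Internet` network label) are illustrative assumptions, not the appliance's exact keys:

```python
import json

def build_ovftool_args(settings):
    # Assemble an ovftool argument list. Every property key below is an
    # illustrative stand-in; consult the appliance documentation for real names.
    view_json = json.dumps({"destinationURL": settings["view_url"]})
    return [
        "--acceptAllEulas",
        "--name=" + settings["name"],
        "--datastore=" + settings["datastore"],
        "--net:Internet=" + settings["external_network"],
        "--prop:ip0=" + settings["external_ip"],
        "--prop:viewSettingsJSON=" + view_json,  # JSON handed straight to the appliance API
        settings["ova_path"],
        settings["locator"],  # the vi:// locator string, built separately
    ]

args = build_ovftool_args({
    "name": "AP_PROD", "datastore": "VMFS_1", "external_network": "External",
    "external_ip": "192.168.1.50", "view_url": "https://192.168.1.30:443",
    "ova_path": "euc-access-point.ova",
    "locator": "vi://chris%40halstead.net@192.168.1.12:443/Home/host/192.168.1.11",
})
print(" ".join(args))
```

Concatenating a dozen of these by hand, with embedded JSON and escaped certificates, is exactly the error-prone part the GUI removes.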

VMware Access Point

Access Point has been covered in other articles. I am not going to cover much on Access Point in general other than what is below:

Access Point functions as a secure gateway for users who want to access Horizon 6 desktops and applications from outside the corporate firewall.

Access Point appliances typically reside within a DMZ and act as a proxy host for connections inside your company’s trusted network. This design provides an additional layer of security by shielding View virtual desktops, application hosts, and View Connection Server instances from the public-facing Internet. Access Point directs authentication requests to the appropriate server and discards any un-authenticated request. The only remote desktop and application traffic that can enter the corporate data center is traffic on behalf of a strongly authenticated user. Users can access only the resources that they are authorized to access.

Access Point appliances fulfill the same role that was previously played by View security servers, but Access Point provides additional benefits:

An Access Point appliance can be configured to point to either a View Connection Server instance or a load balancer that fronts a group of View Connection Server instances. This design means that you can combine remote and local traffic.

Configuration of Access Point is independent of View Connection Server instances. Unlike with security servers, no pairing password is required to pair each security server with a single View Connection Server instance.

Access Point appliances are deployed as hardened virtual appliances, which are based on a Linux appliance that has been customized to provide secure access. Extraneous modules have been removed to reduce potential threat access.

Access Point uses a standard HTTP(S) protocol for communication with View Connection Server. JMS, IPsec, and AJP13 are not used.

My colleague Mark Richards has also written a great article, with a very detailed overview of Access Point, along with how deployment works with the OVF Tool and using the API.

Access Point Deployment Utility

Pre-requisites:

How it works:

As mentioned earlier this utility is a wrapper for the VMware OVF Tool. It allows you to input settings in a GUI and it will create the properly formatted input string for you. It also allows settings to be saved to an XML file and later imported to reduce how much data needs to be manually entered. It will also take a standard PEM certificate chain and private key and convert them to the proper format.

When you first start the application, it checks the registry to see whether the VMware OVF Tool is installed and where. If it is not detected, you will see the message below, and you will need to install the OVF Tool to continue.

Once the OVF Tool is installed and detected you will see the following:

If you have previously exported settings, you can import them to populate the form by choosing “Import Settings from XML” and browsing to an export file. This is a quick way to import your settings for redeployment. The only settings that aren’t saved are passwords.

Configuration and Deployment

Let’s assume you don’t have any settings exported and you are running the application for the first time. We will go through all of the parameters and provide information on what they are and how they should be formatted.

Deployment Settings

These are the basic settings required to deploy the Access Point appliance. Let’s go through each of them now.

Note: The OVF Tool is case sensitive for some of the objects in the vCenter locator string. Those entries are called out below.

Note that each setting has a button that will show an example of the format.

Virtual Center: This is the FQDN or IP address of the Virtual Center you want to deploy the appliance into. Example: 192.168.1.12. This setting is case sensitive!

VC Username: User that has access to deploy a Virtual Appliance in the Virtual Center you are targeting for deployment. This should be in the format user@domain.com. Example: chris@halstead.net or administrator@vsphere.local

VC Password: The password for the user you specified in the previous step.

ESX Host: The FQDN or IP of the ESX host where the appliance will be deployed. It must reside in the Virtual Center you specified earlier. Example: esxhost.company.com or 192.168.1.11. This setting is case sensitive!

Datastore: The datastore as defined in Virtual Center / vSphere that you want to place the appliance on. Example: VMFS_1

Folder: The folder you want to place the appliance in. This is optional and can be left blank or set to "/". Example: External. This setting is case sensitive!

Appliance Name: The name of the Access Point appliance once it is deployed: Example: AP_PROD

VC Datacenter: The name of the Virtual Center datacenter you want to deploy this appliance to. Remember, you must have IP Pools defined for this datacenter with the network(s) you plan to use. Example: Home. This setting is case sensitive!

Cluster Name: The name of the cluster in which the ESX host you are deploying to resides. If you are using clusters in your environment this is a required field; if not, it must be left blank. Example: Prod Cluster. This setting is case sensitive!

#Nics: How many NICs are defined for the virtual appliance. With one NIC, external, management, and back-end traffic all flow over it. With two NICs, external traffic has a dedicated NIC while management and back-end traffic travel over the second. With three NICs, external, management, and back-end traffic each have their own. Example: onenic

Use a management IP?: This option is automatically selected if using two or three nics.

Use a back-end IP?: This option is automatically selected when using three nics.

Configure View Settings During Deployment: Checking this box will enable the panel containing View Settings which will be passed into the appliance API via JSON and set during deployment.

Configure Certificates During Deployment: Checking this box will enable the panel containing certificate settings which will be passed into the appliance API via JSON and set during deployment.

External Network: The network label as defined in Virtual Center / vSphere for the port group you want to assign to the external interface. If using just one nic, this will be used by management and back-end traffic as well.

DNS IP: A single DNS server. Example: 192.168.1.199. *Note: Access Point does not properly accept multiple DNS entries (even when deployed via vCenter). In SUSE, multiple DNS entries should be placed on separate "nameserver" lines, but they are being placed on a single line. This is a known issue, and at this time you must use a single DNS IP address. This will be fixed in a future release of Access Point.

Management IP: IP address that will be bound to the management network if using two or three nics. Example: 192.168.1.51

Management Network: The network label as defined in Virtual Center / vSphere for the port group you want to assign to the management interface. If using two nics, this will be used by management and back-end traffic.

Back-End IP: The IP address that will be bound to the back-end network if using three NICs. Example: 192.168.1.52

Back-End Network: The network label as defined in Virtual Center / vSphere for the port group you want to assign to the back-end interface.

root password: Password used when connecting via console to the Access Point appliance. This must be a valid Linux password. Example: VMware1

Admin password: Password used to connect to the REST API. This password must be at least 8 characters long and contain at least one each of the following: upper case letter, lower case letter, number, and special character ! @ # $ % * ( ) Example: VMware1!

Path to OVA: This is the path to the OVA for VMware Access Point. You can type in the path or click the … button and browse to the file. Example: \\192.168.1.17\software\Deploy Access Point GUI\euc-access-point-2.0.0.0-2939373_OVF10.ova
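The Admin password rule described above is easy to check programmatically; this is a sketch of the validation (my own restatement of the rule, not the appliance's actual code):

```python
import re

SPECIALS = set("!@#$%*()")  # the special characters listed in the rule above

def valid_admin_password(pw):
    # At least 8 chars with at least one upper, one lower, one digit,
    # and one of the allowed special characters.
    return (len(pw) >= 8
            and bool(re.search(r"[A-Z]", pw))
            and bool(re.search(r"[a-z]", pw))
            and bool(re.search(r"[0-9]", pw))
            and any(c in SPECIALS for c in pw))

print(valid_admin_password("VMware1!"))  # the example password above passes
```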

At this point, all of the required fields to deploy are complete, but let’s walk through the process of configuring settings for Certificates and View before we deploy.

Certificate Settings

Certificates can also be configured at deployment. This is very important as the Access Point appliance is sitting in the DMZ and acts as a secure gateway for View.

Certificates can be set by selecting the checkbox “Configure Certificates During Deployment”. This will enable the certificates panel.

The certificate input boxes also have a button which will show proper format.

The first thing you need to do is export the certificate you want to use for Access Point. In my case, it was the certificate I had on my View Security Server.

Select the certificate you want to place on the Access Point appliance and choose “Export”

Choose “Yes, export the private key”

Select “Export all certificates in the certification path if possible”. This will make sure that any appropriate intermediate and root CA’s are exported. Create a password for the private key and save the exported certificate.

Once you have the exported certificate, use OpenSSL to convert the certificate to a PEM certificate chain and private key by running the following commands.

This will create two PEM certificate files, one for the Private Key and one for the Certificate Chain. Once you have them in PEM format, you need to make sure that the private key is formatted like the example below:

-----BEGIN RSA PRIVATE KEY-----
Private Key data
-----END RSA PRIVATE KEY-----

The certificate chain must be in format of Target Certificate, Intermediate, Root, and must be formatted like the example below:

The certificates for Access Point are set via the API using JSON, so the certificate data must be formatted as a single-line string with embedded newline characters. This can be a bit of a pain to do, so the application will format the certificates for you. You just need to have a properly formatted PEM private key and certificate chain. Copy them into the appropriate text boxes and choose "Format Private Key" and "Format Certificates".

Before Formatting:

After Formatting:

If you need to modify the Private Key or Certificate chain, simply click the “Update Private Key” or “Update Certificates” button and it will clear out the box so you can paste in a new string.

This process will update the certificate on the Access Point appliance when you deploy it. This is much easier than having to go back in later and set it via the API.
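The single-line conversion the "Format" buttons perform can be sketched in a few lines of Python: replace each real line break in the PEM block with a literal backslash-n sequence so the whole thing can sit inside a JSON string.

```python
def pem_to_one_line(pem_text):
    # Collapse a multi-line PEM block into a single line with literal \n
    # sequences, the form required when the certificate travels inside JSON.
    return "\\n".join(line.strip() for line in pem_text.strip().splitlines())

key = """-----BEGIN RSA PRIVATE KEY-----
MIIEpAIBAAKCAQEA...
-----END RSA PRIVATE KEY-----"""
print(pem_to_one_line(key))
```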

View Settings

The primary use case for the Access Point appliance is accessing View desktops and applications. You can set all of the View configuration at deployment by selecting the "Configure View Settings During Deployment" checkbox and entering the proper information.

Existing View Connection Servers must be configured properly before using Access Point for secure external connections. From the “Configuring and Deploying Access Point” document:

Preparing View Connection Server for Use with Access Point

Administrators must perform specific tasks to ensure that View Connection Server works correctly with Access Point.

If you plan to use a secure tunnel connection for client devices, disable the secure tunnel for View Connection Server. In View Administrator, go to the Edit View Connection Server Settings dialog box and deselect the check box called Use secure tunnel connection to machine. By default, the secure tunnel is enabled on the Access Point appliance.

Disable the PCoIP secure gateway for View Connection Server. In View Administrator, go to the Edit View Connection Server Settings dialog box and deselect the check box called Use PCoIP Secure Gateway for PCoIP connections to machine. By default, the PCoIP secure gateway is enabled on the Access Point appliance.

Disable the Blast secure gateway for View Connection Server. In View Administrator, go to the Edit View Connection Server Settings dialog box and deselect the check box called Use Blast Secure Gateway for HTML Access to machine. By default, the Blast secure gateway is enabled on the Access Point appliance.

To use two-factor authentication with Horizon Client, such as RSA SecurID or RADIUS authentication, you must enable this feature on View Connection Server. See the topics about two-factor authentication in the View Administration document.

Once you have these pre-requisites in place, you can deploy an Access Point appliance with View settings by providing the following information in the application:

Destination URL: URL of a View Connection Server, or the address of a load balancer in front of View Connection Servers. This URL must contain the protocol, FQDN or IP, and port. Example: https://192.168.1.30:443

View Thumbprints: Specifies a list of View Connection Server thumbprints. If you do not provide a comma separated list of thumbprints, the server certificates must be issued by a trusted CA. The format includes the algorithm (sha1 or md5) and the hexadecimal thumbprint digits. To find these properties, browse to the View Connection Server, click the lock icon in the address bar, and view the certificate details.

Note: The appliance will accept either the space-delimited format from Chrome or the colon-separated format from Firefox.

Access Point URL: The external URL to be used by clients to connect to the Access Point appliance to tunnel secure connections. Example: view.chrisdhalstead.com:443. NOTE: Do NOT start this URL with https://

Blast URL: Specifies an external URL of the Access Point appliance, which allows end users to make secure connections through the Blast Secure Gateway. Example: view.chrisdhalstead.com:8443. NOTE: Do NOT start this URL with https://
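Because Chrome and Firefox present thumbprints differently (and Chrome can paste an invisible U+200E character, a bug called out in the changelog below), normalizing them is worth a small helper. The canonical output form here (`sha1=aa:bb:…`) is my assumption of what the appliance expects:

```python
def normalize_thumbprint(raw, algorithm="sha1"):
    # Accept Chrome's space-delimited or Firefox's colon-separated form,
    # strip the invisible U+200E character Chrome sometimes prepends on paste,
    # and emit lowercase colon-separated pairs prefixed with the algorithm.
    digits = raw.replace("\u200e", "").strip()
    digits = digits.replace(":", "").replace(" ", "")
    pairs = [digits[i:i + 2] for i in range(0, len(digits), 2)]
    return f"{algorithm}=" + ":".join(pairs).lower()

print(normalize_thumbprint("AB CD EF 01"))
```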

Deploy an Access Point Appliance

Back up settings

Now that all of the appropriate settings for deploying an Access Point appliance are in place, this is a good time to export out the settings that you have entered. Click the “Export Current Settings” button at the bottom left of the form and select a location to save the settings to.

This will create an XML document with the values you had entered (with the exception of passwords) so they can easily be imported at a later date when deploying additional appliances.

Prior to deploying the appliance, or for troubleshooting, the generated input string can be shown and copied out at any time by clicking the “Show OVF Tool String” button on the bottom right of the form.

This string can be copied directly into the command line after the ovftool.exe for troubleshooting.

Deploy the Access Point Appliance

Once all of the settings are in place, you are ready to deploy the appliance.

Log Level: The ovftool log level can be adjusted prior to deployment by selecting the log level at the bottom right of the form.

Click the “Deploy Access Point Appliance” button when you are ready to deploy. There is a lot of validation that happens before the appliance is actually deployed, so if any fields are mis-formatted or missing you may receive a message similar to the one below:

The final check is for the password strength of the Admin account. This account is used by the REST API service and if the password is not set properly the appliance will not be functional. If the password is not set to the proper strength (detailed above) you will receive the following message:

Once you have all of the required information set, you will see a dialog showing the progress of the appliance deployment. The data shown is the direct output from the OVF Tool.

Verifying Deployment

We are going to review the logs to see where our settings are being applied to the Access Point appliance.

NOTE: This same process is used for troubleshooting. If you notice the appliance is not responding and is CPU pegged, it is likely that an invalid parameter was passed in. This process will allow you to troubleshoot those issues as well.

Type the following to open the API log:

more /opt/vmware/gateway/logs/admin.log

Testing Deployment

The first thing I test is connecting directly to the internal IP address of the Access Point appliance to make sure that it presents the appropriate View Connection Server. This is a good way to test basic functionality of the Access Point without involving firewalls, etc.

Internal Access Test

The final test is to connect to the external Access Point URL and verify that you are able to connect via all configured protocols (PCoIP, Blast).

This is also where you test that the certificate import was successful by verifying the appliance is using the certificate you set at deployment:

External Access Test

Verify Certificates


Congratulations! You have deployed the VMware Access Point Appliance!

You can download the application below to test in your environment.

Troubleshooting

The most common error is going to be a variant of the following:

Locator does not refer to an object: vi://chris%40halstead.net@192.168.1.12:443/Home Datacenter/host/192.168.1.11

This error means that some part of the OVF Tool locator string is not right. The OVF Tool is case sensitive when it comes to the locators. Check the following for typos or case sensitivity:

Virtual Center

ESX host

Data Center

Cluster

Folder

Also, make sure you are not missing the Cluster name if you are using a cluster in your environment.
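Building the locator string programmatically sidesteps most of these typos. This sketch reproduces the locator from the error message above; the username must be URL-encoded, and every name in the path is case sensitive:

```python
from urllib.parse import quote

def vi_locator(user, vcenter, datacenter, host, cluster=None):
    # The username must be URL-encoded (@ becomes %40); datacenter, cluster,
    # folder, and host names are case sensitive to the OVF Tool.
    path = datacenter + "/host/"
    if cluster:
        path += cluster + "/"
    path += host
    return f"vi://{quote(user, safe='')}@{vcenter}:443/{path}"

print(vi_locator("chris@halstead.net", "192.168.1.12", "Home Datacenter", "192.168.1.11"))
```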

Download the Application

This is a 1.x version of the application and I haven’t been able to test all scenarios. I would appreciate if others can test in their environments and provide issues, feedback and suggestions to me. Thanks!

Changelog:

1.0.0.1

Added check for short string such as * in Thumbprint

1.0.0.2

Added ability to input a vCenter cluster name during deployment

1.0.0.3

Fixed issue with Chrome adding a Unicode character (<u+200E>) at beginning of Thumbprint when pasted in which would hang appliance at startup.

Fixed issue when not deploying all View options where those options were still set with invalid values and appliance would hang at startup.

Last week VMware held its Tech Summit for the SE and PSO organizations. As part of the event there was a Hackathon where employees could develop a "hack" for existing VMware technologies. I participated and decided to develop a Self-Service utility for App Volumes. The application won second place in the Hackathon, and with 17 pretty amazing teams participating, I was very excited and decided to share it.

App Volumes is great and has a lot of flexibility, but it is primarily a "push" technology. You assign AppStacks to a user, group, OU, or computer. This is somewhat static, and when a user wants a new application they have to put in a request to an App Volumes admin, who can then go into the App Volumes Manager and assign the AppStack.

With this utility, the process is “pull” where the user can see available AppStacks, select and attach them on demand. AppStacks are filtered automatically, and the user only sees those AppStacks which apply to the Operating System and Architecture they are running. The AppStacks are attached as Machine Assignments and are immediately available.
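The OS/architecture filter described above boils down to a simple predicate over the AppStack records read from the database. The field names here are illustrative, not the actual App Volumes schema:

```python
def visible_appstacks(appstacks, client_os, client_arch):
    # Only AppStacks matching the client's OS and architecture are offered;
    # field names are stand-ins for whatever the database actually stores.
    return [s["name"] for s in appstacks
            if s["os"] == client_os and s["arch"] in (client_arch, "any")]

stacks = [
    {"name": "Office", "os": "win7", "arch": "x64"},
    {"name": "LegacyApp", "os": "winxp", "arch": "x86"},
]
print(visible_appstacks(stacks, "win7", "x64"))
```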

The user can also remove AppStacks on demand and any current or pending Assignments are visible through an Advanced View. There are also switches that can be used with a solution such as VMware UEM. The user can also right-click an AppStack to see what applications are installed in the AppStack prior to attaching it.

How It Works

The application is written in VB.NET 2013. When first launching the application, if no SQL connection has been established it will prompt the user to enter the address and name of the App Volumes database. The application runs under the security context of the user launching the application, so that user must have at least read access to the database.

When the user clicks the "Refresh AppStacks" button, the application connects to the App Volumes database, filters AppStacks by OS and architecture, and provides the user a list. It also queries the database for any assignments for the current user or computer and will display those along with any currently attached AppStacks.

When a user selects one or more AppStacks to attach, the application first checks to see if there are any user-assigned AppStacks attached. In App Volumes, you cannot have both user-assigned and computer-assigned stacks attached at the same time.

This application will automatically remap any user-assigned AppStacks as computer-assigned so additional AppStacks can be attached to the current system. It will even remap a writable volume if one exists. It attaches and detaches AppStacks using the svservice.exe file, which is installed with every App Volumes agent installation. The database is accessed read-only, and all attach and detach actions are handled locally.

Pre-requisites

App Volumes Agent installed and registered on the client

.NET Framework 4.0

Users must have read-only access to the App Volumes SQL Database

Usage

Download Manage_AppStacks.exe from the link below and place on a system running an App Volumes Agent. If you try to run this application on a system without an App Volumes agent, you will get the message below and the app will exit.

Make sure the users that will use this application have at least read-only access to the App Volumes SQL database. You can accomplish this by creating a SQL login for an AD group and granting that login access to the App Volumes SQL database. I would recommend only adding that login to the db_datareader role, which grants read-only access to the database.

Also, make sure the SQL Server Browser Service is running on the SQL Server.

Start the application by opening "Manage_AppStacks.exe". Note: If running on a system with UAC, you may have to set this .exe to run as administrator.

The first time you launch the application you will get a message that no SQL Server is configured and you will be prompted for the SQL server name and Database name. Enter the name and any instances (ex: servername\instancename). If you need to specify a custom port for SQL, you can do that by adding a comma and port after the server name (ex. servername,port). These settings are stored in the registry under HKCU so once you put in a setting it will persist for that user. Settings can also be pre-populated with a solution such as VMware UEM.
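The server-name formats the dialog accepts can be composed mechanically; here is a small sketch of that logic (my own restatement of the formats above, not the utility's code):

```python
def sql_data_source(server, instance=None, port=None):
    # "server\instance" for a named instance, "server,port" for a custom port,
    # matching the formats the configuration dialog accepts.
    source = server
    if instance:
        source += "\\" + instance
    if port:
        source += "," + str(port)
    return source

print(sql_data_source("sqlserver01", instance="AVDB"))
print(sql_data_source("sqlserver01", port=1435))
```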


Once SQL has been configured, you can populate AppStacks. Click the "Refresh AppStacks" button. This will connect to the SQL database and return a list of the AppStacks which can be attached to the system you are running on. You will also see any currently attached AppStacks.

You can also browse each AppStack to see what applications are installed inside of it by right-clicking the AppStack and selecting "Show Installed Applications".

If you want to attach one or more AppStacks, simply select the AppStacks and choose Attach Selected AppStacks. You will receive a confirmation message showing the AppStacks you have selected. If there are no user assigned AppStacks currently attached to the system, the AppStacks will immediately attach and the applications will be available immediately.

If there are user attached AppStacks you will see the message below with an additional confirmation message:

If you click Yes, any user-assigned AppStacks will be re-attached as machine-attached AppStacks. This allows the user to add additional AppStacks to the machine, since both user- and machine-attached AppStacks cannot be mapped at the same time.

Note: Unless removed, there could be two assignments for an AppStack that is mapped this way (machine and user). In the Advanced section of this article I will discuss a /logoff switch that can be called through a logoff script or UEM to automatically unmap any machine-based assignments at logoff.

AppStacks that are attached through the program can be dynamically removed by selecting any attached AppStacks and selecting “Detach Selected AppStacks”. The selected AppStacks will be immediately detached and unassigned.

To exit the application, you need to select File | Exit or choose Exit from the system tray context menu.

Advanced Features:

Minimize to System Tray – When the application is minimized or the "X" close button is clicked, it is minimized to the system tray. A right-click context menu is available from the tray icon to maximize the application or exit.

Show Advanced Features –

/logon Switch – This switch can be called from a logon script or UEM to automatically remap any user attached stacks as machine attached.

/logoff Switch – This switch can be called from a logoff script or UEM to automatically unmap any machine-attached stacks attached during the user session.

RDSH – This utility can also be used with RDSH servers (VMware or Citrix) that have the App Volumes agent installed. The utility will only show those AppStacks that apply to the RDSH host. AppStacks can be browsed by right-clicking to see what they contain. I publish this utility as a hosted application and manage the delivery of AppStacks to my RDSH hosts that way.

Conclusion

Hopefully this utility will be helpful. I will continue to update it – so keep checking this blog for updates. I will be rewriting this utility to use the App Volumes API, but since there is no RBAC for App Volumes yet, anyone who was going to use the application would need to be an App Volumes administrator. The first release with the API will allow the user to select if they want to use SQL or the API to read data.

Please let me know your feedback either as comments on this article or by reaching out to me on Twitter at @chrisdhalstead. A demo of the utility and the download are located below. Enjoy!

Demo of the App Volumes Self Service Utility

Check out the following video to see a demo of the Self Service Utility:

Download the App Volumes Self Service Utility

Click the below link to download and test this utility. I appreciate feedback and suggestions!

I have had several customers ask if it is possible to add restriction tags to Hosted Application pools in the same way they can be applied to desktop pools. Restriction tags allow you to choose which connection servers broker a desktop (or application). When paired with a security server, this can be used to restrict certain pools to being available only internally or only externally. This functionality has existed for a long time for Desktop pools, but there is currently no way in the GUI to set it for Application pools.

One of my colleagues was asking about this again last week, so I decided to dig into the View ADAM (Active Directory Application Mode) LDAP store and see if it was possible to manually add a tag to an application pool. Tags are stored in an attribute called “pae-EntitlementsTagString” for both connection servers, and pools.

Before we get into the process, please note:

ADDING TAGS TO HOSTED APPLICATIONS IS NOT CURRENTLY SUPPORTED! This post and application are just showing what is possible – use at your own risk!

Setting Tags on a Connection Server

Browse to a connection server in the View Administrator

Choose Edit

On the General tab you will see a Tags section – add one or more tags separated by either a semicolon or a comma.

Any pools set with this tag can only be accessed from a connection server with the corresponding tag applied.

Once you add the tags they are written into the “pae-EntitlementsTagString” attribute in the ADAM store.

Setting a Tag on a Published Application

Once you have one or more tags added to one or more connection servers, you can apply the tag to a Desktop pool, or in this case an Application pool. Each Application pool has the same “pae-EntitlementsTagString”.

If I manually add the “EXTERNAL” tag to the pool and then log into a connection server via either the Horizon Client or the HTML Access portal, I will see that the Calculator application, as well as a Windows 7 pool, both tagged “EXTERNAL”, are only visible and available on the connection server that is tagged “EXTERNAL”. It works perfectly. So how can we make setting this value easier, and also provide a list of all available connection server tags? That is why I wrote the application.
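To make the matching rule concrete, here is a small Python sketch of how tags stored in “pae-EntitlementsTagString” could be parsed and matched. This reflects my reading of the documented tag behavior, not code from the utility, and the function names are mine:

```python
# Sketch of the visibility rule that restriction tags enforce.

def parse_tags(tag_string):
    """Split a pae-EntitlementsTagString value (semicolon- or
    comma-separated) into a set of tags."""
    if not tag_string:
        return set()
    return {t.strip() for t in tag_string.replace(",", ";").split(";") if t.strip()}

def pool_visible(server_tags, pool_tags):
    """An untagged pool is visible from any connection server; a tagged
    pool is visible only from a server that shares at least one tag."""
    pool = parse_tags(pool_tags)
    if not pool:
        return True
    return bool(parse_tags(server_tags) & pool)

print(pool_visible("EXTERNAL", "EXTERNAL"))    # True  (matching tag)
print(pool_visible("", "EXTERNAL"))            # False (untagged server)
print(pool_visible("INTERNAL;EXTERNAL", ""))   # True  (untagged pool)
```

This is the same logic whether the tagged object is a Desktop pool or, as in this article, an Application pool.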

VMware Horizon Tag Published Apps Utility

To make the process of adding tags to published applications easier, I wrote a small .NET application. You specify the address of a valid connection server and it binds to the ADAM LDAP store and reads and writes the appropriate data.

Once you successfully connect, it populates a dropdown box with a list of all published applications.

NOTE: You must execute this application under the context of a View Administrator that has proper access to view and edit the ADAM store.

Then select one of the published applications and click the “Refresh Tags” button. This will populate the “Connection Server Tags” list with all available tags, and it will check any tags that are currently applied to that published application.

You can now check or uncheck tags as desired and hit “Save Settings”. This will write the changes into the ADAM LDAP store and the changes are made dynamically. Hit “Refresh Tags” to verify the changes.

That’s it! Pretty simple application that allows you to add tags to published applications in VMware Horizon 6. But remember, this is not currently supported!

The application is available from the link below – please test it out and let me know your feedback and suggestions!

VMware recently announced the release of the User Environment Manager (UEM) product. This is the former Flex+ product from the acquisition of Immidio. I have had the opportunity to test the solution over the past several weeks. I really like the solution and the granularity of application and environment settings that can be managed. I will be releasing several blog articles going over how to use different features within VMware UEM. In this initial article in the VMware UEM blog series I will go through the process of installing and doing initial configuration of the UEM solution.

Overview:

VMware User Environment Manager (UEM) can be used to manage Windows and application settings across hardware types and operating systems. Instead of managing user settings within the monolithic Windows profile, settings are managed individually with .xml files and registry keys. This allows application settings to dynamically roam between physical systems, virtual desktops and RDSH published applications. For example, a user can make an application change on a physical Windows XP desktop, open the same application through RDSH on a 2012 server, and the settings from XP will persist. This is possible because UEM doesn’t rely on the Windows profile to store and retrieve data. Solutions that rely on the Windows profile cannot easily migrate between older and newer Windows operating systems because there are V1 and V2 profiles: XP and Server 2003 use V1, while Windows Vista / Server 2008 and later use V2. VMware UEM has no issue roaming settings between these operating systems.

What I really like about VMware UEM is that there is no infrastructure required. It utilizes what customers already have in place today. It simply requires two file shares (one for configuration data and one for user data) and Microsoft Group Policy to set up the solution. UEM uses Group Policy to copy user application settings to the Windows system at logon and to the file share at logoff. There is a management console which points to the configuration file share and provides an easy-to-use GUI for modifying settings.

One feature I really like and think will be extremely valuable to customers is DirectFlex profiles. When using a DirectFlex profile, instead of injecting the application settings when the user logs in, UEM will copy the settings when the user launches the application. We can also trigger custom actions when the application launches. Think about how many applications require custom mapped drives or printers. Typically, we would map those drives or printers when the user logs in, using a logon script. With DirectFlex profiles we can map the drives or printers only when the user launches the application that needs them, and remove them when they exit the application. Very cool!

That’s it! No database, no dedicated server. Let’s go through the process of initial configuration of VMware UEM now.

Installation:

UEM File Shares

When installing VMware UEM, the file shares should be created first. There are two file shares required: one for configuration data and one for user data.

UEM Configuration Share – This share contains all of the configuration data for UEM. Administrators will need to be able to read and write to this share, and users will need to be able to read from it. The share can be any standard CIFS share; DFS namespaces are supported as well for use across a WAN. The space required on this share is very low – 1 GB is typically sufficient.

UEM Profile Archive Share – This share is used to store the personal settings for each user. A folder is created for each user under this share. Settings are read from this share and applied at logon or application launch (DirectFlex), and saved at logoff or application exit. Just like the configuration share, this is a standard CIFS share, and DFS namespaces are supported. The space required will depend on what type of data is being captured by UEM, but 100 MB per user is a good number for planning purposes.

Now that we have our UEM file shares created we can import the ADMX files required for UEM. As I mentioned earlier, UEM relies on Microsoft Group Policy to apply and save settings. The templates are provided as ADMX files which must be installed into the central Active Directory policy store. There are six .admx and corresponding .adml files located in the installation files “Administrative Templates (ADMX)” folder.

Installing UEM ADMX Templates – Browse to the following path in your domain:

\\yourdomain.net\SYSVOL\yourdomain.net\Policies\PolicyDefinitions

Copy the six VMware UEM .admx files provided into this location

**NOTE – if UAC is turned on, you may not be able to write into this folder even if you have permissions**

Copy the six VMware UEM language resource (.adml) files into the en-US folder located under the PolicyDefinitions path above

For a Proof of Concept installation, the ADMX and ADML files can be installed locally to %windir%\PolicyDefinitions and %windir%\PolicyDefinitions\en-us respectively. **NOTE – This is NOT supported for production environments.**

When using this POC method, all of the UEM GPO settings detailed in the next section will be managed via the Local Computer Policy MMC snap-in.

Configuration of UEM Group Policy Settings

Now that we have created the file shares and imported the ADMX files we can configure the Group Policy settings for UEM. VMware UEM only supports User settings in group policy. The GPO settings should be applied at an OU where the users who will be using UEM are located. If this is not possible, the computers the users will log into can be in the OU where the UEM GPO settings are applied, but Loopback Processing must be enabled.

Open the Microsoft Group Policy Management Console and browse to the OU you want to apply the VMware UEM settings to. Right-Click the OU and select “Create a GPO in this domain, and Link it here”. Name the GPO and then right-click the policy and choose edit to modify it.

We will just configure enough to get UEM working for user settings. There are many other settings and features, but I will cover those in future articles.

Let’s configure the minimal settings required at this time.

Flex Config Files: Double-Click the Flex Config Files setting to modify it. Select “Enabled” then specify the path to the configuration share you created earlier. Add a “General” Folder at the end of your file share path. This is where UEM will create the configuration files. Check the “Process folder recursively” box. This allows UEM to read all of the sub-folders in the configuration share. If you wanted certain users to only see specific settings you could restrict them to a single folder by leaving this box unchecked.

Example: \\home-dc\uemconfiguration\general

Run FlexEngine as a Group Policy Extension: Edit this setting and choose “Enabled”. This is essentially the on/off switch for applying settings with VMware UEM. If this setting is not enabled, no settings will be applied at logon.

FlexEngine Logging: This setting allows VMware UEM to write log files into the user profile archive share. This is not required, but it is highly recommended so you can properly troubleshoot and monitor the environment. Select “Enabled” and then type the path to the user profile archive share created earlier. Append %username%\logs\flexengine.log to the end of the share path and click OK.

Example: \\home-dc\uemprofiledata\%username%\logs\flexengine.log

Profile Archive Backups: This setting is used to maintain backup copies of the user settings, which can be restored via self-service or by the help desk. Select “Enabled” and then type the path to the user profile archive share appended with \%username%\backups. Change the “Number of backups per profile archive” to “5” and check the “Create single backup per day” box. Click OK.

Example: \\home-dc\uemprofiledata\%username%\backups

Profile Archives: This is the share you created earlier where the user settings will be stored. A subfolder will be created for each user. Select “Enabled”, then enter the path to the user profile archive share appended with %username%\archives. Check the “Compress profile archives” box. This will zip up the configuration data to save space.

Example: \\home-dc\uemprofiledata\%username%\archives

These are all the user configuration settings that are necessary for VMware UEM. There are a couple of other general GPO settings that need to be applied before we are done.

Set Logoff Script: We need to set a logoff script to ensure that all user settings are written out at logoff. This is done through a logoff script because there is currently no way to process actions at logoff in Group Policy. The script will run the VMware UEM agent with the -s switch to save user settings at logoff.
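As a sketch, the logoff script can be a one-line batch file that invokes the FlexEngine with the save switch. The install path below is the typical default location and is an assumption – adjust it for your environment:

```bat
REM UEM logoff script (sketch) -- saves the user's settings at logoff.
REM Adjust the FlexEngine.exe path if UEM is installed elsewhere.
"C:\Program Files\Immidio\Flex Profiles\FlexEngine.exe" -s
```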

Double-click “Always wait for the network at computer startup and logon”

Select “Enabled” and click OK

User Group Policy loopback processing mode (OPTIONAL) – This setting only needs to be turned on if you want to apply VMware UEM settings to an OU containing computer objects. If you are applying the policy to an OU with only user objects, this policy setting is not required. To enable this setting do the following:

All of the back-end requirements for VMware UEM have been configured – we can proceed to installing the client and the management console.

Installing the VMware UEM client:

In order for VMware UEM to be able to manage settings on a system, the UEM FlexEngine must be installed. The UEM FlexEngine installation is provided as an .msi file. There is an .msi for both 32 and 64-bit systems. Choose the proper installation file, taking the defaults. You will need to specify the license file as part of the installation as well.

You can verify the FlexEngine install process succeeded by looking for the VMware UEM Service. For Virtual Desktops this can be installed within the Gold Image.

At this point we need to install the UEM Management Console.

Installing the VMware UEM Management Console:

The UEM management console is really just a GUI front-end for the configuration share data. The configuration files can be modified inside or outside of the Management console. The same installation file that was used for the FlexEngine is used to install the Management console. Choose the proper installation file for the processor architecture you are going to run the console on and execute the install process.

Select the “Custom” setup type and right-click “VMware UEM FlexEngine” and choose to NOT install on this computer. Right-Click VMware UEM Management Console and choose to install on this computer.

Now that the management console is installed we can connect to the configuration share and test the VMware UEM solution.

Initial configuration of Management Console

When you launch the VMware UEM Management Console for the first time, it will prompt for the location of the UEM Configuration Share. This is the share we set up earlier – specify the share name and click OK

Example: \\home-dc\UEMConfiguration

Take the defaults and click OK at the settings page.

We will now enable “Easy Start” to pre-populate UEM with many popular Windows and Application settings. This is a great way to start and learn the product.

Select any versions of Office you would like to manage with UEM and click OK.

At this point we have done an initial configuration of UEM – we can verify that the settings have been copied to the configuration share by navigating to it and verifying it matches the settings in the Management Console GUI.

The next step is to test the UEM solution, review logs and start configuring Application, Windows and User Environment settings. These steps are covered in Part 2 of this blog series.

The VMware View Events Database is used to record all events that happen in a View environment. There is a ton of great information in the database, but it can be difficult to extract. I wrote the View Event Notifier last year which provides for real-time alerting, but it doesn’t allow for a lot of filtering. I wrote this utility to allow administrators to easily apply very detailed filtering to the data and export it to .csv. You can filter on time range, event severity, event source, session type (Application or Desktop), Usernames and Event Types. The application allows for extremely granular export of data. The exported columns can also be customized and the application will export data from both the live and the historical tables in the View Events Database.

Prerequisites:

Single application .exe that only requires the Microsoft .NET 3.5 Framework

When you open the application for the first time, you will need to establish a connection to an existing View Events DB. You can choose to authenticate via SQL or Windows authentication, depending on what your SQL server is currently configured for. This connection data is stored in the registry under HKCU\Software dynamically as you type in the settings. If you use SQL Authentication, the password is encrypted in the registry.

SQL Auth Type: Choose Windows or SQL. Windows will pass through the credentials of the currently logged in user.

DB Server: The address of the SQL server, along with the instance name if required (e.g. server\instance).

Database: The name of the View Events Database

Username: The SQL username with at least read access to the DB if using SQL Authentication.

Password: The SQL password if using SQL Authentication. The password is encrypted when stored.

Table Prefix: View allows the use of a table prefix so you can monitor multiple View instances from the same database. If a table prefix is used, type it in.
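As an illustration of how the dialog fields above fit together, here is a hedged Python sketch of building a SQL Server connection string from them. This is not the utility's actual code, just the general pattern:

```python
# Sketch: combine the connection-dialog fields into a SQL Server
# connection string. Field names mirror the dialog described above.

def build_connection_string(server, database, auth_type, username="", password=""):
    parts = [f"Server={server}", f"Database={database}"]
    if auth_type.lower() == "windows":
        # Windows auth passes through the logged-in user's credentials
        parts.append("Integrated Security=SSPI")
    else:
        parts.append(f"User Id={username}")
        parts.append(f"Password={password}")
    return ";".join(parts)

print(build_connection_string(r"home-dc\SQLEXPRESS", "ViewEvents", "Windows"))
# Server=home-dc\SQLEXPRESS;Database=ViewEvents;Integrated Security=SSPI
```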

Click “Test Connection” – this will initiate a connection to the database and indicate success or any connection issues. If any required data is missing you will be notified as to what information must be entered in order to connect to the SQL DB.

When you are able to connect to the database successfully, click the “Connect to DBServer” button at the bottom of the form. You are now connected to the View Events DB. The button will indicate connection state. You can click the button again to disconnect from the DB server.

Filtering Data:

As mentioned earlier the application allows for many layers of filtering of the View Event data before export.

General Filtering

The general filtering tab allows for high-level filters to be set that will apply to the data as it is queried and exported.

Query Historical Data: By checking the “Query Historical Data” checkbox, the application will use the historical tables in the View Events DB to query older data.

Time Range: The time range will automatically be populated with the time of the first and last event in the View Events DB. You can click the date/time picker to choose a specific start and end date range to query.

Query on Severity: The “Audit Fail”, “Warning”, “Informational” and “Error” checkboxes control what severity of events you want to query and export. They are all selected by default.

Query on Module: The “Broker”, “Admin”, “Agent” and “Vlsi” checkboxes allow you to determine what modules you want events to be exported for.

Session Type: You can query if you want to return only desktop or application session types. NOTE: This entry is only populated for specific events, so use sparingly to avoid limiting search results too much.

User Filtering:

By default the application will return data for all usernames when running a query. The user filtering tab allows you to filter the results returned on specific usernames. The usernames are populated dynamically out of the Events Database, so there is no need to type them in.

To filter on usernames do the following:

Click the “Refresh Users” button to populate the list of usernames.

Select the users you want to filter on and click the down arrow to add to the “Users to Export Data For” list box.

You can de-select a user by selecting it in the “Users to Export Data For” listbox and clicking the Up Arrow.

If you want to reset the filter to all users, click the “Return Data for All Users” checkbox.

NOTE: If you choose to filter on usernames, the application will automatically filter the items in the “Event Filtering” and “Pool Filtering” tabs based on records for those specific users. That way you will see only the records that actually apply to those users.

Pool Filtering:

The application allows filtering on specific desktop pools; you can select as many pools as you want to return. If you filter on any users, you will only see the pools that apply to those specific users.

Select the pool names you want to query on and click the Down Arrow to add to the “Pools to Export Data For” listbox.

Selected Pools can be removed by selecting them in the “Pools to Export Data For” listbox and clicking the Up Arrow.

Event Filtering:

The application allows filtering on specific event types; you can select as many event types as you want to return. These events are also filtered by the “Module” checkboxes on the “General Settings” tab shown above (Broker, Agent, Admin, Vlsi). In addition, if you filter on any users, you will only see the event types that apply to those specific users.

Select the event types you want to query on and click the Down Arrow to add to the “Events to Export Data For” listbox.

Notice in the screenshot below how events are filtered based on selected users and the label indicates this filtering.

Selected Event Types can be removed by selecting them in the “Events to Export Data For” listbox and clicking the Up Arrow.

All Event Types:

Event Types Filtered by Users:

Exporting Data:

At this point, you are ready to export data based on the filters you selected. Click on the “Export Data” tab.

Columns to Export: You can choose which columns to export into the .csv file. Simply check or uncheck the column name to add or remove it from the output. NOTE: certain columns are specific to user events (Pool Name, Username, Session Type and Desktop Name), and the Server Name column is only available when no users are selected. Client IP Address is only available when the BROKER_USERLOGGEDIN event type is selected or all events are returned.

Select File to Export Data Into

At this point you will select a .csv file to export the data into. You need to use the built-in file save dialog to select the file. Click the button to open the file save dialog. The application will provide a default name – you can change the name as required. Click the “Save” button when you are satisfied with the file name and location.

Show SQL Command: This is an optional item that will show you the parameterized SQL command to be executed.
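For illustration, here is a hedged Python sketch of how a parameterized command like this might be assembled from the filters. The table and column names (event, event_historical, Time, Severity, Module, UserDisplayName) are my assumptions based on a typical View Events schema, not necessarily the utility's actual query:

```python
# Sketch: compose the selected filters into one parameterized SQL query.

def build_query(prefix="", start=None, end=None, severities=None,
                modules=None, users=None, historical=False):
    # Optional table prefix and live vs. historical table selection
    table = prefix + ("event_historical" if historical else "event")
    clauses, params = [], []
    if start and end:
        clauses.append("Time BETWEEN ? AND ?")
        params += [start, end]
    for column, values in (("Severity", severities), ("Module", modules),
                           ("UserDisplayName", users)):
        if values:  # one placeholder per selected value
            clauses.append(f"{column} IN ({', '.join('?' * len(values))})")
            params += list(values)
    sql = f"SELECT * FROM {table}"
    if clauses:
        sql += " WHERE " + " AND ".join(clauses)
    return sql, params
```

Using placeholders rather than concatenating the values keeps the query safe regardless of what usernames or event types appear in the database.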

Click the button when ready to export the data.

The query will now export, and you will see a progress bar; the “Export Status” list box will populate with the data as it is written to the .csv file.

When complete, you will receive a message box indicating the path to the file and how many records were exported.

A button will also appear below the “Export Status” listbox which will allow you to open the export file automatically with the application associated with the .csv extension (typically Excel).

At this point the data has been exported; you can review it as needed and run as many additional exports as you like.

I hope this utility will be helpful for organizations running VMware View. Please let me know of any issues, questions or suggestion at chrisdhalstead@gmail.com or on Twitter at @chrisdhalstead.

I actually wrote this utility a few months ago, but it was very useful for the customer I developed it for, so I wanted to share it. This utility was developed because a customer of mine was looking at moving to VDI, but the application they used wrote configuration data into locations outside of the user profile, which would not be replicated by a tool like VMware Persona Management. It can be used to sync files both on logon and logoff. It also logs all activity for troubleshooting. This utility can be very useful anytime files not located in the user profile need to be synced, or even if only specific files in the user profile need to be synced. It will work with any VDI or Published Application solution.

Overview:

The application was developed in Visual Studio 2013 and is a very simple .NET console application. The configuration for the utility is within the .NET configuration XML file. Both the .exe and the config file need to be kept together, and the utility can be called from any logon script to run when the user logs into the VDI or Published App session.

Prerequisites:

Create a shared CIFS folder where the synced data will be stored on logoff and then pulled from on logon. Make sure the user has permission to create folders and files in this directory. The application will create a folder based on the username of the user executing the utility.

Create a shared CIFS folder (can be the same one you created earlier), for the log files to be placed if you choose to enable logging. Same as before, the user executing the sync needs to be able to write into this directory to be able to create the log file and update it.

The application requires the Microsoft .NET Framework 3.0 to be present on the systems where it will run.

Configuration:

All configuration is done within the XML configuration file “SyncFiles.exe.config”. This file can be modified in any standard text editor, including Notepad.

Configure the location to sync files to and from. This should be a standard CIFS file share. Enter the share UNC in the “Sharepath” configuration section inside the value key.

Configure the files to sync. Many files can be configured for sync; the file paths need to be separated by a pipe (“|”). The full path to each local file that needs to be synced should be specified in the configuration file.

Configure log file path. Enter the path to the CIFS file share you created earlier where the logon and logoff logging will be captured. Note: This is dependent on the EnableLogging value being set to True. If you don’t want to enable logging you can skip configuring this value.

Configure Logging. If set to True, logging events will be captured for logon and logoff events. The log file is named in the format Username_action. Example: Chris_Logoff.txt

Set direction of file copy. This utility can be used for both logon and logoff events. On a logon event, the specified files will be copied from the network share location to the local computer. On a logoff event, the specified files will be copied from the local computer to the network share.

To configure for a Logon set the IsLogon value to True:

To configure for a Logoff set the IsLogon value to False:

Set the ForceCopy value. If set to True, the utility will always overwrite the files on either a logon or logoff event. If set to False, the utility will examine file timestamps and overwrite only newer files.
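Putting the settings above together, a SyncFiles.exe.config might look roughly like this sketch. Only Sharepath, EnableLogging, IsLogon and ForceCopy are key names taken from this article; “FilesToSync” and “LogPath” are placeholder names I invented for the file list and log path values:

```xml
<!-- SyncFiles.exe.config (sketch; key names partly hypothetical) -->
<configuration>
  <appSettings>
    <add key="Sharepath"     value="\\fileserver\usersync" />
    <!-- multiple files separated by a pipe -->
    <add key="FilesToSync"   value="C:\App\settings.ini|C:\App\user.dat" />
    <add key="LogPath"       value="\\fileserver\synclogs" />
    <add key="EnableLogging" value="True" />
    <!-- True = logon copy (share to local), False = logoff copy -->
    <add key="IsLogon"       value="True" />
    <!-- False = compare timestamps, only overwrite older files -->
    <add key="ForceCopy"     value="False" />
  </appSettings>
</configuration>
```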

Using the Utility:

Once you have the utility configured you can execute the file by running SyncFiles.exe to test the results. You need to keep the SyncFiles.exe and the SyncFiles.exe.config file in the same directory. You can call the utility using an existing logon script or you can set the file directly as a logon script as shown below.

The files will be copied to the designated user folder with the full file path in the file name on the network share as shown below:
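The copy behavior described above can be sketched in Python as follows. This is an illustrative re-implementation under my own assumptions – in particular, the exact path-flattening scheme and per-user folder layout are guesses, not the utility's source:

```python
# Sketch of the logon/logoff sync logic: flatten each local path into a
# single file name under a per-user folder on the share, and only
# overwrite older copies unless ForceCopy is set.
import os
import shutil

def flattened_name(local_path):
    # Hypothetical flattening scheme: encode the full local path into one
    # file name (the exact separators the utility uses are my guess).
    return local_path.replace(":", "").replace("\\", "_").replace("/", "_")

def sync_file(local_path, share_root, is_logon, force_copy=False):
    """Copy one file between the local system and the per-user share folder."""
    user = os.environ.get("USERNAME") or os.environ.get("USER") or "user"
    user_dir = os.path.join(share_root, user)
    os.makedirs(user_dir, exist_ok=True)
    remote_path = os.path.join(user_dir, flattened_name(local_path))
    # Logon: share -> local. Logoff: local -> share.
    src, dst = (remote_path, local_path) if is_logon else (local_path, remote_path)
    if not os.path.exists(src):
        return False
    if not force_copy and os.path.exists(dst) \
            and os.path.getmtime(src) <= os.path.getmtime(dst):
        return False  # ForceCopy=False: skip when destination is as new
    shutil.copy2(src, dst)  # copy2 preserves timestamps
    return True
```

Running it at logoff would copy each configured file up to the share; running it again at the next logon (is_logon=True) pulls the copies back down.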

Please let me know of any suggestions, comments or issues. I can be reached on Twitter at @chrisdhalstead