The Pulse Services Director makes it easy to manage a fleet of virtual ADC services, with each application supported by dedicated vADC instances, such as Pulse Virtual Traffic Manager. This table summarises the compatibility between supported versions of Services Director and Virtual Traffic Manager.

We have made it easier to see which features are offered in each model of Pulse Virtual Traffic Manager: there are two feature groups, common to both the fixed-size licenses used with Pulse vTM and the capacity-based licensing scheme used with Pulse Services Director.

Looking for Installation and User Guides for Pulse vADC? User documentation is no longer included in the software download package with Pulse vTM; the documentation can now be found on the Pulse Techpubs pages.

In a recent conversation, a user wished to use Stingray's rate shaping capability to throttle back the requests to one part of their web site that was particularly sensitive to high traffic volumes (think of a CGI script, JSP servlet, or other type of dynamic application). This article describes how you might go about doing this, testing and implementing a suitable limit using Service Level Monitoring, Rate Shaping and some TrafficScript magic.
The problem
Imagine that part of your website is particularly sensitive to traffic load and is prone to overloading when a crowd of visitors arrives. Connections queue up, response time becomes unacceptable and it looks like your site has failed.
If your website were a tourist attraction or a club, you’d employ a gatekeeper to manage entry rates. As the attraction began to fill up, you’d employ a queue to limit entry, and if the queue got too long, you’d want to encourage new arrivals to leave and return later rather than to join the queue.
This is more-or-less the solution we can implement for a web site. In this worked example, we're going to single out a particular application (named search.cgi) that we want to control the traffic to, and let all other traffic (typically for static content, etc) through without any shaping.
The approach
We'll first measure the maximum rate at which the application can process transactions, and use this value to determine the rate limit we want to impose when the application begins to run slowly.
Using Stingray's Service Level Monitoring classes, we'll monitor the performance (response time) of the search.cgi application. If the application begins to run slower than normal, we'll deploy a queuing policy that rate-limits new requests to the application. We'll monitor the queue and send a 'please try later' message when the rate limit is met, rather than admitting users to the queue and forcing them to wait.
Our goal is to maximize utilization (supporting as many transactions as possible), but minimise response time, returning a 'please wait' message rather than queueing a user.
Measuring performance
We first use zeusbench to determine the optimal performance that the application can achieve. We perform several runs, increasing the concurrency until the performance (responses per second) stabilizes at a consistent level:
zeusbench -c 5 -t 20 http://host/search.cgi
zeusbench -c 10 -t 20 http://host/search.cgi
zeusbench -c 20 -t 20 http://host/search.cgi
... etc
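The ramp of runs above can be scripted. The following is a minimal sketch; the host and path are the placeholders from the examples, and the loop simply prints the command lines so they can be reviewed (or piped to a shell on a machine where zeusbench is installed):

```shell
# Generate the concurrency-ramp commands; each doubles the previous
# concurrency level. Run the printed commands where zeusbench is available.
for c in 5 10 20 40 80; do
  echo "zeusbench -c $c -t 20 http://host/search.cgi"
done
```

Stop increasing the concurrency once the responses-per-second figure reported by zeusbench stops improving.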
From this, we conclude that the maximum number of transactions-per-second that the application can comfortably sustain is 100.
We then use zeusbench to send transactions at that rate (100 / second) and verify that performance and response times are stable. Run:
zeusbench -r 100 -t 20 http://host/search.cgi
Our desired response time can be deduced to be approximately 20ms.
Now we perform the 'destructive' test, to elicit precisely the behaviour we want to avoid. Use zeusbench again to send requests to the application at higher than the sustainable transaction rate:
zeusbench -r 110 -t 20 http://host/search.cgi
Observe how the response time for the transactions steadily climbs as requests begin to be queued, and the successful transaction rate falls steeply. Eventually, when the response time climbs past acceptable limits, transactions are timed out and the service appears to have failed.
This illustrates how sensitive a typical application can be to floods of traffic that overwhelm it, even for just a few seconds. The effects of the flood can last for tens of seconds afterwards as the connections complete or time out.
Defining the policy
We wish to implement the following policy:
If all transactions complete within 50 ms, do not attempt to shape traffic.
If some transactions take more than 50 ms, assume that we are in danger of overload. Rate-limit traffic to 100 requests per second, and if requests exceed that rate limit, send back a '503 Too Busy' message rather than queuing them.
Once transaction time comes down to less than 50ms, remove the rate limit.
Our goal is to repeat the previous zeusbench test, showing that the maximum transaction rate can be sustained within the desired response time, and any extra requests receive an error message quickly rather than being queued.
Implementing the policy
The Rate Class
Create a rate shaping class named Search limit with a limit of 100 requests per second.
The Service Level Monitoring class
Create a Service Level Monitoring class named Search timer with a target response time of 50 ms.
If desired, you can use the Activity monitor to chart the percentage of requests that conform, i.e. complete within 50 ms, while you conduct your zeusbench runs. You’ll notice a strong correlation between these figures and the increase in response time figures reported by zeusbench.
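To make the 'conforming' percentage concrete, here is an illustrative Python sketch (not Stingray code, and not the actual SLM implementation) of how such a figure relates a set of response times to the 50 ms target used by the Search timer class:

```python
def conforming(response_times_ms, target_ms=50):
    """Percentage of requests completing at or under the target time."""
    if not response_times_ms:
        return 100
    within = sum(1 for t in response_times_ms if t <= target_ms)
    return 100 * within // len(response_times_ms)

# Healthy traffic: every request completes under 50 ms
print(conforming([18, 22, 31, 40]))    # 100

# Overloaded: half the requests exceed the target, so the
# TrafficScript test slm.conforming(...) < 100 would trigger shaping
print(conforming([20, 45, 80, 120]))   # 50
```

In the rule below, any conformance value under 100 is treated as a sign of overload and activates the rate limit.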
The TrafficScript rule
Now use these two classes with the following TrafficScript request rule:
# We're only concerned with requests for /search.cgi
$url = http.getPath();
if ( $url != "/search.cgi" ) break;

# Time this request using the Service Level Monitoring class
connection.setServiceLevelClass( "Search timer" );

# Test if any of the recent requests fell outside the desired SLM threshold
if ( slm.conforming( "Search timer" ) < 100 ) {
   if ( rate.getBacklog( "Search limit" ) > 0 ) {
      # To minimize response time, always send a 503 Too Busy response if the
      # request exceeds the configured rate of 100/second.
      # You could also use http.redirect() to a more pleasant 'sorry' page, but
      # 503 errors are easier to monitor when testing with ZeusBench
      http.sendResponse( "503 Too busy", "text/html",
                         "<h1>We're too busy!!!</h1>",
                         "Pragma: no-cache" );
   } else {
      # Shape the traffic to 100/second
      rate.use( "Search limit" );
   }
}
Testing the policy
Rerun the 'destructive' zeusbench run that produced the undesired behaviour previously:
Run:
zeusbench -r 110 -t 20 http://host/search.cgi
Observe that:
Stingray processes all of the requests without excessive queuing; the response time stays within desired limits.
Stingray typically processes 110 requests per second. There are approximately 10 'Bad' responses per second (these are the 503 Too Busy responses generated by the rule), so we can deduce that the remaining 100 (approx) requests were served correctly.
These tests were conducted in a controlled environment, on an otherwise-idle machine that was not processing any other traffic. You could reasonably expect much more variation in performance in a real-world situation, and would be advised to set the rate class to a lower value than the experimentally-proven maximum.
In a real-world situation, you would probably choose to redirect a user to a 'sorry' page rather than returning a '503 Too Busy' error. However, because ZeusBench counts 4xx and 5xx responses as 'Bad', it is easy to determine how many requests complete successfully, and how many return the 'sorry' response.
For more information on using ZeusBench, take a look at the Introducing Zeusbench article.

For versions of the Traffic Manager Appliance before 9.7, we support customers installing software only via our standard APIs/interfaces (using extra files, custom action scripts).
This constraint has been relaxed at version 9.7. We still do not support customers modifying the tested software shipped with the appliance, but we do allow installation of additional software.
Examples of where this might be useful include:
Installing monitoring agents that customers use to monitor the rest of their infrastructure (e.g. Nagios)
Installing software such as BIND to avoid having to deploy an extra host when setting up GLB (Global Load Balancing).
Operating system
Traffic Manager virtual appliances use a customized build of Ubuntu, with an optimized kernel from which some unused features have been removed - check the latest release notes for details of the build included in your version.
What you may change
You may install additional software not shipped with the appliance, but note that some Ubuntu packages may rely on kernel features not available on the appliance.
You may modify configuration not managed by the appliance.
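Before installing additional software, it can help to confirm that a package was not part of the shipped appliance image (shipped packages must not be replaced or removed, as described below). A hedged sketch; the package name is only an example:

```shell
# Check whether a package shipped with the appliance before installing it.
pkg=nagios-nrpe-server   # example package name
if dpkg -s "$pkg" >/dev/null 2>&1; then
  echo "$pkg shipped with the appliance; do not replace or remove it"
else
  echo "$pkg not present; installing it is permitted"
fi
```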
What you may not change
You may not install a different kernel.
You may not install different versions of any Debian packages that were installed on the appliance as shipped, nor remove any of these packages (see the licence acknowledgements document for a list).
You may not directly modify configuration that is managed from the traffic manager (e.g. sysctl values, network configuration).
You may not change configuration explicitly set by the appliance (usually marked with a comment containing ZOLD or BEGIN_STINGRAY_BLOCK).
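A quick way to check whether a particular configuration file contains appliance-managed sections is to search for those markers before editing. A hedged sketch; the file path is a placeholder:

```shell
# Look for appliance-managed markers before hand-editing a config file.
file=/etc/example.conf   # placeholder path
if grep -qE 'ZOLD|BEGIN_STINGRAY_BLOCK' "$file" 2>/dev/null; then
  echo "managed by the appliance: do not edit by hand"
else
  echo "no managed markers found in $file"
fi
```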
What happens when you need support
When requesting support, you should mention any additional software you have installed; the Technical Support Report will also contain information about it. If the issue is found to be caused by interaction with the additional software, we will ask you to remove it, or to seek advice or a remedy from its supplier.
What happens on reset or upgrade
z-reset-to-factory-defaults will not remove additional software but may rewrite some system configuration files.
An incremental upgrade may upgrade some installed packages, and may rewrite system configuration files.
A full upgrade will install a fresh appliance image on a separate disk partition, and will not copy additional software or configuration changes across. The /logs partition will be preserved.
Note that future appliance versions may change the set of installed packages, or even the underlying operating system.

Pulse Secure vADC solutions are supported on Google Cloud Platform, with hourly billing options for applications that need to scale on demand to match varying workloads. A range of Pulse Secure Virtual Traffic Manager (Pulse vTM) editions are available, including the Pulse vTM Developer Edition and Pulse Secure Virtual Web Application Firewall (Pulse vWAF), offered both as a virtual machine and as a software installation on a Linux virtual machine.
This article describes how to quickly create a new Pulse vTM instance through the Google Cloud Launcher. For additional information about the use and configuration of your Pulse vTM instance, see the product documentation available at www.pulsesecure.net/vadc-docs.
Launching a Pulse vTM Virtual Machine Instance
To launch a new instance of the Pulse vTM virtual machine, use the GCE Cloud Launcher Web site. Type the following URL into your Web browser:
https://cloud.google.com/launcher
Browse or use the search tool to locate the Pulse Secure package applicable to your requirements, then click the package icon to see the package detail screen.
To deploy a new Pulse vTM instance:
1. To start the process of deploying a new instance, click Launch on Compute Engine.
2. Type an identifying name for the instance, select the image version, then select the desired geographic zone and machine type. Individual zones might have differing computing resources available and specific access restrictions. Contact your support provider for further details.
3. Ensure the boot disk corresponds to your computing resource requirements. Pulse Secure recommends not changing the default disk size, as this might affect the performance of your Pulse vTM.
4. By default, GCE creates firewall rules to allow HTTP and HTTPS traffic, and to allow access to the Web-based Pulse vTM Admin UI on TCP port 9090. To instead restrict access to these services, untick the corresponding firewall checkboxes.
Note: If you disable access to TCP port 9090, you cannot access the Pulse vTM Admin UI to configure the instance.
5. If you want to use IP forwarding with this instance, click More and set IP forwarding to "On".
6. Pulse vTM needs access to the Google Cloud Compute API, as indicated in the API Access section. Keep this option enabled to ensure your instance can function correctly.
7. Click Deploy to launch the Pulse vTM instance. The Google Developer Console confirms that your Pulse vTM instance is being deployed.
Next Steps
After your new instance has been created, you can proceed to configure your Pulse vTM software through its Admin UI. To access the Admin UI for a successfully deployed instance, click Log into the admin panel. When you connect to the Admin UI for the first time, Pulse vTM presents the Initial Configuration wizard. This wizard captures the networking, date/time, and basic system settings needed by your Pulse vTM software to operate normally. For full details of the configuration process, and for instructions on performing various other administrative tasks, see the Cloud Services Installation and Getting Started Guide.
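For scripted deployments, the Cloud Launcher steps have a rough equivalent in the gcloud CLI. The sketch below only assembles and prints the command; the image family, project, zone and machine type are placeholders that you must replace with the real values from the Marketplace listing for Pulse vTM before running it:

```shell
# Build a hypothetical gcloud command mirroring the Launcher settings
# (name, zone, machine type, IP forwarding, HTTP/HTTPS firewall tags).
cmd="gcloud compute instances create pulse-vtm-1 \
  --zone us-central1-a \
  --machine-type n1-standard-2 \
  --image-family PLACEHOLDER-vtm-family \
  --image-project PLACEHOLDER-project \
  --can-ip-forward \
  --tags http-server,https-server"
echo "$cmd"
```

Once the placeholders are filled in, the printed command can be executed directly in a shell with the Google Cloud SDK installed.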

The Pulse Services Director vADC Analytics Application is intended to be both accessible and intuitive to use, with powerful graphic visualizations and insights into the traffic flows around your application.