Overview

The following guide has been produced to help educate our customers and partners in deploying CPPM in conjunction with F5 BIG-IP LTM application delivery controllers (ADCs). F5 BIG-IP LTM provides an enhanced level of availability and a more scalable solution. This guide was written to accompany the CPPM 6.4.x release, but no feature specific to that code release is utilized in this guide. Going forward, this guide will be updated and republished to reflect new and improved functionality and designs we develop and deliver.

Note: Where you see a red chili, this signifies a hot, important point and highlights that the point is to be taken as a best-practice recommendation.

Why are we doing this? Why are we using F5 BIG-IP LTM to load-balance clients to the CPPM cluster? How does it benefit us, and what limitation are we trying to overcome? When a cluster of multiple CPPM nodes is deployed, we need to be able to route traffic to these nodes with some level of control, to ensure that a single node is not overwhelmed with requests and that the licenses across the cluster are used appropriately. AOS itself recently added some limited functionality to achieve part of this: AOS 6.4 introduced RADIUS load-balancing (discussed in Appendix B). But for enterprises where basic server-group load-balancing or AOS RADIUS load-balancing is not enough, what we discuss within this document over the next few pages should be very relevant to the reader. Another reason is that the nodes within a cluster can be represented by a single IP address, simplifying the NAS deployment across a large enterprise spanning wide geographic areas, where the NAS is not necessarily an Aruba device. F5 BIG-IP LTM can be deployed in a multitude of ways, typically either one-armed utilizing secure (or source) NATing (SNAT), or inline. We have chosen to use the inline method.
Having the servers (CPPM) inline means they will need the F5 BIG-IP LTM to be their gateway address. The one major disadvantage with SNAT is the obscuring of the client's source address. With an inline approach, the client's source address is preserved. This becomes very important, for example, when you want to send a CoA to an endpoint. Without the original source IP address of the RADIUS packet, we are unable to send the CoA to the correct NAS supporting that endpoint. Just as important, if we don't know the exact type of NAS to send the CoA to, we'd be unable to send NAS-specific vendor CoA messages.

So now we have chosen to utilize F5 BIG-IP LTM to traffic-engineer our data path to the CPPM nodes. There are a couple of additional points you need to be aware of, and reasons why we have configured the F5 BIG-IP LTM in this particular fashion. Some of the configuration below is necessary to ensure that we tie a user's RADIUS request to a specific CPPM node, so that when we receive the RADIUS Accounting-Start and subsequent accounting messages we are able to send these to the same CPPM node, ensuring that the same node has persistence of the session data for that endpoint. We accomplish this, as described below in more detail, by the use of a RADIUS attribute called the Calling-Station-Id.
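As a rough illustration of the persistence idea described above, the sketch below hashes the Calling-Station-Id (the endpoint's MAC address) to pick a CPPM node, so every RADIUS packet for the same endpoint lands on the same node. The node addresses and normalization rules are illustrative assumptions, not the actual F5 implementation.

```python
import hashlib

# Hypothetical two-node CPPM cluster (addresses are illustrative only).
CPPM_NODES = ["10.1.1.181", "10.1.1.182"]

def node_for_calling_station(calling_station_id: str) -> str:
    """Pick a cluster node from the RADIUS Calling-Station-Id (client MAC).

    Keying on the MAC, rather than on the packet's source IP, means the
    authentication request and its later accounting messages map to the
    same CPPM node, preserving per-endpoint session data.
    """
    # Normalize the MAC so "00:11:22:33:44:55" and "00-11-22-33-44-55" match.
    mac = calling_station_id.upper().replace("-", "").replace(":", "")
    digest = hashlib.sha256(mac.encode()).digest()
    return CPPM_NODES[digest[0] % len(CPPM_NODES)]

# The same endpoint always maps to the same node, however its MAC is written:
assert node_for_calling_station("00:11:22:33:44:55") == node_for_calling_station("00-11-22-33-44-55")
```

A production ADC does this with a persistence profile rather than application code, but the selection principle is the same.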

Background / Introduction

As customer deployments become more complex, we must ensure that we optimize and expand the solutions and designs where availability and scale are critical. This guide has been written and developed with the use of F5 BIG-IP LTM running s/w revision . To provide an overview, I have borrowed some content from an F5 Load-Balancing 101 Nuts & Bolts document, so we give acknowledgment to F5 for the introductory overview below.

Load balancing got its start in the form of network-based load balancing hardware. It is the essential foundation on which Application Delivery Controllers (ADCs) operate. The second iteration of purpose-built load balancing (following application-based proprietary systems) materialized in the form of network-based appliances. In essence, these devices would present a virtual server address to the outside world, and when users attempted to connect, they would forward the connection to the most appropriate real server, performing bi-directional network address translation (NAT).

Figure 1 - Network-based load balancing appliances

Basic Load Balancing Terminology

It would certainly help if everyone used the same terminology; unfortunately, every vendor of load balancing devices (and, in turn, ADCs) seems to use different terminology.

Node, Host, Member and Server

Most ADC vendors have the concept of a node, host, member, or server; some have all four, but they typically mean different things. There are two basic concepts that they all try to express. One concept, usually called a node or server, is the idea of the physical server itself (in our topology, the CPPM appliance or CPPM VM) that will receive traffic from the ADC. This is synonymous with the IP address of the physical server and, in the absence of an ADC, would be the IP address that the server name (for example, testing.com) would resolve to. For the remainder of this paper, we will refer to this concept as the host. The second concept is a member (sometimes, unfortunately, also called a node by some manufacturers). A member is usually a little more defined than a server/node in that it includes the port of the actual application that will be receiving traffic. For instance, a server named testing.com may resolve to an address of , which represents the server/node, and may have an application (a web server) running on port 80, making the member address :80. Simply put, the member includes the definition of the application port as well as the IP address of the physical server. For the remainder of this paper, we will refer to the application port as the service.

Why all the complication? Because the distinction between a physical server and the application services running on it allows the ADC to individually interact with the applications rather than the underlying hardware. A host ( ) may have more than one service available (HTTP, FTP, DNS, and so on). By defining each application uniquely ( :80, :21, and :53), the ADC can apply unique load balancing and health monitoring (discussed later) based on the services instead of the host.
However, there are still times when being able to interact with the host (as with low-level health monitoring, or when taking a server offline for maintenance) is extremely convenient. Remember, most ADCs use one concept to represent the host, or physical server, and another to represent the services available on it: in this case, simply host and services.

Pool, Cluster and Farm

ADCs allow organizations to distribute inbound traffic across multiple back-end destinations. It is therefore a necessity to have the concept of a collection of back-end destinations. Clusters, as we will refer to them herein (although they are also known as pools or farms), are collections of similar services available on any number of hosts. For instance, all services that offer the company web page would be collected into a cluster called "company web page", and all services that offer e-commerce services would be collected into a cluster called "e-commerce". The key element here is that all systems have a collective object that refers to all similar services and makes it easier to work with them as a single unit. This collective object, a cluster, is almost always made up of services, not hosts.

Virtual Server

Although not always the case, today there is little dissent about the term virtual server, or virtual. It is important to note that, like the definition of services, a virtual server usually includes the application port as well as the IP address. The term virtual service would be more in keeping with the IP:Port convention; but because most vendors use virtual server, this paper will continue using that term as well.

Putting it all together

Putting all of these concepts together makes up the basic steps in load balancing. The ADC presents virtual servers to the outside world. Each virtual server points to a cluster of services that reside on one or more physical hosts.

Figure 2 - SLB comprises four concepts: virtual servers, clusters, services, and hosts.
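The relationship between the four concepts can be sketched as a small data model. All names and addresses below are illustrative placeholders, not values from this deployment:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Member:
    host: str   # IP of the physical server (the "host")
    port: int   # application port (the "service")

@dataclass
class Cluster:
    name: str
    members: list[Member]   # a collection of similar services, not hosts

@dataclass
class VirtualServer:
    ip: str
    port: int
    cluster: Cluster

# A host appears once per service it offers, so the same IP with a
# different port is a different member:
web = Cluster("company-web-page", [Member("192.0.2.10", 80), Member("192.0.2.11", 80)])
vs = VirtualServer("203.0.113.1", 80, web)
```

Note how the virtual server (IP:port the client sees) and the members (IP:port pairs traffic is actually delivered to) mirror each other; the cluster is just the grouping in between.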

Load Balancing Basics

With this common vocabulary established, let's examine the basic load balancing transaction. As depicted, the ADC will typically sit in-line between the client and the hosts that provide the services the client wants to use. As with most things in load balancing, this is not a rule, but more of a best practice in a typical deployment. Let's also assume that the ADC is already configured with a virtual server that points to a cluster consisting of two service points. In this deployment scenario, the hosts have a return route that points back to the ADC, so that return traffic will be processed through it on its way back to the client. The basic load balancing transaction is as follows:

1. The client attempts to connect with the service on the ADC.
2. The ADC accepts the connection, and after deciding which host should receive the connection, changes the destination IP (and possibly port) to match the service of the selected host (note that the source IP of the client is not touched).
3. The host accepts the connection and responds back to the original source, the client, via its default route, the ADC.
4. The ADC intercepts the return packet from the host and now changes the source IP (and possibly port) to match the virtual server IP and port, and forwards the packet back to the client.
5. The client receives the return packet, believing that it came from the virtual server, and continues the process.

This very simple example is relatively straightforward, but there are a couple of key elements to take note of. In step one, as far as the client knows, it sends packets to the virtual server and the virtual server responds: simple. Step two is where the NAT takes place. This is where the ADC replaces the destination IP (i.e. the virtual server address) sent by the client with the destination IP of the host to which it has chosen to load balance the request. Step three is the second half of this process (the part that makes the NAT bi-directional).
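The five steps above amount to a bidirectional NAT keyed on the connection. The following is a minimal sketch of that rewrite logic only; the addresses are illustrative and a real ADC rewrites live packets, not address tuples:

```python
# Hedged sketch of the bidirectional NAT in steps 1-5.
VIRTUAL = ("203.0.113.1", 1812)   # the virtual server the client sees
CHOSEN = ("10.1.1.181", 1812)     # the host the ADC picked from the cluster

connections = {}  # (client_ip, client_port) -> member the connection went to

def forward(client, packet_dst):
    """Step 2: rewrite the destination to the chosen host; the client's
    source address is left untouched."""
    if packet_dst == VIRTUAL:
        connections[client] = CHOSEN
        return CHOSEN
    return packet_dst

def reverse(client, packet_src):
    """Step 4: rewrite the source of the return packet back to the
    virtual server, so the client sees a reply from the address it asked."""
    if connections.get(client) == packet_src:
        return VIRTUAL
    return packet_src

client = ("198.51.100.7", 50000)
assert forward(client, VIRTUAL) == CHOSEN   # request is steered to the host
assert reverse(client, CHOSEN) == VIRTUAL   # reply appears to come from the VS
```

The `connections` table is the piece that makes the NAT bidirectional: without it, the return packet would leave with the host's own source address.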
The source IP of the return packet from the host will be the IP of the host; if this address were not changed and the packet were simply forwarded to the client, the client would be receiving a packet from someone it didn't request one from, and would simply drop it. Instead, the ADC, remembering the connection, rewrites the packet so that the source IP of the return packet is that of the virtual server, thus solving this problem.

The Load Balancing Decision

Usually at this point, two questions arise: how does the ADC decide which host to send the connection to? And what happens if the selected host isn't working? Let's discuss the second question first. What happens if the selected host isn't working? The simple answer is that it doesn't respond to the client request, and the connection attempt eventually times out and fails. This is obviously not a preferred circumstance, as it doesn't ensure high availability. That's why most ADC technology includes some level of health monitoring that determines whether a host is actually available and able to take a connection before attempting to send packets to it.

There are multiple levels of health monitoring, each with increasing granularity and focus. A basic monitor would simply PING the host itself. If the host does not respond to PING, it is a good assumption that any services defined on the host are probably down and the host should be removed from the cluster of available services. Unfortunately, even if the host responds to PING, it doesn't necessarily mean the service itself is working. Therefore, most devices can do service PINGs of some kind, ranging from simple TCP connections all the way to interacting with the application via a scripted or intelligent interaction. These higher-level health monitors not only provide greater confidence in the availability of the actual services (as opposed to the host), but they also allow the ADC to differentiate between multiple services on a single host.
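A service-level "PING" of the kind described above can be approximated with a plain TCP handshake check. This sketch probes one member's port; it stands in for an ADC health monitor and does not attempt real ICMP (which needs raw-socket privileges):

```python
import socket

def tcp_service_up(host: str, port: int, timeout: float = 2.0) -> bool:
    """Service-level health check: can we complete a TCP handshake?

    A host that answers ICMP may still have a dead service, so an ADC
    probes each member's port individually and removes only the failed
    service, not the whole host, from the list of available services.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. keep a member like 10.1.1.181:1812 in the pool only while this
# probe (or a richer, scripted application check) succeeds.
```

For RADIUS specifically, a richer monitor would send an actual Access-Request and expect a reply, since the UDP service can be down while the TCP stack still answers; the TCP probe here only demonstrates the layering idea.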
The ADC understands that while one service might be unavailable, other services on the same host might be working just fine and should still be considered valid destinations for user traffic.

This brings us back to the first question: how does the ADC decide which host to send a connection request to? Each virtual server has a specific dedicated cluster of services (listing the hosts that offer that service) that makes up the list of possibilities. Additionally, health monitoring modifies that list to produce a list of currently available hosts that provide the indicated service. It is from this modified list that the ADC chooses the host that will receive a new connection. Deciding the exact host depends on the load-balancing algorithm associated with that particular cluster. The most common is simple round-robin, where the ADC simply goes down the list starting at the top and allocates each new connection to the next host; when it reaches the bottom of the list, it simply starts again at the top. While this is simple and very predictable, it assumes that all connections will have a similar load and duration on the back-end host, which is not always true. More advanced algorithms use things like current connection counts, host utilization, and even real-world response times for existing traffic to the host in order to pick the most appropriate host from the available cluster services.

Sufficiently advanced ADCs will also be able to synthesize health-monitoring information with load balancing algorithms to include an understanding of service dependency. This is the case when a single host has multiple services, all of which are necessary to complete the user's request. A good analogy for CPPM would be a CPPM host that provides both standard HTTP/S services (ports 80/443, Guest Portal) and RADIUS (ports 1812/1813, Authentication/Accounting). In many of these circumstances, it does not matter if a user connects to a host that has only one service operational but not the other, so long as the service being requested is working. In other words, it is OK to send RADIUS 1812/1813 requests to a host whose HTTP/S services have failed, and similarly OK to send HTTP/S requests to a host whose RADIUS service has failed.

To Load Balance or Not to Load Balance?

Load balancing, in regard to picking an available service when a client initiates a transaction request, is only half of the solution. Once the connection is established, the ADC must keep track of whether the following traffic from that user should be load balanced. There are generally two specific issues with handling follow-on traffic once it has been load balanced: connection maintenance and persistence.

Connection maintenance

If the user is trying to utilize a long-lived TCP connection (telnet, FTP, and more) that doesn't immediately close, the ADC must ensure that multiple data packets carried across that connection do not get load balanced to other available service hosts. This is connection maintenance, and it requires two key capabilities: 1) the ability to keep track of open connections and the host service they belong to; and 2) the ability to continue to monitor that connection so the connection table can be updated when the connection closes. This is rather standard fare for most ADCs.

Persistence

Increasingly common, however, is the case where the client uses multiple short-lived TCP connections (for example, HTTP) to accomplish a single task.
In some cases, like standard web browsing, it doesn't matter, and each new request can go to any of the back-end service hosts; however, there are many more instances (XML, e-commerce shopping carts, HTTPS, and so on) where it is extremely important that multiple connections from the same user go to the same back-end service host and not be load balanced. This concept is called persistence, or server affinity. There are multiple ways to address this, depending on the protocol and the desired results. For example, in modern HTTP transactions, the server can specify a keep-alive connection, which turns those multiple short-lived connections into a single long-lived connection that can be handled just like the other long-lived connections. However, this provides little relief. Even worse, as the use of web services increases, keeping all of these connections open longer than necessary would strain the resources of the entire system. In these cases, most ADCs provide other mechanisms for creating artificial server affinity.

One of the most basic forms of persistence is source-address affinity. This involves simply recording the source IP address of incoming requests and the service host they were load balanced to, and making all future transactions go to the same host. This is also an easy way to deal with application dependency, as it can be applied across all virtual servers and all services. In practice, however, the widespread use of proxy servers on the Internet and internally in enterprise networks renders this form of persistence almost useless; in theory it works, but proxy servers inherently hide many users behind a single IP address, resulting in none of those users being load balanced after the first user's request. Today, the intelligence of ADC devices allows organizations to actually open up the data packets and create persistence tables for virtually anything within them. This enables them to use much more unique and identifiable information, such as browser cookies or a user name, to maintain persistence. However, you must take care to ensure that this identifiable client information will be present in every request made, as any packets without it will not be persisted and will be load balanced again, most likely breaking the application.
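Source-address affinity as described above can be sketched in a few lines: round-robin for the first request from a source, then a table lookup for everything after. Pool members and client addresses are illustrative assumptions:

```python
from itertools import cycle

# Round-robin iterator over the pool members (illustrative addresses):
members = cycle(["10.1.1.181:1812", "10.1.1.182:1812"])
affinity = {}  # source IP -> member it was load balanced to

def pick_member(src_ip: str) -> str:
    """Source-address affinity: load balance the first request from a
    source normally, then pin all later requests to the same member."""
    if src_ip not in affinity:
        affinity[src_ip] = next(members)
    return affinity[src_ip]

# The proxy problem: every user hiding behind 198.51.100.7 lands on
# whichever member the first request from that address chose.
assert pick_member("198.51.100.7") == pick_member("198.51.100.7")
```

This also makes the proxy caveat concrete: the table is keyed on IP alone, which is exactly why cookie- or attribute-based persistence (such as Calling-Station-Id for RADIUS) is preferred when many clients share one address.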

Everything NAT

What is SNAT in F5 BIG-IP LTM? SNAT vs. inline? What is a NAT? If you're new to F5 BIG-IP LTM devices and have just started dabbling in the world of application delivery and SNAT, you may find yourself asking some questions about address translation. The F5 BIG-IP LTM can perform address translation in three ways: SNAT, NAT, and virtual servers. We'll also cover traffic the F5 BIG-IP LTM handles without any address translation, which we refer to as in-line communication.

What is SNAT?

I've seen SNAT expanded in two ways, Source Network Address Translation and Secure Network Address Translation, both of which are correct. Source makes it easier to understand, because you are translating the source address of the client initiating traffic, or, as the device references it, the origin. It's Secure because you can't initiate traffic to a SNAT; the translation addresses are never known by the host initiating the traffic. In short, a SNAT is made up of three components:

- Translation options: an IP address (single address), a SNAT pool (multiple addresses), or Automap (the self IP(s) of the Local Traffic Manager). This is what the source address of the client is translated to.
- Origin options: all addresses (everything coming in on the VLAN you specify) or an address list (specific addresses you provide). These are the actual source addresses of the client.
- VLAN traffic options: all VLANs (every VLAN), enabled on (only the VLANs specified), or disabled on (all VLANs except the ones you specify).

The most common misunderstanding is how SNAT can be used. Unlike a traditional NAT, you can't send traffic to a SNAT address. SNATs are either global (i.e. applied to traffic coming through the LTM), or they can be associated with a virtual server.
The first option is the hardest to get your head around; the second option, associating a SNAT with a virtual server, is a lot easier to grasp and is usually everyone's first exposure to SNAT: SNAT Automap applied to a virtual server. In both cases, SNAT is generally used to solve routing issues and can be used with a variety of mappings, including but not limited to one-to-one, many-to-one, and all-to-one. Let's dive into the first option and see if we can get a better understanding of SNAT not applied to a virtual server, but affecting the LTM globally.

Global traffic and SNAT: outbound traffic

Translating the source addresses of many hosts on an internal, non-Internet-routable subnet to one external, Internet-routable address is a common problem solved with SNAT. Think about how your home router works; it's not the same, but it is a similar concept. When traffic hits the F5 BIG-IP LTM, the origin would equate to an address list you specify with all the hosts in it (or all addresses for that specific VLAN), and the translation would be one single address (in this example). The destination address now sees the translation address as your new source. When traffic returns to the F5 BIG-IP LTM from the destination, it is then translated back to the original origin address. It's important to note that by default SNATs are allowed on all VLANs, but you can get more granular and split them out between multiple VLANs.

Virtual servers and SNAT: inbound traffic

Virtual servers can have SNATs applied to them, effectively changing the source of the client initiating traffic to the virtual service. You see, in most cases, the servers you want to load balance are NOT going to have the F5 BIG-IP LTM as their gateway, so unless you translate the source address to something that belongs to the F5 BIG-IP LTM, you're going to end up routing around the F5 BIG-IP LTM rather than through it. The result is that your virtual server does not work, giving rise to what we call asymmetrical routing, a fancy term for traffic taking a different return path from the original request path. Asymmetrical routing is not always going to break traffic, but when dealing with a stateful device, something that maintains a connection like the F5 BIG-IP LTM, asymmetrical routing can break your communication.

What is SNAT Automap? A simple explanation

Everyone's first exposure to SNAT is usually SNAT Automap. A lot of organizations at some point just turn this on without a good understanding of SNAT; hopefully, after reading this section, you have a better understanding of its inner workings. The SNAT Automap feature changes the source address of the communication to the physical IP, or self-IP, of the egress interface/VLAN on the F5 BIG-IP LTM that can reach the pool member. Again, this is so the communication comes back to the ADC; otherwise, the destination host would route around the ADC when communicating back to the client, unless of course the servers have the F5 BIG-IP LTM as their gateway.

Alternative to SNAT: inline

An alternative to SNAT would be an inline design.
Having the servers in your pool inline means they will need the ADC as their gateway address. As inconvenient as this might sound versus the "anything you can route to, you can load balance" approach, there are definitely reasons why one might choose inline over SNAT. The one major thing you lose with SNAT (or gain, depending on your perspective) is the client's source address. With an inline approach, you preserve the source address. Some applications and logging systems want to see the real source IP of a connection.

How do I capture my source address with SNAT?

So now you're SNATing; you feel cool, you look cool, well, you are cool! You're load balancing anything you can route to, life is good, and server administrators are happy they didn't have to jack around with their servers and change their gateway... until they look in their logs and are confused about what happened to all the source address information. Fortunately, we can still provide this information to them; it's just going to require a little bit of reconfiguration on their side as well as yours. Enter the X-Forwarded-For (XFF) header option. The X-Forwarded-For header option, when enabled, will capture the source address of the client and place it in the HTTP header. The logging server would then need to be configured to grab this value instead of looking at the layer 3 source address.

What is a NAT?

NATs are a one-to-one mapping between addresses. Unlike SNATs and virtual servers, NATs can be used for traffic initiated in both directions: you can send traffic to the NAT address, or the origin address can send traffic to any address. NATs are not connection-based like SNATs, i.e. they are not tracked by the BIG-IP. A NAT is made up of two major components: the NAT address and the origin address.

RADIUS packet attributes

Regardless of what happens to the IP headers due to SNAT/NAT/proxying, etc., the content of the RADIUS packets remains intact. So we still retain Calling-Station-Id, NAS-Identifier, Framed-IP-Address, etc. The IP header manipulation techniques are there to enforce and control packet routing, specifically to ensure that asymmetric routing does not occur between client and server.

Additional notes to consider on SNAT vs. inline

As discussed in our introduction, we have chosen a deployment strategy where the F5 BIG-IP LTM is deployed inline, to ensure the source IP addresses are not amended when the RADIUS authentication reaches the CPPM server. However, late in the writing of this TechNote we found reason to believe that a SNAT deployment will also work with a ClearPass deployment. Having the option to deploy either inline or off-path utilizing the SNAT feature provides an extra level of flexibility which may suit some customers' networks. The following are what we believe to be the requirements to make this work successfully with SNAT:

1. CPPM MUST have the F5 BIG-IP LTM SNAT address (the RADIUS source address) set up as a Network Device in CPPM, with the Vendor Type set to IETF. This is because all RADIUS traffic will have a source IP of the F5 SNAT address and will need to match a network device in CPPM to be allowed, and to select the RADIUS shared secret to be used.

2. CPPM MUST have each individual NAD set up as a Network Device in CPPM using the NAD's configured NAS-IP, with the specific Vendor Type set. This is because CoA uses the NAS-IP, and CPPM will need to look up the RADIUS shared secret and vendor type in the Network Device configuration.

3. The RADIUS source IP SHOULD NOT be used in policy to determine the enforcement profile; use the NAS-IP or other RADIUS elements instead.

4. All Network Devices MUST have the same RADIUS shared secret. This is because a single NAD's traffic will be matched against two Network Device objects in CPPM: the F5 BIG-IP LTM one and the individual NAD one.

We intend to verify this in the coming weeks, but feel at this juncture, given the demands from our customers and partners, that we will release this document to the field and update our findings in a later version.
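The consequence of rules 1-4 can be made concrete with a small lookup sketch: inbound RADIUS selects its shared secret by the SNAT address, while CoA selects the per-NAD entry by NAS-IP. All names and addresses below are hypothetical illustrations, not CPPM's actual implementation:

```python
# Rule 4: one shared secret across all Network Device entries, because
# the same flow touches both the F5 entry and the per-NAD entry.
SHARED_SECRET = "example-secret"

network_devices = {
    "10.0.0.5":  {"vendor": "IETF",  "secret": SHARED_SECRET},  # F5 SNAT address (rule 1)
    "10.2.0.10": {"vendor": "Aruba", "secret": SHARED_SECRET},  # real NAD, keyed by NAS-IP (rule 2)
}

def secret_for_auth(packet_src_ip: str) -> str:
    """Inbound RADIUS arrives with the F5 SNAT address as its source,
    so the shared secret is selected via the F5 entry."""
    return network_devices[packet_src_ip]["secret"]

def device_for_coa(nas_ip: str) -> dict:
    """CoA is sent toward the NAS-IP, so vendor type and secret come
    from the per-NAD entry."""
    return network_devices[nas_ip]
```

If the two entries carried different secrets, authentication (matched via the SNAT address) and CoA (matched via NAS-IP) would disagree, which is why rule 4 mandates a single secret.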

Technology Designs

The following section covers the configuration, design and technology of a specific scenario we have built and tested. Our testing has been performed utilizing CPPM 6.3.x code running on two CP-HW-5K appliances. The two CPPM nodes have been clustered to provide a Publisher and Subscriber pair following standard CPPM clustering configuration. We have defined the SUB to be a standby-PUB, as a good deployment strategy, and performed some limited CPPM failover testing. We have also defined a VIP address between the two CPPM nodes; however, we recommend that this VIP only be used for ClearPass admin traffic. Our VIP address is (pre-empt to cppm181) on the MGMT interfaces.

Figure 3 - Creating VIP groups on a CPPM cluster

Our F5 BIG-IP LTM environment: our F5 BIG-IP LTM is a dual-instance HA cluster of 2 x BIG-IP 3600 running Build and Hotfix Version code, released in June . Configuration of F5 BIG-IP LTM clustering is beyond the scope of this document, but it is well documented on the F5 support site.

Note: Multiple deployment scenarios exist for an F5 BIG-IP LTM. Simplistically, an F5 BIG-IP LTM can be thought of as a router: packets/flows come in on one interface/VLAN and leave on another. They can also be deployed in an L2 or L3 one-armed scenario. Deployment and integration of an F5 BIG-IP LTM into a customer network is beyond the scope of this document and is not covered.

Our deployment is based on a simple routed design. We have an internal, server-facing VLAN ( /24), which is the server VLAN, or put another way, where the CPPM nodes sit. Then we have an external, client-facing VLAN ( /24), which is effectively where the client traffic originates, and finally we have a management VLAN ( /24). In addition, as seen in the earlier SLB overview, there is the concept of Virtual Servers (see Page ); these are in effect the F5 BIG-IP LTM listening VIPs, which receive the incoming data flows and load-balance to the servers on the
