This document uses practical examples to demonstrate how the Application Services Proxy (ASP) and f5-kube-proxy fit into Kubernetes’ network traffic flow.
While not essential for ASP use, an understanding of how ASP instances and f5-kube-proxy interact with each other and the Kubernetes network framework can be beneficial.

An ASP runs on each node in a Kubernetes Cluster.
Within a Cluster, Pods send network traffic through several layers; one of those layers may be an ASP.
That ASP acts as a client-side proxy, controlling Pod-to-Pod traffic for clients in the Cluster.

Important

An ASP instance sitting in between the client and server inspects and modifies all of the data exchanged between the two.

The ASP intercepts traffic as it leaves the client.

The ASP running on the same Kubernetes node as the client handles the network data, even though it applies policy on behalf of the server.

The Kubernetes Pods themselves don’t need to be aware of any of this.
When a Pod acts as a client and connects to another Kubernetes Service, it simply makes the connection; the interception and proxying happen transparently.

The f5-kube-proxy, ASP, and Linux iptables work together to handle traffic and data for Services, including proxying traffic through an ASP.

In Kubernetes, Services are resources that define a set of network endpoints and a policy by which network traffic can reach those endpoints.
The endpoints are typically Kubernetes Pods (in other words, they’re within the Kubernetes Cluster), but they can also be external. [1]
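
As a sketch, the “my-nginx” Service used in this example might be defined by a manifest like the following. The selector label and port values are illustrative assumptions; only the Service name comes from this example.

```
# Hypothetical manifest for the "my-nginx" Service used below.
# The selector label and ports are assumptions for illustration.
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
spec:
  selector:
    app: my-nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
```

Kubernetes assigns this Service a cluster-internal service IP and tracks the matching Pods as its endpoints.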

When a client requests a specific Service by name using DNS, it receives the service IP address.
When the client sends packets to that service IP, the traffic gets routed to one of the endpoints defined in the Service, according to the specified policy.

You have two endpoints – 192.168.155.193:80 and 192.168.36.129:80 – which implement that Service.
(In other words, the traffic for the service IP gets directed to one or the other of these two endpoints.)

These two endpoints are the IP addresses of the “my-nginx” pods.

Now, you can start a toolbox pod and test the service from the client side.
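
A client-side test session might look like the following sketch; the toolbox image name is an assumption, and the commands assume a cluster where the “my-nginx” Service exists.

```shell
# Start an interactive "toolbox" pod (the image name is an assumption)
$ kubectl run toolbox --rm -it --image=centos -- /bin/bash

# Inside the pod: resolve the Service name, then request it over HTTP
$ getent hosts my-nginx
$ curl http://my-nginx/
```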

First, the pod’s curl process tried to resolve the name “my-nginx” via DNS.
It received the service IP – 10.111.94.71 – from kube-dns.
Then, it connected to the service IP and received a response from one of the Pods.

In a Kubernetes cluster with kube-proxy running in iptables mode (the default), the iptables rules determine how the client traffic and its responses are handled.
You can see these rules on any of the nodes in the Cluster.

Note

This example output is from Kubernetes 1.6.4 with Calico.
Your particular output may differ depending on your environment.

The information displayed in the example is a reorganized version, edited for clarity.

When you asked the client Pod to curl my-nginx, it sent a TCP SYN packet to the service IP (10.111.94.71).
Here’s how the iptables rules applied to that packet:
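
Reconstructed for illustration (run iptables-save -t nat on a node to see the real thing), the relevant NAT rules for this Service look roughly like the following; the exact match options and comments vary by Kubernetes version, but the chain names, IPs, and probability come from this example.

```
-A PREROUTING -j KUBE-SERVICES
-A KUBE-SERVICES -d 10.111.94.71/32 -p tcp --dport 80 -j KUBE-SVC-BEPXDJBUHFCSYIC3
-A KUBE-SVC-BEPXDJBUHFCSYIC3 -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-5QJQLOAYBTXEYYW5
-A KUBE-SVC-BEPXDJBUHFCSYIC3 -j KUBE-SEP-OJZLCJUDW7QMREOS
-A KUBE-SEP-5QJQLOAYBTXEYYW5 -s 192.168.155.193/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-5QJQLOAYBTXEYYW5 -p tcp -j DNAT --to-destination 192.168.155.193:80
-A KUBE-SEP-OJZLCJUDW7QMREOS -s 192.168.36.129/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-OJZLCJUDW7QMREOS -p tcp -j DNAT --to-destination 192.168.36.129:80
```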

Demystifying the iptables rules:

KUBE-SEP-5QJQLOAYBTXEYYW5 and KUBE-SEP-OJZLCJUDW7QMREOS do the same thing.

Each pertains to one of the two endpoints defined for the Service.
Each pertains to one of the two endpoints defined for the Service.
If the load-balancing rule (statistic mode random, probability 0.50000000000) doesn’t select the first endpoint for the packet, the packet gets directed to the other endpoint.

The masquerade mark tells Linux’s IP masquerading functionality to be ready to un-NAT the packets that come back from the server pod.

The DNAT rule tells Linux to rewrite the destination IP address and port in the packet.

The random-probability rule implements equal-weight random load balancing.
In a Kubernetes Service with two endpoints, this means traffic goes to the first endpoint half of the time and to the second the other half.
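
The effect of the statistic-mode rule can be sketched with a quick simulation, with no iptables involved: each “packet” independently jumps to the first endpoint with probability 0.5, otherwise it falls through to the second. The endpoint addresses are the ones from this example; the trial count is arbitrary.

```shell
# Sketch: simulate iptables' "-m statistic --mode random --probability 0.5"
# over 10000 "packets" using awk's rand(). Purely illustrative.
total=10000
first=$(awk -v n="$total" 'BEGIN {
  srand()
  f = 0
  # Each packet jumps to the first endpoint with probability 0.5
  for (i = 0; i < n; i++) if (rand() < 0.5) f++
  print f
}')
second=$((total - first))   # the rest fall through to the second endpoint
echo "endpoint 192.168.155.193:80 -> $first packets"
echo "endpoint 192.168.36.129:80  -> $second packets"
```

Over many packets, the two counts converge toward an even split, which is exactly the equal-weight behavior described above.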

The packet hit the PREROUTING chain and jumped into KUBE-SERVICES.

The KUBE-SERVICES chain has one rule for each service and a catch-all for NodePorts.
You can ignore the latter for the purposes of this example.

Look for a rule that will match the destination IP – 10.111.94.71.

The packet jumps to KUBE-SVC-BEPXDJBUHFCSYIC3, which matches the destination IP.

In the KUBE-SVC-BEPXDJBUHFCSYIC3 chain:

The packet jumps to KUBE-SEP-5QJQLOAYBTXEYYW5 with probability 0.5.

If the statistic rule doesn’t select KUBE-SEP-5QJQLOAYBTXEYYW5, the packet jumps to KUBE-SEP-OJZLCJUDW7QMREOS instead.

In either case, the following rules apply:

Mark the packet so that the kernel enables masquerading for the selected endpoint (for example, 192.168.155.193), then DNAT the packet to that endpoint.

Through these practical examples, this document demonstrated the difference between kube-proxy’s default iptables routing and the way f5-kube-proxy uses its own iptables rules to route traffic for Kubernetes Services through the ASP.
This allows the ASP to function as a full client-side proxy, providing advanced traffic services beyond the capabilities of Kubernetes’ native kube-proxy.