Joining Project Networks
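
To join project networks together, run a command of the following form (the
project names are placeholders):

$ oc adm pod-network join-projects --to=<project1> <project2> <project3>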

In the above example, all the pods and services in <project2> and <project3>
can now access any pods and services in <project1> and vice versa. Services
can be accessed either by IP or fully-qualified DNS name
(<service>.<pod_namespace>.svc.cluster.local). For example, to access a
service named db in a project myproject, use db.myproject.svc.cluster.local.

Alternatively, instead of specifying specific project names, you can use the
--selector=<project_selector> option.

To verify the networks you have joined together:

$ oc get netnamespaces

Then look at the NETID column. Projects in the same pod-network will have the
same NetID.

Isolating Project Networks

To isolate one or more project networks from the rest of the cluster, run:

$ oc adm pod-network isolate-projects <project1> <project2>

In the above example, the pods and services in <project1> and <project2>
cannot access any pods and services from other non-global projects in the
cluster, and vice versa.

Alternatively, instead of specifying specific project names, you can use the
--selector=<project_selector> option.

Making Project Networks Global

To allow projects to access all pods and services in the cluster and vice versa:

$ oc adm pod-network make-projects-global <project1> <project2>

In the above example, all the pods and services in <project1> and <project2>
can now access any pods and services in the cluster and vice versa.

Alternatively, instead of specifying specific project names, you can use the
--selector=<project_selector> option.

Disabling Host Name Collision Prevention For Routes and Ingress Objects

In OpenShift Container Platform, host name collision prevention for routes and ingress
objects is enabled by default. This means that users without the cluster-admin
role can set the host name in a route or ingress object only on creation and
cannot change it afterwards. However, you can relax this restriction on routes
and ingress objects for some or all users.

Because OpenShift Container Platform uses the object creation timestamp to determine the
oldest route or ingress object for a given host name, a route or ingress object
can hijack a host name of a newer route if the older route changes its host
name, or if an ingress object is introduced.

As an OpenShift Container Platform cluster administrator, you can edit the host name in a
route even after creation. You can also create a role to allow specific users
to do so:
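
For example, a cluster role that grants update access to the custom-host
subresource of routes could be created and bound to specific users; the role
and user names are illustrative, and the exact resource reference can vary by
version:

$ oc create clusterrole route-editor --verb=update --resource=routes.route.openshift.io/custom-host
$ oc adm policy add-cluster-role-to-user route-editor <user_name>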

You can also disable host name collision prevention for ingress objects. Doing
so lets users without the cluster-admin role edit a host name for ingress
objects after creation. This is useful for OpenShift Container Platform installations that
depend upon Kubernetes behavior, including allowing the host names in ingress
objects to be edited.

Controlling Egress Traffic

As a cluster administrator you can allocate a number of static IP addresses to a
specific node at the host level. If an application developer needs a dedicated
IP address for their application service, they can request one during the
process they use to ask for firewall access. They can then deploy an egress
router from the developer’s project, using a nodeSelector in the deployment
configuration to ensure that the pod lands on the host with the pre-allocated
static IP address.

The egress pod’s deployment declares one of the source IPs, the destination IP
of the protected service, and a gateway IP to reach the destination. After the
pod is deployed, you can create a service to access the egress router pod, then
add that source IP to the corporate firewall. The developer then has the access
information for the egress router service that was created in their project,
for example, service.project.cluster.domainname.com.

When the developer needs to access the external, firewalled service, they can
call out to the egress router pod’s service
(service.project.cluster.domainname.com) in their application (for example,
the JDBC connection information) rather than the actual protected service URL.

You can also assign static IP addresses to projects, ensuring that all
outgoing external connections from the specified project have recognizable
origins. This is different from the default egress router, which is used to send
traffic to specific destinations.

Using an egress firewall allows you to enforce the acceptable outbound traffic
policies, so that specific endpoints or IP ranges (subnets) are the only
acceptable targets for the dynamic endpoints (pods within OpenShift Container Platform) to
talk to.

Using an egress router allows you to create identifiable services to send
traffic to certain destinations, ensuring those external destinations treat
traffic as though it were coming from a known source. This helps with security,
because it allows you to secure an external database so that only specific pods
in a namespace can talk to a service (the egress router), which proxies the
traffic to your database.

In addition to the above OpenShift Container Platform-internal solutions, it is also
possible to create iptables rules that will be applied to outgoing
traffic. These rules allow for more possibilities than the egress
firewall, but cannot be limited to particular projects.

Using an Egress Firewall to Limit Access to External Resources

As an OpenShift Container Platform cluster administrator, you can use egress firewall policy
to limit the external addresses that some or all pods can access from within the
cluster, so that:

A pod can only talk to internal hosts, and cannot initiate connections to the
public Internet.

Or,

A pod can only talk to the public Internet, and cannot initiate connections to
internal hosts (outside the cluster).

Or,

A pod cannot reach specified internal subnets/hosts that it should have no
reason to contact.

Egress policies can be set at the pod selector-level and project-level. For
example, you can allow <project A> access to a specified IP range but deny the
same access to <project B>. Or, you can restrict application developers from
updating from (Python) pip mirrors, and force updates to only come from approved
sources.

If you are using the ovs-networkpolicy plug-in, egress policy is compatible
with only one policy per project, and will not work with projects that share a
network, such as global projects.

Project administrators can neither create EgressNetworkPolicy objects, nor
edit the ones you create in their project. There are also several other
restrictions on where EgressNetworkPolicy can be created:

The default project (and any other project that has been made global via
oc adm pod-network make-projects-global) cannot have egress policy.

If you merge two projects together (via oc adm pod-network join-projects),
then you cannot use egress policy in any of the joined projects.

No project may have more than one egress policy object.

Violating any of these restrictions results in broken egress policy for the
project, and may cause all external network traffic to be dropped.

Use the oc command or the REST API to configure egress policy. You can use
oc [create|replace|delete] to manipulate EgressNetworkPolicy objects. The
api/swagger-spec/oapi-v1.json file has API-level details on how the objects
actually work.
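
For example, a policy similar to the following (shown in YAML; an equivalent
JSON file works the same way with oc create -f) allows traffic to the
1.2.3.0/24 range and to the domain www.foo.com, and denies all other external
traffic:

kind: EgressNetworkPolicy
apiVersion: v1
metadata:
  name: default
spec:
  egress:
  - type: Allow
    to:
      cidrSelector: 1.2.3.0/24
  - type: Allow
    to:
      dnsName: www.foo.com
  - type: Deny
    to:
      cidrSelector: 0.0.0.0/0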

When the example above is added to a project, it allows traffic to IP range
1.2.3.0/24 and domain name www.foo.com, but denies access to all other
external IP addresses. Traffic to other pods is not affected because the policy
only applies to external traffic.

The rules in an EgressNetworkPolicy are checked in order, and the first one
that matches takes effect. If the three rules in the above example were
reversed, then traffic would not be allowed to 1.2.3.0/24 and www.foo.com
because the 0.0.0.0/0 rule would be checked first, and it would match and deny
all traffic.

Domain name updates are polled based on the TTL (time to live) value of the
domain returned by the local non-authoritative servers. The pod should also
resolve the domain from the same local nameservers when necessary, otherwise
the IP addresses for the domain perceived by the egress network policy controller
and the pod will be different, and the egress network policy may not be enforced
as expected. Because the egress network policy controller and the pod poll the
same local nameserver asynchronously, there could be a race condition where the
pod gets the updated IP address before the egress controller does. Due to this current limitation,
domain name usage in EgressNetworkPolicy is only recommended for domains with
infrequent IP address changes.

The egress firewall always allows pods access to the external interface of the
node the pod is on for DNS resolution. If your DNS resolution is not handled by
something on the local node, then you will need to add egress firewall rules
allowing access to the DNS server’s IP addresses if you are using domain names
in your pods.

Use the JSON file to create an EgressNetworkPolicy object:

$ oc create -f <policy>.json

Exposing services by creating routes will ignore EgressNetworkPolicy. Egress
network policy service endpoint filtering is done at the node kubeproxy. When
the router is involved, kubeproxy is bypassed and egress network policy
enforcement is not applied. Administrators can prevent this bypass by limiting
access to create routes.

Using an Egress Router to Allow External Resources to Recognize Pod Traffic

The OpenShift Container Platform egress router runs a service that redirects traffic to a
specified remote server, using a private source IP address that is not used for
anything else. The service allows pods to talk to servers that are set up
to only allow access from whitelisted IP addresses.

The egress router is not intended for every outgoing connection. Creating large
numbers of egress routers can push the limits of your network hardware. For
example, creating an egress router for every project or application could exceed
the number of local MAC addresses that the network interface can handle before
falling back to filtering MAC addresses in software.

Currently, the egress router is not compatible with Amazon AWS, Azure Cloud,
or any other cloud platform that does not support layer 2 manipulations due to
their incompatibility with macvlan traffic.

Deployment Considerations

The egress router adds a second IP address and MAC address to the node’s primary
network interface. If you are not running OpenShift Container Platform on bare metal, you may
need to configure your hypervisor or cloud provider to allow the additional
address.

Red Hat OpenStack Platform

If you are deploying OpenShift Container Platform on Red Hat OpenStack Platform, you need to
whitelist the IP and MAC addresses on your OpenStack environment, otherwise
communication will fail.

The egress router can run in three different modes: redirect mode, HTTP proxy
mode, and DNS proxy mode. Redirect mode works for all services except for HTTP
and HTTPS. For HTTP and HTTPS services, use HTTP proxy mode. For TCP-based
services with IP addresses or domain names, use DNS proxy mode.

Deploying an Egress Router Pod in Redirect Mode

In redirect mode, the egress router sets up iptables rules to redirect traffic
from its own IP address to one or more destination IP addresses. Client pods
that want to make use of the reserved source IP address must be modified to
connect to the egress router rather than connecting directly to the destination
IP.

1. Creates a Macvlan network interface on the primary network interface, and
moves it into the pod’s network namespace before starting the egress-router
container. Preserve the quotation marks around "true". Omitting them results
in errors. To create the Macvlan interface on a network interface other than
the primary one, set the annotation value to the name of that interface, for
example, eth1.

2. IP address from the physical network that the node is on and is reserved by the
cluster administrator for use by this pod. Optionally, you can include the
subnet length, the /24 suffix, so that a proper route to the local subnet can
be set up. If you do not specify a subnet length, then the egress router can
access only the host specified with the EGRESS_GATEWAY variable and no other
hosts on the subnet.

3. Same value as the default gateway used by the node.

4. The external server to direct traffic to. Using this example,
connections to the pod are redirected to 203.0.113.25, with a source IP address
of 192.168.12.99.

5. This tells the egress router image that it is being deployed as an
"init container". Previous versions of OpenShift Container Platform (and the egress
router image) did not support this mode and had to be run as an
ordinary container.

6. The pod is only deployed to nodes with the label site=springfield-1.
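
Taken together, the settings described above correspond to a redirect-mode pod
definition similar to the following sketch; the pod name, addresses, labels,
and image references are illustrative and can vary by environment and version:

apiVersion: v1
kind: Pod
metadata:
  name: egress-1
  labels:
    name: egress-1
  annotations:
    pod.network.openshift.io/assign-macvlan: "true"   # callout 1
spec:
  initContainers:
  - name: egress-router
    image: registry.access.redhat.com/openshift3/ose-egress-router
    securityContext:
      privileged: true
    env:
    - name: EGRESS_SOURCE        # callout 2: reserved IP on the node's physical network
      value: 192.168.12.99/24
    - name: EGRESS_GATEWAY       # callout 3: the node's default gateway
      value: 192.168.12.1
    - name: EGRESS_DESTINATION   # callout 4: the external server to direct traffic to
      value: 203.0.113.25
    - name: EGRESS_ROUTER_MODE   # callout 5: run the egress router as an init container
      value: init
  containers:
  - name: egress-router-wait
    image: registry.access.redhat.com/openshift3/ose-pod
  nodeSelector:
    site: springfield-1          # callout 6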

Create the pod using the above definition:

$ oc create -f <pod_name>.json

To check to see if the pod has been created:

$ oc get pod <pod_name>

Ensure other pods can find the pod’s IP address by creating a service to point to the egress router:
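
For example, a service similar to the following (the name and ports are
illustrative) selects the egress router pod by its label:

apiVersion: v1
kind: Service
metadata:
  name: egress-1
spec:
  ports:
  - name: http
    port: 80
  - name: https
    port: 443
  type: ClusterIP
  selector:
    name: egress-1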

Your pods can now connect to this service. Their connections are redirected to
the corresponding ports on the external server, using the reserved egress IP
address.

The egress router setup is performed by an "init container" created from the
openshift3/ose-egress-router image, and that container is run privileged so
that it can configure the Macvlan interface and set up iptables rules. After
it finishes setting up the iptables rules, it exits and the openshift3/ose-pod
container will run (doing nothing) until the pod is killed.

The environment variables tell the egress-router image what addresses to use; it
will configure the Macvlan interface to use EGRESS_SOURCE as its IP address,
with EGRESS_GATEWAY as its gateway.

NAT rules are set up so that connections to any TCP or UDP port on the
pod’s cluster IP address are redirected to the same port on
EGRESS_DESTINATION.

If only some of the nodes in your cluster are capable of claiming the specified
source IP address and using the specified gateway, you can specify a
nodeName or nodeSelector indicating which nodes are acceptable.

Redirecting to Multiple Destinations

In the previous example, connections to the egress pod (or its corresponding
service) on any port are redirected to a single destination IP. You can also
configure different destination IPs depending on the port:

1. IP address from the physical network that the node is on and is reserved by the
cluster administrator for use by this pod. Optionally, you can include the
subnet length, the /24 suffix, so that a proper route to the local subnet can
be set up. If you do not specify a subnet length, then the egress router can
access only the host specified with the EGRESS_GATEWAY variable and no other
hosts on the subnet.

2. EGRESS_DESTINATION uses YAML syntax for its values, and can be a multi-line string. See the following for more information.
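
For example, an EGRESS_DESTINATION value of the following form (the addresses
are illustrative) configures several port-based mappings and a fallback:

- name: EGRESS_DESTINATION
  value: |
    80 tcp 203.0.113.25
    8080 tcp 203.0.113.26 80
    8443 tcp 203.0.113.26 443
    203.0.113.27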

Each line of EGRESS_DESTINATION can be one of three types:

<port> <protocol> <IP address> - This says that incoming
connections to the given <port> should be redirected to the same
port on the given <IP address>. <protocol> is either tcp or
udp. In the example above, the first line redirects traffic from
local port 80 to port 80 on 203.0.113.25.

<port> <protocol> <IP address> <remote port> - As above, except
that the connection is redirected to a different <remote port> on
<IP address>. In the example above, the second and third lines
redirect local ports 8080 and 8443 to remote ports 80 and 443 on
203.0.113.26.

<fallback IP address> - If the last line of EGRESS_DESTINATION
is a single IP address, then any connections on any other port will be
redirected to the corresponding port on that IP address (for example,
203.0.113.27 in the example above). If there is no fallback IP address,
then connections on other ports are simply rejected.

Using a ConfigMap to specify EGRESS_DESTINATION

For a large or frequently-changing set of destination mappings, you
can use a ConfigMap to externally maintain the list, and have the egress router
pod read it from there. This comes with the advantage of project administrators
being able to edit the ConfigMap, whereas they may not be able to edit the Pod
definition directly, because it contains a privileged container.

IP address from the physical network that the node is on and is reserved by the
cluster administrator for use by this pod. Optionally, you can include the
subnet length, the /24 suffix, so that a proper route to the local subnet can
be set up. If you do not specify a subnet length, then the egress router can
access only the host specified with the EGRESS_GATEWAY variable and no other
hosts on the subnet.
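
As a sketch (the ConfigMap name, file name, and key are illustrative), save the
destination mappings to a file and create a ConfigMap from it:

$ oc create configmap egress-routes --from-file=destination=my-egress-destination.txt

Then, in the egress router pod definition, reference the ConfigMap instead of
specifying EGRESS_DESTINATION inline:

env:
- name: EGRESS_DESTINATION
  valueFrom:
    configMapKeyRef:
      name: egress-routes
      key: destination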

The egress router does not automatically update when the ConfigMap changes.
Restart the pod to get updates.

Deploying an Egress Router HTTP Proxy Pod

In HTTP proxy mode, the egress router runs as an HTTP proxy on port 8080.
This only works for clients talking to HTTP or HTTPS-based services, but usually
requires fewer changes to the client pods to get them to work. Programs can be
told to use an HTTP proxy by setting an environment variable.

1. Creates a Macvlan network interface on the primary network interface, then
moves it into the pod’s network namespace before starting the egress-router
container. Preserve the quotation marks around "true". Omitting them results
in errors.

2. IP address from the physical network that the node is on and is reserved by the
cluster administrator for use by this pod. Optionally, you can include the
subnet length, the /24 suffix, so that a proper route to the local subnet can
be set up. If you do not specify a subnet length, then the egress router can
access only the host specified with the EGRESS_GATEWAY variable and no other
hosts on the subnet.

3. Same value as the default gateway used by the node itself.

4. This tells the egress router image that it is being deployed as
part of an HTTP proxy, and so it should not set up iptables
redirecting rules.

5. A string or YAML multi-line string specifying how to configure the
proxy. Note that this is specified as an environment variable in the
HTTP proxy container, not with the other environment variables in the
init container.

You can specify any of the following for the EGRESS_HTTP_PROXY_DESTINATION
value. You can also use *, meaning "allow connections to all remote
destinations". Each line in the configuration specifies one group of connections
to allow or deny:
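
For example, a value similar to the following (the domains and addresses are
illustrative) denies one domain and one subnet, allows a single additional
host, and then allows all other destinations:

!*.example.com
!192.168.1.0/24
192.168.2.1
*

Client pods can then be pointed at the proxy by setting the standard proxy
environment variables to the egress router’s service; the service name and
project are illustrative:

env:
- name: http_proxy
  value: http://egress-http-proxy.my-project.svc:8080/
- name: https_proxy
  value: http://egress-http-proxy.my-project.svc:8080/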

Using the http_proxy and https_proxy environment variables is not necessary
for all setups. If the above does not create a working setup, then consult the
documentation for the tool or software you are running in the pod.

Deploying an Egress Router DNS Proxy Pod

In DNS proxy mode, the egress router runs as a DNS proxy for TCP-based
services from its own IP address to one or more destination IP addresses. Client
pods that want to make use of the reserved, source IP address must be modified
to connect to the egress router rather than connecting directly to the
destination IP. This ensures that external destinations treat traffic as though
it were coming from a known source.

1. Using the pod.network.openshift.io/assign-macvlan annotation creates a Macvlan
network interface on the primary network interface, then moves it into the
pod’s network namespace before starting the egress-router-setup container.
Preserve the quotation marks around "true". Omitting them results in errors.

2. IP address from the physical network that the node is on and is reserved by the
cluster administrator for use by this pod. Optionally, you can include the
subnet length, the /24 suffix, so that a proper route to the local subnet can
be set up. If you do not specify a subnet length, then the egress router can
access only the host specified with the EGRESS_GATEWAY variable and no other
hosts on the subnet.

3. Same value as the default gateway used by the node itself.

4. This tells the egress router image that it is being deployed as
part of a DNS proxy, and so it should not set up iptables
redirecting rules.

5. This uses the YAML syntax for a multi-line string. See below for details.
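
For example, an EGRESS_DNS_PROXY_DESTINATION value of the following form
(the addresses and host names are illustrative) configures the mappings
discussed below:

- name: EGRESS_DNS_PROXY_DESTINATION
  value: |
    80 203.0.113.25
    100 example.com
    8080 203.0.113.26 80
    8443 foobar.com 443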

Each line of EGRESS_DNS_PROXY_DESTINATION can be set in one of two ways:

<port> <remote address> - This says that incoming connections to the given
<port> should be proxied to the same TCP port on the given <remote
address>. <remote address> can be an IP address or a DNS name. For a DNS
name, resolution is done at runtime. In the example above, the first line
proxies TCP traffic from local port 80 to port 80 on 203.0.113.25. The second
line proxies TCP traffic from local port 100 to port 100 on example.com.

<port> <remote address> <remote port> - As above, except
that the connection is proxied to a different <remote port> on
<remote address>. In the example above, the third line
proxies local port 8080 to remote port 80 on 203.0.113.26 and the fourth line
proxies local port 8443 to remote port 443 on foobar.com.

Ensure other pods can find the pod’s IP address by creating a service to point to the egress router:
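
A service similar to the following sketch (the names are illustrative; add one
service port for each proxied port) points at the DNS proxy egress router pod:

apiVersion: v1
kind: Service
metadata:
  name: egress-dns-svc
spec:
  ports:
  - name: con1
    protocol: TCP
    port: 80
  - name: con2
    protocol: TCP
    port: 100
  type: ClusterIP
  selector:
    name: egress-dns-proxy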

1. Ensure replicas is set to 1, because only one pod can be using a given
EGRESS_SOURCE value at any time. This means that only a single copy of the
router will be running, on a node with the label site=springfield-1.

2. IP address from the physical network that the node is on and is reserved by the
cluster administrator for use by this pod. Optionally, you can include the
subnet length, the /24 suffix, so that a proper route to the local subnet can
be set up. If you do not specify a subnet length, then the egress router can
access only the host specified with the EGRESS_GATEWAY variable and no other
hosts on the subnet.

Create the replication controller using the definition:

$ oc create -f <replication_controller>.json

To verify, check that the replication controller has been created:

$ oc describe rc <replication_controller>

Using iptables Rules to Limit Access to External Resources

Some cluster administrators may want to perform actions on outgoing
traffic that do not fit within the model of EgressNetworkPolicy or the
egress router. In some cases, this can be done by creating iptables
rules directly.

For example, you could create rules that log traffic to particular
destinations, or to prevent more than a certain number of outgoing
connections per second.

OpenShift Container Platform does not provide a way to add custom iptables rules
automatically, but it does provide a place where such rules can be
added manually by the administrator. Each node, on startup, will
create an empty chain called OPENSHIFT-ADMIN-OUTPUT-RULES in the
filter table (assuming that the chain does not already exist). Any
rules added to that chain by an administrator will be applied to all
traffic going from a pod to a destination outside the cluster (and not
to any other traffic).

There are a few things to watch out for when using this functionality:

It is up to you to ensure that rules get created on each node;
OpenShift Container Platform does not provide any way to make that happen
automatically.

The rules are not applied to traffic that exits the cluster via an
egress router, and they run after EgressNetworkPolicy rules are applied
(and so will not see traffic that is denied by an
EgressNetworkPolicy).

The handling of connections from pods to nodes or pods to the master
is complicated, because nodes have both "external" IP addresses and
"internal" SDN IP addresses. Thus, some pod-to-node/master traffic may
pass through this chain, but other pod-to-node/master traffic may
bypass it.

Enabling Static IPs for External Project Traffic

As a cluster administrator, you can assign specific, static IP addresses to
projects, so that project traffic is easily recognizable externally. This is
different from the default egress router, which is used to send traffic to
specific destinations.

Recognizable IP traffic increases cluster security by ensuring the origin is
visible. Once enabled, all outgoing external connections from the specified
project will share the same, fixed source IP, meaning that any external
resources can recognize the traffic.

Unlike the egress router, this is subject to EgressNetworkPolicy firewall
rules.

The egressIPs field is an array. While in earlier releases it could only
contain a single IP address, as of OpenShift Container Platform version 3.10 egressIPs can
be set to two or more IP addresses on different nodes to provide high
availability. If multiple egress IP addresses are set, pods use the first IP in
the list for egress, but if the node hosting that IP address fails, pods will
switch to using the next IP in the list after a short delay.
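
For example, to assign egress IP addresses to a project, patch its
NetNamespace object; the project name and addresses are illustrative:

$ oc patch netnamespace project1 -p '{"egressIPs": ["192.168.1.100", "192.168.1.101"]}'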

Manually assign the egress IP to the desired node hosts. Set the egressIPs
field on the HostSubnet object on the node host. Include as many IPs as you
want to assign to that node host:
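
For example, to host the first address on one node and the second on another
(the node names and addresses are illustrative):

$ oc patch hostsubnet node1.example.com -p '{"egressIPs": ["192.168.1.100"]}'
$ oc patch hostsubnet node2.example.com -p '{"egressIPs": ["192.168.1.101"]}'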

Egress IPs are implemented as additional IP addresses on the primary network
interface, and must be in the same subnet as the node’s primary IP.
Additionally, any external IPs should not be configured in any Linux network
configuration files, such as ifcfg-eth0.

Allowing additional IP addresses on the primary network interface might require
extra configuration when using some cloud or VM solutions.

If the above is enabled for a project, all egress traffic from that project will
be routed to the node hosting that egress IP, then connected (using NAT) to that
IP address. If egressIPs is set on a NetNamespace, but there is no node
hosting that egress IP, then egress traffic from the namespace will be dropped.

Enabling Multicast

At this time, multicast is best used for low-bandwidth coordination or service
discovery, and not as a high-bandwidth solution.

Multicast traffic between OpenShift Container Platform pods is disabled by default. If you
are using the ovs-multitenant or ovs-networkpolicy plugin, you can enable
multicast on a per-project basis by setting an annotation on the project’s
corresponding netnamespace object:
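
For example, to enable multicast for a project named <project_name>:

$ oc annotate netnamespace <project_name> netnamespace.network.openshift.io/multicast-enabled=true

To disable multicast again, remove the annotation:

$ oc annotate netnamespace <project_name> netnamespace.network.openshift.io/multicast-enabled-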

In an isolated project, multicast packets sent by a pod will be delivered to
all other pods in the project.

If you have joined networks together, you will need to enable multicast in
each project’s netnamespace in order for it to take effect in any of the
projects. Multicast packets sent by a pod in a joined network will be delivered
to all pods in all of the joined-together networks.

To enable multicast in the default project, you must also enable it in the
kube-service-catalog project and all other projects that have been made
global. Global projects are not "global" for purposes of multicast; multicast
packets sent by a pod in a global project will only be delivered to pods in
other global projects, not to all pods in all projects. Likewise, pods in global
projects will only receive multicast packets sent from pods in other global
projects, not from all pods in all projects.

When using the ovs-networkpolicy plugin:

Multicast packets sent by a pod will be delivered to all other pods in the
project, regardless of NetworkPolicy objects. (Pods may be able to communicate
over multicast even when they can’t communicate over unicast.)

Multicast packets sent by a pod in one project will never be delivered to pods
in any other project, even if there are NetworkPolicy objects allowing
communication between the two projects.

Enabling NetworkPolicy

The ovs-subnet and ovs-multitenant plug-ins have their own legacy models of
network isolation and do not support Kubernetes NetworkPolicy. However,
NetworkPolicy support is available by using the ovs-networkpolicy plug-in.

The Kubernetes v1 NetworkPolicy features are available in OpenShift Container Platform,
with some exceptions: egress policy types, IPBlock, and combining podSelector
and namespaceSelector are not available.

Do not apply NetworkPolicy features on default OpenShift Container Platform projects, because they can disrupt communication with the cluster.

In a cluster configured to use the ovs-networkpolicy plug-in, network
isolation is controlled entirely by NetworkPolicy objects. By default, all
pods in a project are accessible from other pods and network endpoints. To
isolate one or more pods in a project, you can create NetworkPolicy objects
in that project to indicate the allowed incoming connections. Project
administrators can create and delete NetworkPolicy objects within their own
project.

Pods that do not have NetworkPolicy objects pointing to them are fully
accessible, whereas pods that have one or more NetworkPolicy objects pointing
to them are isolated. These isolated pods only accept connections that are
accepted by at least one of their NetworkPolicy objects.

Following are a few sample NetworkPolicy object definitions supporting
different scenarios:

Deny All Traffic

To make a project "deny by default" add a NetworkPolicy object that
matches all pods but accepts no traffic.
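
A minimal policy of this kind might look like the following sketch; the policy
name is illustrative, and the apiVersion can vary by cluster version:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-by-default
spec:
  podSelector: {}
  ingress: []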

NetworkPolicy objects are additive, which means you can combine multiple
NetworkPolicy objects together to satisfy complex network requirements.

For example, for the NetworkPolicy objects defined in previous samples, you
can define both allow-same-namespace and allow-http-and-https policies
within the same project. This allows the pods with the label role=frontend to
accept any connection allowed by either policy: connections on any port from
pods in the same namespace, and connections on ports 80 and 443 from pods in
any namespace.
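
For reference, sketches consistent with the behavior described above (the
names and the apiVersion are illustrative). The first allows connections from
any pod in the same namespace:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-same-namespace
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector: {}

The second allows connections to role=frontend pods on ports 80 and 443 from
any namespace:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-http-and-https
spec:
  podSelector:
    matchLabels:
      role: frontend
  ingress:
  - ports:
    - protocol: TCP
      port: 80
    - protocol: TCP
      port: 443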

Using NetworkPolicy Efficiently

NetworkPolicy objects allow you to isolate pods that are differentiated from
one another by labels, within a namespace.

It is inefficient to apply NetworkPolicy objects
to large numbers of individual pods in a single namespace.
Pod labels do not exist at the IP level, so NetworkPolicy objects generate
a separate OVS flow rule for every single possible link between every pod
selected with podSelector.

For example, if the spec podSelector and the ingress podSelector within a
NetworkPolicy object each match 200 pods, then 40,000 (200*200) OVS flow
rules are generated. This might slow down the machine.

To reduce the amount of OVS flow rules, use namespaces to contain groups
of pods that need to be isolated.

NetworkPolicy objects that select a whole namespace, by using
namespaceSelectors or empty podSelectors, only generate a single OVS flow rule
that matches the VXLAN VNID of the namespace.

Keep the pods that do not need
to be isolated in their original namespace, and
move the pods that require isolation into one or more different namespaces.

Create additional
targeted cross-namespace policies to allow the specific traffic that you do want
to allow from the isolated pods.

NetworkPolicy and Routers

When using the ovs-multitenant plug-in, traffic from the routers is automatically allowed into all namespaces. This is because the routers are
usually in the default namespace, and all namespaces allow connections from
pods in that namespace. With the ovs-networkpolicy plug-in, this does not
happen automatically. Therefore, if you have a policy that isolates a namespace
by default, you need to take additional steps to allow routers to access it.

One option is to create a policy for each service that allows access from all
sources.
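
For example, a policy similar to the following sketch (the policy name and pod
labels are illustrative) selects the service’s pods and allows ingress from
any source:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-to-database-service
spec:
  podSelector:
    matchLabels:
      role: database
  ingress:
  - {}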

This allows routers to access the service, but also allows pods in other
users' namespaces to access it. This should not cause any issues, as those
pods can normally access the service by using the public router.

Alternatively, you can create a policy allowing full access from the default namespace, as in the ovs-multitenant plug-in:

Add a label to the default namespace.

If you labeled the default project with the default label in a previous
procedure, then skip this step. The cluster administrator role is required to
add labels to namespaces.

$ oc label namespace default name=default

Create policies allowing connections from that namespace.

Perform this step for each namespace you want to allow connections into. Users with the Project Administrator role can create policies.
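
A policy similar to the following sketch (the policy name is illustrative)
allows connections from any pod in the namespace labeled name=default:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-from-default-namespace
spec:
  podSelector: {}
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: default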

Enabling HTTP Strict Transport Security

HTTP Strict Transport Security (HSTS) policy is a security enhancement, which
ensures that only HTTPS traffic is allowed on the host. Any HTTP requests are
dropped by default. This is useful for ensuring secure interactions with
websites, or to offer a secure application for the user’s benefit.

When HSTS is enabled, HSTS adds a Strict Transport Security header to HTTPS
responses from the site. You can use the insecureEdgeTerminationPolicy value
in a route to redirect HTTP to HTTPS. However, when HSTS is enabled, the
client changes all requests from the HTTP URL to HTTPS before the request is
sent, eliminating the need for a redirect. This is not required to be
supported by the client, and can be disabled by setting max-age=0.

HSTS works only with secure routes (either edge terminated or re-encrypt). The
configuration is ineffective on HTTP or passthrough routes.

To enable HSTS on a route, add the haproxy.router.openshift.io/hsts_header
value to the edge terminated or re-encrypt route:
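
For example, an annotation similar to the following (the max-age value is
illustrative) enables HSTS for one year, includes subdomains, and allows
preloading:

metadata:
  annotations:
    haproxy.router.openshift.io/hsts_header: max-age=31536000;includeSubDomains;preload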

Ensure there are no spaces and no other values in the parameters in the haproxy.router.openshift.io/hsts_header value. Only max-age is required.

The required max-age parameter indicates the length of time, in seconds, that
the HSTS policy is in effect. The client updates max-age whenever a response
with an HSTS header is received from the host. When max-age times out, the
client discards the policy.

The optional includeSubDomains parameter tells the client that all subdomains
of the host are to be treated the same as the host.

If max-age is greater than 0, the optional preload parameter allows external
services to include this site in their HSTS preload lists. For example, sites
such as Google can construct a list of sites that have preload set. Browsers
can then use these lists to determine which sites to only talk to over HTTPS,
even before they have interacted with the site. Without preload set, they need
to have talked to the site over HTTPS to get the header.

Troubleshooting Throughput Issues

Sometimes applications deployed through OpenShift Container Platform can cause
network throughput issues such as unusually high latency between specific services.

Use the following methods to analyze performance issues if pod logs do not reveal any cause of the problem:

Use a packet analyzer, such as tcpdump, to analyze traffic between a pod and its node.

For example, run the tcpdump tool on each pod while reproducing the behavior that led to the issue.
Review the captures on both sides to compare send and receive timestamps to analyze the latency of traffic to/from a pod.
Latency can occur in OpenShift Container Platform if a node interface is overloaded with traffic from other pods, storage devices, or the data plane.
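
A command of the following form, run on the node that hosts the pods, captures
the traffic between two pod IP addresses (the capture filter is illustrative):

# tcpdump -s 0 -i any -w /tmp/dump.pcap host <podip_1> and host <podip_2>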

podip is the IP address for the pod. Run the following command to get the IP address of the pods:

# oc get pod <podname> -o wide

tcpdump generates a file at /tmp/dump.pcap containing all traffic between these two pods. Ideally, run the analyzer shortly
before the issue is reproduced and stop the analyzer shortly after the issue is finished reproducing to minimize the size of the file.
You can also run a packet analyzer between the nodes (eliminating the SDN from the equation) with:

# tcpdump -s 0 -i any -w /tmp/dump.pcap port 4789

Use a bandwidth measuring tool, such as iperf, to measure streaming throughput and UDP throughput. Run the tool from the pods first, then from the nodes
to attempt to locate any bottlenecks. The iperf3 tool is included as part of RHEL 7.
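
For example, a basic check with iperf3 (the server address and target rate are
illustrative) runs the tool in server mode on one endpoint:

# iperf3 -s

and in client mode on the other, first for TCP streaming throughput and then
for UDP throughput (-u selects UDP; -b sets the target bandwidth):

# iperf3 -c <server_ip>
# iperf3 -c <server_ip> -u -b 1G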