What gets proxied

Networking access from the cluster

If you use the --expose option for telepresence with a given port, the pod will forward traffic it receives on that port to your local process.
This allows the Kubernetes or OpenShift cluster to talk to your local process as if it were running in the pod.

By default the remote port and the local port match.
Here we expose port 8080 as port 8080 on a remote Deployment called example:
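A minimal sketch, assuming we let telepresence create a new Deployment named example for this session (the name and port are illustrative):

$ telepresence --new-deployment example --expose 8080 --run-shell

Anything in the cluster that connects to port 8080 on the example pod now reaches whatever is listening on port 8080 on your machine.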

You can't expose ports below 1024 on clusters that don't support running images as root.
OpenShift, for example, disallows running images as root by default.

Networking access to the cluster

The locally running process wrapped by telepresence has access to everything that a normal Kubernetes pod would have access to.
That means Service instances, their corresponding DNS entries, and any cloud resources you can normally access from Kubernetes.

To see this in action, let's start a Service and Deployment called "helloworld" in Kubernetes in the default namespace "default", and wait until it's up and running.
The resulting Service will have three DNS records you can use:

helloworld, from a pod in the default namespace.

helloworld.default, from anywhere in the Kubernetes cluster.

helloworld.default.svc.cluster.local, from anywhere in the Kubernetes cluster.
This last form will not work when using telepresence with --method=vpn-tcp on Linux (see the relevant ticket for details).

We'll check the current Kubernetes context and then start a new pod:
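Checking the context is plain kubectl; the output is simply whatever cluster your kubeconfig currently points at:

$ kubectl config current-context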

$ kubectl run --expose helloworld --image=nginx:alpine --port=80

Wait 30 seconds and make sure a new pod is available in Running state:
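One way to check, assuming the pod carries the run=helloworld label that kubectl run applies (the exact pod name will vary):

$ kubectl get pod --selector=run=helloworld

Once it is Running, a process wrapped by telepresence can reach it by any of the DNS names above, for example:

$ telepresence --run curl http://helloworld.default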

Networking access to cloud resources

When using --method=inject-tcp, the subprocess run by telepresence will have all of its traffic routed via the cluster.
That means transparent access to cloud resources like databases that are accessible from the Kubernetes cluster's private network or VPC.
It also means public servers like google.com will be routed via the cluster, but again only for the subprocess run by telepresence via --run or --run-shell.
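A sketch of the distinction: the curl below is routed via the cluster because it is the subprocess telepresence runs, while the same curl in a separate terminal would use your normal network:

$ telepresence --method inject-tcp --run curl http://helloworld.default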

When using --method=vpn-tcp, all processes on the machine running telepresence will have access to the Kubernetes cluster.
Cloud resources will only be routed via the cluster if you explicitly specify them using --also-proxy <ip | ip range | hostname>.
Access to public websites should not be affected or changed in any way.
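A sketch, assuming a database that is reachable from the cluster's network at the illustrative hostname below:

$ telepresence --method vpn-tcp --also-proxy mydatabase.example.com --run-shell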

Environment variables

Environment variables set in the Deployment pod template will be available to your local process.
You also have access to all the environment variables Kubernetes sets automatically.
For example, here you can see the environment variables that get added for each Service:
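A sketch, assuming the helloworld Service created above still exists (the addresses shown are illustrative and will differ in your cluster):

$ telepresence --run env | grep HELLOWORLD_SERVICE
HELLOWORLD_SERVICE_HOST=10.0.0.12
HELLOWORLD_SERVICE_PORT=80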

Volumes

Volumes configured in the Deployment pod template will also be made available to your local process.
This will work better with read-only volumes with small files like Secret and ConfigMap; a local database server writing to a remote volume will be slow.

Volume support requires a small amount of work on your part.
The TELEPRESENCE_ROOT environment variable, set in the shell or subprocess run by telepresence, points at the root directory under which all the volumes can be found.
You will then need to use that environment variable as the root for the volume paths you open.
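A sketch, assuming the pod has the default service account token mounted (most clusters mount one automatically); the echo and ls run inside the shell telepresence starts:

$ telepresence --new-deployment example --run-shell
$ echo $TELEPRESENCE_ROOT
$ ls $TELEPRESENCE_ROOT/var/run/secrets/kubernetes.io/serviceaccount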

--method inject-tcp

The standard DNS entries for services.
E.g. redis-master and redis-master.default.svc.cluster.local will resolve to a working IP address.
These will work regardless of whether they existed when the proxy started.

TCP connections to other Service instances, regardless of whether they existed when the proxy was started.

Any environment variables that the Deployment explicitly configured for the pod.

TCP connections to any hostname/port; all but localhost will be routed via Kubernetes.
Typically this is useful for accessing cloud resources, e.g. an AWS RDS database.

TCP connections from Kubernetes to your local machine, for ports specified on the command line using --expose.

Access to volumes, including those for Secret and ConfigMap Kubernetes objects.