Kubernetes Dashboard, Authentication, Isolation

Hello All. Today we are going to look at Kubernetes Dashboard, Authentication, and Isolation.

The Code

Let’s put the code up front; that way, if you don’t want to bother with the article you can start by poking around on your own. Example scripts and manifests are located at the kube-dex-dashboard GitHub repo.

As a quick demonstration, try creating a deployment as a restricted user: the request is denied when issued via kubectl from the CLI, yet the very same deployment is permitted under the Dashboard.

So that is what – now for why. The problem occurs because, out-of-the-box, the Kubernetes Dashboard runs as a system-level process, normally with full cluster permissions. The trouble starts when a user authenticates and uses the Dashboard: every action the user takes effectively runs under that same system identity, no matter who the user actually is.

Effectively, we could not let our users take advantage of the Kubernetes Dashboard because of this privilege escalation. Bummer.

The Solution

Our solution works around this problem by creating multiple Dashboards – one for each authorized user. It’s not pretty, it’s not particularly scalable, but it works.

Let’s Look at dex, AUTHN, and AUTHZ

Before we jump into the specific multi-dashboard setup, let’s start by looking at authentication for our cluster. Kubernetes Authentication is implemented by the Kubernetes API Server; this makes sense because commands issued via kubectl (the Kubernetes CLI) execute against the API Server. So it follows that to configure authentication within Kubernetes, you will have specific options in your /etc/kubernetes/manifests/kube-apiserver.yaml manifest.

Kubernetes API Server Configuration and dex

The following is how we configured the API server to delegate authentication and authorization to dex:

--oidc-client-id – the OIDC client ID; the API Server accepts only tokens issued to this client, which is what ties incoming bearer tokens to our dex helper app

--oidc-ca-file – the CA that issues our certificates

--oidc-username-claim – as users are authenticated using the dex helper, a set of “claims” are returned. In our case, we map the sub claim to the username within the backing FreeIPA.

--oidc-groups-claim – we map the groups claim to the list of groups the authenticated user is a member of on the backing FreeIPA
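Putting those flags together, the relevant portion of /etc/kubernetes/manifests/kube-apiserver.yaml looks roughly like this. This is a sketch, not our exact manifest: the issuer URL follows the dex endpoint that appears later in this article, and the client ID and CA path are illustrative. Note that --oidc-issuer-url (not discussed above) is also required for OIDC delegation to work at all.

```yaml
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    # ... the usual API server flags, then the OIDC delegation to dex:
    - --oidc-issuer-url=https://kubeadm-clu1.hlsdev.local:32000
    - --oidc-client-id=kubernetes              # illustrative client ID
    - --oidc-ca-file=/etc/kubernetes/pki/openid-ca.pem   # illustrative path
    - --oidc-username-claim=sub
    - --oidc-groups-claim=groups
```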

All of this matters because our approach leverages user identities and group memberships to control access to Kubernetes API functions.

What an authentication looks like with dex

Authentication using dex requires us to go through quite a few steps – all of which deserve an article of their own. Suffice it to say that our shell script uses a lot of curl commands to set up the initial login, indicate the authorizations (scopes) to use, and extract the all-important bearer token. In our case, we have it all wrapped up so that we issue:
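A minimal sketch of that flow follows. The wrapper name sab-k8s.sh is real (it appears in the repo), but the invocation shown and the JSON response shape are simplified assumptions:

```shell
# Hypothetical invocation -- the real script wraps the full curl exchange with dex:
#   sab-k8s.sh login
# The final step pulls the id_token out of dex's JSON token response, something like:
resp='{"access_token":"xyz","id_token":"eyJhbGciOiJSUzI1NiJ9.example.sig","token_type":"bearer"}'
token=$(printf '%s' "$resp" | sed -n 's/.*"id_token":"\([^"]*\)".*/\1/p')
echo "$token"   # this is the bearer token used below
```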

That last item is the bearer token – this must be injected into every kubectl call so that the Kubernetes API Server can apply authorizations to the invoked query. Here’s an example of a denied query (my user does not have permissions to list all cluster namespaces):
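A sketch of what that denial looks like. This is a hypothetical session transcript (printed here rather than run against a live cluster), and the exact error text varies by Kubernetes version:

```shell
# Illustrative transcript of a denied query; the encoded user ID is elided.
cat <<'EOF'
$ sab-k8s.sh kubectl get namespaces
Error from server (Forbidden): namespaces is forbidden: User
"https://kubeadm-clu1.hlsdev.local:32000#..." cannot list namespaces at the cluster scope
EOF
```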

Our dex Users and Kubernetes Permissions

Each authorized user on our Kubernetes cluster is assigned a unique namespace; the user is given “owner” permissions on that namespace. Our users by default have the most minimal privileges elsewhere in the cluster.

We’ll cover roles and roleBindings below.

A Note: FreeIPA Users vs. Kubernetes API Server Users

As you review the GitHub repo code, the keen-eyed will observe that in the dex/rbac/sab-users.yaml file we define some template variables to set up a new user. Specifically:

<%= kubeadm_dex_user %> – This is the name on the FreeIPA domain (such as “abruce” or “fbar”). In other words, it is in plaintext.

<%= scope.function_k8s_dex_uid([kubeadm_dex_user, @kubeadm_dex_login['connector-id']]) -%> – This is actually a function that translates the plaintext kubeadm_dex_user to a base64-encoded value.

So what does that mean? It means that, internally to the Kubernetes API Server, a “user” is actually a reference to the provider plus a provider-specific encoding of the user ID. For example: my FreeIPA user ID abruce actually becomes https://kubeadm-clu1.hlsdev.local:32000#CmpicnVjZRIRbGRhcF9obHNkZXZfbG9jYWw= when represented by dex to the Kubernetes API Server. That presented a problem for us: we use Puppet to create roleBindings dynamically, so we had to translate the plaintext kubeadm_dex_user Puppet Hiera variable to the fully-encoded value expected by the Kubernetes API Server.

For the sake of completeness, here is the Puppet ERB function that performs this encoding. (We do not provide more Puppet settings because that would be a non-trivial task – we have lots of Puppet manifests and functions as part of our Kubernetes auto-deployment.)
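Since the function itself is tied to our Puppet setup, here is only the idea in plain shell, with an important caveat: dex's real encoding is its internal (protobuf-based) representation of the user ID plus connector ID, which this base64-over-a-pair sketch merely approximates.

```shell
# Approximation only: dex's actual wire format is version-specific; the point
# is simply "plaintext user in, opaque base64-encoded ID out".
user="abruce"
connector="ldap_hlsdev_local"
encoded=$(printf '%s/%s' "$user" "$connector" | base64 | tr -d '\n')
echo "$encoded"
```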

You are correct to be mystified; prior to dex version 2.4.1, users were written as plaintext (e.g. abruce as a user ID) rather than the base64-encoded value. In fact, this dex GitHub issue discusses the problem. And the fact that the same logic is not applied to group names is, well, confusing. But the short answer is that group claims are presented as-is (plaintext) while user IDs are encoded as we do above.

Of course – all of this is subject to change as soon as we upgrade our dex. Because why not…

So dex works – what is the problem?

So the above digression shows that our RBAC implementation works at the Kubernetes API layer. The problem arises because the Kubernetes Dashboard doesn’t actually use the bearer token or our authentication/authorization strategy. Instead, you create a single Dashboard instance, normally running as a system account, and then blithely tell your users: “Use kubectl proxy to access the Dashboard.” Doing that loses user isolation and per-user privileges, because your single Dashboard instance executes all Kubernetes API commands in its own service account’s context.

Brief Talk about Kubernetes Authorizations

We need to discuss Kubernetes Authorizations because they are at the heart of our Dashboard solution. Kubernetes authorization consists of roles and roleBindings. Roles contain one or more rules defining permitted API verbs on sets of resources, while roleBindings do exactly what they sound like – they bind an existing role to Kubernetes subjects (users, groups, or service accounts).

roles and clusterRoles

Here is a sample role we developed as part of the isolated Dashboard effort:
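Something like the following – a reconstruction of the shape rather than the exact file from the repo, with role and group names illustrative:

```yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: dashboard-proxy-<%= @kubeadm_clu -%>
rules:
- apiGroups: [""]
  resources: ["services/proxy"]   # lets users reach their Dashboard's endpoint
  verbs: ["get", "list"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: dashboard-proxy-<%= @kubeadm_clu -%>
  namespace: kube-system          # harmless but meaningless on a cluster-scoped object
roleRef:
  kind: ClusterRole
  name: dashboard-proxy-<%= @kubeadm_clu -%>
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: Group
  name: sab-users                 # hypothetical FreeIPA group
  apiGroup: rbac.authorization.k8s.io
```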

NB: The <%= @kubeadm_clu -%> is because we use Puppet ERB as part of our solution. It may not apply in your case.

NB #2: The keen-eyed will notice triumphantly that we use a clusterRoleBinding and that we provide a namespace – which is pointless because a cluster-wide binding has no namespace. We put it in there because it made the manifest read more easily to us during development, but feel free to remove it from your own implementation.

Be sure to notice that we leverage group memberships in the solution. Basically, if a given FreeIPA user is a member of a given group, that is enough to grant access to that user’s dashboard (as well as the cluster-wide role which permits a user to access the service endpoint defined in the kube-system namespace). You can use this same type of approach to set up your own security policies based on RBAC.

“Shadow” Accounts and Dashboard

Let’s tie the above together for a solution. Basically, we want to run not a single Dashboard – but multiple Dashboards, where each Dashboard runs as a “shadow” Kubernetes service account that has the same privileges as the corresponding user.
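As a sketch, the shadow setup for a hypothetical user abruce might look like this (names and image version are illustrative; the real manifests are in the repo). A roleBinding, not shown here, then grants dashboard-abruce the same “owner” role on the abruce namespace that the human user holds:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-abruce          # the "shadow" of FreeIPA user abruce
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes-dashboard-abruce
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubernetes-dashboard-abruce
  template:
    metadata:
      labels:
        app: kubernetes-dashboard-abruce
    spec:
      serviceAccountName: dashboard-abruce   # Dashboard runs as the shadow account
      containers:
      - name: kubernetes-dashboard
        image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3
```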

Accessing the Dashboard

We started the article above with an example of calling our sab-k8s.sh shell script to login. That shell script also wraps access to kubectl, so we use it to run a local proxy:

sab-k8s.sh kubectl proxy

As with a “normal” kubectl proxy, this permits local forwarding to the Dashboard instance (this, in fact, is exactly the same way that one would normally access a Dashboard). However, because we run proxied instances of each Dashboard where each instance has specialized permissions only for a particular dex user, the URI used to access the Dashboard is different from the Kubernetes standard one.
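Concretely, the per-user URI looks something like this – a sketch in which the service name and port scheme are assumptions following the per-user Dashboard naming used in this article:

```shell
# Hypothetical per-user Dashboard URI behind "sab-k8s.sh kubectl proxy" on :8001.
user="abruce"
url="http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard-${user}:/proxy/"
echo "$url"
```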

The fact that we got a response indicates that the Dashboard instance is up and running.

Still More Problems!

The biggest problems with this approach include:

The approach is kludgy – Duplicating user accounts with a “shadow” service account does not scale. In our case, we use automated shell scripts to detect new user accounts and – if the accounts are members of a particular FreeIPA group – we auto-create the corresponding shadow service account and provision the Dashboard.
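A dry-run sketch of that automation, with all names hypothetical – the real scripts query FreeIPA and apply the manifests directly rather than just echoing commands:

```shell
# For each member of the privileged group, emit the provisioning commands.
group_members="abruce fbar"               # in practice: looked up from FreeIPA
for user in $group_members; do
  echo "kubectl -n kube-system create serviceaccount dashboard-${user}"
  echo "kubectl -n kube-system apply -f dashboard-${user}.yaml"
done
```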

Despite the problems, the approach described in this article at least solves the problem of a single, monolithic Dashboard. And future Kubernetes Dashboard releases will no doubt address these shortcomings and obviate the need to run multiple Dashboard instances.