Kubernetes Tidbits

Let’s face it. Most of us are not using the Kubernetes CLI every day. This post is more of a reminder to myself: a list of little helpers that improve your Kubernetes command-line skills.

Kubernetes Contexts

Show all available contexts (e.g. Minikube, GKE, Oracle Wercker):

$ kubectl config get-contexts

To see only the configuration of the current context, use kubectl config view --minify.
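Once you know which contexts exist, switching between them is a single command. A minimal sketch; the context name docker-for-desktop is an example and depends on your setup:

```shell
# List all configured contexts; the active one is marked with '*'
kubectl config get-contexts

# Switch the active context (context name is an example)
kubectl config use-context docker-for-desktop
```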

YAML Output

Get the output of a deployment in the more readable YAML format:

$ kubectl get deployment my-nginx -o yaml
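Once you have the YAML, simple shell tools are often enough to pull out a single field. A minimal sketch using a hypothetical YAML fragment; in practice you would pipe the output of kubectl get deployment my-nginx -o yaml instead:

```shell
# Hypothetical fragment of the YAML output
# (the real output comes from your cluster):
deployment_yaml='apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 3'

# Pull a single field out of the YAML, e.g. the replica count
echo "$deployment_yaml" | grep 'replicas:' | awk '{print $2}'
```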

Talk to Pods from Outside the Cluster

This one is again more a note to myself. Altogether I have spent way too much time discovering how to talk to containers running in Kubernetes from outside the Kubernetes cluster (Kubernetes for Docker, Oracle, GKE).

Run a pod via kubectl run, then use kubectl expose deployment to expose it via NodePort:

Deploy the container with kubectl run microg --image=fmunz/microg --port 5555.

At this point you will not yet see it as a service (check with kubectl get services).

Expose the pod with kubectl expose deployment microg --type=NodePort. Note that other types are possible; see the section below for deployment with a YAML file.

Get the NodePort with kubectl describe service microg | grep NodePort.

You will now see the new service when running kubectl describe service microg.
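The grep step above still leaves the port embedded in a line of text. A small sketch of how to extract just the port number, using a hypothetical sample of the describe output (the port value 31895 is made up for illustration):

```shell
# Hypothetical line from `kubectl describe service microg` output
# (the real line comes from your cluster):
line='NodePort:                 <unset>  31895/TCP'

# Extract just the port number: third field, drop the /TCP suffix
nodeport=$(echo "$line" | awk '{print $3}' | cut -d/ -f1)
echo "$nodeport"
```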

Deploy with Service YAML

For the service type (the spec.type field in the service YAML), specify one of the following (partly taken from the K8s docs):

ClusterIP: Exposes the service on a cluster-internal IP. Choosing this value makes the service only reachable from within the cluster. This is the default ServiceType. To be able to talk to your service, start a Proxy server: kubectl proxy --port=8080

NodePort: Exposes the service on each Node’s IP at a static port (the NodePort). A ClusterIP service, to which the NodePort service will route, is automatically created. You’ll be able to contact the NodePort service, from outside the cluster, by requesting <nodeip>:<nodeport>. To find the NodePort, use the following command: kubectl describe service microg | grep NodePort

LoadBalancer: Exposes the service externally using a cloud provider’s load balancer. NodePort and ClusterIP services, to which the external load balancer will route, are automatically created.
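Putting the pieces together, a service manifest for the microg example above might look like the sketch below. Names and ports follow the example in this post; the run: microg selector assumes the run=microg label that kubectl run sets on its pods. This is a config fragment, not a definitive manifest:

```shell
# Apply a minimal NodePort service for the microg deployment
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: microg
spec:
  type: NodePort
  selector:
    run: microg
  ports:
  - port: 5555
    targetPort: 5555
EOF
```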


Other resources

I will add more here for sure. These days I am using K8s a lot 🙂 Also check out the following resources:

OCE with Wercker

I will present about using Kubernetes together with Wercker at the CODE conference in New York City. So stay tuned for slides and possibly a recording.

OCE with standard kubectl CLI

OCE runs standard upstream Kubernetes, so with an existing kubectl client that is correctly pointing to your OCE instance you can try your first K8s steps from the CLI. Here is a quick primer.

The first thing to note is that you should set your namespace if you are using the OCE trial. The reason is that the trial uses a shared cluster, and different users are assigned different namespaces. Don’t worry: if you are following the Wercker example given to the trial participants, the namespace will be set correctly.

Set your namespace, replacing fmunz with your own namespace:
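The original command is not reproduced above; a common way to set the default namespace for the current context is the following sketch (fmunz is the example namespace from this post):

```shell
# Make fmunz the default namespace for the current context
# (replace fmunz with your assigned namespace)
kubectl config set-context $(kubectl config current-context) --namespace=fmunz
```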

In more detail: I will present about the evolution of containers, from Docker, to Swarm, to container orchestration systems, Kubernetes, and managed Kubernetes (such as Oracle Container Engine or others). At the end I guess you will agree that Kubernetes is great and getting better every day, but you won’t want to manage your own Kubernetes cluster. Interestingly enough, Bob Quillin summarised my CODE presentation and the new Oracle strategy really well.

Of course we will have a lot of fun live coding with Mini, the Raspi cluster, again. I plan to demo the setup of the cluster, service deployment, load balancing, failover, etc. All this live on stage, hopefully with a really big screen for the projection.

Today I realised that my Serverless with Fn on Kubernetes article was published on DZone. That is great news. I hadn’t paid too much attention to DZone before, but lately realised that so much good content is published there. E.g. check out the refcards!

To reproduce the steps, first make sure the latest version of Docker with Kubernetes support is installed properly and Kubernetes is enabled (in my case this is 17.12.0-ce-mac45 from the edge channel).

Prerequisites and Checks

List the images of running Docker containers. This should show you the containers required for K8s if you enabled it in the Docker console under preferences:

$ docker container ls --format "table {{.Image}}"

Next, check if there are existing contexts. For example, I have minikube and GKE configured as well. Make sure the * (asterisk) is set to docker-for-desktop:
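A quick way to read off the active context from the kubectl config get-contexts output is to filter for the line marked with the asterisk. A sketch using hypothetical sample output (the context names are examples):

```shell
# Hypothetical sample of `kubectl config get-contexts` output:
contexts='CURRENT   NAME                 CLUSTER
*         docker-for-desktop   docker-for-desktop-cluster
          minikube             minikube'

# The line whose first field is '*' is the active context
echo "$contexts" | awk '$1 == "*" {print $2}'
```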

Microservice smoke test

The following steps are not necessary to run the Fn project. However, I first deployed a small microservice to see if Kubernetes was running fine on my Mac. Feel free to skip this entirely. To copy what I did, you can follow the steps for load balancing a microservice with K8s.

Fn on Kubernetes

Helm

Make sure your Kubernetes cluster is up and running and working correctly. We will use the K8s package manager Helm to install Fn.

Install Helm

Follow the instructions to [install Helm](https://docs.helm.sh/using_helm/#installing-helm) on your system; e.g. on a Mac it can be done with brew. Helm will talk to Tiller, a deployment on the K8s cluster.

Init Helm and provision Tiller

$ helm init
$HELM_HOME has been configured at /Users/frank/.helm.
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Happy Helming!

Install Fn

You can simply follow the instructions about installing Fn on Kubernetes. I put the steps here for completeness. First, let’s clone the fn-helm repo from GitHub:

$ git clone https://github.com/fnproject/fn-helm.git && cd fn-helm

Install chart dependencies (from requirements.yaml):

$ helm dep build fn

Then install the chart. I chose the release name fm-release:

$ helm install --name fm-release fn

Then make sure to set the FN_API_URL as described in the output of the command above.
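The exact export command is printed by helm install, so use that one; as a rough sketch of its typical shape for a LoadBalancer setup (the service name fm-release-fn-api is an assumption derived from the release name, not confirmed by this post):

```shell
# Point the Fn CLI at the Fn API service exposed by the chart
export FN_API_URL=http://$(kubectl get svc --namespace default fm-release-fn-api \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'):80
```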

This should be it! You should see the following deployment from the K8s console.

Try to run a function. For more details, check the Fn Helm instructions on GitHub.

Summary

Installing Fn on K8s with Helm should work on any Kubernetes cluster. Give it a try yourself, code some functions and run them on Fn / Kubernetes. Feel free to check out my Serverless slides.
