
Connecting Kubernetes and Docker

This blog is meant to help fellow developers who work with Docker and Kubernetes simultaneously. They know that Kubernetes runs on top of the Docker engine, but there is a catch:

1. Containers running directly on Docker, and

2. Containers running inside of a Kubernetes Pod,

run completely isolated from each other, without even knowing of each other's existence. But we are developers, so there are situations where we desperately need plain Docker containers to communicate with a Kubernetes Pod. By communication, I mean the transmission of data from Pod to container and vice versa, using protocols such as TCP, HTTP, HTTPS, UDP, sockets, WebSockets and more.

Now, at this point, an experienced Kubernetes user may find this blog unnecessary, thinking that K8s lets you expose the Pod through a NodePort and make it accessible to anyone authorized to reach it, whether the client is a plain Docker container or anything else. That's correct, but the following constraints made it a challenge:

1. You can't expose the Pod through a NodePort (security concerns).

2. Even if you did expose it through a NodePort, consider automating the cluster deployment: the Pod creates a container, and that container then needs to send back an acknowledgment message to complete a successful deployment. How will the container find the Pod?

These are exactly the constraints I was dealing with on a Hyperledger project, a blockchain framework that runs on Docker; my part was to deploy it on Kubernetes.

After long research and discussions online, only one approach remained: the Docker container must use the Kubernetes DNS (kube-dns) to find the K8s Pod, which then supports a smooth communication channel.

Now, exposing kube-dns is not a good idea, but for now it is the best option we have. Exposing kube-dns through a NodePort is not a hefty task: by default it runs behind a ClusterIP service, and applying a modified service.yaml exposes it on NodePort 30053.
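A minimal sketch of such a service might look like the following. This is an assumption-laden example: the `k8s-app: kube-dns` label and the `kube-system` namespace are the usual defaults, but verify them on your cluster (for example with `kubectl get svc -n kube-system --show-labels`).

```yaml
# kube-dns-nodeport.yaml -- sketch; label and namespace may differ per cluster
apiVersion: v1
kind: Service
metadata:
  name: kube-dns-external
  namespace: kube-system
spec:
  type: NodePort
  selector:
    k8s-app: kube-dns        # default label on the kube-dns pods
  ports:
    - name: dns
      protocol: UDP
      port: 53
      targetPort: 53
      nodePort: 30053        # the port we will forward to from the proxy
```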

The next catch in the story: DNS lookups go to port 53, over UDP. So the new constraint is that we can't expose kube-dns on port 53 of the node, since 53 is a privileged port and falls outside the NodePort range Kubernetes allows (30000-32767 by default).

Now we need something that listens for DNS UDP packets on port 53 and forwards them to port 30053, where kube-dns is exposed. In short, we want a proxy server that listens for DNS requests and forwards them to kube-dns. For this purpose, an Nginx proxy is a good option: its stream module can proxy UDP traffic, and a small conf file is enough to set it up.
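Such a configuration might look like the sketch below. It assumes nginx is built with the stream module and that the proxy runs on the node itself, so `127.0.0.1:30053` reaches the NodePort; replace that address with your node's IP if the proxy runs elsewhere.

```nginx
# /etc/nginx/nginx.conf -- sketch of a UDP DNS proxy (requires the stream module)
worker_processes 1;

events {
    worker_connections 1024;
}

stream {
    upstream kube_dns {
        # NodePort where kube-dns is exposed; use the node IP if remote
        server 127.0.0.1:30053;
    }

    server {
        listen 53 udp;           # accept DNS queries on the standard port
        proxy_pass kube_dns;
        proxy_responses 1;       # one response per DNS query, then end the session
        proxy_timeout 5s;
    }
}
```

Since port 53 is privileged, nginx must be started as root (or granted `CAP_NET_BIND_SERVICE`).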

Finally, add this proxy's address as a DNS server in the Docker daemon configuration, and we are done.
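Concretely, this can go in `/etc/docker/daemon.json` (a sketch; `10.0.0.5` is a placeholder for the machine running the nginx proxy):

```json
{
    "dns": ["10.0.0.5"]
}
```

Restart the Docker daemon for this to take effect, or set it per container instead with `docker run --dns 10.0.0.5 ...`.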

In the end, the solution worked very well. There is room for a better solution in the future, but for now this makes things work.