A Docker Captain's Blog

Docker | Kubernetes | Cloud

Apache Kafka is a distributed streaming platform. It is an open-source stream-processing software platform developed by LinkedIn and donated to the Apache Software Foundation. It is written in Scala and Java. The project aims to provide a unified, high-throughput, low-latency platform for handling real-time data feeds.

Let us spend a few minutes understanding what streaming is all about. A streaming platform has three major capabilities:

Publish and subscribe to streams of records, similar to a message queue or enterprise messaging system.

Store streams of records in a fault-tolerant durable way.

Process streams of records as they occur.

That said, Kafka is generally used for two broad classes of applications:

Building real-time streaming data pipelines that reliably get data between systems or applications

Building real-time streaming applications that transform or react to the streams of data

Kafka is run as a cluster on one or more servers that can span multiple datacenters. The Kafka cluster stores streams of records in categories called topics. Each record consists of a key, a value, and a timestamp.

Core APIs of Apache Kafka

The Producer API allows an application to publish a stream of records to one or more Kafka topics.

The Consumer API allows an application to subscribe to one or more topics and process the stream of records produced to them.

The Streams API allows an application to act as a stream processor, consuming an input stream from one or more topics and producing an output stream to one or more output topics, effectively transforming the input streams to output streams.

The Connector API allows building and running reusable producers or consumers that connect Kafka topics to existing applications or data systems. For example, a connector to a relational database might capture every change to a table.

In this blog post, I will showcase how to implement Apache Kafka on a 2-node Docker Swarm cluster running on AWS via Docker Desktop.

Initialising the Docker Swarm Manager Node

ubuntu@kafka-swarm-node1:~$ sudo docker swarm init --advertise-addr 172.31.53.71 --listen-addr 172.31.53.71:2377
Swarm initialized: current node (yui9wqfu7b12hwt4ig4ribpyq) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-xxxxxmr075to2v3k-decb975h5g5da7xxxx 172.31.53.71:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

Adding a Worker Node

ubuntu@kafka-swarm-node2:~$ sudo docker swarm join --token SWMTKN-1-2xjkynhin0n2zl7xxxk-decb975h5g5daxxxxxxxxn 172.31.53.71:2377
This node joined a swarm as a worker.
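Deploying the Kafka Stack

With the two-node Swarm ready, a Kafka stack can be deployed on top of it. Below is a minimal sketch of the kind of stack file that could be used (the images, ports and environment variables are assumptions, not the exact file from this setup; the advertised host is the manager IP used above), followed by the deploy command:

version: "3.3"
services:
  zookeeper:
    image: zookeeper:3.4
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka:latest
    ports:
      - "9092:9092"
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_HOST_NAME: 172.31.53.71
      KAFKA_ADVERTISED_PORT: 9092
    deploy:
      replicas: 1

ubuntu@kafka-swarm-node1:~$ sudo docker stack deploy -c kafka-stack.yml kafka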

DockerHub is a service provided by Docker for finding and sharing container images with your team. It is the world’s largest repository of container images, with an array of content sources including container community developers, open source projects and independent software vendors (ISVs) building and distributing their code in containers. Users get access to free public repositories for storing and sharing images, or can choose a subscription plan for private repos.

One can easily pull a Docker image from DockerHub and run the application in their environment flawlessly. But in case you want to set up a private registry, it is still possible. In this blog post, I will demonstrate how to build a private registry on Play with Docker in just 5 minutes.

Caution – please note that the Play with Docker platform is intended for demo and training purposes only. Instances are wiped after 4 hours, so it’s better to save your data before the session expires.

List stored images.

Shared local registry

Create a directory to permanently store images.

$ mkdir -p /srv/registry/data

Create a directory to permanently store certificates and authentication data.

$ mkdir -p /srv/registry/security

Store the domain and intermediate certificates in the /srv/registry/security/registry.crt file and the private key in the /srv/registry/security/registry.key file. Use a valid certificate and do not waste time with a self-signed one. This step is required to use basic authentication.
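With the data and certificate directories in place, the registry container can be started. Here is a minimal sketch (the htpasswd file and the published port are assumptions; adjust them to your environment):

$ docker run -d --restart=always --name registry \
  -p 443:443 \
  -v /srv/registry/data:/var/lib/registry \
  -v /srv/registry/security:/certs \
  -e REGISTRY_HTTP_ADDR=0.0.0.0:443 \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/registry.crt \
  -e REGISTRY_HTTP_TLS_KEY=/certs/registry.key \
  -e REGISTRY_AUTH=htpasswd \
  -e REGISTRY_AUTH_HTPASSWD_REALM="Registry Realm" \
  -e REGISTRY_AUTH_HTPASSWD_PATH=/certs/htpasswd \
  registry:2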

Two weeks back, I wrote a blog post on how developers can now build Arm containers on Docker Desktop using the docker buildx CLI plugin. Usually, developers are restricted to building Arm-based applications on top of an Arm-based system. Using this plugin, developers can build their application for the Arm platform right on their laptop (x86) and then deploy it to the cloud flawlessly, without any cross-compilation pain.

Wait…Did you say “ARM containers on Cloud?”

Yes, you heard it right. It is possible to deploy Arm containers on the cloud, thanks to the new Amazon EC2 A1 instances powered by custom AWS Graviton processors based on the Arm architecture, which bring Arm to the public cloud as a first-class citizen. Docker developers can now build Arm containers on the AWS cloud platform.

A Brief about AWS Graviton Processors..

Amazon announced the availability of EC2 instances on its Arm-based servers during AWS re:Invent (December 2018). AWS Graviton processors are a new line of processors custom designed by AWS, targeted at building platform solutions for cloud applications running at scale. The Graviton-based instances are known as EC2 A1. These instances are targeted at scale-out workloads and applications such as container-based microservices, web sites, and scripting-language-based applications (e.g., Ruby, Python, etc.).

EC2 A1 instances are built on the AWS Nitro System, a combination of dedicated hardware and a lightweight hypervisor, which maximizes resource efficiency for customers while still supporting familiar AWS and Amazon EC2 instance capabilities such as EBS, networking, and AMIs. Amazon Linux 2, Red Hat Enterprise Linux (RHEL), Ubuntu and ECS-optimized AMIs are available today for A1 instances. Built around Arm cores and making extensive use of custom-built silicon, the A1 instances are optimized for performance and cost.

In this blog post, I will showcase how to deploy containers on an AWS EC2 A1 instance using Docker Machine running on Docker Desktop for Windows.
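Provisioning EC2 A1 Instances via Docker Machine

Before initialising the Swarm, the A1 instances need to be provisioned. A sketch of how that might be done with Docker Machine is shown below (the region, instance type and Arm64 Ubuntu AMI ID are placeholders; AWS credentials are assumed to be configured):

PS C:\Users\Ajeet_Raina> docker-machine create --driver amazonec2 --amazonec2-region us-west-2 --amazonec2-instance-type a1.medium --amazonec2-ami <arm64-ubuntu-ami-id> arm-swarm-node1
PS C:\Users\Ajeet_Raina> docker-machine create --driver amazonec2 --amazonec2-region us-west-2 --amazonec2-instance-type a1.medium --amazonec2-ami <arm64-ubuntu-ami-id> arm-swarm-node2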

Initialising Docker Swarm Manager

PS C:\Users\Ajeet_Raina> docker-machine ssh arm-swarm-node1 sudo docker swarm init
Swarm initialized: current node (oqk875mcldbn28ce2rip31fg5) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-6bw0zfd7vjpXX17usjhccjlg3rs 172.31.50.5:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
PS C:\Users\Ajeet_Raina> docker-machine ssh arm-swarm-node2 sudo docker swarm join --token SWMTKN-1-6XX23ye817usjhccjlg3rs 172.31.50.5:2377
This node joined a swarm as a worker.

Two weeks back, at DockerCon 2019 San Francisco, Docker & Arm demonstrated the integration of Arm capabilities into Docker Desktop Community for the first time. Docker & Arm unveiled a go-to-market strategy to accelerate Cloud, Edge & IoT development. The two companies plan to streamline app development tools for cloud, edge, and Internet of Things environments built on the Arm platform. The tools include AWS EC2 A1 instances based on AWS Graviton processors (which feature 64-bit Arm Neoverse cores). Docker, in collaboration with Arm, will make new Docker-based solutions available to the Arm ecosystem as an extension of Arm’s server-tailored Neoverse platform, which they say will let developers more easily leverage containers, both remote and on-premises, which is going to be pretty cool.

This integration is today available to the approximately 2 million developers using Docker Desktop Community Edition. As part of the Docker Captains programme, we were lucky to get early access to this build during the Docker Captain Summit, which took place on the first day of DockerCon 2019.

Introducing buildx

Under Docker 19.03.0 Beta 3, there is a new experimental CLI plugin called “buildx”. It is a pretty new Docker CLI plugin that extends the docker build command with full support for the features provided by the Moby BuildKit builder toolkit. It provides the same user experience as docker build, with many new features like creating scoped builder instances and building against multiple nodes concurrently. As per the discussion with Docker staff, the “x” in buildx might get dropped in the near future, and features and flags are subject to change before the stable release is announced.

Buildx always builds using the BuildKit engine and does not require the DOCKER_BUILDKIT=1 environment variable to start builds. The buildx build command supports the features available for docker build, including the new features in Docker 19.03 such as outputs configuration, inline build caching and specifying the target platform. In addition, buildx supports new features not yet available for regular docker build, like building manifest lists, distributed caching, and exporting build results to OCI image tarballs.

How does a builder instance work?

Buildx allows you to create new instances of isolated builders. This can be used for getting a scoped environment for your CI builds that does not change the state of the shared daemon or for isolating the builds for different projects. You can create a new instance for a set of remote nodes, forming a build farm, and quickly switch between them.

New instances can be created with docker buildx create command. This will create a new builder instance with a single node based on your current configuration. To use a remote node you can specify the DOCKER_HOST or remote context name while creating the new builder. After creating a new instance you can manage its lifecycle with the inspect, stop and rm commands and list all available builders with ls. After creating a new builder you can also append new nodes to it.

To switch between different builders use docker buildx use <name>. After running this command the build commands would automatically keep using this builder.

Docker 19.03 also features a new docker context command that can be used for giving names for remote Docker API endpoints. Buildx integrates with docker context so that all of your contexts automatically get a default builder instance. While creating a new builder instance or when adding a node to it you can also set the context name as the target.

Enough theory! Do you really want to see it in action? In this blog post, I will showcase how I built Arm-based Docker images for my tiny Raspberry Pi cluster using the `docker buildx` utility, which runs on my Docker Desktop for Mac.

Listing all builder instances and the nodes for each instance

We are currently using the default builder, which is basically the old builder.

Creating a New Builder Instance

The `docker buildx create` command makes a new builder instance pointing to a Docker context or endpoint, where context is the name of a context from `docker context ls` and endpoint is the address of a Docker socket (e.g. a DOCKER_HOST value).

By default, the current docker configuration is used for determining the context/endpoint value.

Builder instances are isolated environments where builds can be invoked. All docker contexts also get the default builder instance.

Let’s create a new builder, which gives us access to some new multi-arch features.

[Captains-Bay]🚩 > docker buildx create --help
Usage: docker buildx create [OPTIONS] [CONTEXT|ENDPOINT]
Create a new builder instance
Options:
--append Append a node to builder instead of changing it
--driver string Driver to use (eg. docker-container)
--leave Remove a node from builder instead of changing it
--name string Builder instance name
--node string Create/modify node with given name
--platform stringArray Fixed platforms for current node
--use Set the current builder instance

Here I created a new builder instance with the name mybuilder, switched to it, and inspected it. Note that --bootstrap isn’t needed; it just starts the build container immediately. Next we will test the workflow, making sure we can build, push, and run multi-arch images.
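The sequence described above boils down to these three commands:

[Captains-Bay]🚩 >  docker buildx create --name mybuilder
[Captains-Bay]🚩 >  docker buildx use mybuilder
[Captains-Bay]🚩 >  docker buildx inspect --bootstrap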

What is --bootstrap all about?

The `docker buildx inspect --bootstrap` command ensures that the builder is running before inspecting it. If the driver is docker-container, then --bootstrap starts the BuildKit container and waits until it is operational. Bootstrapping is done automatically during a build, so it is not strictly necessary. The same BuildKit container is used during the lifetime of the associated builder node (as displayed in buildx ls).

Authenticating with Dockerhub

[Captains-Bay]🚩 > docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username: ajeetraina
Password:
Login Succeeded
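The multi-arch build and push step itself would look something like the following sketch (the image name is taken from the result described below, and the platform list matches the architectures mentioned there):

[Captains-Bay]🚩 >  docker buildx build --platform linux/amd64,linux/arm/v7,linux/arm64 -t ajeetraina/docker-cctv-raspbian:latest --push .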

Awesome. It worked! The --platform flag enabled buildx to generate Linux images for Intel 64-bit, Arm 32-bit, and Arm 64-bit architectures. The --push flag generates a multi-arch manifest and pushes all the images to Docker Hub. Cool, isn’t it?

What is this ImageTools all about?

Imagetools contains commands for working with manifest lists in the registry. These commands are useful for inspecting multi-platform build results. It can also create a new manifest list based on source manifests. The source manifests can be manifest lists or single-platform distribution manifests, and must already exist in the registry where the new manifest is created. If only one source is specified, create performs a carbon copy.
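For example, the manifest list just pushed can be inspected like this:

[Captains-Bay]🚩 >  docker buildx imagetools inspect ajeetraina/docker-cctv-raspbian:latest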

The image is now available on Docker Hub with the tag ajeetraina/docker-cctv-raspbian:latest. You can run a container from that image on Intel laptops, Amazon EC2 A1 instances, Raspberry Pis, and more. Docker pulls the correct image for the current architecture, so Raspberry Pis run the 32-bit Arm version and EC2 A1 instances run 64-bit Arm.

Docker is the leading container platform which provides both hardware and software encapsulation by allowing multiple containers to run on the same system at the same time, each with their own set of resources (CPU, memory, etc.) and their own dedicated set of dependencies (library versions, environment variables, etc.). Docker can now be used to containerize GPU-accelerated applications. In case you’re new to GPU-accelerated computing, it is basically the use of a graphics processing unit to accelerate high-performance computing workloads and applications. This means you can easily containerize and isolate an accelerated application without any modifications and deploy it on any supported GPU-enabled infrastructure.

Yes, you heard it right. Today Docker natively supports NVIDIA GPUs within containers. This is possible with the Docker 19.03.0 Beta 3 release, which is the latest pre-release and is available for download here. With this release, Docker can now flawlessly be used to containerize GPU-accelerated applications.

Let’s go back to 2017…

Two years back, I wrote a blog post titled “Running NVIDIA Docker in a GPU Accelerated Data Center”. The nvidia-docker project is an open source project hosted on GitHub which provides driver-agnostic CUDA images and a docker command line wrapper that mounts the user mode components of the driver and the GPUs (character devices) into the container at launch. With this enablement, the NVIDIA Docker plugin enabled deployment of GPU-accelerated applications across any Linux GPU server with NVIDIA Docker support. In the same blog post, I showcased how to get started with nvidia-docker to interact with an NVIDIA GPU system and then looked at a few interesting applications which can be built for a GPU-accelerated data center.

With the recent 19.03.0 Beta release, you no longer need to spend time downloading the NVIDIA-Docker plugin and relying on the nvidia wrapper to launch GPU containers. You can now use the --gpus option with the docker run CLI to allow containers to use GPU devices seamlessly.

In this blog post, I will showcase how to get started with this new CLI option for NVIDIA GPUs.

The above error means that NVIDIA could not properly register with Docker. What it actually means is that the drivers are not properly installed on the host. It could also mean the NVIDIA container tools were installed without restarting the Docker daemon: in that case, you need to restart the Docker daemon.

I suggest you go back and verify whether nvidia-container-runtime is installed, or restart the Docker daemon.
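Once the driver and nvidia-container-runtime are in place, a quick smoke test of the new flag might look like this (the CUDA image tag is an assumption; any CUDA base image should work):

$ docker run -it --rm --gpus all nvidia/cuda:9.0-base nvidia-smi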

A Quick Look at NVIDIA Deep Learning..

The NVIDIA Deep Learning GPU Training System, a.k.a. DIGITS, is a web app for training deep learning models. It puts the power of deep learning into the hands of engineers and data scientists. It can be used to rapidly train highly accurate deep neural networks (DNNs) for image classification, segmentation and object detection tasks. The currently supported frameworks are Caffe, Torch, and TensorFlow.

DIGITS simplifies common deep learning tasks such as managing data, designing and training neural networks on multi-GPU systems, monitoring performance in real time with advanced visualizations, and selecting the best performing model from the results browser for deployment. DIGITS is completely interactive so that data scientists can focus on designing and training networks rather than programming and debugging.

To test-drive DIGITS, you can get it up and running in a single Docker container:

$docker run -itd --gpus all -p 5000:5000 nvidia/digits

You can open up a web browser and verify that it is running on the published port 5000:

Docker CE 19.03.0 Beta 1 went public two weeks back. It was the first release to arrive with sysctl support for Docker Swarm mode. This is definitely great news for popular communities like Elastic Stack, Redis etc., as they rely on tuning kernel parameters to get rid of memory exceptions. For example, Elasticsearch uses an mmapfs directory by default to store its indices. The default operating system limit on mmap counts is likely to be too low, which may result in out-of-memory exceptions, hence one needs to increase the limit using the sysctl tool. Great to see that Docker Inc. acknowledged the fact that kernel tuning is sometimes required and provides explicit support for it under the Docker 19.03.0 pre-release. Great job!

Wait..Do I really need sysctl?

Say you have deployed your application on Docker Swarm. It’s pretty simple and it’s working great. Your application is growing day by day and now you just need to scale it. How are you going to do it? The simple answer is: docker service scale app=<number of tasks>. Surely, this is possible today, but your containers can quickly hit kernel limits. One of the most popular kernel parameters is net.core.somaxconn. This parameter represents the maximum number of connections that can be queued for acceptance. The default value on Linux is 128, which is rather low.

The Linux kernel is flexible, and you can even modify the way it works on the fly by dynamically changing some of its parameters, thanks to the sysctl command. The sysctl program allows you to limit system-wide resource use. This can help a lot in system administration, e.g. when a user starts too many processes and therefore makes the system unresponsive for other users. Sysctl basically provides an interface that allows you to examine and change several hundred kernel parameters in Linux or BSD. Changes take effect immediately, and there’s even a way to make them persist after a reboot. By using sysctl judiciously, you can optimize your box without having to recompile the kernel and get the results immediately.

Please note that not all sysctls are namespaced as of the Docker 19.03.0 CE pre-release. Docker does not support changing sysctls inside of a container that also modify the host system.

Docker does support setting namespaced kernel parameters at runtime, and runc honors this. Have a look:
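A sketch of what this looks like in practice (the values are illustrative): the --sysctl flag has long worked for plain containers, and with 19.03 the same flag becomes available for Swarm services.

$ docker run --rm --sysctl net.core.somaxconn=1024 alpine sysctl net.core.somaxconn
$ docker service create --name web --sysctl net.core.somaxconn=1024 --publish 8080:80 nginx:alpine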


Creating a 2-Node Docker Swarm Mode Cluster

swarm-node-1:~$ sudo docker swarm init --advertise-addr 10.140.0.6 --listen-addr 10.140.0
.6:2377
Swarm initialized: current node (c78wm1g99q1a1g2sxiuawqyps) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-1bc88158q1v4b4gdof8k0u532bxzdvrgxfztwgj2r443337mja-cmhuu258lu0327
e32l0g4pl47 10.140.0.6:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

Running Multi-service Docker Compose for Redis

Redis is an open source, in-memory data structure store, used as a database, cache and message broker. Redis Commander is an application that allows users to explore a Redis instance through a browser. Let us look at the Docker Compose file for Redis and Redis Commander shown below:
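A minimal sketch of such a Compose file is shown below (image tags, ports and the sysctl value are assumptions); it can be deployed to the Swarm with docker stack deploy:

version: "3.8"
services:
  redis:
    image: redis:5.0-alpine
    ports:
      - "6379:6379"
    sysctls:
      net.core.somaxconn: 1024
  redis-commander:
    image: rediscommander/redis-commander:latest
    environment:
      - REDIS_HOSTS=local:redis:6379
    ports:
      - "8081:8081"

swarm-node-1:~$ sudo docker stack deploy -c docker-compose.yml redis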

Last week, Docker Community Edition 19.03.0 Beta 1 was announced and the release notes went public here. This release introduced numerous exciting features for the first time. Some of the notable features include fast context switching, rootless Docker, sysctl support for Swarm services, and device support for Microsoft Windows.

Not only this, there were numerous enhancements around Docker Swarm, the Docker Engine API, networking, the Docker client, security & BuildKit; the full list of features, with direct links to GitHub, is in the release notes.

Let’s talk about Context Switching..

A context is essentially the configuration that you use to access a particular cluster. In my particular case, I have 4 different clusters, a mix of Swarm and Kubernetes, running locally and remotely. Assume that I have a default cluster running on my desktop machine, a 2-node Swarm cluster running on Google Cloud Platform, a 5-node cluster running on the Play with Docker playground and a single-node Kubernetes cluster running on Minikube, and that I need to access them pretty regularly. Using the docker context CLI I can easily switch from one cluster (which could be my development cluster) to a test or production cluster in seconds.

In this blog post, I will focus on the fast context switching feature, which was introduced for the first time. Let’s get started:

Method:I

Downloading the static binary archive. Go to https://download.docker.com/linux/static/stable/ (or change stable to nightly or test), choose your hardware platform, and download the .tgz file relating to the version of Docker CE you want to install.
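After downloading, the archive can be extracted and the binaries copied onto the path, roughly as follows (the archive name is an assumption; match it to the file you downloaded):

$ tar xzvf docker-19.03.0-beta1.tgz
$ sudo cp docker/* /usr/bin/
$ sudo dockerd &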

Testing with hello-world

Captain'sBay==>sudo docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
1b930d010525: Pull complete
Digest: sha256:2557e3c07ed1e38f26e389462d03ed943586f744621577a99efb77324b0fe535
Status: Downloaded newer image for hello-world:latest
INFO[2019-04-10T09:26:23.338596029Z] shim containerd-shim started address="/containerd-shim/m
oby/5b23a7045ca683d888c9d1026451af743b7bf4005c6b8dd92b9e95e125e68134/shim.sock" debug=false pid=2953
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/get-started/

Method:II

If you have less time and want a single-liner command to handle this, check this out:
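One possibility is the Docker convenience script for the test channel, which installs the current pre-release in one shot (use it only on disposable/test machines):

$ curl -fsSL https://test.docker.com -o test-docker.sh
$ sudo sh test-docker.sh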

Verifying docker context CLI

$ sudo docker context --help
Usage: docker context COMMAND
Manage contexts
Commands:
create Create a context
export Export a context to a tar or kubeconfig file
import Import a context from a tar file
inspect Display detailed information on one or more contexts
ls List contexts
rm Remove one or more contexts
update Update a context
use Set the current docker context
Run 'docker context COMMAND --help' for more information on a command.

Creating a 2 Node Swarm Cluster

Install Docker 19.03.0 Beta 1 on both the nodes (using the same method discussed above). You can use a GCP Free Tier account to create the 2-node Swarm cluster.

Configuring remote access with systemd unit file

Use the command sudo systemctl edit docker.service to open an override file for docker.service in a text editor.
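In the override file, the daemon can be told to also listen on a TCP socket. The snippet below mirrors the standard Docker documentation example (the bind address and port are up to you):

[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// -H tcp://0.0.0.0:2375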

Reload the systemctl configuration
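For example:

$ sudo systemctl daemon-reload
$ sudo systemctl restart docker.service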

Repeat it for other nodes which you are planning to include for building Swarm Mode cluster.

swarm-node-1:~$ sudo docker swarm init --advertise-addr 10.140.0.6 --listen-addr 10.140.0
.6:2377
Swarm initialized: current node (c78wm1g99q1a1g2sxiuawqyps) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-1bc88158q1v4b4gdof8k0u532bxzdvrgxfztwgj2r443337mja-cmhuu258lu0327
e32l0g4pl47 10.140.0.6:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

Context Switching to the remotely running Play with Docker (PWD) Platform

This is one of the most exciting parts of this blog. I simply love the PWD platform, as I find it a perfect playground for test driving a Docker Swarm cluster. Just a click and you get a 5-node Docker Swarm cluster in just 5 seconds.

Just click on the “3 Managers and 2 Workers” template and you get a 5-node Docker Swarm cluster for free.
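Pointing a context at one of the PWD nodes is then a matter of creating it against the node’s exposed Docker endpoint and switching to it. A sketch (the hostname below is a placeholder for the URL PWD gives you when you expose port 2375):

$ docker context create pwd --docker "host=tcp://ip172-18-0-5-xxxx.direct.labs.play-with-docker.com:2375"
$ docker context use pwd
$ docker node ls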

To implement a microservice architecture and a multi-cloud strategy, Kubernetes has today become a key enabling technology. The bedrock of Kubernetes remains the orchestration and management of Linux containers, to create a powerful distributed system for deploying applications across a hybrid cloud environment. That said, Kubernetes has become the de-facto standard container orchestration framework for cloud-native deployments. Development teams have turned to Kubernetes to support their migration to new microservices architectures and a DevOps culture for continuous integration and continuous deployment.

Why Docker & Kubernetes on IoT devices?

Today many organizations are going through a digital transformation process. Digital transformation is the integration of digital technology into almost all areas of a business, fundamentally changing how you operate and deliver value to customers. It’s basically a cultural change. The common goal for all these organization is to change how they connect with their customers, suppliers and partners. These organizations are taking advantage of innovations offered by technologies such as IoT platforms, big data analytics, or machine learning to modernize their enterprise IT and OT systems. They realize that the complexity of development and deployment of new digital products require new development processes. Consequently, they turn to agile development and infrastructure tools such as Kubernetes.

At the same time, that there has been a major increase in the demand for Kubernetes outside the datacenter. Kubernetes is pushing out of the data center into stores and factories. DevOps teams do find Kubernetes quite interesting as it provides predictable operations and a cloud-like provisioning experience on just about any infrastructure.

Docker containers & Kubernetes are an excellent choice for deploying complex software to the Edge. The reasons are listed below:

Containers are awesome

Consistent across a wide variety of Infrastructure

Capable of standalone or clustered operations

Easy to upgrade and/or replace containers

Support for different infrastructure configs (storage, CPU etc.)

Strong ecosystem (monitoring, logging, CI, management etc.)

Introducing K3s – A Stripped Down version of Kubernetes

K3s is a brand new distribution of Kubernetes that is designed for teams that need to deploy applications quickly and reliably to resource-constrained environments. K3s is a Certified Kubernetes distribution designed for production workloads in unattended, resource-constrained, remote locations or inside IoT appliances.

K3s is a lightweight, certified K8s distribution built for production operations. It is just a 40MB binary with 512MB memory consumption. It is based on a single process with an integrated K8s master, Kubelet and containerd. It includes SQLite in addition to etcd. It is released simultaneously for x86_64, ARM64 and ARMv7. It is an open source project, not yet a Rancher product. It is wrapped in a simple package that reduces the dependencies and steps needed to run a production Kubernetes cluster. Packaged as a single binary, k3s makes installation and upgrade as simple as copying a file. TLS certificates are automatically generated to ensure that all communication is secure by default.

k3s bundles the Kubernetes components (kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy) into combined processes that are presented as a simple server and agent model. Running k3s server will start the Kubernetes server and automatically register the local host as an agent. This will create a one node Kubernetes cluster. To add more nodes to the cluster just run k3s agent --server ${URL} --token ${TOKEN} on another host and it will join the cluster. It’s really that simple to set up a Kubernetes cluster with k3s.
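In practice, on a two-node setup this boils down to something like the following (the server IP is a placeholder; the node token is generated by the server under /var/lib/rancher/k3s/server/node-token):

root@node1:~# k3s server &
root@node1:~# cat /var/lib/rancher/k3s/server/node-token
root@node2:~# k3s agent --server https://<node1-ip>:6443 --token <node-token> &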

Minimum System Requirements

Linux 3.10+

512 MB of RAM per server

75 MB of RAM per node

200 MB of disk space

x86_64, ARMv7, ARM64

In this blog post, I will showcase how to get started with K3s on a 2-node Raspberry Pi cluster.

Prerequisite:

Hardware:

Raspberry Pi 3 (you can order it from Amazon for 2590 INR in case you are in India)

Software:

SD-Formatter – to format the microSD card (in case of a Windows laptop)

Win32DiskImager (in case you have Windows OS running on your laptop) – to burn the Raspbian image directly onto the microSD card (no need to extract the XZ archive using any tool). You can use the Etcher tool if you are using a MacBook.

Steps to Flash Raspbian OS on Pi Boxes:

1. Format the microSD card using SD Formatter as shown below:

2. Download Raspbian OS from here and use Win32DiskImager (in case you are on Windows) to burn it onto the microSD card.

3. Insert the microSD card into your Pi box. Now connect the HDMI cable from the Pi’s HDMI slot to your TV or display unit, and connect a mobile charger (recommended 5.1V@1.5A).

containerd and Docker

k3s uses containerd by default. If you want to use it with Docker, all you need to do is run the agent with the --docker flag:

k3s agent -s ${SERVER_URL} -t ${NODE_TOKEN} --docker &

Running Nginx Pods

To launch a pod using the container image nginx and exposing a HTTP API on port 80, execute:

root@raspberrypi:~# k3s kubectl run mynginx --image=nginx --replicas=3 --port=80
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/mynginx created

Listing the k3s Nodes
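The nodes can be listed with the bundled kubectl (output omitted here):

root@raspberrypi:~# k3s kubectl get nodes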

Setting up Nginx

As shown earlier, we will go ahead and test the Nginx application on top of the K3s cluster nodes:

root@raspberrypi:~# k3s kubectl run mynginx --image=nginx --replicas=3 --port=80
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/mynginx created

Verifying the endpoints controller for Pods
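A quick way to check the endpoints is again via the bundled kubectl:

root@raspberrypi:~# k3s kubectl get endpoints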

Test driving Kubernetes Dashboard

root@node1:/home/pi# k3s kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard.yaml
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created
root@node1:/home/pi#

We require kubectl proxy to make the dashboard accessible in our web browser. This command creates a proxy server, or application-level gateway, between localhost and the Kubernetes API server. It also allows serving static content over a specified HTTP path. All incoming data enters through one port and gets forwarded to the remote Kubernetes API server port, except for paths matching the static content path.
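A sketch of that step: start the proxy on the node, then browse the dashboard through the API server’s proxy path (the URL follows the standard dashboard-behind-kubectl-proxy pattern):

root@node1:/home/pi# k3s kubectl proxy &
# then browse: http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/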

Docker’s birthday celebration is not just about cakes, food and parties. It’s actually a global tradition that is near and dear to our hearts because it gives each one of us an opportunity to express our gratitude to our huge community of contributors. The goal of this global celebration is to welcome every single Docker community user who is keen to understand and adopt this technology and influence others to grow this amazing community.

This year, celebrations all over the world took place during March 18-31, 2019, across 75 user group events worldwide. Interestingly, Docker Inc. came up with a really good idea for a “Show-And-Tell: How do you Docker?” theme this time. Docker user groups all over the world hosted local birthday show-and-tell celebrations. Each speaker got 15-20 minutes of stage time to present how they’ve been using Docker. Every single speaker who presented their work got a Docker Birthday #6 T-shirt and had the opportunity to submit their Docker Birthday show-and-tell to present at DockerCon. We celebrated Docker’s 6th birthday in Bangalore at the DellEMC office, Mahadevapura, on 30th March. Around 100+ attendees participated in this Meetup event, out of which 70% were beginners.

An Early Preparation..

Planning for the Docker Birthday #6 celebration started during the first week of February. First of all, I created the Docker Bangalore Meetup event page to keep the community aware of the upcoming “Show-And-Tell” event.

Soon after posting this event, my Google Form got flooded with project stories. I received 30+ entries in the first 2 weeks, which was just amazing. Out of the overall 60 projects, I found 10 of them really powerful and hence started working with the individuals to come up with a better way to present them on stage. In parallel, I placed an early order for a birthday banner, birthday stickers, T-shirts (men’s & women’s) for speakers, and T-shirts (men’s & women’s) for the audience. Thanks to Docker, Inc. for shipping them to Bangalore on time.

Let’s talk about 7 Cool Projects…

After 3-4 weeks of continuous interaction, I finalized the list of 7 projects which looked really promising. In case you missed the event, here is a short brief on each of these projects:

Project #1: Box-Exec

Akshit Grover, a student of the ACM VIT Student Chapter, was one of the young engineers who came up with a cool project idea, titled “Box-Exec”.

Box-Exec (Box Execute) is an npm package to compile/run code (C, C++, Python, etc.) in a virtualized environment; here, the virtualized environment is a Docker container. This package is built to ease the task of running code against test cases, as done by websites used to practice algorithmic coding. It supports automatic CPU time-sharing configuration between containers and load balancing between multiple containers allocated for the same language.

Project #2: Flipper

Flipper is a Docker playground which allows users to run Docker-in-Docker-In-Docker commands ~ all on a web browser in a matter of seconds. It gives the experience of having a free RHEL Virtual Machine in browser, where you can build and run Docker containers.

Highlights:

Flipper uses Python as a backend scripting language

Flipper uses Python-CGI in order to interact through a Web server with a client running a Web browser.

It uses HTML/CSS as a Front-end language.

It can be hosted on your Laptop flawlessly

I got a chance to interact with Shivam during my Docker session on the VIT campus, where we discussed this idea. I liked the idea and suggested he talk about it at the upcoming birthday Meetup. In his own words…

“…. I use Docker to provide Cloud Virtualization services like Software-As-A-Service(SaaS) and Container-As-A-Service(CaaS). The Hybrid Cloud is made and deployed on my own workstation, No commercial platforms have been used. In SaaS, whenever a user launches a particular software, in the background a docker shell is created with the particular software installed in it and the user gets to access the service. In CaaS, if a user wants a quick Linux terminal, all he need is to click on the start button after he enters his username and a random number. A docker shell is created in the background and is displayed on the browser itself. So the user can quickly do his tasks. I have used Ansible to execute docker commands but a major portion of SaaS and CaaS is done on python36 and python-cgi to integrate front-end and back-end…”

Project #3: GigaHex

The speaker started his talk with the challenges around existing sandbox tools like VirtualBox and bloated Docker images for Big Data applications. He claimed a smaller footprint, low CPU & system overhead, and automation with his promising “GigaHex” platform.

Project #4: Z10N : Device simulation at scale using Docker Swarm

Hemanth Gaikwad, Validation Architect from DellEMC, headed over to the stage to talk around his active project titled ” Z10N : Device simulation at scale using Docker Swarm “.

Hemanth initiated his presentation talking about the existing challenges in delivering products to the customer on time with high quality. He stated that the major challenge is to develop and test as thoroughly and efficiently as we can, given our time and resource constraints. Essentially a company needs improved quality and reduced software lifecycle time to be able to survive in the competitive software landscape and reap the benefits of being early to market with high quality software features. Hardware availability happens to be scarce which results in design, development and tests getting pushed right. Products are developed and tested under non-scaled environments for just a few finite states, again impacting quality. Reduced quality would inherently further increase the costs and efforts.

An in-house tool called “Z10N” (pronounced zee-on) can help create a real-world lab environment with thousands of hardware devices at a fraction of the cost of the physical devices. Z10N would help the organization to:

Seamlessly design & develop products and execute automation & non-functional tests at will, without worrying about hardware availability. Learn how you could simulate/emulate a hardware device and create thousands of clones of it in just a few minutes, with a 99% reduction in expenditure.

He claimed that Z10N is already helping make better, faster products and with its capabilities it’s surely getting you the “Power to do more”.

Project #5: JAAS : Distributed WorkLoad Testing using Containers

Vishnu started his talk with the challenges of existing load testing tools for various workloads like FTP, web, database, mail etc. He stated that the load testing tools available in the market come with their own challenges, like cost, learning curve and workload support. To cope with these challenges, he started looking at a possible solution, and hence JAAS (JMeter As A Service) was born. JAAS uses containers and open source tools to deliver server validation efforts.

Tech Stack behind JAAS:

Containers and Docker Swarm: for auto-deploying JMeter apps, we use Docker containers. We use Docker Swarm services for creating virtual JMeter users to generate the load.

JMeter: Performance/Load testing framework from Apache, has been widely accepted as a Performance/Load testing tool for multiple applications.

His talk has been selected for ContainerDays, happening June 24-26 in Hamburg (https://www.containerdays.io/). Don’t miss his talk if you get a chance to attend this conference.

Project #6: Comparative Study of Hadoop over VMs Vs Docker containers

Shivankit Bagla was our next young speaker. He talked about his recent International Journal of Applied Engineering Research paper (https://www.ripublication.com/ijaer18/ijaerv13n6_166.pdf), and his talk was titled “Comparative Study of Hadoop over VMs Vs Docker containers”.

He talked about his project, which was a comparative study of the performance of a Hadoop cluster in a containerised environment versus virtual machines. He demonstrated how running a Hadoop cluster in a Docker environment actually increases the performance of the Hadoop cluster and decreases the time taken by the Hadoop system to perform certain actions.

If you are looking for a desktop enterprise software solution for creating & delivering production-ready containerized applications in a simplified & secure way, Docker Desktop Enterprise is the right tool for you.

At last DockerCon, Docker announced the release of the new Docker Desktop Enterprise, a new commercial desktop offering from Docker, Inc. It is the only enterprise-ready desktop platform that enables IT organizations to automate the delivery of legacy and modern applications using an agile operating model with integrated security. With work performed locally, developers can leverage a rapid feedback loop before pushing code or Docker images to shared servers / continuous integration infrastructure.

Imagine you are a developer and your organization has a production-ready environment running Docker Enterprise 2.1. To ensure that you don’t use any APIs or incompatible features that will break when you push an application to the production environment, you would like to be certain your working environment exactly matches what’s running in the Docker Enterprise production systems. With Docker Desktop Enterprise you can easily bridge such gaps. It is basically a cohesive extension of the Docker Enterprise container platform that runs right on developers’ systems. Developers code and test locally using the same tools they use today, and Docker Desktop Enterprise helps them quickly iterate and then produce a containerized service that is ready for their production Docker Enterprise clusters.

The Enterprise-Ready Solution for Dev & Ops

Docker Desktop Enterprise is a perfect devbed for enterprise developers. It allows developers to select from a variety of their favourite frameworks, languages and IDEs. Because of those options, it can also help organizations target every platform. So basically, your organization can provide application templates that include production-approved application configurations, and developers can take those templates and quickly modify and replicate them right from their desktop.

With Docker Desktop Enterprise, IT organizations can ensure developers are working with the same version of Docker Desktop Enterprise and can easily distribute Docker Desktop Enterprise to large teams using a number of third-party endpoint management applications. With the Docker Desktop Enterprise graphical user interface (GUI), developers are no longer required to work with lower-level Docker commands and can auto-generate Docker artifacts.

A Flawless Integration with 3rd Party Developer Tool

Docker Desktop Enterprise is designed to integrate with existing development environments (IDEs) such as Visual Studio and IntelliJ. And with support for defined application templates, Docker Desktop Enterprise allows organizations to specify the look and feel of their applications.

Exclusive features of Docker Desktop Enterprise

Let us talk about the various features of Docker Desktop Enterprise 2.0.0.0, which are discussed below:

Version selection: Configurable version packs ensure the local instance of Docker Desktop Enterprise is a precise copy of the production environment where applications are deployed, and developers can switch between versions of Docker and Kubernetes with a single click.

Docker and Kubernetes versions match UCP cluster versions.

Administrator command line tool simplifies version pack installation.

Application Designer: Application Designer provides a library of application and service templates to help Docker developers quickly create new Docker applications. Application templates allow you to choose a technology stack and focus on business logic and code, and require only minimal Docker syntax knowledge.

Template support includes .NET, Spring, and more.

Device management:

The Docker Desktop Enterprise installer is available as standard MSI (Win) and PKG (Mac) downloads, which allows administrators to script an installation across many developer workstations.

Administrative control:

IT organizations can specify and lock configuration parameters for creation of a standardized development environment, including disabling drive sharing and limiting version pack installations. Developers can then run commands using the command line without worrying about configuration settings.

Under this blog post, we will look at two of the promising features of Docker Desktop Enterprise 2.0.0.0:

Application Designer &

Version packs

Installing Docker Desktop Enterprise

Docker Desktop Enterprise is available both for Microsoft Windows and MacOS. One can download via the below links:

Please note that you will have to clean up Docker Desktop Community Edition before you install the Enterprise edition. Also, the Enterprise version requires a separate license key, which you need to buy from Docker, Inc.

To install Docker Desktop Enterprise, double-click the .msi or .pkg file and initiate the Setup wizard:

Click “Next” to proceed further and accept the End-User license agreement as shown below:

Click “Next” to proceed with the installation.

Once installed, you will see Docker Desktop icon on the Windows Desktop as shown below:

License file

As stated earlier, to use Docker Desktop Enterprise, you must purchase Docker Desktop Enterprise license file from Docker, Inc.

The license file must be installed and placed under the following location: C:\Users\Docker\AppData\Roaming\Docker\docker_subscription.lic

If the license file is missing, you will be asked to provide it when you try to run Docker Desktop Enterprise. Once the license file is supplied, Docker Desktop Enterprise should come up flawlessly.

What’s New in Docker Desktop UI?

Docker Desktop Enterprise provides you with additional features compared to the Community edition. Right click on the whale icon in the taskbar and select “About Docker Desktop” to bring up the window below.

Open up PowerShell to verify the Docker version that is up and running:
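For example:

PS C:\Users\Ajeet_Raina> docker version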

Click on “Settings” option to get list of various sections like shared drives, advanced settings, network, proxies, Docker daemon and Kubernetes.

One of the new features introduced with Docker Desktop Enterprise is the ability to start Docker Desktop automatically whenever you log in. This feature can be enabled by selecting “Start Desktop when you login” under the General tab. One can also enable automatic checks for updates from the same tab.

Docker Desktop Enterprise gives you the flexibility to pre-select resource limits to make available to the Docker Engine, as shown below. Based on your system configuration and the type of application you are planning to host, you can increase or decrease the resource limits.

Docker Desktop Enterprise includes a standalone Kubernetes server that runs on your Windows laptop, so that you can test deploying your Docker workloads on Kubernetes.

kubectl is a command line interface for running commands against Kubernetes clusters. It comes with Docker Desktop by default, and one can verify it by running the command below:
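A quick check might be:

PS C:\Users\Ajeet_Raina> kubectl version
PS C:\Users\Ajeet_Raina> kubectl get nodes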

Running Your First Web Application

Let us try running a custom-built web application using the below command:
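The exact image used here isn’t reproduced; as a stand-in, any web server image published on port 80 will do, for example:

PS C:\Users\Ajeet_Raina> docker run -d -p 80:80 --name webapp nginx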

Open up the browser to verify that web page is up and running as shown below:

Application Designer

Application Designer provides a library of application and service templates to help Docker developers quickly create new Docker applications. Application templates allow you to choose a technology stack and focus on business logic and code, and require only minimal Docker syntax knowledge.

Building Linux-based Application using Application Designer

In this section, I will show you how to get started with the Application Designer feature, which was introduced for the first time.

Right click on whale-icon in the Taskbar and choose “Design New Application”. Once you click on it, it will open the below window:

Let us first try using the set of preconfigured applications by clicking on “Choose a template”.

Let us test drive a Linux-based application. Click on the “Linux” option and proceed further. This opens up a variety of ready-made templates, as shown below:

A Spring application is also included as part of Docker Desktop Enterprise; it is basically a sample Java application with the Spring framework and a Postgres database, as shown below:

Let us go ahead and try out a sample python/Flask application with an Nginx proxy and a MySQL database. Select the desired application template and choose your choice of Python version and accessible port. You can select your choice of MySQL version and Nginx proxy. For this example, I chose Python version 3.6, MySQL 5.7 and Nginx proxy exposed on port 80.

Once you click on “Run Application”, you can see the output right there on the screen as shown below:

As shown above, one can open up code repository in Visual Studio Code & Windows explorer. You get options to start, stop and restart your application stack.

To verify its functionality, let us try to open up the web application as shown below:

Cool, isn’t it?

Building Windows based Application using Application Designer

In this section, we will see how to build a Windows-based application using the same Application Designer tool.

Before you proceed, we need to choose “Switch to Windows container” as shown below to allow Windows based container to run on our Desktop.

Right click on whale-icon in the Taskbar and choose “Design New Application”. Once you click on it, it will open the below window:

Click on “Choose a template” and select Windows this time as shown below:

Once you click on Windows, it will open up a sample ASP.Net & MS-SQL application.

Once clicked, it will show frontend and backend with option to set up desired port for your application.

I will go ahead and choose port 82 for this example. Click on “Continue” and supply your desired application name. I named it as “mywinapp” as shown below:

Click on “Assemble” to build up your application stack.

Click on “Start” to run your application stack.

While the application stack is coming up, you can open up Visual Studio to view files like Docker Compose, Dockerfile as shown below:

One can view logs to see what’s going on in the backend. Under Application Designer, one can select “Debug” option to open up “View Logs” to view the real time logs.

By now, you should be able to access your application via web browser.

Version Packs

Docker Desktop Enterprise 2.0.0 is bundled with default version pack Enterprise 2.1 which includes Docker Engine 18.09 and Kubernetes 1.11.5. You can download it via this link.

If you want to use a different version of Docker Engine and Kubernetes for development work, install version pack Enterprise 2.0, which you can download via this link.

Version packs are installed manually or, for administrators, by using the command line tool. Once installed, version packs can be selected for use in the Docker Desktop Enterprise menu.

Installing Version Pack

When you install Docker Desktop Enterprise, the tool is installed under the C:\Program Files\Docker\Desktop location. Version packs can be installed by double-clicking a .ddvp file. Ensure that Docker Desktop is stopped before installing a version pack. The easiest way to add a version pack is through the CLI, by running the command below:

Open up Windows PowerShell via “Run as Administrator” and run the below command:
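The exact command isn’t reproduced here; based on the install location mentioned above, a rough sketch of the version pack install would look like the following (the tool name, path and .ddvp file name are assumptions; confirm the exact syntax against the Docker Desktop Enterprise documentation):

PS C:\> & 'C:\Program Files\Docker\Desktop\dockerdesktop-admin.exe' version-pack install 'C:\Users\Ajeet_Raina\Downloads\enterprise-2.0.ddvp'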