
Helm install pod in pending state:
When you execute kubectl get events you will see the following error:
no persistent volumes available for this claim and no storage class is set or
PersistentVolumeClaim is not bound
This error usually shows up on Kubernetes clusters set up with kubeadm.
You will need to create a PersistentVolume with a YAML file like the following:
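A minimal hostPath PersistentVolume sketch is shown below (the name, size, and path are illustrative, not the author's original manifest; match the capacity and access mode to what the pending PersistentVolumeClaim requests):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-volume1             # illustrative name
spec:
  capacity:
    storage: 8Gi               # must cover the size requested by the PVC
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/data            # illustrative path on the node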

You might have an issue where the kubelet service does not start. You can see the error in /var/log/messages. If you have an error as follows:
Oct 16 09:55:33 k8s-master kubelet: error: unable to load client CA file /etc/kubernetes/pki/ca.crt: open /etc/kubernetes/pki/ca.crt: no such file or directory
Oct 16 09:55:33 k8s-master systemd: kubelet.service: main process exited, code=exited, status=1/FAILURE

Then you will have to run kubeadm init first, as in the next step, and then start the kubelet service.

Step 2.1] Initializing your master

$ kubeadm init

Note:-

Execute the above command on the master node. This command will select one of the interfaces to be used for the API server. If you want to use another interface, pass "--apiserver-advertise-address=<ip-address>" as an argument, so the whole command will look like this:

$ kubeadm init --apiserver-advertise-address=<ip-address>

K8s gives you the flexibility to use a network of your choice, such as flannel or calico. I am using the flannel network. For flannel we need to pass the network CIDR explicitly, so now the whole command will be:
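The exact command did not survive in this post; assuming flannel's conventional pod network CIDR of 10.244.0.0/16, it would typically be:

$ kubeadm init --apiserver-advertise-address=<ip-address> --pod-network-cidr=10.244.0.0/16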

After kubeadm init completes, set up kubectl access for your user:

$ sudo cp /etc/kubernetes/admin.conf $HOME/
$ sudo chown $(id -u):$(id -g) $HOME/admin.conf
$ export KUBECONFIG=$HOME/admin.conf

-> Use the same network CIDR that is configured in the flannel YAML file we are going to apply in step 3.

-> At the end you will get a token along with a join command; make a note of it, as it will be used to join the worker nodes.

Step 3] Installing a pod network

Different network plugins are supported by k8s; the choice is up to the user. For this demo I am using the flannel network. As of k8s 1.6 the cluster is more secure by default: it uses RBAC (Role Based Access Control), so make sure that the network you are going to use supports RBAC and k8s 1.6.
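For flannel, deploying the network typically means applying the upstream manifest on the master; the URL below is the commonly used one and may differ from the exact version the author used:

$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml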

The command being described here (a sketch of it follows below) creates a service with the name "nginx-service". The service will be accessible on the port given by the "--port" option, which forwards to the "--target-port". The target port is the port of the pod. The service will be accessible within the cluster only; in order to access it using your host IP, the "NodePort" option is used.

--type=NodePort :- when this option is given, k8s tries to find a free port in the range 30000-32767 on all the VMs of the cluster and binds the underlying service to it. If no such port is found, it will return an error.
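The command itself was not preserved in this post; assuming an existing nginx deployment, a sketch of it would be:

$ kubectl expose deployment nginx --name=nginx-service --port=80 --target-port=80 --type=NodePort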

Check whether the service is created:

$ kubectl get svc

Try to curl:

$ curl <service-IP>:80

From all the VMs, including the master, the Nginx welcome page should be accessible.

$ curl <master-ip>:<nodePort>

$ curl <slave-IP>:<nodePort>

Execute these from all the VMs. The Nginx welcome page should be accessible.

INITIALIZE HELM AND INSTALL TILLER

Once you have Helm ready, you can initialize the local CLI and also install Tiller into your Kubernetes cluster in one step:

$ helm init

This will install Tiller into the Kubernetes cluster you saw with kubectl config current-context.

TIP: Want to install into a different cluster? Use the --kube-context flag.

TIP: When you want to upgrade Tiller, just run helm init --upgrade.

INSTALL AN EXAMPLE CHART

To install a chart, you can run the helm install command. Helm has several ways to find and install a chart, but the easiest is to use one of the official stable charts.

$ helm repo update # Make sure we get the latest list of charts
$ helm install stable/mysql
Released smiling-penguin

In the example above, the stable/mysql chart was released, and the name of our new release is smiling-penguin. You can get a simple idea of the features of this MySQL chart by running helm inspect stable/mysql.

Whenever you install a chart, a new release is created. So one chart can be installed multiple times into the same cluster. And each can be independently managed and upgraded.

The helm install command is a very powerful command with many capabilities. To learn more about it, check out the Using Helm Guide.

From Canary Builds

“Canary” builds are versions of the Helm software that are built from the latest master branch. They are not official releases, and may not be stable. However, they offer the opportunity to test the cutting edge features.

The bootstrap target will attempt to install dependencies, rebuild the vendor/ tree, and validate configuration.

The build target will compile helm and place it in bin/helm. Tiller is also compiled, and is placed in bin/tiller.
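For reference, assuming the Helm source tree is checked out in a standard Go workspace, running these targets from source typically looks like:

$ cd $GOPATH/src/k8s.io/helm
$ make bootstrap build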

INSTALLING TILLER

Tiller, the server portion of Helm, typically runs inside of your Kubernetes cluster. But for development, it can also be run locally, and configured to talk to a remote Kubernetes cluster.

Easy In-Cluster Installation

The easiest way to install tiller into the cluster is simply to run helm init. This will validate that helm’s local environment is set up correctly (and set it up if necessary). Then it will connect to whatever cluster kubectl connects to by default (kubectl config view). Once it connects, it will install tiller into the kube-system namespace.

After helm init, you should be able to run kubectl get pods --namespace kube-system and see Tiller running.

You can explicitly tell helm init to…

Install the canary build with the --canary-image flag

Install a particular image (version) with --tiller-image

Install to a particular cluster with --kube-context

Install into a particular namespace with --tiller-namespace

Once Tiller is installed, running helm version should show you both the client and server version. (If it shows only the client version, helm cannot yet connect to the server. Use kubectl to see if any tiller pods are running.)

Helm will look for Tiller in the kube-system namespace unless --tiller-namespace or TILLER_NAMESPACE is set.

Installing Tiller Canary Builds

Canary images are built from the master branch. They may not be stable, but they offer you the chance to test out the latest features.

The easiest way to install a canary image is to use helm init with the --canary-image flag:

$ helm init --canary-image

This will use the most recently built container image. You can always uninstall Tiller by deleting the Tiller deployment from the kube-system namespace using kubectl.

Running Tiller Locally

For development, it is sometimes easier to work on Tiller locally, and configure it to connect to a remote Kubernetes cluster.

The process of building Tiller is explained above.

Once tiller has been built, simply start it:

$ bin/tiller
Tiller running on :44134

When Tiller is running locally, it will attempt to connect to the Kubernetes cluster that is configured by kubectl. (Run kubectl config view to see which cluster that is.)

You must tell helm to connect to this new local Tiller host instead of connecting to the one in-cluster. There are two ways to do this. The first is to specify the --host option on the command line. The second is to set the $HELM_HOST environment variable.
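For example, to point the client at the locally running Tiller from the output above and verify the connection:

$ export HELM_HOST=localhost:44134
$ helm version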

DELETING OR REINSTALLING TILLER

Because Tiller stores its data in Kubernetes ConfigMaps, you can safely delete and re-install Tiller without worrying about losing any data. The recommended way of deleting Tiller is with kubectl delete deployment tiller-deploy --namespace kube-system, or more concisely helm reset.

Tiller can then be re-installed from the client with:

$ helm init

CONCLUSION

In most cases, installation is as simple as getting a pre-built helm binary and running helm init. This document covers additional cases for those who want to do more sophisticated things with Helm.

Once you have the Helm Client and Tiller successfully installed, you can move on to using Helm to manage charts.

Kubernetes Distribution Guide

This document captures information about using Helm in specific Kubernetes environments.

We are trying to add more details to this document. Please contribute via Pull Requests if you can.

MINIKUBE

Helm is tested and known to work with minikube. It requires no additional configuration.

SCRIPTS/LOCAL-CLUSTER AND HYPERKUBE

Hyperkube configured via scripts/local-cluster.sh is known to work. For raw Hyperkube you may need to do some manual configuration.

GKE

Google’s GKE hosted Kubernetes platform is known to work with Helm, and requires no additional configuration.

UBUNTU WITH ‘KUBEADM’

Kubernetes bootstrapped with kubeadm is known to work on the following Linux distributions:

Ubuntu 16.04

CAN SOMEONE CONFIRM ON FEDORA?

Some versions of Helm (v2.0.0-beta2) require you to export KUBECONFIG=/etc/kubernetes/admin.conf or create a ~/.kube/config.

CONTAINER LINUX BY COREOS

Helm requires that kubelet have access to a copy of the socat program to proxy connections to the Tiller API. On Container Linux the Kubelet runs inside of a hyperkube container image that has socat. So, even though Container Linux doesn’t ship socat the container filesystem running kubelet does have socat. To learn more read the Kubelet Wrapper docs.

We’d love to provide these or point you toward a trusted provider. If you’re interested in helping, we’d love it. This is how the Homebrew formula was started.

Q: Why do you provide a curl ...|bash script?

A: There is a script in our repository (scripts/get) that can be executed as a curl ..|bash script. The transfers are all protected by HTTPS, and the script does some auditing of the packages it fetches. However, the script has all the usual dangers of any shell script.

We provide it because it is useful, but we suggest that users carefully read the script first. What we’d really like, though, are better packaged releases of Helm.

INSTALLING

I’m trying to install Helm/Tiller, but something is not right.

Q: How do I put the Helm client files somewhere other than ~/.helm?

Set the $HELM_HOME environment variable, and then run helm init:

export HELM_HOME=/some/path
helm init --client-only

Note that if you have existing repositories, you will need to re-add them with helm repo add....

Q: How do I configure Helm, but not install Tiller?

A: By default, helm init will ensure that the local $HELM_HOME is configured, and then install Tiller on your cluster. To locally configure, but not install Tiller, use helm init --client-only.

Q: How do I manually install Tiller on the cluster?

A: Tiller is installed as a Kubernetes deployment. You can get the manifest by running helm init --dry-run --debug, and then manually install it with kubectl. It is suggested that you do not remove or change the labels on that deployment, as they are sometimes used by supporting scripts and tools.

Q: Why do I get Error response from daemon: target is unknown during Tiller install?

A: Users have reported being unable to install Tiller on Kubernetes instances that are using Docker 1.13.0. The root cause of this was a bug in Docker that made that one version incompatible with images pushed to the Docker registry by earlier versions of Docker.

This issue was fixed shortly after the release, and is available in Docker 1.13.1-RC1 and later.

A: We have seen this issue with Ubuntu and Kubeadm in multi-node clusters. The issue is that the nodes expect certain DNS records to be obtainable via global DNS. Until this is resolved upstream, you can work around the issue as follows:

1) Add entries to /etc/hosts on the master mapping your hostnames to their public IPs

2) Install dnsmasq on the master (e.g. apt install -y dnsmasq)

3) Kill the k8s api server container on the master (kubelet will recreate it)

4) Then systemctl restart docker (or reboot the master) for it to pick up the /etc/resolv.conf changes

Q: On GKE (Google Container Engine) I get “No SSH tunnels currently open”

Error: Error forwarding ports: error upgrading connection: No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-[redacted]"?

Another variation of the error message is:

Unable to connect to the server: x509: certificate signed by unknown authority

A: The issue is that your local Kubernetes config file must have the correct credentials.

When you create a cluster on GKE, it will give you credentials, including SSL certificates and certificate authorities. These need to be stored in a Kubernetes config file (default: ~/.kube/config) so that kubectl and helm can access them.

Q: When I run a Helm command, I get an error about the tunnel or proxy

A: Helm uses the Kubernetes proxy service to connect to the Tiller server. If the command kubectl proxy does not work for you, neither will Helm. Typically, the error is related to a missing socat service.

A panic in Tiller is almost always the result of a failure to negotiate with the Kubernetes API server (at which point Tiller can no longer do anything useful, so it panics and exits).

Often, this is a result of authentication failing because the Pod in which Tiller is running does not have the right token.

To fix this, you will need to change your Kubernetes configuration. Make sure that --service-account-private-key-file from the controller-manager and --service-account-key-file from the apiserver point to the same x509 RSA key.

UPGRADING

My Helm used to work, then I upgraded. Now it is broken.

Q: After upgrade, I get the error “Client version is incompatible”. What’s wrong?

Tiller and Helm have to negotiate a common version to make sure that they can safely communicate without breaking API assumptions. That error means that the version difference is too great to safely continue. Typically, you need to upgrade Tiller manually for this.

Using Helm

This guide explains the basics of using Helm (and Tiller) to manage packages on your Kubernetes cluster. It assumes that you have already installed the Helm client and the Tiller server (typically by helm init).

If you are simply interested in running a few quick commands, you may wish to begin with the Quickstart Guide. This chapter covers the particulars of Helm commands, and explains how to use Helm.

THREE BIG CONCEPTS

A Chart is a Helm package. It contains all of the resource definitions necessary to run an application, tool, or service inside of a Kubernetes cluster. Think of it like the Kubernetes equivalent of a Homebrew formula, an Apt dpkg, or a Yum RPM file.

A Repository is the place where charts can be collected and shared, much like a package registry for Kubernetes packages.

A Release is an instance of a chart running in a Kubernetes cluster. One chart can often be installed many times into the same cluster. And each time it is installed, a new release is created. Consider a MySQL chart. If you want two databases running in your cluster, you can install that chart twice. Each one will have its own release, which will in turn have its own release name.

With these concepts in mind, we can now explain Helm like this:

Helm installs charts into Kubernetes, creating a new release for each installation. And to find new charts, you can search Helm chart repositories.

‘HELM SEARCH’: FINDING CHARTS

When you first install Helm, it is preconfigured to talk to the official Kubernetes charts repository. This repository contains a number of carefully curated and maintained charts. This chart repository is named stable by default.

You can see which charts are available by running helm search:

$ helm search
NAME VERSION DESCRIPTION
stable/drupal 0.3.2 One of the most versatile open source content m...
stable/jenkins 0.1.0 A Jenkins Helm chart for Kubernetes.
stable/mariadb 0.5.1 Chart for MariaDB
stable/mysql 0.1.0 Chart for MySQL
...

With no filter, helm search shows you all of the available charts. You can narrow down your results by searching with a filter:
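The filter and install commands themselves are missing here; in Helm 2 they would typically look like the following sketch (the release name happy-panda referenced below assumes the --name flag):

$ helm search mariadb
NAME            VERSION   DESCRIPTION
stable/mariadb  0.5.1     Chart for MariaDB

$ helm install --name happy-panda stable/mariadb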

Now the mariadb chart is installed. Note that installing a chart creates a new release object. The release above is named happy-panda. (If you want to use your own release name, simply use the --name flag on helm install.)

During installation, the helm client will print useful information about which resources were created, what the state of the release is, and also whether there are additional configuration steps you can or should take.

Helm does not wait until all of the resources are running before it exits. Many charts require Docker images that are over 600M in size, and may take a long time to install into the cluster.

To keep track of a release’s state, or to re-read configuration information, you can use helm status:
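For example, for the release above:

$ helm status happy-panda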

More complex expressions are supported. For example, --set outer.inner=value is translated into this:

outer:
  inner: value

Lists can be expressed by enclosing values in { and }. For example, --set name={a, b, c} translates to:

name:
- a
- b
- c

As of Helm 2.5.0, it is possible to access list items using an array index syntax. For example, --set servers[0].port=80 becomes:

servers:
  - port: 80

Multiple values can be set this way. The line --set servers[0].port=80,servers[0].host=example becomes:

servers:
  - port: 80
    host: example

Sometimes you need to use special characters in your --set lines. You can use a backslash to escape the characters; --set name=value1\,value2 will become:

name: "value1,value2"

Similarly, you can escape dot sequences as well, which may come in handy when charts use the toYaml function to parse annotations, labels and node selectors. The syntax --set nodeSelector."kubernetes\.io/role"=master becomes:

nodeSelector:
  kubernetes.io/role: master

Deeply nested data structures can be difficult to express using --set. Chart designers are encouraged to consider the --set usage when designing the format of a values.yaml file.

‘HELM UPGRADE’ AND ‘HELM ROLLBACK’: UPGRADING A RELEASE, AND RECOVERING ON FAILURE

When a new version of a chart is released, or when you want to change the configuration of your release, you can use the helm upgrade command.

An upgrade takes an existing release and upgrades it according to the information you provide. Because Kubernetes charts can be large and complex, Helm tries to perform the least invasive upgrade. It will only update things that have changed since the last release.
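The upgrade command itself is not shown here; given the panda.yaml values file referenced just below, it would typically be:

$ helm upgrade -f panda.yaml happy-panda stable/mariadb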

In the above case, the happy-panda release is upgraded with the same chart, but with a new YAML file:

mariadbUser: user1

We can use helm get values to see whether that new setting took effect.

$ helm get values happy-panda
mariadbUser: user1

The helm get command is a useful tool for looking at a release in the cluster. And as we can see above, it shows that our new values from panda.yaml were deployed to the cluster.

Now, if something does not go as planned during a release, it is easy to roll back to a previous release using helm rollback [RELEASE] [REVISION].

$ helm rollback happy-panda 1

The above rolls back our happy-panda to its very first release version. A release version is an incremental revision. Every time an install, upgrade, or rollback happens, the revision number is incremented by 1. The first revision number is always 1. And we can use helm history [RELEASE] to see revision numbers for a certain release.

HELPFUL OPTIONS FOR INSTALL/UPGRADE/ROLLBACK

There are several other helpful options you can specify for customizing the behavior of Helm during an install/upgrade/rollback. Please note that this is not a full list of cli flags. To see a description of all flags, just run helm <command> --help.

--timeout: A value in seconds to wait for Kubernetes commands to complete. This defaults to 300 (5 minutes).

--wait: Waits until all Pods are in a ready state, PVCs are bound, Deployments have minimum (Desired minus maxUnavailable) Pods in a ready state, and Services have an IP address (and Ingress if a LoadBalancer) before marking the release as successful. It will wait for as long as the --timeout value. If the timeout is reached, the release will be marked as FAILED.

Note: In the scenario where a Deployment has replicas set to 1 and maxUnavailable is not set to 0 as part of the rolling update strategy, --wait will return as ready as soon as it has satisfied the minimum number of Pods in a ready condition.

--no-hooks: This skips running hooks for the command

--recreate-pods (only available for upgrade and rollback): This flag will cause all pods to be recreated (with the exception of pods belonging to deployments)

‘HELM DELETE’: DELETING A RELEASE

When it is time to uninstall or delete a release from the cluster, use the helm delete command:

$ helm delete happy-panda

This will remove the release from the cluster. You can see all of your currently deployed releases with the helm list command:

From the output above, we can see that the happy-panda release was deleted.

However, Helm always keeps records of what releases happened. Need to see the deleted releases? helm list --deleted shows those, and helm list --all shows all of the releases (deleted and currently deployed, as well as releases that failed):

Because Helm keeps records of deleted releases, a release name cannot be re-used. (If you really need to re-use a release name, you can use the --replace flag, but it will simply re-use the existing release and replace its resources.)

Note that because releases are preserved in this way, you can rollback a deleted resource, and have it re-activate.

‘HELM REPO’: WORKING WITH REPOSITORIES

So far, we've been installing charts only from the stable repository. But you can configure helm to use other repositories. Helm provides several repository tools under the helm repo command.
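For example, to list the configured repositories and add another one (the repository name and URL below are purely illustrative):

$ helm repo list
$ helm repo add dev https://example.com/dev-charts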

TILLER, NAMESPACES AND RBAC

In some cases you may wish to scope Tiller or deploy multiple Tillers to a single cluster. Here are some best practices when operating in those circumstances.

Tiller can be installed into any namespace. By default, it is installed into kube-system. You can run multiple Tillers provided they each run in their own namespace.

Limiting Tiller to only be able to install into specific namespaces and/or resource types is controlled by Kubernetes RBAC roles and rolebindings. You can add a service account to Tiller when configuring Helm via helm init --service-account <NAME>. You can find more information about that here.

Release names are unique PER TILLER INSTANCE.

Charts should only contain resources that exist in a single namespace.

It is not recommended to have multiple Tillers configured to manage resources in the same namespace.

CONCLUSION

This chapter has covered the basic usage patterns of the helm client, including searching, installation, upgrading, and deleting. It has also covered useful utility commands like helm status, helm get, and helm repo.

For more information on these commands, take a look at Helm’s built-in help: helm help.

The Helm Plugins Guide

Helm 2.1.0 introduced the concept of a client-side Helm plugin. A plugin is a tool that can be accessed through the helm CLI, but which is not part of the built-in Helm codebase.

Existing plugins can be found in the related section or by searching GitHub.

This guide explains how to use and create plugins.

AN OVERVIEW

Helm plugins are add-on tools that integrate seamlessly with Helm. They provide a way to extend the core feature set of Helm, but without requiring every new feature to be written in Go and added to the core tool.

Helm plugins have the following features:

They can be added and removed from a Helm installation without impacting the core Helm tool.

They can be written in any programming language.

They integrate with Helm, and will show up in helm help and other places.

Helm plugins live in $(helm home)/plugins.

The Helm plugin model is partially modeled on Git’s plugin model. To that end, you may sometimes hear helm referred to as the porcelain layer, with plugins being the plumbing. This is a shorthand way of suggesting that Helm provides the user experience and top level processing logic, while the plugins do the “detail work” of performing a desired action.

INSTALLING A PLUGIN

A Helm plugin management system is in the works. But in the short term, plugins are installed by copying the plugin directory into $(helm home)/plugins.

$ cp -a myplugin/ $(helm home)/plugins/

If you have a plugin tar distribution, simply untar the plugin into the $(helm home)/plugins directory.

BUILDING PLUGINS

In many ways, a plugin is similar to a chart. Each plugin has a top-level directory, and then a plugin.yaml file.

$(helm home)/plugins/
  |- keybase/
     |
     |- plugin.yaml
     |- keybase.sh

In the example above, the keybase plugin is contained inside of a directory named keybase. It has two files: plugin.yaml (required) and an executable script, keybase.sh (optional).

The core of a plugin is a simple YAML file named plugin.yaml. Here is a plugin YAML for a plugin that adds support for Keybase operations:
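The YAML itself did not survive here; a sketch consistent with the fields described below (name, version, usage, description, ignoreFlags, useTunnel, command) would be:

name: "keybase"
version: "0.1.0"
usage: "Integrate Keybase.io tools with Helm"
description: |-
  This plugin provides Keybase services to Helm.
ignoreFlags: false
useTunnel: false
command: "$HELM_PLUGIN_DIR/keybase.sh"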

The name is the name of the plugin. When Helm executes the plugin, this is the name it will use (e.g. helm NAME will invoke this plugin).

name should match the directory name. In our example above, that means the plugin with name: keybase should be contained in a directory named keybase.

Restrictions on name:

name cannot duplicate one of the existing helm top-level commands.

name must be restricted to the characters ASCII a-z, A-Z, 0-9, _ and -.

version is the SemVer 2 version of the plugin. usage and description are both used to generate the help text of a command.

The ignoreFlags switch tells Helm to not pass flags to the plugin. So if a plugin is called with helm myplugin --foo and ignoreFlags: true, then --foo is silently discarded.

The useTunnel switch indicates that the plugin needs a tunnel to Tiller. This should be set to true anytime a plugin talks to Tiller. It will cause Helm to open a tunnel, and then set $TILLER_HOST to the right local address for that tunnel. But don't worry: if Helm detects that a tunnel is not necessary because Tiller is running locally, it will not create the tunnel.

Finally, and most importantly, command is the command that this plugin will execute when it is called. Environment variables are interpolated before the plugin is executed. The pattern above illustrates the preferred way to indicate where the plugin program lives.

There are some strategies for working with plugin commands:

If a plugin includes an executable, the executable for a command: should be packaged in the plugin directory.

The command: line will have any environment variables expanded before execution. $HELM_PLUGIN_DIR will point to the plugin directory.

The command itself is not executed in a shell. So you can’t oneline a shell script.

Helm injects lots of configuration into environment variables. Take a look at the environment to see what information is available.

Helm makes no assumptions about the language of the plugin. You can write it in whatever you prefer.

Commands are responsible for implementing specific help text for -h and --help. Helm will use usage and description for helm help and helm help myplugin, but will not handle helm myplugin --help.

ENVIRONMENT VARIABLES

When Helm executes a plugin, it passes the outer environment to the plugin, and also injects some additional environment variables.

Variables like KUBECONFIG are set for the plugin if they are set in the outer environment.

The following variables are guaranteed to be set:

HELM_PLUGIN: The path to the plugins directory

HELM_PLUGIN_NAME: The name of the plugin, as invoked by helm. So helm myplug will have the short name myplug.

HELM_PLUGIN_DIR: The directory that contains the plugin.

HELM_BIN: The path to the helm command (as executed by the user).

HELM_HOME: The path to the Helm home.

HELM_PATH_*: Paths to important Helm files and directories are stored in environment variables prefixed by HELM_PATH.

TILLER_HOST: The domain:port to Tiller. If a tunnel is created, this will point to the local endpoint for the tunnel. Otherwise, it will point to $HELM_HOST, --host, or the default host (according to Helm’s rules of precedence).

While $HELM_HOST may be set, there is no guarantee that it will point to the correct Tiller instance. This is done to allow the plugin developer to access HELM_HOST in its raw state when the plugin itself needs to manually configure a connection.

A NOTE ON USETUNNEL

If a plugin specifies useTunnel: true, Helm will do the following (in order):

Parse global flags and the environment

Create the tunnel

Set TILLER_HOST

Execute the plugin

Close the tunnel

The tunnel is removed as soon as the command returns. So, for example, a command cannot background a process and assume that that process will be able to use the tunnel.

A NOTE ON FLAG PARSING

When executing a plugin, Helm will parse global flags for its own use. Some of these flags are not passed on to the plugin.

--debug: If this is specified, $HELM_DEBUG is set to 1

--home: This is converted to $HELM_HOME

--host: This is converted to $HELM_HOST

--kube-context: This is simply dropped. If your plugin uses useTunnel, this is used to set up the tunnel for you.

Plugins should display help text and then exit for -h and --help. In all other cases, plugins may use flags as appropriate.

The Monolith

Whenever I hear "monolith", I think of a massive LAMP project with a single, burning hot MySQL database (not always the case).

The monolith architecture looks something like this in Serverless: all requests go to a single Lambda function, app.js. Users and games have nothing to do with one another, but the application logic for users and games lives in the same Lambda function.
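The accompanying diagram is missing here; in Serverless Framework terms the idea is roughly the following serverless.yml sketch (service, handler, and route names are illustrative), where every route maps to the single app.js handler:

service: my-app                # illustrative service name

functions:
  app:
    handler: app.handler       # one Lambda (app.js) serves every endpoint
    events:
      - http: 'GET /users/{id}'
      - http: 'POST /users'
      - http: 'GET /games/{id}'
      - http: 'POST /games'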

Pros

We found that the greatest advantage that the monolith had over nanoservices and microservices was speed of deployment. With nanoservices and microservices, you have to deploy multiple copies of dependent node_modules (with Node.js) and any library code that your functions share, which can be slow. With the monolith, it's a single function deployment to all API endpoints, so deployment is faster. On the other hand, how common is it to want to deploy all endpoints…

Cons

This architecture in Serverless has similar drawbacks to the monolithic architecture in general:

Tighter coupling. In the example above, app.js is responsible for both users and games. If a bug gets introduced into the users part of the function, it’s more likely that the games part might break too

Complexity. If all application logic is in a single function, the function can get more complicated which can slow down development and make it easier to introduce bugs

Microservices

Microservices in Serverless look something like this:
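The diagram is missing here as well; the corresponding serverless.yml sketch (handler names illustrative) splits the logic into one function per resource:

functions:
  users:
    handler: users.handler     # all /users routes
    events:
      - http: 'GET /users/{id}'
      - http: 'POST /users'
  games:
    handler: games.handler     # all /games routes
    events:
      - http: 'GET /games/{id}'
      - http: 'POST /games'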

Pros

The microservices architecture in Serverless inherits the advantages of microservices in general. A couple:

Separation of concerns. If a bug has been introduced into games.js, calls to users.js should carry on working. Of course, GET /users/:id might contact the games microservice to get the games that the user plays, but if users.js and games.js are proper microservices, then users.js should handle games.js failing, and vice versa.

Less complexity. Adding additional functionality to games.js doesn’t make the users.js codebase any more complicated

Cons

As mentioned above with the monolith, microservices in Serverless can result in slower deployments.

Nanoservices

Nanoservices take microservices to the extreme – one function per endpoint instead of one per resource:

Pros

Nanoservices take the advantages of microservices and amplify them. Separation of concerns is greater: get_users.js is probably going to be simpler than a users.js that handles everything to do with a user.

Cons

Again, similar to microservices but even more so – the number of functions to deploy can get huge so deployment time can increase.

Hybrid

There is nothing to stop developers taking a hybrid approach to their architecture, e.g. a mixture of nanoservices and microservices. In our example, if there is a lot of game functionality, it might make sense to split that functionality into nanoservices, but if there is less functionality related to users, microservices could be more appropriate.

If you have any examples of how you’re architecting your Serverless projects, tell us about it!

The pods in Kubernetes are in a pending state when we execute kubectl get pods.
Execute the following command to see the root cause:
kubectl get events
You will see output as follows:
LAST SEEN  FIRST SEEN  COUNT  NAME                                           KIND  SUBOBJECT  TYPE     REASON            SOURCE             MESSAGE
1m         14h         3060   hello-nginx-5d47cdc4b7-8btwf.14ecd67c4676131c  Pod              Warning  FailedScheduling  default-scheduler  No nodes are available that match all of the predicates: PodToleratesNodeTaints (1).
This error usually comes when we try to create a pod on the master node.
Execute the following command:
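The command the author ran is not shown; the usual fix for this taint-related scheduling failure on a master-only cluster is to remove the master taint so that pods can be scheduled on the master node:

$ kubectl taint nodes --all node-role.kubernetes.io/master-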

helm install stable/mysql fails with: Error: no available release name found
Execute the helm ls command to get the root cause. The error I received is:
Error: configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list configmaps in the namespace "kube-system"
The default serviceaccount does not have API permissions. Helm needs to be assigned a service account, and that service account must be given API permissions.
The commands used to solve this are:
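The exact commands are missing from the post; the commonly used fix is to create a tiller service account, bind it to cluster-admin, and point the Tiller deployment at it (a sketch; tighten the role binding to suit your security requirements):

$ kubectl create serviceaccount --namespace kube-system tiller
$ kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
$ kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'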

After that, if you get the following error: Error: forwarding ports: error upgrading connection: unable to upgrade connection: pod not found ("tiller-deploy-cffb976df-m5z6f_kube-system")
then execute helm init --upgrade

Kubernetes pods keep crashing with "CrashLoopBackOff" but I can't find any log:
I needed to keep a pod running for subsequent kubectl exec calls, and as the comments above pointed out, my pod was getting killed by my k8s cluster because it had completed running all its tasks. I managed to keep my pod running by simply starting the pod with a command that would not stop automatically, as in:
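The command itself is missing here; a typical way is to run something that does not exit on its own, for example (image and duration are illustrative):

$ kubectl run busybox --image=busybox --restart=Never -- sleep 3600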

Kubernetes pods cannot connect to the internet (kubeadm):
If your pods cannot connect to the internet, you can check the following:

Spin up a busybox pod and execute: ping 8.8.8.8, ping google.com, and route -n. The last command will give you an IP for the gateway; check if you can ping the gateway.

On the Kubernetes master node, check the IP of the kube-dns pod with the command: kubectl get pods -n kube-system -o wide | grep kube-dns. This will return an IP in the output. In your pod's container, check whether this IP is present as a nameserver. Run ifconfig in the container and note the IP address range assigned to it.

On the Kubernetes master node, execute ifconfig and check which bridge's IP range the previously noted IP address belongs to.

If it belongs to some other interface than expected, check it by executing brctl show and see whether the bridge has an interface attached to it.

If not, this is the reason the pods do not have an internet connection. You can attach the interface with this command: brctl addif mybridge eth0

This issue can also occur with the weave network; try doing a kubeadm reset and adding a flannel network instead.

Kubernetes is a cluster and orchestration engine for Docker containers. In other words, Kubernetes is open source software which is used to orchestrate and manage Docker containers in a cluster environment. Kubernetes is also known as k8s; it was developed by Google and donated to the Cloud Native Computing Foundation.

In a Kubernetes setup we have one master node and multiple worker nodes. Worker nodes are also known as minions. From the master node we manage the cluster and its nodes using the 'kubeadm' and 'kubectl' commands.

Kubernetes can be installed and deployed using following methods:

Minikube ( It is a single node kubernetes cluster)

Kops ( Multi node kubernetes setup into AWS )

Kubeadm ( Multi Node Cluster in our own premises)

In this article we will install the latest version of Kubernetes 1.7 on CentOS 7 / RHEL 7 with the kubeadm utility. In my setup I am taking three CentOS 7 servers with minimal installation. One server will act as the master node and the other two servers will be minion or worker nodes.

On the Master Node following components will be installed

API Server – It provides the Kubernetes API using JSON / YAML over HTTP; the states of API objects are stored in etcd.

Scheduler – It is a program on master node which performs the scheduling tasks like launching containers in worker nodes based on resource availability

Controller Manager – Main Job of Controller manager is to monitor replication controllers and create pods to maintain desired state.

etcd – It is a key-value database. It stores the configuration data of the cluster and the cluster state.

Kubectl utility – It is a command line utility which connects to API Server on port 6443. It is used by administrators to create pods, services etc.

On Worker Nodes following components will be installed

Kubelet – It is an agent which runs on every worker node, it connects to docker and takes care of creating, starting, deleting containers.

Kube-Proxy – It routes the traffic to appropriate containers based on ip address and port number of the incoming request. In other words we can say it is used for port translation.

Pod – Pod can be defined as a multi-tier or group of containers that are deployed on a single worker node or docker host.

Installations Steps of Kubernetes 1.7 on CentOS 7 / RHEL 7

Perform the following steps on Master Node

Step 1: Disable SELinux & setup firewall rules

Log in to your Kubernetes master node, set the hostname, and disable SELinux using the following commands:
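The commands themselves were not preserved here; on CentOS 7 they would typically be (run as root; the hostname is illustrative):

$ hostnamectl set-hostname 'k8s-master'
$ setenforce 0
$ sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=permissive/g' /etc/sysconfig/selinux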

Step 5: Deploy pod network to the cluster

Try running the commands below to get the status of the cluster and pods.
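For example:

$ kubectl get nodes
$ kubectl get pods --all-namespaces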

To make the cluster status Ready and the kube-dns status Running, deploy the pod network so that containers on different hosts can communicate with each other. The POD network is the overlay network between the worker nodes.

As we can see, the master and worker nodes are in Ready status. This means Kubernetes 1.7 has been installed successfully and that we have successfully joined two worker nodes. Now we can create pods and services.