My Life as a Sys Admin

Nowadays CI, or Continuous Integration, is being implemented in almost all IT companies, and much of the DevOps work is related to it. The common scenario is: developers push code to the Git/SVN repo, which triggers Jenkins to run tests and sometimes packaging, and if it's a fully automated system the new changes are deployed to staging, where the QA team takes over the testing. But in a small team, all of this has to be achieved with minimal people. So before a new change is pushed all the way to staging, I decided to run a quick, simple test of all the components. I've read blogs where DevOps engineers spin up new instances as a full replica of their entire architecture, deploy the new code, and load test this new cluster; if all the components behave properly with the new code change, it's then deployed to staging for the next level of full-scale QA.

Though the above approach sounds interesting, I didn't want to waste resources by spinning up a new set of instances each time. Being a hardcore Docker fan, I decided to replace the instance launches with Docker containers. So instead of launching new instances, Jenkins launches new Docker containers on an SDN (Software Defined Network). Below is a simple architecture diagram of my new design.

So the workflow goes like this:

1) Developers push the new code changes, along with a new tag, to the corresponding repositories.

2) A GitHub webhook then triggers Jenkins to start the build jobs.

3) Jenkins performs the build and, if the build succeeds, triggers Debian packaging for the application.

4) Once the packaging is completed, Jenkins triggers a Docker image build for the corresponding application using the newly built packages.

5) Once the image build is completed, Jenkins uses Docker Compose to bring up our virtual cluster, which is an exact replica of our Prod/Staging.

6) Once the cluster is up, we run automated tests against all our components and make sure they behave normally with the new code changes (a rough sketch of this stage is shown below).

Once the test results look normal, we can initiate the code deployment to staging and start full-scale QA.
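As a rough sketch of what that Jenkins test stage looks like (the compose file name, project name, and test script are illustrative assumptions, not the exact job configuration):

# Jenkins "Execute shell" build step (illustrative sketch)
docker-compose -f docker-compose.test.yml -p ci${BUILD_NUMBER} up -d   # bring up the throwaway cluster
./run_component_tests.sh ; rc=$?                                       # hypothetical script that hits each service
docker-compose -f docker-compose.test.yml -p ci${BUILD_NUMBER} stop    # tear everything down, pass or fail
docker-compose -f docker-compose.test.yml -p ci${BUILD_NUMBER} rm -f
exit $rc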

By using Docker, I was able to reduce the resource usage: all these containers run on a single m3.medium instance. Since I'm concentrating on whether the components work together, not on load testing, this smaller box was enough to get the results I needed.

A bit about docker-compose. I'm using docker-compose for managing the Docker cluster. Compose is a tool for defining and running complex applications with Docker. With Compose, we can define a multi-container application in a single file and then spin our application up with a single command, which does everything needed to get it running. Below is roughly what my docker-compose .yml file looks like.
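A minimal sketch for a stack with an app container plus MongoDB and Redis backing services; the service names, images, ports and volumes here are assumptions, not the actual production topology:

# docker-compose.yml (illustrative sketch)
app:
  image: myregistry.example.com/myapp:latest
  ports:
    - "8080:8080"
  links:
    - mongo
    - redis
  environment:
    - APP_ENV=ci

mongo:
  image: mongo:2.6
  volumes:
    - /data/ci/mongo:/data/db

redis:
  image: redis:2.8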

pkgr is a tool for building deb/rpm packages for Python/Ruby/Node/Go applications. It uses Heroku buildpacks and embeds all the dependencies of the application runtime within the package. It also gives us a nice executable that closely replicates the Heroku Toolbelt utility. There are only two requirements for pkgr: 1) the application must have a Procfile, and 2) it should be Heroku compatible.

By default, pkgr supports packaging Ruby/Go/Node apps. But it also supports custom buildpacks, so we can use the heroku-python buildpack to package Python apps too.
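As a quick sketch, packaging a small Python web app might look like this; the Procfile command, package name and buildpack flag are assumptions based on pkgr's documented usage, so double-check them against the version you install:

# Procfile at the root of the application
web: gunicorn app:app --bind 0.0.0.0:$PORT

# build a .deb for the app (run from the project directory)
$ pkgr package . --name myapp --buildpack https://github.com/heroku/heroku-buildpack-python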

MongoDB is one of the most commonly used NoSQL document stores. For smaller use cases, we might not need a full-scale replica set; instead we can use MongoDB in the traditional master-slave architecture. In this blog, I'm going to explain how to convert a standalone MongoDB server to a master-slave model, and how to promote a slave instance to master in case the master crashes.

Standalone to Master-Slave Model

First, on the master node, we need to add master = true to the MongoDB config file and restart the mongo service. On the new mongo node, which is going to be the slave, add the below config options to the MongoDB configuration file.
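With the old-style (pre-YAML) mongod config syntax, the relevant lines look roughly like this; the master host and port are placeholders:

# on the master node (mongodb.conf)
master = true

# on the slave node (mongodb.conf)
slave = true
source = <master_host>:27017   # address of the master to replicate from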

We can check the replication status from the mongo master CLI via rs.printReplicationInfo() or db.serverStatus( { repl: 1 } ). We can check the same on the slave nodes, but by default read queries are not allowed on a slave and will throw an error. We can allow reads by running db.getMongo().setSlaveOk() in the slave's mongo shell. This overrides the restriction, and we can then use rs.printReplicationInfo() or db.serverStatus( { repl: 1 } ) to see the replication status.

Promoting a Slave node to Master

This is one of the main reasons we keep slave nodes: in case of a master crash, we can easily promote a slave node and minimize the interruption. To promote a slave node to master, follow the steps below.

1) Stop the mongo service on the slave.
2) Remove all the local files from the mongo data directory:
$ cd <mongo_data_directory> && rm -rvf local*
3) Remove the slave configurations from the mongo config file and set `master=true` (this is required if we have more than one slave, so that the rest of the slaves can connect to the new master).
4) Restart the mongo service; the new master is now ready to accept writes.

If we have multiple slaves, we need to change the slave source IP so that they can connect to the new master. But even if they connect to the new master, replication will fail. So we have two options: either remove the data and perform a fresh data replication, or force a complete resync on all the slaves using the command below.

# On each slave's mongo shell, run
> use admin
> db.runCommand( { resync: 1 } )   # forces the slave to perform a complete resync from the master

This procedure is useful if you are using a standalone/master-slave setup. For a real HA/fault-tolerant design, a replica set is far more efficient: primary election happens automatically if the current primary crashes, keeping downtime to a minimum.

It's been quite a while since my last blog. This time I'm coming with a bunch of topics to write about, starting with Kannel. After moving to my new role, the first task I got was to set up an SMPP server with one of our carriers. After digging around the internet for a while, I found the Kannel project, which turned out to be a perfect fit for me. So in this blog, I'll explain how to set up an SMPP SMS gateway locally.

Now we have Kannel installed under our custom prefix. Let's go ahead and set up the Kannel application.

Setting up Kannel

Kannel comprises two processes: smsbox and bearerbox. The bearerbox service is the one in contact with the carrier gateways, responsible for sending and receiving SMS. smsbox is the service that sits between our application and bearerbox; it receives incoming SMS from the bearerbox and forwards them to our application, and vice versa. The Kannel config consists of multiple parts, which are explained below.
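A stripped-down kannel.conf covering the main groups might look like the sketch below; the carrier host, credentials, ports and callback URL are placeholders, not real values:

# core group: bearerbox itself
group = core
admin-port = 13000
smsbox-port = 13001
admin-password = changeme
log-file = "/var/log/kannel/bearerbox.log"

# smsc group: the SMPP link to the carrier
group = smsc
smsc = smpp
smsc-id = carrier
host = smpp.carrier.example.com
port = 2775
smsc-username = myuser
smsc-password = mypass
system-type = "default"

# smsbox group: bridges bearerbox and our HTTP application
group = smsbox
bearerbox-host = 127.0.0.1
sendsms-port = 13013
log-file = "/var/log/kannel/smsbox.log"

# sendsms-user group: credentials our app uses to send outbound SMS
group = sendsms-user
username = tester
password = foobar

# sms-service group: where incoming SMS get POSTed
group = sms-service
keyword = default
post-url = "http://127.0.0.1:5000/incoming-sms"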

Add all the above configuration groups, adjusted to your requirements, to the kannel.conf file. A sample init script for Debian/Ubuntu is available here.

Once the SMPP service is started, check the bearerbox logs for connectivity with the carrier's SMPP gateway. Once the connection is up, we can start sending/receiving SMS. For incoming SMS, smsbox will make an HTTP request based on our configuration. For example, if we are using the POST method, SMS details like From and To can be retrieved from the POST headers and the SMS text from the request body. Below are some of the headers that come along with the POST requests.

X-Kannel-From => sender id
X-Kannel-To => recipient id

Similarly, for outbound SMS, our application makes an HTTP GET request to the smsbox URL, and smsbox carries it over to the bearerbox, which then passes it on to the carrier for delivery.
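With the sendsms-user defined in the sketch above, an outbound message is just a GET to smsbox's sendsms interface; for example (host, port and credentials are the placeholder values from the sketch):

$ curl "http://127.0.0.1:13013/cgi-bin/sendsms?username=tester&password=foobar&to=%2B15551234567&text=Hello+from+Kannel"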

For the past two years, I have played with config management tools like Puppet and Salt. All of these tools mostly follow a client-server model, except Salt, which also supports a push model. For the last six months, though, Ansible has been gaining popularity. Ansible is a push-model system that relies on SSH. So before adopting Ansible completely, I decided to give it a try and make sure it supports all the basic features its competitors offer, which also really helps with migration.

Installation

Ansible is pretty easy to install. We can install it from source, via package managers, or even via pip. We can use the official Ubuntu PPA for installing Ansible.

Since Ansible relies on SSH, things like host key verification errors will prevent the SSH connections, resulting in failures. We can disable the host key verification check in the ansible.cfg file:

host_key_checking = False # add this option to the config file

or we can set an environment variable, export ANSIBLE_HOST_KEY_CHECKING=False, for the current session. By default, Ansible uses the hosts file present in the Ansible home directory, so we can define the static machines there. We can add either the IP or a DNS-resolvable FQDN. Once the IP/FQDN is added, we can test the connectivity via the ping module. Make sure that the Ansible server's SSH key is added to authorized_keys on the remote machines.
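For example, with a couple of machines in the default inventory (the hostnames below are placeholders):

# /etc/ansible/hosts
[webservers]
web1.example.com
10.0.0.12

# test connectivity to everything in the inventory
$ ansible all -m ping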

Managing Custom Facts

Config management tools like Puppet/Salt support custom facts defined on the remote machines: we can define custom facts and the config management server can use them. Even though Ansible is agentless, we can still define custom facts on the remote systems. Whenever we query for facts, Ansible connects to the remote machines and fetches the facts using its default library, but it also looks for custom facts in /etc/ansible/facts.d/. We need to put our custom facts file in this directory. The file has to have a .fact extension, must be executable, and should return valid JSON — that's in the case of a script. If we just want to define some facts statically, we can simply create a file like the one below.

[myfact]
role=test
profile=staging

The above fact file will add two fact variables, role and profile, with the values mentioned in the file. Now let's use the setup module and see if we are able to retrieve the new custom facts.
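Custom facts show up under the ansible_local key, so a quick check looks like this (the hostname is a placeholder):

$ ansible web1.example.com -m setup -a "filter=ansible_local"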

Managing Dynamic Inventory

In a cloud environment, it's difficult to maintain a static inventory. Ansible does support dynamic inventories for vendors including AWS EC2, and provides an inventory script for it. We can use this script directly to query EC2 and get the list of all instances. To successfully make an API call to AWS, we will need to configure Boto. The simplest way is just to export two environment variables:
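$ export AWS_ACCESS_KEY_ID='AKXXXXXXXXXXXXXXXX'         # dummy values
$ export AWS_SECRET_ACCESS_KEY='XXXXXXXXXXXXXXXXXXXX'

With the credentials in place, the inventory script can be run directly, and the groups it generates (per tag, region, instance type and so on) can be targeted from ad-hoc commands; roughly:

$ ./ec2.py --list                          # dumps the full EC2 inventory as JSON
$ ansible -i ec2.py tag_Name_test -m ping  # target instances by their Name tag group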

We can also use patterns with these group names, say tag_Name_test*. For Rackspace users, there is an official module called rax that works perfectly with Ansible.

Encrypting YAML Data Files

This is an important feature that most config management systems lack. In most of the current systems, we have to define sensitive data such as SSH keys, API auth IDs/tokens, etc. in plain text, which increases the security risk. Ansible Vault comes to the rescue here. The vault feature can encrypt any structured data file used by Ansible. This can include "group_vars/" or "host_vars/" inventory variables, variables loaded by "include_vars" or "vars_files", or variable files passed on the ansible-playbook command line with "-e @file.yml" or "-e @file.json". Role variables and defaults are also included! While invoking any playbook, we can pass --ask-vault-pass along with the vault password, so Ansible can decrypt the file and use its contents during execution.
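In practice that boils down to something like this (the file and playbook names are placeholders):

# encrypt an existing vars file (prompts for a vault password)
$ ansible-vault encrypt group_vars/all/secrets.yml

# edit or view it later
$ ansible-vault edit group_vars/all/secrets.yml

# run a playbook that uses the encrypted file
$ ansible-playbook site.yml --ask-vault-pass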

Ansible is indeed an awesome product. Compared to its competitors it has some nice new features like Vault, and it's backed by an awesome community, so we can expect more exciting features in the future.

Last year, container-based technology had a big boom. A lot of open-source projects and startups built on top of Docker, and Docker has become a favourite tool for both Dev and Ops folks. I'm a big fan of Docker and I do all my hacks on containers. This time I decided to play with a private Docker registry, so that all my Docker clients sync with a central registry. In this test setup I'm using an Ubuntu 12.04 server with Nginx as a reverse proxy. With the Nginx proxy I can easily enforce basic auth and protect my private Docker registry from unauthorized access.

Installing Docker Registry

Download the latest release of Docker Registry from Docker's GitHub repo.
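The Nginx vhost sits in front of the registry (which listens on port 5000 by default) and handles SSL and basic auth. A minimal sketch, assuming a self-signed certificate and an htpasswd file; paths and server_name are placeholders:

# /etc/nginx/sites-enabled/docker-registry
upstream docker-registry {
    server 127.0.0.1:5000;
}

server {
    listen 443 ssl;
    server_name registry.example.com;

    ssl_certificate     /etc/nginx/ssl/registry.crt;   # self-signed cert
    ssl_certificate_key /etc/nginx/ssl/registry.key;

    client_max_body_size 0;   # image layers can be large

    location / {
        auth_basic           "Docker Registry";
        auth_basic_user_file /etc/nginx/htpasswd;
        proxy_pass           http://docker-registry;
        proxy_set_header     Host $host;
        proxy_set_header     X-Forwarded-Proto https;
        proxy_read_timeout   900;
    }
}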

Once Nginx is up, we can check the connectivity between the Docker client and the registry server. Since the registry is using a self-signed certificate, we need to whitelist the CA on the Docker client machine.

Note: if the CA is not added to the trusted list, the Docker client won't be able to authenticate against the registry server. Once the CA is added to the trusted list, we can test the connectivity between the Docker client and the registry server. If the Docker daemon was running before adding the CA, we need to restart the Docker daemon.
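On Ubuntu, whitelisting the CA is roughly the following (the certificate filename is a placeholder, and the Docker service name depends on how Docker was installed):

$ cp registry-ca.crt /usr/local/share/ca-certificates/registry-ca.crt
$ update-ca-certificates
$ service docker restart   # only needed if the daemon was already running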

Currently both the Docker client and the registry reside on the same machine, but we can test pushing/pulling images from a remote machine too. The only dependency is that we need to add the self-signed CA to the trusted CA list there as well; otherwise the Docker client will raise an SSL error while trying to log in to the private registry.
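A quick end-to-end test from any client might look like this (the registry hostname and image are placeholders, and the credentials are the ones configured in the Nginx htpasswd file):

$ docker login https://registry.example.com          # basic-auth prompt from the Nginx proxy
$ docker pull ubuntu:12.04
$ docker tag ubuntu:12.04 registry.example.com/ubuntu:12.04
$ docker push registry.example.com/ubuntu:12.04
$ docker pull registry.example.com/ubuntu:12.04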

Setting up S3 Backend for Docker Registry

The Docker registry supports an S3 backend for storing the images out of the box. But if we are using S3, it's better to cache images locally so that we don't have to hit S3 all the time. Redis really comes to the rescue here: we can set up a Redis server as an LRU cache and define the settings in the registry's config.yml or as environment variables.

$ apt-get install redis-server

Once the Redis server is installed, we need to define the maxmemory to be allocated for the cache and a maxmemory-policy, which tells Redis how to evict old cache entries when the maxmemory limit is reached. Add the below settings to the redis.conf file:

maxmemory 2000mb # i'm allocating 2GB of cache size
maxmemory-policy volatile-lru # removes the key with an expire set using an LRU algorithm

Now let’s define the env variables so that docker-registry can use them while starting up. Add the below variables to the /etc/default/docker-registry file.

The registry startup logs should show that it has started with the Redis cache. Now we need to set up the S3 backend storage. By default, for the dev environment, the backend is file storage; we need to change it to S3 in the config.yml.

Now if we check the config.yml, in the S3 backend section the mandatory variables are the ones below. The boto variables are needed only if we are using a non-Amazon S3-compliant object store.

AWS_REGION => S3 region where the bucket is located
AWS_BUCKET => S3 bucket name
STORAGE_PATH => the sub "folder" where image data will be stored
AWS_ENCRYPT => if true, the container will be encrypted on the server-side by S3 and will be stored in an encrypted form while at rest in S3. Default value is `True`
AWS_SECURE => true for HTTPS to S3
AWS_KEY => S3 Access key
AWS_SECRET => S3 secret key

We can define the above variables in the /etc/default/docker-registry file, and we need to restart the registry process to make the changes effective.
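As an illustrative sketch, with dummy values; the SETTINGS_FLAVOR variable and the registry service name depend on how the registry was packaged and configured, so treat them as assumptions:

# /etc/default/docker-registry
SETTINGS_FLAVOR=prod        # assumption: a config.yml flavour wired to the S3 backend
AWS_REGION=us-east-1
AWS_BUCKET=my-docker-images
STORAGE_PATH=/registry
AWS_ENCRYPT=true
AWS_SECURE=true
AWS_KEY=AKXXXXXXXXXXXXXXXX
AWS_SECRET=XXXXXXXXXXXXXXXXXXXXXXXX

$ service docker-registry restart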

Now, for those who want a Continuous Integration system, we can set up Jenkins to build automated images and upload them to our private registry, and use Mesos/CoreOS to deploy the image throughout our infrastructure in a fully automated fashion.

With the rise of CI tools like Jenkins/GitLab and config management tools like Salt/Ansible, continuous integration has become very flexible. Most projects now use Git for version control and CI tools like Jenkins to build and test the packages automatically whenever a change is pushed to the repo. Finally, once the build is successful, the packages are pushed to a repo so that config management systems like Salt/Puppet/Ansible can go ahead and perform the upgrade. In my previous blogs, I've explained how to build a Debian package and how to create and manage APT repos via aptly. In this blog I'll explain how to automate these two processes.

So the flow is like this: we have a GitHub repo, and once a change is pushed to it, GitHub sends a hook to our Jenkins server, which in turn triggers the Jenkins package build. Once the package has been built successfully, Jenkins automatically adds the new packages to our repo and publishes them to our APT repo via aptly.

Once the Jenkins service is started, we can access the Jenkins UI via "http://jenkins-server-ip:8080". By default there is no authentication for this URL, so accessing it will open up the Jenkins UI.

Creating a Build Job in Jenkins

In order to use a Git repo, we have to install the Git plugin first. In the Jenkins UI, go to "Manage Jenkins" -> "Manage Plugins" -> "Available", search for "Git Plugin", and install it. Once the Git plugin has been installed, we can create a new build job.

Click on "New Item" on the home page, select "Freestyle Project", and click "OK". On the next page, we need to configure all the necessary steps for the build job. Fill in the necessary details like project name, description, etc. Under "Source Code Management", select Git and enter the repo URL. Make sure that the jenkins user has access to the repo; we could also use deploy keys, but I generated a separate SSH key for the Jenkins user and added it to GitHub. Under "Build Triggers", select "Build when a change is pushed to GitHub" so that Jenkins starts the build job every time a change is pushed to the repo. The actual build and publish commands go into an "Execute shell" build step; a rough sketch is below.
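Something along these lines, assuming the package is built with dpkg-buildpackage and published with aptly; the repo name and distribution are placeholders:

# "Execute shell" build step (illustrative sketch)
dpkg-buildpackage -us -uc -b                                  # build the .deb from the checked-out source
script -c "aptly repo add myrepo ../myapp_*.deb" /dev/null    # script(1) gives aptly/gpg a pseudo-tty
script -c "aptly publish update precise" /dev/null            # republish the repo for the target distribution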

If you look at the build step above, I've used the script command. This is because I was getting the error "aptly stderr: gpg: cannot open tty `/dev/tty': No such device or address" whenever I tried to update a repo via aptly from Jenkins. This is due to a bug in aptly; the fix has landed on the master branch but is not yet released, and the script command is a temporary workaround for it.

Now we have a build job ready. We can manually trigger a build to test if the job works fine. If the build is successful, we are done with our build server. The final step is configuring GitHub to send a trigger whenever a change is pushed.

Configuring Github Triggers

Go to the GitHub repo and click on the repo settings. Open "Webhooks and Services", select "Add Service", and choose "GitHub plugin". It will ask for the Jenkins hook URL, which is "http://jenkins-server-ip:8080/github-webhook/"; add the service. Once the service is set, we can click on "Test service" to check if the webhook is working fine.

Once the test hook is sent, go to the Jenkins job page and select "GitHub Hook Log". The test hook should be displayed there; if not, something is wrong with the config.

Now we have fully automated build and release management. Config management tools like Salt/Ansible can go ahead and start the deployment process.