My Life as a Sys Admin


Last year, container-based technology saw a big boom. A lot of open source projects and startups built on top of Docker, and Docker has become a favourite tool for both Dev and Ops folks. I'm a big fan of Docker and I do all my hacks on containers. This time I decided to play with a private Docker registry, so that I can sync all my Docker clients against a central registry. In this test setup I'm using an Ubuntu 12.04 server with Nginx as a reverse proxy. With the Nginx proxy I can easily enforce basic auth and protect my private Docker registry from unauthorized access.

Installing Docker Registry

Download the latest release of Docker Registry from Docker's GitHub repo.
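As a rough sketch, fetching and installing the (Python-era) registry could look like this; the repository URL and pip-based install are assumptions, so adjust them to the release you actually downloaded:

```shell
# Fetch the registry source and install it with its Python dependencies
git clone https://github.com/docker/docker-registry.git
cd docker-registry
sudo pip install .
```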

Now once Nginx is up, we can check the connectivity between the Docker client and the registry server. Since the registry is using a self-signed certificate, we need to whitelist the CA on the Docker client machine.
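On Ubuntu, whitelisting the CA might look like this; the certificate file name is a placeholder and the trust-store path varies by distro:

```shell
# Copy the registry's CA cert into the system trust store and rebuild it
sudo cp registry-ca.crt /usr/local/share/ca-certificates/docker-registry.crt
sudo update-ca-certificates
```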

Note: If the CA is not added to the trusted list, the Docker client won't be able to authenticate against the registry server. Once the CA is added to the trusted list, we can test the connectivity between the Docker client and the registry server. If the Docker daemon was running before the CA was added, we need to restart the Docker daemon.

Currently both the Docker client and the registry reside on the same machine, but we can also test pushing/pulling an image from a remote machine. The only dependency is that we add the self-signed CA to that machine's trusted CA list as well; otherwise the Docker client will raise an SSL error while trying to log in to the private registry.
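From the remote machine, a quick smoke test could look like this, where registry.example.com is a placeholder for the registry's hostname:

```shell
# Log in through the Nginx basic-auth proxy, then round-trip an image
docker login https://registry.example.com
docker tag ubuntu registry.example.com/ubuntu
docker push registry.example.com/ubuntu
docker pull registry.example.com/ubuntu
```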

Setting up S3 Backend for Docker Registry

The Docker registry supports an S3 backend for storing images out of the box. But if we are using S3, it's better to cache images locally so that we don't have to hit S3 all the time. Redis really comes to the rescue here: we can set up a Redis server as an LRU cache and define the settings in the registry's config.yml or as environment variables.

$ apt-get install redis-server

Once the Redis server is installed, we need to define the maxmemory to be allocated for the cache and the maxmemory-policy, which tells Redis how to evict old cache entries when the maxmemory limit is reached. Add the settings below to the redis.conf file.

maxmemory 2000mb # I'm allocating 2GB of cache size
maxmemory-policy volatile-lru # removes the key with an expire set using an LRU algorithm

Now let’s define the env variables so that docker-registry can use them while starting up. Add the below variables to the /etc/default/docker-registry file.
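The sample config.yml that ships with docker-registry wires its cache sections to environment variables; something along these lines should work (the variable names are taken from that sample config, so treat them as assumptions and check your config.yml):

```
CACHE_REDIS_HOST=localhost
CACHE_REDIS_PORT=6379
CACHE_LRU_REDIS_HOST=localhost
CACHE_LRU_REDIS_PORT=6379
```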

The registry logs should show that it has started with the Redis cache. Now we need to set up the S3 storage backend. By default, for the dev flavor, the backend is local file storage; we need to change it to S3 in the config.yml.

Now if we check the config.yml, in the S3 backend section, the mandatory variables are the ones mentioned below. The boto variables are needed only if we are using a non-Amazon S3-compliant object store.

AWS_REGION => S3 region where the bucket is located
AWS_BUCKET => S3 bucket name
STORAGE_PATH => the sub "folder" where image data will be stored
AWS_ENCRYPT => if true, the container will be encrypted on the server-side by S3 and will be stored in an encrypted form while at rest in S3. Default value is `True`
AWS_SECURE => true for HTTPS to S3
AWS_KEY => S3 Access key
AWS_SECRET => S3 secret key

We can define the above variables in the /etc/default/docker-registry file, and we need to restart the registry process for the changes to take effect.
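Putting it together, /etc/default/docker-registry might end up looking like this; the flavor name, region, bucket, and path are placeholders for my setup, and the secrets are elided:

```
SETTINGS_FLAVOR=s3
AWS_REGION=us-east-1
AWS_BUCKET=my-registry-bucket
STORAGE_PATH=/registry
AWS_ENCRYPT=true
AWS_SECURE=true
AWS_KEY=<access-key>
AWS_SECRET=<secret-key>
```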

Now for those who want a Continuous Integration system: we can set up Jenkins to build the automated images and upload them to our private registry, then use Mesos/CoreOS to deploy the images throughout our infrastructure in a fully automated fashion.

It's been more than a month since I joined the DevOps family at Plivo. Since I'm pretty new to telecom technology, I've been digging around it. This time I decided to play around with FreeSwitch, a free and open source communications platform for the creation of voice and messaging products. Thanks to Anthony Minessale for designing and open-sourcing such a powerful application. FreeSwitch is well documented, and there are pretty good blogs available on how to set up a PBX using it. This time I'm going to explain how to make a private FreeSwitch server use Plivo as a SIP trunking service.

A bit about Plivo. Plivo is a cloud-based API platform for building voice and SMS enabled applications. Plivo provides Application Programming Interfaces (APIs) to make and receive calls, send SMS, make a conference call, and more. These APIs are used in conjunction with XML responses to control the flow of a call or a message. We can create Session Initiation Protocol (SIP) endpoints to perform the telephony operations, and the APIs are platform independent and can be used from any programming environment such as PHP, Ruby, Python, etc. Plivo also provides helper libraries for these languages.

First we need a valid Plivo account. Once we have one, we can log in to the Plivo cloud service. Now go to the “Endpoints” tab, create a SIP endpoint, and attach a Direct Dial app to it. Once this is done we can go ahead and start setting up the FreeSwitch instance.

Installing FreeSwitch

Clone the official FreeSwitch GitHub repo and compile from source.

$ git clone git://git.freeswitch.org/freeswitch.git && cd freeswitch
$ ./bootstrap.sh && ./configure --prefix=/usr/local/freeswitch
$ make && make install
$ make all cd-sounds-install cd-moh-install # optional, run this if you want IVR and Music on Hold features

Now if we have more than one IP address on the machine and we want to bind to a particular IP, we need to modify two files: /usr/local/freeswitch/conf/sip_profiles/external.xml and /usr/local/freeswitch/conf/sip_profiles/internal.xml. In both files, set the “rtp-ip” and “sip-ip” params to the bind IP.
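In each profile the relevant lines would look something like this, where 10.0.0.5 is a placeholder for the bind IP:

```xml
<param name="rtp-ip" value="10.0.0.5"/>
<param name="sip-ip" value="10.0.0.5"/>
```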

By default, FreeSwitch will create a set of users with numerical usernames, i.e. 1000-1019. So we can test basic connectivity by making a call between two of these accounts: register two of them in two softphones, make a test call, and make sure FreeSwitch is working fine. We can use the FS binary to start the FreeSwitch service in the foreground.

$ /usr/local/freeswitch/bin/freeswitch

Configuring Gateway

Once FreeSwitch is working fine, we can start configuring the SIP trunking via Plivo. First we need to create an external gateway to connect to Plivo; I'm going to use the SIP endpoint created on the Plivo cloud to initiate the connection. The SIP domain for Plivo is phone.plivo.com. We need to create a gateway config: go to /usr/local/freeswitch/conf/sip_profiles/external/ and create an XML gateway config file there. My config file name is plivo.
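A gateway definition along these lines should work; the username and password values are placeholders for the Plivo endpoint credentials, and the param set is a minimal sketch rather than the full list FreeSwitch supports:

```xml
<include>
  <gateway name="plivo">
    <param name="username" value="endpoint_username"/>
    <param name="password" value="endpoint_password"/>
    <param name="realm" value="phone.plivo.com"/>
    <param name="proxy" value="phone.plivo.com"/>
    <param name="register" value="false"/>
    <param name="expire-seconds" value="600"/>
  </gateway>
</include>
```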

There are a lot of other parameters we can add here, like the caller ID. Replace the username and password with the Plivo endpoint credentials. If we want to keep this endpoint registered, we can set the register param to true and set the expiry time in expire-seconds, so that FS keeps re-registering the endpoint with Plivo's registrar server. Once the gateway file is created, we can either restart the service or run “reload mod_sofia” on the FS CLI. If the FreeSwitch service is started in the foreground, we get the FS CLI, so we can run the reload command directly on it.

Setting up Dialplan

Now that we have the gateway added, we need to set up the dialplan to route outgoing calls through Plivo. Go to the /usr/local/freeswitch/conf/dialplan/ folder and add an extension in the public.xml file.
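A sample extension for public.xml might look like this; the destination-number regex is just an example, so tune it to your own numbering plan:

```xml
<extension name="public-to-default">
  <condition field="destination_number" expression="^(\d{10,})$">
    <action application="transfer" data="$1 XML default"/>
  </condition>
</extension>
```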

So now all calls matching the regex will be transferred to the default dialplan. On the default dialplan, I'm creating an extension and using FreeSwitch's “bridge” application to bridge the call with Plivo via the Plivo gateway. So add a matching extension to default.xml as well.
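A sketch of the bridging extension for default.xml; again the regex is only an example, and the gateway name must match the one defined in the gateway config:

```xml
<extension name="plivo-outbound">
  <condition field="destination_number" expression="^(\d{10,})$">
    <action application="bridge" data="sofia/gateway/plivo/$1"/>
  </condition>
</extension>
```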

Now we can restart the FS service, or reload “mod_dialplan_xml” from the FS CLI. Once the changes are in effect, we can test whether calls are getting routed via Plivo. Configure a softphone with a default FS user and make an outbound call that matches the regex we set up for routing to Plivo. If all goes well, we should get a call on the destination number. We can check the FS logs at /usr/local/freeswitch/log/freeswitch.log.

We can also set the caller ID on the Direct Dial app which we have mapped to the SIP endpoint. Now for incoming calls, create an app that forwards the calls to one of the users present in FreeSwitch, using Plivo's Dial XML. The XML should look something like below. I will be writing a more detailed blog about inbound calls once I've tested it out completely.

<Response>
  <Dial>
    <User>FSuser@FSserverIP</User>
  </Dial>
</Response>

But for security, we need to allow connections only from Plivo's servers, so we need to whitelist those IPs in the FS ACL. We can allow the IPs in the acl.conf.xml file at /usr/local/freeswitch/conf/autoload_configs. Also make sure that the FS server is reachable on a public IP, at least for the Plivo servers which will forward the calls.
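In acl.conf.xml this can be expressed roughly as below; 203.0.113.0/24 is a documentation-range placeholder, so substitute Plivo's actual published IP ranges:

```xml
<list name="plivo-servers" default="deny">
  <node type="allow" cidr="203.0.113.0/24"/>
</list>
```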

The WordPress.com stats helper monkeys prepared a 2013 annual report for this blog.

Here’s an excerpt:

The concert hall at the Sydney Opera House holds 2,700 people. This blog was viewed about 20,000 times in 2013. If it were a concert at Sydney Opera House, it would take about 7 sold-out performances for that many people to see it.

I've been playing around with MCollective for the past few months, but this time I wanted to try out the Mongo discovery method. The response time is quite a bit faster with Mongo discovery, so I really wanted to try it. Setting up the MCollective server/client is pretty simple; you can go through my previous blog. Now we need to install the Meta registration plugin on all the MCollective servers. Just download and copy meta.rb into the MCollective registration plugin folder. In my case I have Debian-based machines, so the location is /usr/share/mcollective/plugins/mcollective/registration/. This will make the metadata available to other nodes.

Now add the below three lines to the server.cfg of all the MCollective servers.

registration = Meta
registerinterval = 300
factsource = facter

Now install the mongodb registration agent on one of the nodes, which will be our slave node. Do not install this on all the nodes. There is a small bug in this agent, so follow the steps mentioned here and modify the registration.rb file. Now install the MongoDB server on the slave node, and add the below lines to the server.cfg on the slave machine.
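I haven't preserved the exact option names here, so treat the following as assumptions about how the mongodb registration agent is pointed at its database; check the agent's README for the real keys:

```
plugin.registration.mongohost = localhost
plugin.registration.mongoport = 27017
plugin.registration.mongodb = puppet
```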

Now restart the mcollective service. If we increase the log level to debug, mcollective.log will show that the plugin is activated and that it is receiving registration requests from the other machines, identified by their FQDNs.

Initially, I used the default registration.rb file which I downloaded from GitHub, but it was giving me the error handlemsg Got stats without a FQDN in facts. So don't forget to modify the registration.rb.

Now connect to MongoDB and verify that the nodes are getting registered in it.
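From the mongo shell, a quick check could be something like this; the database and collection names are assumptions, so use whatever the registration agent was configured with:

```shell
# Print every registered node document from the registration collection
mongo puppet --eval 'db.nodes.find().forEach(printjson)'
```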

So now both my master and slave have been registered in MongoDB. In order to use the Mongo discovery method, we need to install the mongodb discovery plugin, and we also need to enable direct addressing mode by adding direct_addressing = 1 to the server.cfg file.
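The client side can then be told to use Mongo discovery either per invocation or as a default; the default_discovery_method option and the --dm flag exist in MCollective 2.x, but verify them against your version:

```
direct_addressing = 1
default_discovery_method = mongo
```

After that, running mco find --dm mongo should resolve nodes from the registration database instead of broadcasting over the middleware.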

A few months back I got a chance to attend rootconf 2012, where I first came to know about Hubot, developed by GitHub. I was very much interested, especially in the GTalk plugin, with which we can integrate Hubot with a Gmail account. We can make Hubot listen to every word and make it respond back. There are many default hubot-scripts which we can use to play around with it.

Configuring Hubot is very simple.

First, we’ll install all of the dependencies necessary to get Hubot up and running.
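The GTalk adapter picks up its credentials from environment variables; the variable names below are from the hubot-gtalk adapter as I remember it, so double-check them, and the values are placeholders:

```shell
# Credentials for the bot's Gmail account (placeholders)
export HUBOT_GTALK_USERNAME="mybot@gmail.com"
export HUBOT_GTALK_PASSWORD="bot-password"
```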

Once all the parameters are set, we can start the Hubot with Gtalk adapter.

$ ./bin/hubot -a gtalk

Now Hubot is online with GTalk. We can add Hubot's Gmail account to our GTalk account and start playing around with it. Hubot comes with a bunch of default scripts; if we type “help”, we will get a bunch of options for each of these scripts.

Today I was able to execute some Bash commands using my custom CoffeeScript scripts, which gave me some interesting ideas about using Hubot for ChatOps. Let's see how it works; once it's done I'll update it on my blog. Wait for more…

Yesterday I found a munin-graphite client, which is used in the carnin-eye project. It just needs one simple client.yml file, whose location can be mentioned in the munin-graphite.rb file. You can get the munin-graphite.rb file from the carnin-eye GitHub page.

We just have to mention the munin-node details in the client.yml file.
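I'm reconstructing the shape of client.yml from memory, so the keys below are assumptions; the idea is simply to point the script at a munin-node and at the Graphite/Carbon listener:

```yaml
munin:
  host: 127.0.0.1
  port: 4949
carbon:
  host: graphite.example.com
  port: 2003
```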

Finally, we have to create a cron job to execute the munin-graphite.rb file, which will push our munin data into Graphite.
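For example, a crontab entry like the one below would poll every five minutes; the interval and the install path are choices of mine, not requirements:

```
*/5 * * * * /usr/bin/ruby /opt/munin-graphite/munin-graphite.rb
```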

The config file will be present at /etc/mcollective/server.cfg. Edit the file; the stomp host should be the machine where we have installed ActiveMQ, and the stomp port will be 6163 (this can be changed by modifying the activemq.xml file).
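That is, in server.cfg the lines would look like this, with the hostname as a placeholder for your ActiveMQ machine:

```
plugin.stomp.host = activemq.example.com
plugin.stomp.port = 6163
```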

Also modify the stomp user and password to the following,

plugin.stomp.user = mcollective
plugin.stomp.password = marionette

The above password can be changed by modifying the activemq.xml file.

And restart the mcollective service.

MCollective Client

For the MCollective client, download and install the mcollective-common and mcollective-client packages, and edit the client.cfg file present inside the /etc/mcollective folder.

Now we can use the mco command to check the connectivity; mco find will list the MCollective servers that respond.