My Life as a Sys Admin

Monthly Archives: April 2013

DKIM and SPF are becoming the most commonly adopted methods for email validation. Even if we want to use DMARC (Domain-based Message Authentication, Reporting & Conformance), we need to configure SPF and DKIM first, since DMARC acts as a layer above them. DMARC allows the receiver's mail server to check whether the email is properly aligned as per the DMARC policy, and it queries the sender's DNS server for the DMARC action, i.e., whether to reject or quarantine the mail if alignment fails. The action is published in a TXT record on the sender's DNS server. There is a good collection of DMARC training videos available on the MAAWG site, which give a clear idea of how DMARC works.
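As a quick illustration, the DMARC policy is just a TXT record under the `_dmarc` label of the sending domain. The domain and report address below are placeholders:

```shell
# A typical DMARC policy value; "p" is the action the receiver should take
# when alignment fails (none, quarantine or reject), and "rua" is where
# aggregate reports are sent. example.com is a placeholder domain.
record='v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com'
echo "$record"

# Published in DNS this lives at _dmarc.<domain>, and can be checked with:
#   dig +short TXT _dmarc.example.com
```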

In this post, I will explain how to make Qmail DKIM-sign outgoing mails. There is a qmail patch available for DKIM, but since I'm using qmail-1.03 with a custom patch, I was not able to apply the DKIM patch on top of it. The alternative is to use a wrapper around "qmail-remote": since qmail-remote is responsible for delivering remote mails, a wrapper around it lets us sign the email and then start the remote delivery. There are a few wrappers mentioned on this site; I'm going to use this qmail-remote wrapper.

Initial Settings

First, move the current "qmail-remote" binary to "qmail-remote.orig". Then download the wrapper and move it into the /var/qmail/bin/ directory.
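Roughly, the swap looks like this. This is only a sketch: the wrapper's download location is hypothetical, and the ownership should match your qmail install:

```shell
cd /var/qmail/bin
mv qmail-remote qmail-remote.orig                  # keep the real binary aside
cp ~/downloads/qmail-remote.wrapper qmail-remote   # hypothetical download path
chown root:qmail qmail-remote
chmod 755 qmail-remote                             # must be executable by qmailr
```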

This wrapper depends on two programs: 1) dktest, which comes with libdomainkeys, and 2) dkimsign.pl, a Perl script for signing the emails. Both of these files must be available at the paths mentioned in the "qmail-remote" wrapper file.

Go through the "dkimsign.pl" script and install the Perl modules mentioned in it using CPAN. There is no official Debian package for libdomainkeys, so we need to compile it from source.

It is very important that the default key file be readable only by root and the group to which qmailr (the qmail-remote user) belongs. Now add a TXT record to the DNS for "default._domainkey.example.com" containing the quoted part of /etc/domainkeys/example.com/default.pub.
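Once the zone change has propagated, the published key can be verified from any machine. The selector "default" and example.com are from the example above:

```shell
# Query the DomainKeys selector record; the output should contain the quoted
# part of default.pub, something like "k=rsa; p=MIGfMA0G..." (key truncated).
dig +short TXT default._domainkey.example.com
```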

Once everything is in place, restart "qmail-send" and send a test mail to any non-local domain. If things go fine, the "qmail-send" log will show the signed mail going out.

In my previous posts, I've explained how to set up a Sensu server and how to set up checks and handlers. The default dashboard is very simple with limited options, but for those who want a full-fledged dashboard, there is a Rails project on GitHub called Sensu-Admin. So let's try setting it up.

First clone the repository from GitHub.

$ git clone https://github.com/sensu/sensu-admin.git

Now go to the sensu-admin folder and run bundle install to install all the dependency gems. Then go inside the "config" folder, edit "database.yml", and fill in the database details. I'm going to use MySQL; below is my database config.
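A minimal config/database.yml for MySQL might look like the sketch below. The database name, user and password are placeholders, and the adapter name assumes the mysql2 gem is in the bundle:

```shell
# Write a minimal database.yml (to be placed in sensu-admin/config/)
cat > database.yml <<'EOF'
production:
  adapter: mysql2        # assumes the mysql2 gem
  database: sensu_admin  # placeholder database name
  host: localhost
  username: sensu        # placeholder user
  password: secret       # placeholder password
  encoding: utf8
EOF
```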

Now run rake db:migrate and then rake db:seed. The seed file creates a user account named "admin@example.com" with the password "secret".

We can start the Rails app by running "rails s"; this will start the app using the Thin webserver on port 3000. Access the dashboard at "http://server_ip:3000" and log in as admin@example.com, then go to the "Account" tab and modify the default user name and password. Now go through the tabs and check whether the checks, clients, events, etc. are displayed properly. This is a screenshot of the Sensu-Admin dashboard.

In my previous post, I explained how to set up the Sensu server and client. Now I'm going to explain how to set up checks and handlers in Sensu. There is a very good collection of plugins in the sensu-community-plugins repo.

Setting up Checks

On the Sensu Client Node,

First clone the plugins repository on the client node. Then install the "sensu-plugin" gem on the client node, and copy the required plugins to the /etc/sensu/plugins/ folder.
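The client-side steps above, sketched as commands (check-disk.rb is just an illustrative pick; copy whichever plugins your node actually needs):

```shell
git clone https://github.com/sensu/sensu-community-plugins.git
gem install sensu-plugin
mkdir -p /etc/sensu/plugins
# copy the plugins this node needs, for example:
cp sensu-community-plugins/plugins/system/check-disk.rb /etc/sensu/plugins/
```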

On the Sensu Server,

We need to define the check first. Create a JSON config file for the check in /etc/sensu/conf.d. Following is a sample check config,
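Something along these lines. The check name comes from the post, but the command path, interval and handler list are illustrative, not the exact config from my setup:

```shell
# Write the check definition; move it into /etc/sensu/conf.d/ afterwards.
cat > check_snmp.json <<'EOF'
{
  "checks": {
    "check_snmp": {
      "command": "/etc/sensu/plugins/check_snmp.sh",
      "subscribers": [ "snmp" ],
      "interval": 60,
      "handlers": [ "default" ]
    }
  }
}
EOF
```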

The above check will be applied to all clients subscribed to the "snmp" exchange. Based on the interval, the server publishes this check request, which reaches all the clients subscribed to the "snmp" exchange through an arbitrary queue. Each client runs the command mentioned in the command part and publishes the result back to the server through the result queue. check_snmp is a small plugin written by me. If we check the sensu-server log, we can see the result coming in from the client machine.

The result line in the server log shows us which handlers are enabled for the check, the executed command, the subscribers, the name of the check, the timestamp when the command was issued, the timestamp when the server received the result, the output of the check command, and so on. If there is any error while executing the check command, we can see the errors popping up in the logs soon after this line.

Setting up Handlers

Sensu has a very good collection of handlers, available in the sensu-community-plugins repo on GitHub. For example, there is a handler called "show", available in the debug section of the handlers, which displays a more detailed debug report about the event as well as the Sensu server's settings; I tried it, and the report showed up in my server log. But it's not practical to keep checking the logs continuously, so there is another handler called "mailer", which can send email alerts the way Nagios does.

So first get the "mailer" plugin files from the sensu-community-plugins repo on GitHub.

Now edit mailer.json and change the settings to fit our environment. We also need to define a new pipe handler for this. Create a file /etc/sensu/conf.d/handler_mailer.json and add the below lines to it.
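A pipe handler definition along these lines should do. The path to mailer.rb is an assumption; point it at wherever you copied the plugin:

```shell
# Write the handler definition; move it into /etc/sensu/conf.d/ afterwards.
cat > handler_mailer.json <<'EOF'
{
  "handlers": {
    "mailer": {
      "type": "pipe",
      "command": "/etc/sensu/handlers/mailer.rb"
    }
  }
}
EOF
```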

Now restart the sensu-server for the new changes to take effect. If everything goes fine, when Sensu detects a state change it will execute the mailer handler, and we can see the handler being invoked in the server log.

Sensu executes the mailer script, and if there is any problem we will see the corresponding error following that line; otherwise we will receive the email alert at the address mentioned in the "mailer.json" file. But in my case, I was getting an error when Sensu invoked the "mailer" handler.

After playing around for some time, I found that it was not parsing the options from the mailer.json file, so I manually added the SMTP and email settings directly in the mailer.rb file. Then it started working fine. I'm also writing a small script that uses the basic 'net/smtp' library to send out mails. There are many other cool handlers, like sending metrics to Graphite, Logstash or Graylog, and sending notifications to IRC, XMPP, Campfire, etc. Compared to traditional monitoring tools, Sensu is an amazing tool: we can use any check script, whether it's written in Ruby, Perl or Bash. The one common complaint I've heard from other people is the lack of a proper dashboard like the traditional monitoring tools have. Though the Sensu dashboard is a simple one, I'm sure it will improve a lot in the future.

Since I'm a CLI junkie, I don't care much about the dashboard; apart from that, I have many good and interesting things to play around with in Sensu. Cheers to portertech and Sonian for open-sourcing such an amazing tool.

Monitoring always plays an important role, especially for sysadmins. There are a lot of monitoring tools available, like Nagios, Zenoss, Icinga, etc. Sensu is a new-generation cloud monitoring tool designed by Sonian. Sensu is basically written in Ruby; it uses RabbitMQ as the message broker and Redis for storing its data.

Sensu has 3 operation modes.

1) Request-Reply mode, where the server sends a check request to the clients through RabbitMQ and the clients reply back with the results.

2) Standalone mode, where the server does not send any check request; instead the client itself runs the checks at the mentioned interval and sends the results to the Sensu master through the result queue in RabbitMQ.

3) Push mode, where the client sends out results to a specific handler.

So now we can start installing the dependencies for Sensu, i.e., RabbitMQ and Redis.

Now we need to generate SSL certificates for RabbitMQ and the Sensu clients. We can use RabbitMQ without SSL too, but it is more secure with SSL. @joemiller has written a script to generate the SSL certificates, available in his GitHub repo. Clone the repo, modify the "openssl.cnf" according to our needs, and then go ahead and generate the certificates.
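With the certificates in place, RabbitMQ needs an SSL listener configured in /etc/rabbitmq/rabbitmq.config. A sketch, assuming the certs were copied to /etc/rabbitmq/ssl/ (adjust the paths to wherever the generation script put them):

```shell
# Write the RabbitMQ config; place it at /etc/rabbitmq/rabbitmq.config.
cat > rabbitmq.config <<'EOF'
[
  {rabbit, [
    {ssl_listeners, [5671]},
    {ssl_options, [
      {cacertfile, "/etc/rabbitmq/ssl/cacert.pem"},
      {certfile,   "/etc/rabbitmq/ssl/cert.pem"},
      {keyfile,    "/etc/rabbitmq/ssl/key.pem"},
      {verify, verify_peer},
      {fail_if_no_peer_cert, true}
    ]}
  ]}
].
EOF
```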

Once the config file is created, restart the RabbitMQ server. RabbitMQ has a cool management console, which we can enable by running "rabbitmq-plugins enable rabbitmq_management". Once the management console is enabled, we can access the RabbitMQ web UI at http://SENSU-SERVER:55672 (username "guest", password "guest"). The amqp protocol should be bound to port 5672 and amqp/ssl to port 5671.

By default the sensu package comes with sensu-server, sensu-client, sensu-api and sensu-dashboard. If we don't want to use the current machine as a client, we can stop sensu-client from running and skip creating the client config. But for testing purposes, I'm going to add the current machine as a client too. Create a file "/etc/sensu/conf.d/client.json" and add the client configuration in JSON format.
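A minimal client.json might look like this; the name, address and subscriptions are placeholders for your environment:

```shell
# Write the client definition; place it at /etc/sensu/conf.d/client.json.
cat > client.json <<'EOF'
{
  "client": {
    "name": "sensu-server",
    "address": "127.0.0.1",
    "subscriptions": [ "snmp" ]
  }
}
EOF
```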

Now restart sensu-client for the changes to take effect. The logs are recorded in the "/var/log/sensu/sensu-client.log" file. We can access the sensu-dashboard at "http://SENSU-SERVER:8080", with the username and password mentioned in the config.json file.

Setting up a Separate Sensu-Client Node

If we want to set up sensu-client on a separate node, just add the Sensu APT repo and install the sensu package. After that, enable only the sensu-client service and disable all the other Sensu services. Then create a config.json file and add only the RabbitMQ server details in it. Now generate a separate SSL certificate for the new client and use that in the config file.

Once the Sensu server and client are configured successfully, we can go ahead and add the checks. One of the best things about Sensu is that all the configs are written in JSON format, which makes them very easy to create as well as to understand. In the next blog, I will explain how to create checks, how to add those checks to various clients, and how to add handlers for things like email alerts and sending metrics to Graphite.

Today I came across a very interesting project on GitHub. HARAKA is an SMTP server written completely in Node.js. Like qpsmtpd, apart from the core SMTP features, we can extend its functionality using small plugins. There are very good plugins for HARAKA, written in JavaScript. As with Postfix and Qmail, we can easily implement all sorts of checks and features with the help of these plugins.

Setting up HARAKA is very simple. In my setup, I will be using HARAKA as my primary SMTP server, where I will implement all my filtering, and then relay to a qmail server for local delivery. There is a plugin written by @madeingnecca on GitHub for delivering the mails directly to users' inboxes (the mailbox should be in Maildir format). On the real servers we use an LDAP backend for storing all the user databases, so before putting HARAKA into production, I need to rework the auth plugin so that HARAKA can talk to LDAP for SMTP user authentication.

So first we need to install Node.js and NPM (Node Package Manager). There are several ways to install Node.js: we can compile it from source, use NVM (Node Version Manager), or install the packages from APT on Debian machines. I prefer building from source, because the official APT repo has older versions of Node.js, which can create compatibility issues. The current version is "v0.10.4". Building Node.js from source is pretty simple.
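The usual configure/make dance, sketched for the v0.10.4 tarball mentioned above (the download URL assumes nodejs.org's standard dist layout):

```shell
wget http://nodejs.org/dist/v0.10.4/node-v0.10.4.tar.gz
tar xzf node-v0.10.4.tar.gz
cd node-v0.10.4
./configure
make
make install    # run as root, or prefix with sudo
node -v         # should report v0.10.4
npm -v
```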

Now go inside the Haraka folder and run the below command. All the dependency packages are mentioned in the package.json file.

$ npm install

The above command will install all the necessary modules mentioned in the package.json file and set up HARAKA. Now we can create a separate service folder for HARAKA.

$ haraka -i /etc/haraka

The above command creates the haraka folder in /etc/, creates the config and plugin directories in there, and automatically sets the hostname used by Haraka to the output of the hostname command. Now we need to set the port number and IP on which the HARAKA SMTP service should listen: go to the config folder in the newly created service folder, open the "smtp.ini" file, and mention the port number and IP.
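The smtp.ini fragment is tiny; the listen address below is an example that binds all interfaces on port 25:

```shell
# Write the listener config; place it at /etc/haraka/config/smtp.ini.
cat > smtp.ini <<'EOF'
; address:port the HARAKA SMTP service listens on (example values)
listen=0.0.0.0:25
EOF
```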

Now, before starting the SMTP service, let's disable all the plugins first, so that we can proceed in steps. In the config folder, open the "plugins" file and comment out all the plugins: by default HARAKA does not place any plugin scripts in the service directory, so most of the plugins mentioned there will not work yet. We will start enabling plugins once we have copied the corresponding plugin js files into the plugins directory inside our service directory.

Let's try running HARAKA in the foreground and see if it starts and listens on the port we mentioned.

$ haraka -c /etc/haraka

Once the HARAKA SMTP service starts, we can see the line "[NOTICE] [-] [core] Listening on :::25" in the stdout, which means HARAKA is listening on port 25. We can just telnet to port 25 and see if we get the SMTP banner.

Now we can try out a plugin. HARAKA has a spamassassin plugin, so we will try that out. First install SpamAssassin and start the spam filtering daemon.

$ apt-get install spamassassin spamc

Now copy spamassassin.js from the plugins folder inside HARAKA's git source folder into the plugins folder of our service directory (by default this folder is not created inside the service directory, so create it). Next we need to configure the plugin: inside the config folder of our service directory, create a config file "spamassassin.ini" and fill in the necessary details like "reject_threshold", "subject_prefix" and "spamd_socket". Before starting the plugin, we also need to add it to the "plugins" file inside the config folder. Once the spamassassin plugin is added, we can start the HARAKA SMTP service; if the plugin is loaded properly, we can see it being registered in the stdout.
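A spamassassin.ini along these lines works as a starting point; the threshold, prefix and spamd socket are example values:

```shell
# Write the plugin config; place it at /etc/haraka/config/spamassassin.ini.
cat > spamassassin.ini <<'EOF'
; example values -- tune for your environment
reject_threshold = 10
subject_prefix = *** SPAM ***
spamd_socket = 127.0.0.1:783
EOF
```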

Now, using swaks, we can send a test mail and see if SpamAssassin is scoring the emails. Like this, we can enable all the other plugins based on our needs.

Since I'm going to relay the mails, I need to make HARAKA accept mails for all my domains. For that, I need to define all my domains in HARAKA: in the config folder, open the file "host_list" and add all the domains for which HARAKA should accept mails. There is also a regular expression option available, which can be set in the "host_list_regex" file.
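The host_list file is just one domain per line; example.com and example.net below are placeholders:

```shell
# Write the accepted-domains list; place it at /etc/haraka/config/host_list.
cat > host_list <<'EOF'
example.com
example.net
EOF
```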

Now we need to set up the SMTP relay: edit the "smtp_forward.ini" file and mention the relay host IP, port number and auth details (if required). Then we can restart the HARAKA service and check the SMTP relay by sending test mails using swaks.
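A minimal smtp_forward.ini sketch; the host and port below are placeholder values standing in for the qmail box we are relaying to:

```shell
# Write the relay config; place it at /etc/haraka/config/smtp_forward.ini.
cat > smtp_forward.ini <<'EOF'
; relay destination (placeholder values)
host=192.168.1.10
port=25
EOF
```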

I haven't tried the auth plugin yet, but I will be trying it soon. If possible, I will try to use an LDAP backend for authentication, so that HARAKA can be used as a full-fledged SMTP service. More development is happening on this project; I hope it will become a good competitor …

It's been a year since I really played with CentOS or any Red Hat based distro. I saw a few videos on YouTube relating to Zenoss, which is a new-generation monitoring tool, and later attended two Zenoss webinars, which made me try it out in my own infrastructure. In this blog I will explain how to set up Zenoss on a CentOS 6.4 machine. Make sure that you have at least 2 GB of RAM: initially I put 1 GB of RAM and 2 GB of swap in my CentOS VM, but when I started the Zenoss services, the whole RAM and swap were consumed and finally I was not able to start the services.

Basically Zenoss needs a RabbitMQ messaging server, Java 6 and MySQL as its dependencies. There is an automated bash script available from the Zenoss website which will download and install all the necessary dependencies.

Once we extract the tarball, we can see a bunch of files. The zenpack_actions.txt file contains the list of ZenPacks which are going to be installed; we can modify it based on our needs.

Once done, we can start the installer script.

$ ./core-autodeploy.sh

This script starts by downloading the Zenoss RPM file. Once the installation completed, it gave a "connection reset" error while installing the ZenPacks. I went through all the log files and finally found that the problem was in RabbitMQ: the RabbitMQ log showed that authentication for the zenoss user was failing.

The error says that the zenoss user's credentials are wrong, so I reset the zenoss user's password using the "rabbitmqctl" command. Once the password is changed, we have to mention the new password in the Zenoss global.conf file, which is present at "/opt/zenoss/etc". Open the global.conf file and replace the amqppassword value with the new password (by default, during installation, the script generates a base64-encoded random password using openssl). Once we have replaced the password, we can start the zenoss service.
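The reset itself is one rabbitmqctl call; the new password here is a placeholder and must match what you then put in global.conf:

```shell
rabbitmqctl change_password zenoss 'NewSecretPassword'   # placeholder password
rabbitmqctl list_users                                   # confirm the user exists
```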

$ service zenoss start

While starting the service, Zenoss will continue installing the ZenPacks. Once the service is started, we can access the web GUI at the "http://server_ip:8080" URL. Initially it will ask us to set the password for the admin user as well as to create a secondary user. It will also ask us to add hosts to monitor; we can skip this step and move on to the dashboard, since hosts can be added later directly from the Infrastructure tab. I've added my VPS as well as a few of my local servers with SNMP, and so far it is working perfectly. There is a lot of cool stuff inside Zenoss; I hope this will be a nice playground …

Today I was completely immersed in virtualization. I was playing around with Foreman and KVM, then I got WebVirtmanager to play with, which is working perfectly with an LVM storage pool. It's almost a week since I saw a few videos related to Apache CloudStack, so today I decided to give it a try. In this blog I will explain how to set up Apache CloudStack on an Ubuntu 12.10 machine. Apache CloudStack is one of the coolest cloud platforms available; it supports hypervisors like KVM, Xen and vSphere. The latest version is 4.0.1-incubating, and the source can be downloaded from here. There is very good documentation available from CloudStack.

The build needs Maven 3, which is not currently available in 12.10, so we'll need to add a PPA repository that includes Maven 3.

$ add-apt-repository ppa:natecarlson/maven3

The current PPA supports only Ubuntu 12.04 aka Precise, so edit /etc/apt/sources.list.d/natecarlson-maven3-quantal.list and replace "quantal" with "precise", so that the deb line points at the precise series of the PPA.
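The edit is a one-line sed; shown here on a scratch copy of the single deb line the PPA installs:

```shell
# Scratch copy of the PPA's list file, then flip quantal -> precise
echo "deb http://ppa.launchpad.net/natecarlson/maven3/ubuntu quantal main" > maven3.list
sed -i 's/quantal/precise/g' maven3.list
cat maven3.list
```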

Now we can resolve the build-time dependencies for CloudStack by running the below command.

$ mvn3 -P deps

Now, there is a small bug which adds a dependency on the "chkconfig" package to a few of the CloudStack packages. But "chkconfig" is needed on Red Hat based machines, not on Debian based ones. So edit the "debian/control" file inside the Apache CloudStack source folder and remove "chkconfig" from the dependency list. After that we can start building the Debian packages.

$ dpkg-buildpackage -uc -us

The above command will build 16 Debian packages.

Setting up a Local APT repo

Now we can set up a local APT repo so that we can install all these 16 packages along with their corresponding dependencies. First ensure that "dpkg-dev" is installed, then copy all the packages to a specific location in order to create the local repo.
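One way to sketch it, assuming the web server's document root is /var/www and the built .debs are sitting in the current directory (both are assumptions about your layout):

```shell
mkdir -p /var/www/cloudstack/repo/binary
cp *.deb /var/www/cloudstack/repo/binary/
cd /var/www/cloudstack/repo/binary
# generate the package index that apt clients will fetch
dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz
```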

We need to configure the local machine to use this local repo. Add the repository with echo "deb http://server_url/cloudstack/repo/binary ./" > /etc/apt/sources.list.d/cloudstack.list and run "apt-get update". Now we can install the CloudStack packages.