Coding and life :)

Linux

In AWS, an EC2 instance comes with 8 GB of storage by default. In a past project I had to extend the storage of one of my development instances because its data was growing fast. From the AWS console, create a new EBS volume and attach it to your instance, then log into the EC2 instance via ssh.

Run the following command:

sudo fdisk -l

which will list the volumes, with the newly added volume showing up as unpartitioned (for example /dev/xvdf, with no partition table).

The next step is to create a file system on the new EBS volume using the unix command mkfs (you can first confirm the device is empty with sudo file -s /dev/xvdf; an output of just "data" means there is no file system on it yet):

sudo mkfs -t ext4 /dev/xvdf

Next you have to mount it at your desired path, e.g. /mnt/ebs1. Create the mount point if it does not exist, then mount the volume:

sudo mkdir -p /mnt/ebs1
sudo mount /dev/xvdf /mnt/ebs1

Then add an entry to /etc/fstab so the volume is mounted on boot; it would be something like this:

/dev/xvdf /mnt/ebs1 ext4 defaults 1 1

One caveat: if you add the EBS volume to /etc/fstab and something goes wrong with the volume during boot (file system corruption, unavailability of the zone, etc.), the instance will fail to come up, because the system tries to mount every entry and hangs when one is unavailable. Check this AWS forum post for details.

Also check this whole SO discussion for alternative ways to work around the issue (using a script, for example).
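One common mitigation, assuming your distro's mount supports the nofail option, is to mark the fstab entry as non-critical so a missing or broken volume does not block boot:

```
/dev/xvdf /mnt/ebs1 ext4 defaults,nofail 0 2
```

With nofail the instance still boots even if the volume is unavailable; the last two fields here use the usual convention for a data volume (dump disabled, fsck pass 2, since pass 1 is reserved for the root file system).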

Check the following docs if you want to learn more about the unix commands used in this post.

I was working with Node.js, building a REST API with restify. Restify is a simple yet powerful node module. One use case of the API was serving static files for a specific route. I went through the docs and tried different things but couldn't figure it out at first. After hustling for hours, Christian and I dug deep into it and worked it out!

I had been working on a project for the last couple of months, and as the days passed the codebase kept getting larger. It occurred to me that it would be great to know how many lines of code I had written so far, per module and in total. Unix has a really awesome util for this named wc.

After googling and trying different params and commands, I managed to do it by combining two unix tools (wc and find); the full command for recursive line counting is:

wc -l `find . -type f`

find . -type f lists all the files recursively, and wc -l counts the lines of each one, finishing with a grand total 🙂
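Note that the backtick version breaks on file names containing spaces. A sketch of a more robust equivalent, plus a variant filtered by extension (assuming JavaScript sources here), looks like this:

```shell
# Count lines per file (plus a grand total), robust to spaces in names:
# -exec ... {} + hands find's results to wc directly, so no word-splitting.
find . -type f -exec wc -l {} +

# Restrict the count to a single file type, e.g. JavaScript sources:
find . -type f -name '*.js' -exec wc -l {} +
```
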

To learn these two unix commands in detail, check the wc and find manuals.

Last year, while working on a project, I needed to automate the whole backup process, from taking a snapshot of the current db to saving it to an AWS S3 bucket. At that time I took most of the steps from this blog post.

A couple of days ago I started coding a small backup script that backs up to another cloud machine rather than to AWS S3. Instead of writing it from scratch, I reused my previously written script. All I needed to implement was a bash function (save_in_cloud) that runs a simple scp command 🙂

I reused the old script as-is; all I did was add a new function that copies the current backup data to a remote server, and update do_cleanup so that it now works in any year.
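A minimal sketch of what that new function can look like is below; the host, paths and variable values here are illustrative assumptions, not the original ones:

```shell
# Sketch of the save_in_cloud addition (host and paths are illustrative).
BACKUP_DIR="/backups/mongo/$(date +%F)"   # today's snapshot directory
REMOTE_HOST="backup.example.com"          # assumed remote cloud machine
REMOTE_DIR="/var/backups/mongo"           # assumed destination path

save_in_cloud() {
    # Copy the finished snapshot to the other machine over SSH.
    scp -r "$BACKUP_DIR" "$REMOTE_HOST:$REMOTE_DIR/"
}
```

In the full script this would presumably be called once the db snapshot has completed and the mongo lock has been released.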

The backup script depends on two other js scripts (fsync_lock.js and fsync_unlock.js), which are responsible for locking mongo during the db snapshot and releasing the lock afterwards.

Deployment of code to test/staging/production servers is one of the important parts of the modern web application development cycle.

Deploying code used to be painful because it means repeating the same tasks every time we push code, and if something goes wrong during deployment the application goes down too. But the scenario has changed: there are now many tools that make deployment easier and even fun. I have used Capistrano and Fabric for deployment, and found Fabric really painless; as it is a Python tool, it was easier for me to adopt and get things done.

I am going to cover the fundamental operations and finish with a simple fabric script (a boilerplate) that you can base your own scripts on.

env = a dictionary-like Python object where we define settings such as user, password, hosts etc.

local = runs a command on the local host (the machine the fabric script is being run from)

run = runs a command on a remote host

You can use these operations in many different ways; to dig deeper, check the official Fabric documentation from here.

The first gist is a sample fabric script; the second one is a bash script to install Fabric on your Ubuntu machine.

It was around 2 AM and I was working like a caveman, but it's hard to escape bedtime 😦

Suddenly I discovered I had set up a wrong cron job on a cloud machine and it had generated duplicate results. I had to make a report from the cron output, and every line had to be unique. The file was around 1.2 GB.

It was a json file with several thousand lines, many of them redundant. I had to remove the redundant values and produce a file in which every line is unique.

I started writing a python script that takes a file and creates another containing only the unique elements from the input. Halfway through, too tired to continue, I thought I should search for a unix command that does this job. And found exactly what I needed 🙂

sort filename.txt | uniq

Or

cat filename.txt | sort -u

(or simply sort -u filename.txt, which avoids the extra cat)

If the input file contains:

Line 1
Line 2
Line 2
Line 3
Line 1
Line 3

Either command generates:

Line 1
Line 2
Line 3

And I just redirected the output of the command into a new file like below:

sort filename.txt | uniq > result.txt

Explanation of the command:

The sort command orders all the lines (alphabetically by default), and the uniq command can eliminate or count duplicate lines in pre-sorted input.
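The counting side of uniq is handy too. A small illustrative example (the sample file and its contents are made up here):

```shell
# Build a tiny sample file, then count how many times each line occurs;
# sort -rn puts the most frequent line first.
printf 'apple\nbanana\napple\ncherry\napple\n' > fruits.txt
sort fruits.txt | uniq -c | sort -rn
```

Since apple occurs three times, it ends up on the top line of the output.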

You can also use sort and uniq in other situations; for details check the following links:

Squid is used as a caching proxy server. It makes web responses faster and minimizes bandwidth usage.
A couple of days ago I set up a local squid proxy server for our dev environment. The first step was to configure a standalone squid on my Mac OS X Lion machine.
In this post I will write down the minimal steps and configuration for a local Squid proxy service.
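A minimal squid.conf for a standalone local proxy can look like the sketch below; the port, cache path and sizes are illustrative assumptions, so adjust them for your installation:

```
acl localhost src 127.0.0.1/32
http_access allow localhost
http_access deny all
http_port 3128
cache_dir ufs /usr/local/squid/var/cache 1024 16 256
```

In the cache_dir line, the numbers are the cache size in MB and the number of first- and second-level cache subdirectories.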

Create all the swap (cache) directories with:

./squid -z

You also need to make the cache directories writable by the user Squid runs as. Then start Squid:

./squid

Now browse a couple of pages, turn off your internet connection, then try to browse the same pages again.

You can still see the web pages; they are being served from the cache.
That's it. You can download my configuration file and adapt it to your settings. From here
Hopefully in the next post I will write down the steps to configure Squid for a network, serving multiple computers as a proxy server.

N.B.: Please check the paths and change them according to your installation directory.