
Coding and life :)

Author: mushfiq

If you have never written a microservice before but know what one is, this post will introduce you to the idea by walking through writing a μ-service. Microservices have been the buzzword floating around for the last couple of years.

Microservice architecture definitely has many advantages over a monolithic application; on the other hand, whether it makes sense to go with it depends on several factors. If you want to read more about the microservice pattern and its pros and cons, please check this post for details, especially the "Pros" and "Cons" sections.

Let’s not get into the debate and start writing some code. In this post we will do the following:

Building a REST API using Django (DRF)

Dockerizing the newly developed REST API and running it via uWSGI

Step 1: Building the REST API using Django:

We will be using Django REST Framework (DRF). The API will expose data for an (imaginary) event-management company that uses it to manage its events and performers. For the sake of simplicity, our API will let us add new performers and events, and there will be a listing endpoint that returns recent events along with the names of their associated performers.

So let’s write some code:

Django REST Framework makes it easy to develop a REST API on top of Django: all one needs to do is define serializers and load query objects via Django models, and DRF takes care of the rest. As the API is minimal and we are only doing CRUD, in the serializers we just extend serializers.ModelSerializer and that’s it. Finally, views.py looks like below:
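A minimal sketch of what the serializers and views could look like; the Event and Performer models, their fields, and the ordering are assumptions, since the original snippet is not shown here:

```python
# serializers.py -- ModelSerializer does the field mapping for us
from rest_framework import serializers, viewsets
from .models import Event, Performer  # hypothetical models with a `name` field


class PerformerSerializer(serializers.ModelSerializer):
    class Meta:
        model = Performer
        fields = ('id', 'name')


class EventSerializer(serializers.ModelSerializer):
    # nest the performer names into each event for the listing endpoint
    performers = PerformerSerializer(many=True, read_only=True)

    class Meta:
        model = Event
        fields = ('id', 'name', 'performers')


# views.py -- ModelViewSet gives us list/create/retrieve for free
class EventViewSet(viewsets.ModelViewSet):
    queryset = Event.objects.order_by('-id')  # recent events first
    serializer_class = EventSerializer
```

Wired up through a DRF router, this gives both the add and the listing endpoints described above.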

Step 2: Dockerizing the μ-service:

Let’s check out the Dockerfile for details:
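Since the embedded file is not visible here, a condensed sketch of the described Dockerfile follows; the base image, repo URL, fixture name, and uWSGI module path are all assumptions, and the line numbers in the commentary below refer to the original, longer file:

```dockerfile
# condensed sketch; base image and paths are assumptions
FROM python:2.7

# clone the repo, set the working directory, install dependencies
RUN apt-get update && apt-get install -y git
RUN git clone https://github.com/<user>/<repo>.git /app
WORKDIR /app
RUN pip install -r requirements.txt

# create the db through manage.py and load dummy data
RUN python manage.py migrate
RUN python manage.py loaddata dummy_data.json

# serve the API via uwsgi; "project.wsgi" is a placeholder module path
EXPOSE 8000
CMD ["uwsgi", "--http", ":8000", "--module", "project.wsgi"]
```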

In the Dockerfile, lines 1-11 clone the repo, set the working directory, and install the dependencies. Lines 13-19 create the db through manage.py, load dummy data, and run uWSGI to serve the API.


I have been playing with Docker for the last couple of projects, and so far it has been a good experience. In this post we will develop a REST API using Golang and then Dockerize the deployment. The REST API will be very basic, with just a single endpoint. We will cover the following sections in the post:

Developing a minimal REST endpoint

Coding a Dockerfile

Running the Docker container with the API

Pushing the Docker image to a Docker registry

Developing REST endpoint

For developing the REST API we are going to use Gin, which is an HTTP web framework. It provides standard solutions for common problems through features like middleware support, routing, and a standard convention for error management. Sometimes it makes sense to use only Go’s net/http to develop an HTTP application, but to avoid building everything from scratch, Gin gives us a good starting point.

The REST API returns the current time on the route “/”. The code looks like below:
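A minimal sketch of what that code could look like, assuming Gin; the time format and port are assumptions, since the original snippet is not shown here:

```go
// main.go -- a minimal sketch of the Gin-based endpoint
package main

import (
	"net/http"
	"time"

	"github.com/gin-gonic/gin"
)

func main() {
	// router with Gin's default logging and recovery middleware
	r := gin.Default()

	// handler: read the current time and hand it to the Gin context
	r.GET("/", func(c *gin.Context) {
		c.String(http.StatusOK, time.Now().Format(time.RFC3339))
	})

	// listen on the port the Dockerfile will EXPOSE
	r.Run(":8080")
}
```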

In the main function we declare the router for the API and register the handler function associated with the endpoint. In the handler we get the current time and pass it to the Gin context.

Creating the Dockerfile

For Dockerizing the REST API, let’s write the Dockerfile. For building the binary we use the official Golang Docker image, because it is used by many other developers and saves us all the work of choosing an OS, pulling Golang, and setting up the environment.

For the final image, the Dockerfile uses the bare-bones Docker image scratch, which is a minimal base image.
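A condensed sketch of the described file; the binary name is taken from the image tag used later, and the line numbers in the commentary below refer to the original file:

```dockerfile
# sketch assuming a statically linked binary named gondar built beforehand
FROM scratch

# copy the pre-built binary of the API into the image
COPY gondar /gondar

# run the build and expose the container port
ENTRYPOINT ["/gondar"]
EXPOSE 8080
```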

In line 5, we are copying the binary of the API into the image.

In line 7 we run the latest build, and finally we expose the port of the Docker container via EXPOSE 8080.

So far we have developed the API with its endpoint and also coded the Dockerfile. Let’s build the Docker image and run the Docker container.

Running the docker container

On the local development machine we are using docker-machine, so we need to follow these steps:

docker-machine start default
docker build -t mush/gondar .

docker build builds the Docker image for the container (we tagged our image mush/gondar), and then we need to run the image as a container with the following command:
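The run command is not shown on this page; it could look like the following sketch, with the port mapping inferred from the EXPOSE 8080 above:

```shell
docker-machine start default
docker run -p 8080:8080 mush/gondar
```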

In line 2 we run the newly built Docker image, forwarding the port from the container to the host via -p. And we are good to go: our Docker container is running, and we can check it via a curl command like below:
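A sketch of the check, assuming the default docker-machine VM is used:

```shell
# hit the endpoint through the docker-machine VM's IP
curl "http://$(docker-machine ip default):8080/"
```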


If you start developing a REST API, one of the fundamental requirements you will need to implement is an authentication system, which prevents anonymous users from accessing your REST endpoints.

For developing REST APIs, I used to start from scratch using Django/Flask; then I used Piston. When further development of Piston stopped, I started using Tastypie. Last year I was reading the documentation of DRF and realised that I would develop my next REST API on top of DRF, and I have been using it since. The documentation is well organised and it has a growing community around it.

So back to the point: in DRF you can have an access-key-based authentication system quickly, without much configuration or code.

While authenticating a user via access key, the core idea is that we need to check whether any user exists with the provided access_key, and then either return the data or raise an exception.

To begin, add a new file to your Django app called “authentication.py“. To write custom authentication in DRF, we subclass “BaseAuthentication” and override its “authenticate” method. authenticate takes the Django request object, from which we get the access key, e.g. request.query_params.get(“access_key”, None). The whole subclass looks like below:
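A sketch of the subclass, since the original gist is not shown here; the Subscriber model and its access_key field follow the description later in the post, and returning the subscriber itself as the request user is an assumption:

```python
# authentication.py -- access-key check as a DRF authentication class
from rest_framework import authentication, exceptions

from .models import Subscriber  # hypothetical app model with an access_key field


class AccessKeyAuthentication(authentication.BaseAuthentication):
    def authenticate(self, request):
        access_key = request.query_params.get('access_key', None)
        if not access_key:
            # no key supplied: let other configured authenticators run
            return None
        try:
            subscriber = Subscriber.objects.get(access_key=access_key)
        except Subscriber.DoesNotExist:
            raise exceptions.AuthenticationFailed('Invalid access key')
        # assuming the Subscriber acts as the request user
        return (subscriber, None)
```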

The next step is to add it to our REST_FRAMEWORK settings in the project settings (settings.py), like below:
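The settings fragment could look like this; the dotted module path is an assumption based on the file name above:

```python
# settings.py -- register the custom authentication class with DRF
REST_FRAMEWORK = {
    'DEFAULT_AUTHENTICATION_CLASSES': (
        'myapp.authentication.AccessKeyAuthentication',  # hypothetical app path
    ),
}
```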

Then call the endpoint like /news?access_key=ACCESS_KEY, and it will return our REST output.

In this tutorial, the Subscriber model has a field called “access_key”; you can use any other model/field for the authentication check.

This is the way I usually apply authentication in DRF-based REST APIs at first; as the API grows, I add more sophisticated authentication. DRF also comes with token-based authentication, which is described briefly in the docs.


To access the data of the REST API from another domain, the API should have CORS enabled for that website. Like most frameworks, Bottle does not set the CORS header by default. To enable it, the following decorator can be used:
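A sketch of the decorator, assuming the Bottle framework and the example.com origin used below:

```python
# cors.py -- add the CORS header to every response from a wrapped route
from functools import wraps

from bottle import response


def enable_cors(fn):
    @wraps(fn)
    def wrapper(*args, **kwargs):
        # set the header before returning the route's body
        response.headers['Access-Control-Allow-Origin'] = 'example.com'
        return fn(*args, **kwargs)
    return wrapper
```

Applied with `@enable_cors` under the `@route(...)` decorator, every response from that route carries the header.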

The header “Access-Control-Allow-Origin” will be added to the API response. As per our example, it will be Access-Control-Allow-Origin: example.com. To enable it for any website, you can set it to “*”. There is an interesting discussion on whether to set it to * or not.


Google has an API that provides place-related data, called “Google Places”. You can search the API using different location names and can also append services (e.g. Burger in New York), and it returns matches for your keyword and location.

Last year I wrote a PHP wrapper on top of the API to access it from PHP applications. I used Composer for the first time, as the dependency manager for the wrapper.

To use it, set the configuration (update the API key of your Google Places account) and you are ready to go.


While working on a Node.js project, I had a use case where the user queries by a range of days and the result is a set of PDFs filtered by that date range. I had to create a directory of PDF files and return it as a zip file to the user.

As there are many zip-related modules, I tried a couple of the active ones, but none met my requirements, and some of them had bugs/issues that were open at the time. After some trial and error, going through most of the zip modules, I settled on archiver. Below is a sample of how it worked.
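A sketch of the approach with the archiver module; the file names and directory are examples, not the original gist:

```javascript
// zip_pdfs.js -- zip a directory of PDFs into a single archive
const fs = require('fs');
const archiver = require('archiver');

const output = fs.createWriteStream('reports.zip');
const archive = archiver('zip', { zlib: { level: 9 } }); // best compression

output.on('close', () => {
  console.log(`${archive.pointer()} bytes written to reports.zip`);
});
archive.on('error', (err) => { throw err; });

archive.pipe(output);
archive.directory('pdfs/', false); // put the directory's files at the zip root
archive.finalize();
```

Run it with `node zip_pdfs.js` once the `pdfs/` directory holds the files to compress.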

After cloning the gist, make sure you have files to zip, and run the script like below:

Happy coding 🙂


I like to automate tasks; I think every software engineer does, right? After all, that’s our job. I wrote the following script for downloading a Google spreadsheet as CSV. I just found it while going through my old code base; hopefully it will help someone else too.
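A one-liner in the same spirit, assuming a sheet shared publicly; the spreadsheet ID is a placeholder, and the export URL format is Google Sheets’ standard CSV export endpoint:

```shell
# download the first sheet of a public spreadsheet as CSV
curl -L "https://docs.google.com/spreadsheets/d/<SPREADSHEET_ID>/export?format=csv" -o sheet.csv
```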


In AWS, EC2 instances by default provide 8GB of space; in a past project I had to extend the size of one of my development instances as the data was growing fast. From the AWS console, add a new EBS volume and attach it to your instance, then log into your EC2 instance via ssh.

Run following command:

sudo fdisk -l

which will show the list of volumes, with the newly added volume as unpartitioned. Something like below:

The next step is to build the file system on the new EBS volume using the unix command mkfs, like below:

sudo mkfs -t ext4 /dev/xvdf

Next you have to mount it at your desired path, e.g. /mnt/ebs1. Run the following command:

sudo mount /dev/xvdf /mnt/ebs1

Then add an entry to /etc/fstab; it would be something like this:

/dev/xvdf /mnt/ebs1 ext4 defaults 1 1

Be aware that if you add the EBS volume to your /etc/fstab and there is somehow an issue with the volume during boot (like file system corruption or unavailability of the zone), the instance will not boot: while booting, the system looks for the entry, and when it is not available the whole instance stays down. Check the AWS forum post for details.

Also check this whole SO discussion for alternative ways to resolve the issue (using a script, for example).

Check the following docs if you are interested in more detail about the unix commands used in this post.


I was working with Node.js on building a REST API, using restify as the REST module. restify is a simple yet powerful Node module. One of the use cases of the API was serving static files for a specific route. I went through the docs and tried different things but couldn’t figure it out at first. After hustling for hours, Christian and I dug deep into it and figured it out!
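The post does not show the eventual solution, so here is a rough sketch of how static serving is typically wired up with restify’s bundled serveStatic plugin; the route, directory, and port are all examples:

```javascript
// static_server.js -- serve files under ./public for a specific route
const restify = require('restify');

const server = restify.createServer();

// any GET under /static/ is resolved against the ./public directory
server.get('/static/*', restify.plugins.serveStatic({
  directory: './public',
  default: 'index.html',
}));

server.listen(8080, () => {
  console.log('%s listening at %s', server.name, server.url);
});
```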


I have been working on a project for the last couple of months, and as the days pass the codebase is getting larger. Suddenly I thought it would be great if I could know how many lines of code I have written so far for each module, and in total. I knew unix has a really awesome utility named wc.

After googling and trying different params and commands, I managed to do it by combining two unix tools (wc and find); the full command for recursive line counting is below:

wc -l `find . -type f`

The command returns something like below:

find . -type f lists all the files recursively, and wc -l counts the line numbers 🙂
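To limit the count to a single module or file type, a name filter can be added to find; the *.py pattern here is just an example:

```shell
# count lines only in Python files; piping through xargs avoids backtick quoting issues
find . -type f -name "*.py" | xargs wc -l
```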

To learn these two unix commands in detail, check the wc and find manuals.