A good developer workflow gives your team a huge advantage over competitors: it lets your developers focus on creating new features and tests. If your team has a bad workflow without automated tests and automated deployment, developers will have to wait for the one person with the SSH key to the Linux box to deploy the code, only to find out after trying it that it doesn’t work properly because the tests are insufficient or someone forgot to run them. When this happens every day, it drives the best developers away and reduces everyone’s output significantly.

Tests and deployment are automated

Setting up continuous integration

The core of continuous integration and continuous delivery is automation. Software development is messy and mistakes happen; for CI to be effective, you need tests that protect your developers from breaking functionality. Most developers are afraid of change, and a good CI setup with tests solves this problem. Your tests run when you create a pull request to the master/production branch (or whenever you push code).

So first let’s look at a very simple workflow; this is very common for new projects without many developers. In this workflow there are no branches yet: all code gets pushed to master, and after a green test run it gets deployed to the int or prod server automatically.

A simple workflow, for small teams / new projects.

Let’s create the CI part, which runs the tests after each commit to master. We create a new project on Bitbucket, go to the Pipelines section and click on JavaScript; it will create a sample bitbucket-pipelines.yml file.

Sample bitbucket-pipelines.yml

After enabling the pipeline we pull our repo, and we can edit the file that describes the pipeline. I use Karma for running tests and Chrome as the environment, so I looked for a Docker image that has both Node and Chrome. The image I chose to go with is weboaks/node-karma-protractor-chrome.

After uploading this file, your tests will run on a remote machine after every commit, and a green or red light will appear beside each commit depending on whether the tests passed.

Setting up continuous delivery

The second part is the CD. There are multiple ways to deploy and scale an application in an automated way; in this tutorial we are going to go with Docker and AWS Elastic Container Service.

The thoughts behind the decision: there is a big question whether you should run your own orchestration layer (Mesos/Kubernetes/Docker Swarm) or use a ready-made solution. If you go with your own orchestration layer you’ll have more flexibility and it will probably be cheaper, but it’s a lot more responsibility and many more things can go wrong than with a cloud provider’s solution. So in this tutorial we are going to use AWS Elastic Container Service (ECS from here on), which will manage our Docker image and restart/update/scale it as needed. It’s a great fit because we don’t have to write much AWS-specific code, so we avoid vendor lock-in and can migrate our app fairly quickly if we have to. Docker also grants us a high level of abstraction, which makes the app more robust and less error-prone, and if something bad happens we will know where to look. So as the first step, I have set up a Dockerfile in the root of the app which looks like this:

#We use the Node image from Docker Hub, we don't want to waste time setting up Node.
FROM node:9.0.0
#We create a folder for our app
RUN mkdir /app
#We copy the content of our whole project into the container's app folder
ADD . /app
#We go to the app dir (similar to "cd /app")
WORKDIR /app
#We install the npm dependencies of the project (based on package.json)
RUN npm install
#We compile an optimized version of our app (AOT && tree shaking)
RUN npm run buildprod
#We install http-server, a lightweight solution for serving the compiled files
RUN npm install -g http-server
#We expose port 80 (which is more like documentation)
EXPOSE 80
#We go to /app/dist, where the compiled files are
WORKDIR /app/dist
#When we run the Docker image it spins up http-server on port 80
#(note: in exec form, each argument must be a separate array element)
CMD ["http-server", "-p", "80"]

Now we can test our app locally. First we have to build our image; let’s go to the project folder where the Dockerfile is.

docker build -t firebase-test-in .

With this command we run everything in the Dockerfile except for the last instruction (the CMD one). It takes a short while to build the image; you can see the output in the terminal. If everything is okay, you can list your images with docker images.

After starting a container (docker run -d -p 8080:80 firebase-test-in), we can see that we have a container running and that port 80 is forwarded to 0.0.0.0:8080. So let’s check it out: we should see our running app on localhost:8080. The next step after successfully Dockerizing our application is to integrate Docker Hub, AWS ECS and Bitbucket Pipelines. Luckily there is official documentation for it. (If you get stuck somewhere, check it out.)

We are going to create a hook on Docker Hub so that whenever there is a new commit on Bitbucket, it can download the repo and build a new image based on the Dockerfile. When you log in to Docker Hub, under Settings there is a “Linked Accounts & Services” section where you can connect your Bitbucket account. After connecting them, you have to set up build rules.

Currently, in my setup, whenever there is a new commit or merge on the production branch, Docker Hub creates a new build with the tag “latest”. This is the image we are going to use on ECS. You can also pull your images to your local machine to test them. I have also modified the bitbucket-pipelines.yml so that whenever there is a change on the production branch, a script is called which deploys our app to ECS.

options:
  max-time: 5 # 5 minutes, in case something hangs
pipelines:
  branches:
    master:
      - step:
          image: weboaks/node-karma-protractor-chrome
          caches:
            - node
          script:
            - npm install
            - npm install karma -g
            - karma start --single-run --browsers ChromeHeadless karma.conf.js
    production:
      - step:
          image: python:3.5.1
          script:
            - pip install boto3==1.3.0
            - export TAG=`git describe --abbrev=0 --tags`
            # invoke the ecs_deploy python script
            # the first argument is a template for the task definition
            # the second argument is the docker image we want to deploy,
            # composed of our environment variables
            # the third argument is the number of tasks to be run on our cluster
            # the fourth argument is the minimum number of healthy containers
            # that should be running on the cluster; zero is used for the
            # purposes of a demo running a cluster with one host, but in
            # production this number should be greater than zero
            # the fifth argument is the maximum number of healthy containers
            # that should be running on the cluster
            - python ecs_deploy.py task_definition.json $DOCKER_IMAGE:latest 1 0 200

I have also added a task_definition.json to the root of our project, which is used by the pipeline.

The next part is creating a cluster in ECS and deploying our image. You can start with this link; it will guide you through creating your first cluster. Use the username/image_name from Docker Hub for the image; your image on Docker Hub has to be public in this simple scenario.

It is advised to create a new user in AWS IAM for handling ECS related tasks. After creating a user with sufficient privileges add the following environment variables to your bitbucket project:

AWS_SECRET_ACCESS_KEY: Secret key for a user with the required permissions.

AWS_ACCESS_KEY_ID: Access key for a user with the required permissions.

DOCKER_IMAGE: Location of the Docker Image to be run. The tag/version is passed in bitbucket-pipelines.yml. (username/image_name on dockerhub, same as used in the ECS cluster setup.)

So by now our pipeline should be working: whenever we merge master into the production branch, a new image is built on Docker Hub and deployed onto ECS. If you go to the cluster and click on the EC2 instance, you can see its public address and access your application. YEY!!!

Our final workflow

(Actually, Docker Hub seems to be a huge bottleneck right now; sometimes it hangs for an hour before building the image. In that case, you have to run the pipeline script on the production branch again by hand.)

After creating a cool pipeline, it would also be advisable to set up a logging service for our pipeline and application, so we can monitor them more easily. In the end, we have created a pipeline that saves developers a lot of time and makes development much more convenient, and our product will probably contain fewer bugs.

Whether you’re working or preparing for an interview where you might get some tricky questions about JS, the following 5 features can prove to be useful. The final ECMAScript 6 (ES6) specification was released in July 2015; since then it has been supported in Node and newer browsers, and you can transpile the code to ES5 with Babel to stay compatible with older browsers. Let’s get to it.

1. Let and const, block-scoped variables.

On a high level, using const and let results in a more robust codebase that is easier to understand, because your code becomes more restrictive.

The const keyword prevents the value of a primitive from changing, and prevents a new object from overwriting an existing object binding. Using const over let/var has one huge advantage in my opinion:

It makes the intent of your code much clearer, leading to a quicker understanding whenever anyone takes a look at it; readable, clear code is nowadays one of the most important attributes of good code.
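A quick sketch of what const prevents (the variable names here are my own):

```javascript
// Reassigning a const binding throws a TypeError, even for primitives.
const limit = 10;
try {
  limit = 20;
} catch (e) {
  console.log(e instanceof TypeError); // true
}

// const protects the binding, not the value: properties of an
// object held in a const variable can still change.
const config = { retries: 1 };
config.retries = 3; // allowed
console.log(config.retries); // 3
```

Note that const alone does not make objects immutable; Object.freeze is needed for that.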

The let keyword is a nice improvement over var: var is function scoped, while let is block scoped. A variable created with var in a loop can be seen outside of it, while the same cannot be said for let.
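A minimal illustration of the difference (the loop variables are arbitrary):

```javascript
// var is function scoped, so i is still visible after the loop.
for (var i = 0; i < 3; i++) {}
console.log(i); // 3

// let is block scoped, so j only exists inside the loop body;
// accessing j directly here would throw a ReferenceError.
for (let j = 0; j < 3; j++) {}
console.log(typeof j); // "undefined"
```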

3. Classes

So ES6 finally introduced classes to the language, so people can understand and write code with less invested time. You can finally write code that is somewhat similar to OO languages like Java, although JS has neither interfaces nor abstract classes. Classes are syntactic sugar for JavaScript’s hard-to-understand prototypal inheritance. The main difference between classes and constructor functions (which were used to create instances pre-ES6) is hoisting: you must declare your classes first and can only use them after that. (In JavaScript, functions and ‘var’ declarations get hoisted, which means you can use them in your code before you have declared them.)
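A small sketch of the hoisting difference (Dog and Cat are made-up examples):

```javascript
// Function declarations are hoisted, so we can use Dog
// before its definition appears in the source.
const rex = new Dog('Rex');
console.log(rex.speak()); // "Rex says woof"

function Dog(name) {
  this.name = name;
  this.speak = function () { return this.name + ' says woof'; };
}

// Classes are not hoisted the same way: moving this "new Cat" call
// above the class declaration would throw a ReferenceError.
class Cat {
  constructor(name) { this.name = name; }
  speak() { return this.name + ' says meow'; }
}
console.log(new Cat('Tom').speak()); // "Tom says meow"
```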

4. Spread operator

The spread operator is a really great addition to the language; it has multiple use cases:

as function parameters

copying arrays

copying objects

So let’s see the first use case, function parameters (the rest syntax): we want to pass multiple parameters to our function and we don’t know how many to expect. After the first one, the rest of the parameters end up in an array. This way we can create really versatile functions.
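A sketch of both directions (the function and variable names are my own):

```javascript
// Rest: everything after the first parameter is collected into an array.
function logTagged(tag, ...values) {
  return tag + ': ' + values.join(', ');
}
console.log(logTagged('nums', 1, 2, 3)); // "nums: 1, 2, 3"

// Spread: the opposite direction, expanding an array into arguments.
const parts = [4, 5, 6];
console.log(Math.max(...parts)); // 6

// Spread also makes shallow copies of arrays
// (and of objects too, though object spread landed later, in ES2018).
const copy = [...parts];
console.log(copy.join(',')); // "4,5,6"
```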

5. Default function parameters

Default parameters are a nice addition to the language; they help in writing denser, easier-to-understand code, and in my opinion they are nicer than function overloading (which we have in Java and C++). In many cases we want a function where most of the time we pass only one parameter, but in some edge cases other parameters are necessary. Let’s see an example.
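For instance (greet is a made-up example):

```javascript
// 'greeting' is only needed in edge cases, so it gets a default value.
function greet(name, greeting = 'Hello') {
  return greeting + ', ' + name + '!';
}
console.log(greet('Anna'));       // "Hello, Anna!"
console.log(greet('Anna', 'Hi')); // "Hi, Anna!"
```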

Many articles talk about advanced functional programming topics, but I want to show you simple and useful code that you can use in day-to-day developer life. I’ve chosen JavaScript because you can run it almost everywhere and it’s well suited for functional programming. Two of the reasons why it’s so great are that functions are first-class citizens and that you can create higher-order functions.

Note how you can store a function in a variable and call it later. Functions stored in variables are treated just like any other variable.

const add3 = x => x + 3; // one possible definition; the original snippet is not shown here

typeof add3
"function"

But why is it great that you can return a function as a result? Because you can return behaviour, not just values; this raises the level of abstraction and makes the code more readable and elegant.
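For example, a function that returns behaviour rather than a value (multiplier is my own example):

```javascript
// multiplier returns a new function that remembers 'factor'
// through a closure.
function multiplier(factor) {
  return function (x) { return x * factor; };
}

const double = multiplier(2);
const triple = multiplier(3);
console.log(double(5)); // 10
console.log(triple(5)); // 15
```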

Let’s take an example: you want to print the double of every number in an array, something every one of us does once in a while.
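A hand-rolled version might look like this (a sketch reconstructed from the description; printDouble and forEach follow the names used in the text):

```javascript
// Prints the double of a single number.
function printDouble(n) {
  console.log(n * 2);
}

// Calls fn on every element of the array.
function forEach(array, fn) {
  for (let i = 0; i < array.length; i++) {
    fn(array[i]);
  }
}

forEach([1, 2, 3], printDouble); // prints 2, 4, 6
```

The built-in equivalent is simply [1, 2, 3].forEach(printDouble).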

The forEach function takes an array of numbers and a function printDouble, and calls printDouble on every element of the array. This is actually such a useful function that it is implemented on the Array prototype, so you don’t have to rewrite it in every codebase you work on.

(forEach is a higher-order function too because it takes a function as a parameter.)

Quick overview

Last summer my cousin and I started to work on our idea: crawl the comments of the internet and harness the data in them. Our current goal is to help marketers get useful insights from our data about the effects of their campaigns, releases, and presence in the digital world.

For example, a Chinese brand releases new phones. How do they get info about their users’ feedback? By looking at sales numbers, returned handsets, and perhaps emails from customers about features not working.

Our quest is to solve this problem by having a huge dataset of comments, and by analysing this we can give useful insight about:

How positively their brand is perceived.

What people think is positive/negative about certain products.

Yearly/monthly/weekly breakdown of the buzz around them on the internet, with channel distribution (on which sites the comments appeared).

Whether people shared their own ideas or someone else’s content about your product.

Conversational clouds: what people mentioned in relation to your product (screen, charger, packaging).

Current state of the solution

Our application’s backend is based on “templates”/scripts which instruct the crawler service to fetch the comments of various forums, currently a few tech sites such as Ars Technica and XDA Developers; we have 12M documents at the moment. The comments are then analysed in microservices for their language, their sentiment, and the structure of their sentences. This info is saved into a database, and the data is then indexed in an Elasticsearch cluster so we can query it quickly.

The frontend (you can access it here) currently enables you to search the data and create some really basic pie charts based on keywords.

The architecture without the language detection and the sentiment analysis services looks like this:

The tech stack is:

Spring boot for microservices

Angular 1 + Bootstrap for the frontend

Mongo for storage

Elastic for indexing & querying

MVP

Things to add so we can demo it:

Create views that show where keywords were mentioned, broken down by time/site/language; sentiment of keywords; conversational clouds. Basically any statistic that delivers value with as little development time as possible.

Add user/group/organisation management so that only people with certain rights can access the data and generate reports. (We planned to use Stormpath, but their future is kinda shady.)

Our current goal is to deliver the MVP in 2-3 months and get feedback from customers as fast as possible so we can make sure we are heading in the right direction.

Thanks for reading, we appreciate any feedback/ideas in the comments or in an email.

Finally found what I’ve been looking for: GITEA. It’s love at first sight, not only because of the beautiful UI, but because it doesn’t use SO MUCH GODDAMN MEMORY, which is expensive in the cloud for such a mundane thing as version management. For the past years I have run GitLab/Bitbucket/Stash servers for my personal projects, but they used too much memory, considering that the server was used by two people tops (GitLab recommends 4 gigs, and runs with 1 GB RAM + 3 GB swap). The problem with them is that they are heavyweight applications designed for massive scalability; Gitea, on the other hand, is a lightweight Go service forked from Gogs, consuming ~30 MB of memory under light usage. It is also blazing fast, has great ticket management and a built-in wiki. It’s almost as good as GitHub.

The title says it all: if you have a small service (in this case written in Node.js) running on a server with limited capabilities, you might run into problems when you need to do processor- or memory-heavy computations. One solution is to offload the work to Lambda, which scales automatically and charges only for computation; if you need the heavy computation only rarely, you can do this instead of renting a bigger server and save heaps of money.

Talk is cheap, let’s build it.

First of all, if you don’t have them already, you need the following:

Amazon AWS account

We will create the Lambda function with this; our extension of the application will run here.

Nodejs

Node.js is an open-source, cross-platform JavaScript runtime environment for developing a diverse variety of tools and applications.

The prompt will guide you through it, though it is recommended to turn off nginx while you do this, so you don’t have anything listening on ports 80 and 443. After you have finished, you’ll have your cert files in /etc/letsencrypt/live/yoururl