Recently, I wanted to run some jobs using Docker images. I’m a huge advocate of Docker, so naturally I was going to build a Docker image, run my Python scripts, and then schedule said job to run on a configurable basis. Doing so on AWS is pretty easy using Lambda and Step Functions; however, since this wasn’t a paid gig and I couldn’t get someone to foot the bill, enter Google Cloud!

Google Cloud Platform (GCP) is, in a way, the new kid on the block. AWS has a long history with the cloud and excellent customer support, whereas Google’s customer service is a bit like Bigfoot: you’ve heard of it, some people say they’ve seen it, but it doesn’t really exist. However, Google is still an amazing tech company: they release early and they improve their products until they’re awesome (e.g. Android). And best of all, they offer $300 in free credits. So I decided to go with Google; how bad could it be?
In this post, I’ll talk about how I set up Google Cloud to work for me. It took blood, sweat, and tears, but I got it working. I scheduled a recurring job: spin up a cluster of instances, run the job, and shut it all down! Not only is that cool (ya, I’m a geek), it’s also quite cost-effective.

I will outline what I did, and even share my code with you. Here goes:

Step 1 – Build a Docker image and push it to the Google Cloud private registry

The first step was the easiest and most trivial. It is pretty much the same as AWS.

Create a build docker image

Let’s start with creating a build image. GitLab CI allows you to use your own image as your build machine. If you’re using a different CI, I leave it to you to adjust this for your own system.

FROM docker:latest

RUN apk add --no-cache python py2-pip curl bash

RUN curl -sSL https://sdk.cloud.google.com | bash

ENV PATH $PATH:~/google-cloud-sdk/bin

RUN pip install docker-compose

This is the Dockerfile for the build machine. It starts from the docker image, installs Python and pip, and installs gcloud.

Then I push this build image to Docker Hub. If you haven’t done this before, you need to:
1) Sign up to Docker Hub at https://hub.docker.com and remember your username.

Create a GCP service account


You have to create a service account, give it access to the registry, then export the key file as JSON. This is a very simple step. If you’re unsure how to do it, just click through IAM & Admin in the console: create a user, give it a role, and export the key.

Customize the CI script to push to the private registry

Once this is all done and you have your build machine, we can work on your CI script. I will show you how to do this on GitLab CI, but you can adapt it to your own environment. First create a build environment variable called CLOUDSDK_JSON and paste the contents of the JSON key you created in the previous step as its value. Then add the following .gitlab-ci.yml file to your project.

image: <your-docker-hub-username>/build-machine

services:
  - docker:dind

stages:
  - build
  - test
  - deploy

before_script:
  - apk add --no-cache python py2-pip
  - pip install --no-cache-dir docker-compose
  - docker version
  - docker-compose version
  - gcloud version

build_image:
  stage: build
  except:
    - develop
    - master
  script:
    - docker build -t <job-image-name>:latest .

deploy:
  stage: deploy
  only:
    - develop
    - master
  script:
    - docker build -t <job-image-name>:latest .
    - echo $CLOUDSDK_JSON > key.json
    - gcloud auth activate-service-account --key-file=key.json
    - docker tag <job-image-name>:latest $PRIVATE_REGISTERY/<job-image-name>:latest
    - gcloud docker -- push $PRIVATE_REGISTERY/<job-image-name>:latest
    - gcloud auth revoke

Adjust the job-image-name to your job’s Docker image name, the service account name to the one you created, and the build image to the image you pushed to Docker Hub. This YAML file is geared toward a Python job, but you can change it for any other language.
I have 3 stages: build, test, and deploy.
I build and test on all branches, but only deploy on master. GitLab CI has a quirk: each stage can run on a different machine, so the image from my build stage isn’t available in the deploy stage, which forced me to re-build it there.

Once this is done, your CI system should be pushing your image to your Google private registry. Well done!

Step 2 – Running jobs in a temporary cluster

Here comes the tricky part. Since jobs only need to run every x hours, and only for a limited period, they would ideally run as a Google Cloud Function. However, those are limited to one hour and can only be written in JavaScript (AWS supports multiple languages with Lambda and Step Functions). Since I didn’t want to pay for a cluster running full-time, I had to develop my own way to run jobs.

Kubernetes Services

Controlling jobs in a cluster, and the cluster itself, can be achieved using Kubernetes. This is one part of GCP that really shines: it lets you define services, jobs, and pods (collections of containers), and then run them.

To do this, I wrote a Kubernetes Service class in Python that will:
– Spin up / create a cluster.
– Launch docker containers on the cluster.
– Once jobs finish, shut down the cluster.

class KubernetesService():
    def __init__(self, namespace='default'):
        self.api_instance = kubernetes.client.BatchV1Api()
        service = build('container', 'v1')
        self.nodes = service.projects().zones().clusters().nodePools()
        self.namespace = namespace

This is the class and constructor. The full code for this class has more configuration and env variables, as it is part of the App Engine cron project. I will include the repo if you want full details on how to achieve this.

def setClusterSize(self, newSize):
    logging.info("resizing cluster {} to {}".format(CLUSTER_ID, newSize))
    self.nodes.setSize(projectId=PROJECT_ID, zone=ZONE,
                       clusterId=CLUSTER_ID, nodePoolId=NODE_POOL_ID,
                       body={"nodeCount": newSize}).execute()

This function controls the cluster size. It can spin the cluster up before jobs need to run, then shut it down afterwards.

The kubernetes_job function creates containers (via an additional helper that creates container objects with env variables). Containers are then part of a pod, that pod is part of a job template, and the template is part of a job spec. You can read more about it in the Kubernetes docs.

If you don’t want your code to block waiting for the jobs, you can poll for completion, and that is what shutdown_cluster_on_jobs_complete is for. It will shut down the cluster once there are no running jobs.
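A hedged sketch of what shutdown_cluster_on_jobs_complete could look like; the Batch API call matches the BatchV1Api instance created in the constructor above, but the repo’s version may differ in details:

```python
import time

def has_active_jobs(job_list):
    """True if any job in a V1JobList-like object still has active pods."""
    return any((job.status.active or 0) > 0 for job in job_list.items)

def shutdown_cluster_on_jobs_complete(service, poll_seconds=60):
    """Poll the Batch API until no jobs are running, then scale the pool to zero.

    `service` is assumed to be the KubernetesService instance from above.
    """
    while True:
        jobs = service.api_instance.list_namespaced_job(service.namespace)
        if not has_active_jobs(jobs):
            service.setClusterSize(0)
            return
        time.sleep(poll_seconds)
```

Because the Batch API reports active pod counts per job, the helper only needs to scan job statuses; everything else is plain polling.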

This class controls the entire job scheduling flow and ensures jobs execute successfully.
It’s part of an App Engine project (however, it can be used independently).
Next we need to have this script scheduled or triggered, and that is the job of our cron scheduler task.

Cron scheduler App Engine service

Sadly, Google doesn’t give you an easy way to run code in the Cloud; you actually have to write more code to run code (silly, right?)

The concept is that App Engine provides you with a cron web scheduler that calls your own app’s endpoints at given intervals.

First, you add cron.yaml to your project and configure which endpoint to hit and at what interval:

cron:
- description: task to kick off all updates
  url: /events/run-jobs
  schedule: every 2 hours

- description: task to shutdown jobs when finished
  url: /events/shutdown-jobs
  schedule: every 5 minutes

Then we can add a handler to shut down the jobs, and to kick them off.
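A hedged sketch of what those handlers might look like; run_jobs, shutdown_jobs, NODE_COUNT and JOB_SPECS are assumptions based on the cron.yaml above and the KubernetesService class, not the exact code from the repo. The logic is pulled into plain functions so it stays framework-independent:

```python
def run_jobs(service, node_count, job_specs):
    """Spin the cluster up and kick off one Kubernetes job per spec."""
    service.setClusterSize(node_count)
    for spec in job_specs:
        service.kubernetes_job(spec)
    return len(job_specs)

def shutdown_jobs(service):
    """Hit every 5 minutes by cron; scales down once no jobs are running."""
    service.shutdown_cluster_on_jobs_complete()

# Bound inside the App Engine app roughly like:
# class RunJobsHandler(webapp2.RequestHandler):
#     def get(self):
#         started = run_jobs(KubernetesService(), NODE_COUNT, JOB_SPECS)
#         self.response.write('started %d jobs' % started)
```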

Last, we want to add a Settings class to load env-like variables from the Datastore:

import os
from google.appengine.ext import ndb

if os.getenv('SERVER_SOFTWARE', '').startswith('Google App Engine/'):
    PROD = True
else:
    PROD = False

class Settings(ndb.Model):
    name = ndb.StringProperty()
    value = ndb.StringProperty()

    @staticmethod
    def get(name):
        NOT_SET_VALUE = "NOT SET"
        retval = Settings.query(Settings.name == name).get()
        if not retval:
            retval = Settings()
            retval.name = name
            retval.value = NOT_SET_VALUE
            retval.put()
        if retval.value == NOT_SET_VALUE:
            raise Exception(('Setting %s not found in the database. A placeholder ' +
                             'record has been created. Go to the Developers Console for your app ' +
                             'in App Engine, look up the Settings record with name=%s and enter ' +
                             'its value in that record\'s value field.') % (name, name))
        return retval.value

Note that most of the app depends on the Datastore. Sadly, Google doesn’t give you env variables in an easy way, but you can store them in the Datastore; that is what the Settings class is for.

Then we just bind the route handler:

import webapp2

app = webapp2.WSGIApplication([('/events/run-jobs', RunJobsHandler)],
                              debug=True)

This should allow our app to spin up a cluster, launch containers, and then shut down the cluster. In my code, I also added a handler for the shutdown.

Then make sure you have gcloud installed, deploy the App Engine app using the gcloud app deploy command, and you should be good to go.
While my example runs the same Docker image, just with different operations selected via different env variables, you can easily adjust this code to suit whatever need you might have.
Here is the full git repo: gcp-optimized-jobs
Hope you find it useful!

This post is a simple guide to JS testing with Mocha, Chai, and Sinon on CircleCI. It will show you how to set up for testing, along with some tips for good coverage and more.
I’ll cover some best practices I use for testing JS code. They aren’t official best practices, but I use these concepts because I found they make it easy to get readable tests with full coverage and a very flexible setup.

This post will walk through a unit test file to illustrate the different points I found helpful when composing unit test files:

Setup

Mocha is a testing framework for JS that allows you to use any assertion library you’d like; it is very commonly paired with Chai, an assertion library. The Chai docs explain how Mocha and Chai work together, how to use them, and more.
One of Chai’s strong points is that you can easily extend it using support libraries and plugins. We will use a few of them, so let’s first set up our dependencies in our project:

chai – the Chai library. It has a good reference for how to use chai to assert or expect values, and a plugin directory – a valuable resource!

chai-http – a Chai extension that allows us to hit HTTP endpoints during a test.

chai-as-promised – adds support for tests / setup that return a promise. This enables us to assert / expect what the result of the promise will be. We will see this in action shortly.

co-mocha – a Mocha extension that allows us to use generator functions inside Mocha setup / tests. If you skip this step and try to use a generator function, the test will finish without running the yielded test code correctly. This means you will get twilight-zone-like results, with tests passing when they should fail!

sinonjs – cool test mocks, spies, and stubs for any JS framework. It works really well and is very extensive.

After we install all the packages, let’s create a new file, and add all the required libraries to it as follows:

//demo test file
const chai = require('chai');
const chaiHttp = require('chai-http');
const chaiAsPromised = require('chai-as-promised');
require('co-mocha');
const sinon = require('sinon');
const TestUtils = require('./utils/TestUtils'); // explained later on
const server = require('../server'); // explained later on

In this example I’m testing an Express server, but you can use any type of Node HTTP server (assuming you are testing a server). Just make sure you export the server from your main or server file, and then you can require it from your test files.

//server.js
const express = require('express');
const server = express();

//all server route and setup code.

module.exports = server;

We will see how we use the server later on in the test.

Grouping tests using ‘describe’

Mocha does a great job at grouping tests. To group tests together under a subject, use the following statement:

describe('Test Group Description', () => {
  // test cases.
});

‘describe’ blocks are also easily nestable, which is great. So the following will also work:

describe('Test Endpoint', () => {
  describe('GET tests', () => {
    // GET test cases.
  });
  describe('POST tests', () => {
    // POST test cases.
  });
  describe('PUT tests', () => {
    // PUT test cases.
  });
  describe('DELETE tests', () => {
    // DELETE test cases.
  });
});

This groups them together, and if you’re using something like IntelliJ or WebStorm, the output is displayed very nicely in a collapsible window.

Test hooks

Many times when running tests, we need to do setup before each test or before each test suite. The way to do that is to use the testing hooks before, after, beforeEach, and afterEach:

describe('hooks', function () {
  before(function () {
    // runs before all tests in this block
  });

  after(function () {
    // runs after all tests in this block
  });

  beforeEach(function () {
    // runs before each test in this block
  });

  afterEach(function () {
    // runs after each test in this block
  });

  // test cases
});

These hooks can also return a promise; the test framework will not continue until the promise is resolved, and will fail if it is rejected:

before(() => {
  // do some work, return a promise / promise chain
  return new Promise((resolve) => resolve(true));
});

after(() => functionThatReturnsPromise());

Also, since we have required co-mocha, our hooks can run a generator function:

let stuffINeedInTests = null;

before(function* () {
  const result = yield functionThatReturnsPromise();
  const resultFromGen = yield* generatorFunction();
  stuffINeedInTests = { promiseResult: result, genResult: resultFromGen };
});

I can then use stuffINeedInTests in my tests. You can also do this setup using promises, as shown above.

Hook on root level

Test hooks are awesome, but sometimes we might want some hooks to run not just once per test file, but once for all our tests. Mocha does expose root-level hooks, so to achieve that we will create a new hooks file, root-level-hooks.js,
and put our hooks in there with no describe block around them:

//root-level-hooks.js
require('co-mocha'); // enable use of generators

before(() => {
  // global hook to run once before all tests
});

after(function* () {
  // global after hook that can
  // call generators / promises
  // using yield / yield*
});

Then at the top of each test file we will require this file in:

//demo test file
require('./root-level-hooks');

//demo test file 2
require('./root-level-hooks');

This way our hooks will run once for the whole test run. This is the perfect place to load up a test DB, run some root-level setup, authenticate to the system, etc.

External System Mocking

Some systems / modules call other systems internally. For example, think of a function that processes a payment for an order. That function might need to call a payment gateway or, after the order is processed, send the shipping information to another system (for example, a logistics system, or upload a file to S3). Unit tests are intended to be standalone and not depend on external systems. Therefore we need a way to mock those external systems, so when the tested code reaches out to them, the test case can respond on their behalf.

In our tests we will use Sinon.
Basically, we will mock the calls using a test class of mocked calls that reads a response file and sends it back.
This makes the mock straightforward:

What we are doing here is creating a mock object. In this case we are mocking axios, as my server code uses it, but we can use the same construct to mock any external system.
Our request mock will provide get and post methods, just like the axios library does. I’m using sinon.spy to check which URL is requested by the module code, and a switch statement to handle the different URLs. Our mock can return URLs, JSON, promises, files, or whatever is needed to successfully mock the external system.
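Here is a hedged, dependency-free sketch of what requestMock could look like; the URLs and payloads are invented for illustration, and in the real tests the get/post functions are additionally wrapped with sinon.spy so you can assert which URLs were requested:

```javascript
// A hypothetical requestMock; adjust the URLs and payloads to your system.
const requestMock = {
  calls: [], // simple call log; sinon.spy gives you this for free
  get(url) {
    this.calls.push(['get', url]);
    switch (url) {
      case 'https://payments.example.com/status':
        return Promise.resolve({ status: 200, data: { paid: true } });
      default:
        return Promise.reject(new Error(`unmocked GET: ${url}`));
    }
  },
  post(url, body) {
    this.calls.push(['post', url]);
    switch (url) {
      case 'https://logistics.example.com/ship':
        return Promise.resolve({ status: 201, data: { shipmentId: 'fake-1' } });
      default:
        return Promise.reject(new Error(`unmocked POST: ${url}`));
    }
  },
};

module.exports = requestMock;
```

The switch gives one place per external endpoint, so adding a new mocked URL is a one-case change.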

const axios = require('axios');

before(() => {
  sinon.stub(axios, 'get').callsFake(requestMock.get);
  sinon.stub(axios, 'post').callsFake(requestMock.post);
});

after(() => {
  axios.get.restore();
  axios.post.restore();
});

I’m then using the before hook to stub axios’s methods, so when the module code calls axios.get or axios.post it hits my mock and not the node_module that actually does the HTTP request.

Then I’m using the after hook to disable the mock and restore normal behaviour.

Test Cases

Mocha lets us create tests very easily. You use the it function to create a test.
Either:

it('Unit test description and expected output', () => {
  // return a value or return a promise.
});

Or using generators

it('Unit test description and expected output', function* () {
  // yield generator or promise.
});

You can also use the done callback, but I prefer not to use it.
I like to keep code as small as possible and without any distractions.
However, it’s here if you need it:

it('Unit test description and expected output', (done) => {
  // call done when finished some async operation
});

Each test case is composed of two parts:
1) The test itself
2) The expected result

The tests themselves

Since we have added the mock for the external system, we can safely use our test code to hit a function or, if we are testing a REST endpoint, call that endpoint:

chai.request(server)
  .get('/serverPath')
  .then(function (response) {
    // process response
  });

// or

const response = yield chai.request(server)
  .post('/serverPath')
  .send({ testObject: { name: 'test' } });

In this example we are testing an endpoint, but calling a function would have been even easier.

Expected Result

The second part involves looking at the results of our test runs, and we will use Chai to examine the responses. Chai provides a long list of ways to inspect responses using expect, should, or assert, whichever you prefer.
I try to use expect, as it doesn’t change Object.prototype. Here is a discussion of the differences: expect vs should vs assert.

expect(res).to.have.property('statusCode', 200);
expect(res).to.have.property('body');
assert.isOk(res.statusCode === 201, 'Bad status code');
TestUtils.testForSucessAndBody(res, expect, 201);

Failing these will cause the test to fail.
I normally use a test helper class with a few standard ways to test for a correct response and to compare the returned object to the expected object:
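A hedged sketch of such a TestUtils helper; the real class in the repo may differ, and testForSucessAndBody matches the call shown above (misspelling included):

```javascript
// utils/TestUtils.js - hypothetical helper; `expect` is chai's expect, passed in.
class TestUtils {
  // assert a successful response with the given status code and a body
  static testForSucessAndBody(res, expect, statusCode = 200) {
    expect(res).to.have.property('statusCode', statusCode);
    expect(res).to.have.property('body');
  }

  // assert an error response with the given status code
  static testForError(res, expect, statusCode) {
    expect(res).to.have.property('statusCode', statusCode);
  }
}

module.exports = TestUtils;
```

Passing expect in keeps the helper free of a direct chai dependency, which also makes it trivial to exercise in isolation.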

Test for failures

Using promises, I can also quickly test for failures to ensure our code doesn’t only work for valid input, but also behaves properly for invalid input.

I then require the TestUtils class in my test file, and I can use it to quickly expect or assert different conditions.

Mocha tests on circle

When using CircleCI, it’s great to get the test output into the $CIRCLE_TEST_REPORTS folder; Circle will then read the output and present you with the results, rather than you looking through the logs each time to figure out what went right and what went wrong. The CircleCI team has written a whole document about this: CircleCI Test Artifacts.

Here we will focus on using Mocha and getting the reports parsed. To do so, we need Mocha to output the results in JUnit XML format. This can be achieved easily using mocha-junit-reporter. This lib allows Mocha to run our tests and output the results in the correct format.

This outputs the information into the junit folder for both ESLint (if you are using it) and Mocha.

Now all that is needed is to create a link between your junit folder and CIRCLE_TEST_REPORTS, which can be done by editing the circle.yml file and adding the following lines in the pre step for test:

test:
  pre:
    - mkdir -p $CIRCLE_TEST_REPORTS/junit

If you aren’t using Docker, you can also add a symbolic link after creating the folder: ln -s $CIRCLE_TEST_REPORTS/junit ~/yourProjectRoot/junit

However, if you are using docker-compose or docker run to execute your tests inside a container, you will also need to add a volume that maps your test output to CIRCLE_TEST_REPORTS.
For docker-compose:

volumes:
  - $CIRCLE_TEST_REPORTS/junit:/junit

For docker run you can do the same using the -v flag.
Once that is done, you’ll get the report output in Circle after the build finishes.

In this post I’ll present a suggested design pattern, and an implementation of it, using a Node + Express REST API with ES6 classes. Personally, I hate writing the same code again and again: it violates the DRY principle, and I hate to waste my time and my customers’ time. Coming from a C++ background, I love a nice class design.

In today’s world of microservices and the web, REST endpoints have become somewhat of the de-facto way to connect services and web applications. There are loads of examples of how to create REST endpoints and servers using Node.js and Express 4.0. SOAP, which was popular a while back, has given way to JSON. New technologies like GraphQL have not made it to the mainstream yet, so for now we are stuck with REST and JSON.

I haven’t found a tutorial that discusses how to do this using ES6 classes and a good class design. This is what we will cover today.

Rather than building REST endpoints over and over, my concept is to have a base router implement base behavior for the REST endpoint, then have derived classes override such behavior if needed.

We create an abstract base class with all the default route handlers as static methods. Those take a request, process it (most likely read / write / delete / update the DB), and return the results. Then setupRoutes is the glue that binds the static methods to the actual routes. In addition, our constructor takes a route name, which will be the route path being handled.

Then derived classes can either disable certain routes, or override routes as need be, while maintaining the base behaviour, if that is what is needed (for example when wrapping a service, or doing simple DB operations).

Now let’s implement this in JavaScript using Node.js, Express and ES Classes. I’m going to implement this example using MongoDB and Mongoose, but you can use any other DB or service you wish. The Mongoose in this code sample is pretty meaningless, it’s just for the sake of the example.

Then I’ll create the server.js main file (we won’t discuss this in detail, as it’s mostly a Node/Express server; the one line that’s important to note is require('./routes/index')(server,db); as this creates all the routes for our application).

I like to use automatic glue code, rather than re-typing or building a static array. This way the system detects new routes and adds them automatically, just by adding a file to a folder.

I’m using require-dir, which includes all route handlers. I wanted each route to handle its own paths, not the global paths (I like encapsulation), so as a design decision I made the filename the subroute.

I then create an instance of the route handler class, passing it a reference to the DB (so it can do its thing).

setupRoutes() returns a router, which I then connect to our server. I’m building on server.use of the Express router to bind routes to the base URL. If you adopt this implementation, you can always use your own structure.

Next let’s look at the base-router-handler which is the base to all route handlers. It will contain most of the code for any endpoint:

//routes/base-route-handler.js
'use strict';

const express = require('express');
const coWrapper = require('../utils/expressCoWrapper');

class BaseRouteHandler {
  constructor(collectionName, db) {
    this.db = db;
    this.router = new express.Router();
    this.collectionName = collectionName;
    this.collection = this.db[this.collectionName];
    this.setupMiddleware();
  }

  static validateOkResponse(res, foundItems) {
    if (!foundItems || !foundItems.length) {
      res.status(404).send('item not found');
      return false;
    }
    return true;
  }

  setupMiddleware() {
    // attach any middleware you might need on a per-route basis; can be overridden in subclasses
  }

  // getSingle, putSingle and deleteSingle are defined similarly; see the repo for the full class.

  static *getMultiple(req, res, next) {
    try {
      res.connection.setTimeout(0); // disable server timeout - this may take a while
      const allItems = yield this.collection.find({});
      res.json(allItems);
    } catch (err) {
      res.status(500).send('Internal Error');
      throw err;
    }
  }

  static *postMultiple(req, res, next) {
    try {
      const result = yield this.collection.update([req.body]);
      res.json(result);
    } catch (err) {
      res.status(500).send('Internal Error');
      throw err;
    }
  }

  // eslint-disable-next-line require-yield
  static *notImplemented(req, res, next) {
    res.status(501).send('Not implemented');
  }

  setupRoutes() {
    const self = this;
    this.router.route('/:id')
      .get(coWrapper(self.constructor.getSingle))
      .put(coWrapper(self.constructor.putSingle))
      .delete(coWrapper(self.constructor.deleteSingle))
      .patch(coWrapper(self.constructor.notImplemented))
      .post(coWrapper(self.constructor.notImplemented));

    this.router.route('/')
      .get(coWrapper(self.constructor.getMultiple))
      .post(coWrapper(self.constructor.postMultiple))
      .put(coWrapper(self.constructor.notImplemented))
      .patch(coWrapper(self.constructor.notImplemented))
      .delete(coWrapper(self.constructor.notImplemented));

    return this.router;
  }
}

module.exports = BaseRouteHandler;

I wanted to use generators, as I like their async/await-like structure. So I wrote a co-wrapper file that handles errors and the generator routes correctly, including wrapping them with a promise. I won’t go into depth explaining it, as it’s not the point of this post, but you can see the file in the git repo.
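For a sense of what that wrapper does, here is a hedged sketch of a utils/expressCoWrapper.js; the real file in the repo uses the co library, and the minimal generator runner below is an equivalent stand-in:

```javascript
// utils/expressCoWrapper.js - hypothetical sketch; see the repo for the real file.
'use strict';

// A minimal generator runner (the real implementation can simply use co.wrap).
function runGenerator(gen) {
  return new Promise((resolve, reject) => {
    function step(result) {
      if (result.done) return resolve(result.value);
      Promise.resolve(result.value).then(
        (value) => step(gen.next(value)),
        (err) => {
          try { step(gen.throw(err)); } catch (e) { reject(e); }
        }
      );
    }
    try { step(gen.next()); } catch (e) { reject(e); }
  });
}

// Wrap a generator route handler into a plain (req, res, next) Express handler,
// forwarding any error to next() so Express error middleware sees it.
function coWrapper(genFn) {
  return function (req, res, next) {
    runGenerator(genFn.call(this, req, res, next)).catch(next);
  };
}

module.exports = coWrapper;
```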

Next we create the base constructor, which takes the collection name and a DB reference. It creates the binding to a collection / table / service / anything else you want. It also calls the middleware setup; if you wish to bind your own route-based middleware, you can override that function in derived classes.

Next I go through and create static route handlers for each route. As you can see, the route handlers are pretty simple: take JSON in, perform some DB operation, and return the result. In other examples you might have more complex behaviour. The nice thing is the base class creates a default behaviour, but by overriding the static methods in derived classes we can do whatever we wish.

Once the base class is ready, we can create a real route that does something!
Let’s create a ‘route-handlers’ folder inside the ‘routes’ folder and add a file called companies.js.

//routes/route-handlers/companies.js
'use strict';

const BaseRouteHandler = require('../base-route-handler');

class CompaniesRouter extends BaseRouteHandler {
  constructor(db) {
    super('companies', db);
  }

  static *putSingle(req, res, next) {
    yield* super.notImplemented(req, res, next);
  }

  static *deleteSingle(req, res, next) {
    yield* super.notImplemented(req, res, next);
  }

  static *postSingle(req, res, next) {
    // do some code to send an email to the admin, to ask to create multiple new companies
  }
}

module.exports = CompaniesRouter;

First look at how easy it was to create a new route. We didn’t need to write even this much code. We could just create the constructor and be done with it, if we wanted the same behaviour as the base class.

I did want to show, though, how easy it is to override the code without requiring much work. The base class provided us with notImplemented, which makes it easy to disable routes.

Even adding a route is easy: just add a handler implementation of your own. This makes it easy to test just the new functionality and not have to re-write the same code over and over.

Why are module-level variables bad?

Node assigns to these arguments when it invokes the wrapper function.
This is what makes them look as if they were globals in the scope of your Node module.
So it seems we have globals in our module; however:
– exports is defined as a reference to module.exports prior to that.
– require and module are defined by the wrapper function being executed.
– __filename and __dirname are the filename and folder of your current module.
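To see why these look like globals, recall the shape of the wrapper: Node compiles every module’s source into a function with exactly those five parameters. A simplified sketch:

```javascript
// A simplified sketch of Node's module wrapper; the real wrapper lives inside
// Node's module loader and is applied to every file before it runs.
const wrap = (code) =>
  `(function (exports, require, module, __filename, __dirname) {\n${code}\n});`;

console.log(wrap('module.exports = 42;'));
```

Your module code runs as the body of that function, so the five names are just function parameters, scoped per file.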

Caching – a double-edged sword

Node will then cache this module, so the next time you require the file you won’t actually get a fresh copy; you’ll be getting the same object as before.
This means you’ll be using the same module-level variables in multiple places, which means danger!

Here is a code example that illustrates the problem:

//moduletest.js
'use strict';

var x = 0;

module.exports = function (val) {
  console.log(`val: ${val}, x: ${x}`);
  if (val !== x && x !== 0) throw new Error(`failure!!! ${x} != ${val}`);
  x = val;
};

//main.js
const fn1 = require('./moduletest');
const fn2 = require('./moduletest');

setInterval(function () {
  fn1('a');
}, 200);

setInterval(function () {
  fn2('b');
}, 50);

Here I’m running two sets of calls to the same function, with a small delay between them. After a few runs we notice that the calls stomp on each other’s variable and the error is thrown. This is an example of a module-global issue.

How to solve module globals?

There are multiple potential solutions to this global issue; I’ll present two of them.

Solution 1 – Functional

If we define a local scope inside our module, we can return a new set of variables for each run.
We will use the let keyword, along with a scoped function (not strictly needed, but nicer, with better scope control).

//testmodule.js
'use strict';

module.exports = (function () {
  let x = 0;
  return function (val) {
    console.log(`val: ${val}, x: ${x}`);
    if (val !== x && x !== 0)
      throw new Error(`failure!!! ${x} != ${val}`);
    x = val;
  };
});

//main.js
const fn1 = require('./testmodule')(); // <--- calling the factory each time
const fn2 = require('./testmodule')();

// fn1 and fn2 are new functions with new variables - we busted the cache!! :)
// notice I also use let, to ensure block-scoped variables, not hoisted vars.

Solution 2 – use Classes

We can define a class and create a new instance for each run.
This way each variable is a member of that instance, ensuring proper encapsulation.
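A minimal sketch of the class-based solution (Counter is a hypothetical name):

```javascript
'use strict';

class Counter {
  constructor() {
    this.x = 0; // instance state - not shared between require() consumers
  }
  set(val) {
    if (val !== this.x && this.x !== 0) throw new Error(`failure!!! ${this.x} != ${val}`);
    this.x = val;
  }
}

// module.exports = Counter;  // each consumer then does: new Counter()
const a = new Counter();
const b = new Counter();
a.set('a');
b.set('b'); // no clash: each instance owns its own x
```

The module cache still hands every consumer the same Counter class, but state lives on instances, so caching becomes harmless.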

JavaScript is filled with an abundance of libraries, frameworks, and acronyms that would make any conversation between two web developers sound like they are about to fly a spaceship to colonize Mars.
If you don’t believe me, check out this funny post: How it feels to learn JavaScript in 2016.
Writing async JS is no different and no less confusing.

In this post I’ll try to bring clarity to asynchronous code in JavaScript. I’ll focus on back-end Node.js code, but a lot of it also applies to the front-end.
Let’s first cover the async JS mechanisms we have in Node:

Callbacks

Promises

Generators

Async / Await

I have not included things like observers, async.js, and events, as they are not exactly core JS. Events, for example, rely on an underlying async mechanism (such as callbacks); many observer mechanisms are used mainly in front-end patterns today; and async.js is an external library that I stopped using. If you want to learn more, I suggest looking these up.

Callbacks

Callback functions are the most basic type of async code, common not only to JavaScript but to many other languages.
Callbacks are simple to understand: they are functions passed as arguments, to be called when the called function finishes.


function callMeWhenDone(){
  console.log("finished");
}

function doLongProcessWithCallback(param1, param2, callback){
  // some long operation
  // finished
  callback();
}

doLongProcessWithCallback("stringInput", 34, callMeWhenDone);

Very simple and straightforward. The main problem with callbacks is that when they are all chained together, as many async operations are, you end up with loads of nested callbacks that are a nightmare to read, manage, or follow. This is called callback hell.
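To make callback hell concrete, here is a contrived sketch; the step names are invented, and each step is synchronous for brevity (imagine real I/O behind each one):

```javascript
// each step takes a node-style callback(err, result);
// synchronous here for brevity, imagine real I/O behind each one
function step(name, input, callback){
  callback(null, input + ' -> ' + name);
}

step('loadUser', 'start', function(err, user){
  if (err) throw err;
  step('loadOrders', user, function(err, orders){
    if (err) throw err;
    step('loadShipping', orders, function(err, shipping){
      if (err) throw err;
      console.log(shipping); // the nesting (and error handling) grows with every added step
    });
  });
});
```

Three steps already push the code three levels deep, and every level repeats its own error check.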

Promises

Promises are a different way to handle asynchronous code. They allow for easier management of async code, yield an easier code flow, use exceptions for errors, and have uniform signatures and easy composition, meaning we can chain promises together!

Promises are a bit like real life promises. Imagine your boss promising you a promotion next quarter. You don’t know if you’ll get it or not, and you’ll know that only in the future. Promises have three states: resolved, rejected and pending.

A promise constructor takes an executor function, which itself receives two callbacks, resolve and reject, to be called when the work finishes or fails; the result is a chainable promise object.


const doLongProcessWithPromise = function(){
  return new Promise(function(resolve, reject){
    // some long operation
    // finished
    resolve("finished");
    // on failure we would call reject("failed") instead
  });
};

doLongProcessWithPromise();

This might look more complex, and for very simple situations it is. But let’s look at the chainable .then and .catch (for the success and failure of a promise).


doLongProcessWithPromise()
  .then(function(result){
    // this is called after the promise resolves,
    // and the input parameter is the return value from the success
  });

// or imagine this
userSubmitPayment
  .then(processPaymentInBillingSystem)
  .then(processPaymentAcceptedAndApplyToOrder)
  .then(generateShippingInformation)
  .then(sendEmailToShippingDepartment)
  .then(sendEmailToUserWithTrackingNumber)
  .catch(ErrorInProcess);

As you can see this allows for chaining of promises, which creates sequential code. Sweet!

Prior to ES6, promises were supported via external libraries such as Bluebird, Q, RSVP, and many others. They are now part of the language itself, as promises are that important.
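Beyond .then chaining, native promises also compose in parallel. A small sketch using Promise.all, with fetchUser being an invented stand-in for a real async call:

```javascript
// stand-in for a real async call (e.g. an HTTP request)
function fetchUser(id){
  return Promise.resolve({ id: id, name: 'user' + id });
}

// run independent operations in parallel and wait for all of them;
// results arrive in the same order as the input promises
Promise.all([fetchUser(1), fetchUser(2), fetchUser(3)])
  .then(function(users){
    console.log(users.map(function(u){ return u.name; }).join(','));
  })
  .catch(function(err){
    // a single rejection anywhere fails the whole batch
    console.error('one of the calls failed:', err);
  });
```

This is the "easy composition" mentioned above: one .catch covers the entire group, and the happy path stays flat.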

Promises deserve a post of their own so here is some more reading if you want to dive in and understand them better:

Generators

Generators are not designed to be an asynchronous mechanism per se. Their intent was to bring iterator-like functionality to the language; however, they are often used to create cleaner-looking, synchronous-style code. This builds on the fact that generators can be paused and resumed. Once again, generators deserve a post of their own, so I will add additional reading links at the bottom of this section.

Generators landed in ES6, and can be created by adding a ‘*’ after the function keyword (or before, in class members):


function* generatorFunction(){
  yield 'a'; // once yield is called, the function is paused until next() is called again
  yield 'b';
  yield 'c';
}

var g = generatorFunction();

console.log(g.next().value); // output: a
console.log(g.next().value); // output: b
console.log(g.next().value); // output: c

The nice thing about generators is that inside a generator function you can delegate control to another generator with yield*, or hand off a promise / value with yield:


function* generatorFunction(){
  const userInfo = yield getUserReturningPromise();
  const orderInfo = yield* getOrdersForUserGenerator(userInfo);
  return orderInfo;
}

// wrap the generator with a promise and it can now be used as a promise.

As you can see, the code becomes simpler. You can even wrap a generator into a promise easily with a coroutine (Bluebird provides one, for example).
As you can see, promises and generators co-exist nicely!
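To demystify what such a coroutine wrapper actually does, here is a minimal hand-rolled runner. Real libraries (Bluebird’s Promise.coroutine, or co) are more robust, so treat this as a sketch:

```javascript
// drives a generator that yields promises: each yielded promise is awaited,
// its value is fed back into the generator, and the final return value
// resolves the wrapping promise
function coroutine(genFn){
  return function(){
    const gen = genFn.apply(this, arguments);
    return new Promise(function(resolve, reject){
      function step(result){
        if (result.done) return resolve(result.value);
        Promise.resolve(result.value).then(
          function(value){
            try { step(gen.next(value)); } catch (e) { reject(e); }
          },
          function(err){
            // throw the rejection back into the generator so it can try/catch it
            try { step(gen.throw(err)); } catch (e) { reject(e); }
          }
        );
      }
      try { step(gen.next()); } catch (e) { reject(e); }
    });
  };
}

// usage: the generator body reads like sequential code
const getOrder = coroutine(function*(){
  const user = yield Promise.resolve({ id: 7 });            // stand-in async call
  const order = yield Promise.resolve('order-for-' + user.id);
  return order;
});

getOrder().then(function(order){ console.log(order); });
```

The whole trick is the pause/resume nature of generators: yield suspends the function until the runner resumes it with the promise’s value.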

Async / Await

Async/await is sadly not part of ES6, only ES7. The use of generators with promises, while nice, is not very clean: it requires a lot of wrapping, and the intent of generators was to provide an iterator, not an async mechanism. This is where async/await shines, as it is a cleaner way to handle promises and asynchronous code in a sequential manner.

All you have to do is define an async function (with the async keyword), then await your promises, much like the generator yield, but with less mess:


async function doProcess(){
  const userInfo = await getUserReturningPromise();
  const orderInfo = await getOrdersForUserPromise(userInfo);
  return orderInfo;
}

As you can see the code is clean and didn’t require any wrapping or generators. Adding just two more keywords lets us use promises everywhere (and promises tend to be faster than generators).
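Another nicety worth showing: with async/await, promise rejections surface as ordinary exceptions, so error handling is plain try/catch. A sketch with an invented getUser stand-in:

```javascript
// stand-in async call that can succeed or fail
function getUser(id){
  return id > 0
    ? Promise.resolve({ id: id })
    : Promise.reject(new Error('bad id'));
}

async function doProcessSafely(id){
  try {
    const user = await getUser(id);
    return 'processed-' + user.id;
  } catch (err) {
    // rejected promises land here, just like thrown exceptions
    return 'failed: ' + err.message;
  }
}

doProcessSafely(-1).then(function(result){ console.log(result); });
```

Compare this with the .catch chains above: the same error flow, expressed with the language's ordinary control structures.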

Most times I write tech posts or professional posts; this time I decided to write something more personal.
I’ve been asked why I moved to Sofia countless times, so I decided to write a blog post about it.

The short answer: FOR LOVE!

The long answer:

Israel

In 2007 I moved back to Israel after many years in Australia. I didn’t know it yet, but I was about to spend the next 7 years there working with various startups. Some of my endeavors were more successful, some less; it was an interesting journey.

In 2012 I joined a startup as a co-founder, responsible for marketing, sales, and biz-dev. The company had 25 products and I recommended we focus on one. As I started to market it, I saw the product was lacking and knew that with the right app we would get acquired. So instead of marketing I ended up being a developer and weekend marketer. I spent 90% of my time writing browser extensions in C++, building mobile apps, and websites. Not long after, as I predicted, we got some interest, and I was able to negotiate a very impressive deal to get acquired.

I was happy and proud; I felt I had made it. In short time, however, everything unraveled. The company had structural issues (I had daily discussions about this with my co-founder well before the acquisition offer arrived, but I was a minor shareholder and had joined him, so it was impossible to put my foot down), and at the same time my engagement to my girlfriend quickly deteriorated. I was even able to raise some funding to try to save our company, but it turned out the investors weren’t honest with us. It was a nightmare, and I ended up hitting rock bottom without a company, and with a broken heart…

SF, Europe and other places

I was in a constant self-debate about what to do next and where to go. I was thinking of moving to SF but wasn’t sure I was ready for that. I started working with a cool company in SF, building various systems for them, consulting and travelling between Israel, the USA, and various other places, as I was working remotely.

Belgrade

On one of my travels I had to go to Stockholm to work with a designer. I happened to miss the direct flight out of Israel, and the only other flight out connected through Belgrade. I had a 9-hour overnight connection and remembered a crazy Serbian guy who had lived in Tel Aviv. We went out that night, and by the end I was captivated. Amazing nightlife and the incredible Balkan women, so stunning and friendly; exactly the cure a broken heart needs!

On my way back from Stockholm I booked two weeks in Belgrade to investigate the matter further. I found Balkan people (only Serbians at that stage) very nice, helpful, friendly, and intelligent. Being a geek, I decided I had to check out the local tech scene, so I booked meetings with various companies, startup accelerators, and whatnot.

Coming from SF and Tel Aviv, I had certain expectations of what tech companies look like. While visiting the various companies I noticed some did seem to have a fun work environment and offices, but most had developers stuffed into rooms like cattle. Many of the offices were gray and sad; I literally felt like this was the place souls go to die. 95% of the companies were doing outsourcing, and the tech community was bootstrapping itself. I was excited to have found an interesting opportunity!

I decided to take a brave step and move there. I flew back to Israel, and in late 2014 I gave back my flat, sold my car, packed all my possessions, and moved to Belgrade. I decided to officially move to Serbia, but to still visit Israel, the US, and several other places until I decided for good where I’d like to spend the next few years of my life.

In November 2014 I left Israel and landed in Belgrade. I was house hunting, connecting with people, and getting to know the scene. During my first week, a friend of mine organized a fun night out. She also invited the girl who was about to become her roommate, as a setup for me.
That night my friend’s future roommate and I really hit it off, and we have been together ever since. Dragana turned out to be a great listener, always there to support me as I come and go from Serbia and get obsessed with building products and technology discussions.

In 2015 I was building teams in Serbia, hiring people and flying all over, trying to decide whether to stay in Serbia, move back to Israel, or move to the USA. I was living in Belgrade, but I visited something like 10-12 countries. It was super difficult, and our relationship started to feel the toll.

I considered staying in Serbia longer, as we already had an apartment there and I did work with people in Belgrade, but the more business I did there, the more I realized how hard it is. While I found great people, the legislative restrictions and the government make it very difficult to do business; I just couldn’t live there for the next 5 years, as much as I wanted to.

So what do you do? You’ve found an amazing girl and you want to build a life with her. You also really like the Balkans, but cannot do business in the place you live. You look for alternatives.

Sofia

And that’s when Sofia came into the picture. I’d been to Sofia many times, had friends there, and knew the tech community was super active. Dragana was looking into exchange programs, and Sofia was close enough to Belgrade that she could continue coming back and forth. Sofia is also going through a transformation: the city is redeveloping, and I’m seeing many tech guys starting to build products, not just services. It’s very exciting to perhaps be part of such a community. I also knew Bulgaria was in the EU, which makes it a lot easier to do business, and on top of all that it has the best ski resorts in Eastern Europe (it’s now almost June and I’m at a ski resort; I’m an addict). So in late 2015 we decided we were going for it, and would move permanently to Sofia.

It took a lot of research on my end, and lots of paperwork, frustration, and nerves, but in early April 2016 our home was finally Sofia and nowhere else. So far I’m really impressed: Sofia has 1% unemployment, loads of very talented people, and you can feel the tech community growing. I’m very optimistic about the future in Sofia!

I would like to sign off this post with a big thank you to my girl. She is always there for me, listening to me talk about all sorts of ideas, work, my constant travels, and my self-obsessed workaholic nature (I tend to work 12-15 hours a day); at times I forget about her, yet she is there to take care of me and give me lots of love and support. Thank you, my love!

Finding and fixing bugs is not always easy, especially if someone else wrote the code!

I know that engineers in general have NIH syndrome, but I am one that doesn’t share that view. Technology is an enabler, meaning it’s not an end goal, it is there to provide a service (or at least that is how it is most of the time).

As such, we must sometimes make fixes to our code, or to other people’s code, and that requires debugging. I’ve seen many people use console.log/logger/printf – heck, sometimes they even suggested that I do it that way. But as much as I enjoy waterboarding myself, I’d much rather use a debugger whenever I can. Debugging a node.js project isn’t complex; it just requires a little bit of setup, after which you can debug a local app or even a remote production/staging/test environment.

The first step is to run node.js with the special debug flag and the optional port:

node --debug
node --debug=4455
node --debug-brk

If you’re using gulp/nodemon etc, be sure to include those flags in a separate task and/or pass the relevant params to your node app.
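As a sketch of such a task, assuming the gulp-nodemon plugin (treat this as an untested configuration fragment; the script name and port are placeholders to adjust for your project):

```javascript
// gulpfile.js -- a debug task that starts the app with the debug flag
const gulp = require('gulp');
const nodemon = require('gulp-nodemon');

gulp.task('debug', function(){
  nodemon({
    script: 'server.js',          // your app's entry point
    nodeArgs: ['--debug=4455']    // same flag as running node directly
  });
});
```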

Then you can launch your app, or do it via the task; your node.js app is now running and allows any debugger to connect to it.

You can use any node.js debugger you choose. I personally use PhpStorm/WebStorm. While it’s not a perfect product and has some issues, I’ve had very successful debugging sessions with it, and I’ll try to outline how to set that up.

First, install WebStorm or PhpStorm. Both IDEs are great and very similar, except PhpStorm also allows you to edit and work on PHP files, whereas WebStorm concentrates mainly on JS and web files.

After the install, launch the app and go to the plugin settings:

Go to File->Settings and in that screen click on the Plugins menu item.

Then click on the “Install JetBrains plugin…” button and, in the new window, either scroll down or search in the top search box for the NodeJS plugin.

Once the install is finished, you should have the NodeJS plugin installed and you can go ahead and open your project’s directory in the IDE. (File->Open Directory; obvious, I know, but still… 😉 )

In the last step we need to configure the remote config for our node project.

Click on Run -> Edit Configurations… Menu.

And Click on the + button and select Node.js Remote Debug.

Then in the main window just set up the server address and port (this can be used to debug a remote machine or a local machine),
and you’re all set to start debugging your server!

Then click ok, select the configuration from the top right-hand side menu and click on the little bug icon button:

At this stage you’re up and running. If you look at the bottom debug tab you should see you’re connected and then you can put a breakpoint anywhere in your code and solve any bug you come across like a hero (at least in theory! 🙂 ).

***** Important note *****

While PhpStorm/Webstorm is wonderful, I’ve had some issues with debugging performance. The issue relates to some settings in the software, so to make sure you don’t get frustrated waiting for the first breakpoint to hit, I suggest configuring PhpStorm/Webstorm as follows:

My recent experience is that many companies insist on having engineers on site. When they hear “remote” or “not in the office”, many people have a very negative perception: they either believe it’s cheap labor, or they believe people must come into the office each day to get good results. While I do understand the bad experiences many companies have had, this is not always the case; many are highly successful with distributed remote engineers, or even an entirely remote team. There are highly talented engineers all over the world, yet I see companies again and again insisting on hiring only from the local ecosystem. It’s true that certain skills exist only in Silicon Valley / Tel Aviv / NYC and other places where people have successfully built large companies; however, a large percentage of the work can still be done somewhere the talent is more loyal and costs less, without sacrificing the skill-set of the people. It’s very difficult and expensive to hire engineers in SF, NYC, or TLV, and with so many offers for talented engineers there, retention becomes just as hard as recruiting.

I’ve been highly successful at finding and retaining talent world-wide. I’ve also been working with companies for around 6 years, remotely. Either personally for my own start-up or providing development services for companies. I’d like to share my thoughts on what are the secrets to making such an environment flourish.

My experience with remote teams

Today my time is split between the US, Israel and Eastern Europe. I’ve been working for the past 6 years or so in and with remote environments and teams. I’ve used remote teams to build a complex password manager running on multiple web and mobile platforms, and in 4 years it has reached over 70,000 paying customers. I’ve also been successful at building products for US companies with teams in Eastern Europe and getting results using the latest front-end and back-end technologies.

Working in a remote team as an individual

When I first started out, I had doubts: how does this remote thing even work, if at all? While I’d heard of companies doing it, up until then I was used to waking up in the morning and going into an office. At the time I’d just started working with my new co-founder, whose company had sold over 3 million dollars of mobile software products and had worked with over 20 developers from all around the world. I was fascinated by this. Slowly but surely I saw the way he worked with them and why he was so successful in doing so. It actually took me a lot of effort to get him to start meeting regularly (even though we lived 2 blocks apart), and we ended up meeting once every 3-4 weeks in person. We worked night and day and would communicate via skype, email, and other methods. We built an amazing product together and got some great offers for partnerships and acquisitions.

Working with a mixture of remote and local teams

For the past 2 years I’ve been working with US-based companies, where most of my development work is done either by me, or by using teams of people in Eastern Europe & the US. I’ve built products and I know that there is a clear difference between a remote single contributor and a remote team. Remote teams are very similar to regular teams, except you might have other people in other countries as your co-developers, product managers, or product owners, and you must manage this process. There are many similarities to being in a remote team and being a remote single contributor. I am not going to go over the differences as I want to focus on the core elements of working with remote teams / single contributors and what is common to making any remote environment work.

The secrets to making remote work

Finding good engineers is hard, no doubt. However using good engineers remotely requires the remote team or remote lead person on that team to have additional skills in order to make it work.

Be Proactive & Driven – This is the single most important quality for any remote engineer / remote team manager. The reason is that when someone is sitting in the office, you can instantly see if he is not engaged, or stuck. You can just tap him on the shoulder and ask: what’s up, buddy? Is there anything I can do to help? What are you working on? In remote teams that is not possible, so you need to ensure the person on the other side, possibly in another time-zone, is proactive. He will get on calls at strange local times; he will email you that something isn’t working; he will flag that he finished his tasks and needs more work, or let you know that, despite the plan, it seems he is finishing early. He will be the type of person tapping himself on the shoulder, never requiring anyone to chase him. EVER! This type of person will make or break your remote / outsourced / not-in-the-office work environment.

Resourceful – Resourcefulness goes hand in hand with being proactive. When working in a remote team, you will often face integration issues, and integration issues are the ones that eat up a lot of time. The back-end REST API that is supposed to return X returns Y. Break. Your mobile app / front-end app cannot read / write the data and the work cannot continue; or perhaps it can? While the proactive person would raise the issue, a resourceful one would also find a creative way to continue his work. For example, I will often create mock data / a mock server when I can’t get the back-end to work. This can mean the difference between a 24-48 hour delay and zero downtime, or just 1-2 hours to fix a bug. A resourceful person will find an alternate path to continue his work, create a solution to a problem, or just move to another task. Resourcefulness is highly important for any engineer, but in remote teams it is vital, as it can be the difference between making the remote team work and concluding that remote teams do not work.

Understand Product – Finding a good engineer who also understands product is very difficult. When working remotely, however, this is not just nice to have; it is vital. Understanding product means thinking in terms of user experience, and of the easiest, most intuitive way to use the application. Many talented engineers can produce great code per requirements or spec, but do not think in terms of what the user needs. When this happens in house, the product lead can very quickly make a course adjustment: “Hey, I thought that would work, but on second thought let’s scratch that and move this button over here.” With remote teams these iterations take more time, so it’s important to have someone you trust to adjust the course himself; someone who understands what the “real requirements” are, or what the functional requirements are, and builds the right usability for the user. Even if it’s not perfect, the product person will have a much smaller adjustment to make. Understanding product is not simple, but once you find the right person who can do it, you’re setting yourself up for success with remote teams / engineers.

Result Oriented – Most people hate micromanagement, and while sometimes management does need to intervene, in a remote environment this becomes almost impossible. That is why in remote environments your engineer / lead must be result oriented. He is not focused on completing a feature, or getting his “workload” ticked off; he should be focused on making sure your business goals are achieved, and that his part plays its role in the global scheme of things. A result-oriented person will ask about your business deadlines, when things need to be done by, and why. That person is not just counting the hours worked, but making sure he is helping you get to where you need to be.

TimeZone Issues

I’ve worked with teams in many time-zones, and when I meet new customers they always raise that concern. I would like to use the end of this post to crush any time-zone concerns people have. Is having developers in different time-zones a challenge? Sure it is! Does it mean it won’t work? Not necessarily. If you’ve found a good engineer or engineers with the list of skills I’ve mentioned, you won’t suffer from time-zone issues. These types of people, with these skills, are leaders. They will work hours that overlap with yours, answer emails at 2am their time, and jump on calls at strange hours, because they are committed to your success. Furthermore, how often do you really need to talk to your engineer 8 hours a day? Most of the time you’d rather not, and if you are, you might be hurting your own performance at the same time…

I’m a big believer in remote teams, and when done right they are a wonderful asset. The right team / person can build you amazing software that works very well. It’s all a matter of understanding how to make it work, and what to look for. I hope this helps, and feel free to contact me if you have any questions about creating a successful remote software team.

Recently I set up an Ember project and needed authentication. There is an excellent library / CLI plug-in called ember-simple-auth. While the guides are great, the most recent versions of Ember (1.13 / 2) don’t play nicely with the latest ember-simple-auth.

Ember-Simple-Auth provides a special branch called jjAbrams (PR 602). While it is an awesome library, getting it to work can be somewhat tricky, as not everything is as documented and some tweaks are required here and there. I’m outlining what I did in the hope it will save time for many others, and help prevent other devs from banging their heads against their keyboards, posting issues on git repositories or IRC channels, or reading the simple-auth source code to understand why things don’t work the way they are supposed to (as I did). Especially if you’re running against a custom server, like I did.

Here are the steps to get it working.


First create an ember app using ember-cli

ember new app-name

Then follow the instructions on ember-simple-auth for getting this special build. It’s rather simple, but you still need to make sure you don’t have any of the ember-simple-auth / simple-auth packages in your package.json / bower.json, and also delete them from your node_modules / bower_components directories. Here is the pull request (make sure to read it, as it explains how to get set up): https://github.com/simplabs/ember-simple-auth/pull/602

Next add a login page, a login controller, and a protected page. (Notice I’m not using the LoginMixin found in many ember-simple-auth examples, as it’s stated to be deprecated; I’ve also chosen not to use the Unauthenticated mixin, just because I’d rather leave the login page always accessible.)

Next, create an authenticators directory and a custom.js in it (this authenticates login requests and restores the session on refresh).
Notice I use the same name everywhere (access_token), as whatever you resolve with when authenticating is stored in the session.
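As a rough, framework-free sketch of the shape such a custom.js takes (in a real app you would extend ember-simple-auth’s Base authenticator inside an Ember module; the endpoint and the postToServer helper here are placeholders I made up, apart from access_token, which the session round-trip relies on):

```javascript
// app/authenticators/custom.js -- shape only; in Ember this would be
// an `export default Base.extend({...})` from 'ember-simple-auth/authenticators/base'
const customAuthenticator = {
  // called on page refresh with whatever was stored in the session
  restore(data){
    return data && data.access_token
      ? Promise.resolve(data)                  // token present: session survives refresh
      : Promise.reject(new Error('no token')); // otherwise the session is invalidated
  },

  // called on login; the resolve value is stored in the session verbatim,
  // so the access_token key must match what restore() expects
  authenticate(credentials){
    return postToServer('/token', credentials) // placeholder for your server call
      .then(function(response){
        return { access_token: response.access_token };
      });
  }
};

// placeholder server call so the sketch is self-contained
function postToServer(url, body){
  return Promise.resolve({ access_token: 'fake-token-for-' + body.identification });
}
```

The point of using the same key in both hooks is exactly the note above: the object you resolve with at login is the object handed back to restore() on refresh.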