Runtime

Node.js is a cross-platform runtime environment and library for running JavaScript applications outside of the browser. It uses non-blocking I/O and asynchronous events to create a highly performant environment built on Google's V8 engine.

The engine consists of the memory heap (where memory allocations happen) and the call stack (where stack frames sit as code executes).

Asynchronous callbacks allow us to execute heavy code without blocking the UI and making the browser unresponsive.

You can execute any chunk of code asynchronously by using setTimeout(callback, milliseconds). setTimeout doesn't immediately put your callback on the callback queue. It sets up a timer, and when that timer expires, the callback is placed on the callback queue, where the event loop will eventually pick it up and execute it.

The event loop's job is to monitor the call stack and the callback queue (where asynchronous callbacks wait to run). Whenever the call stack is empty, it takes the first event from the callback queue and pushes it onto the call stack, which effectively runs it.
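A minimal demo of this ordering: even with a 0 ms delay, the callback waits in the callback queue until all synchronous code on the call stack has finished.

```javascript
// The timeout callback cannot run until the call stack is empty,
// even though its delay is 0 ms.
const order = [];

order.push('script start');
setTimeout(() => order.push('timeout callback'), 0);
order.push('script end');

// Synchronously, only the two pushes above have happened;
// 'timeout callback' is appended later, once the stack is clear.
console.log(order); // ['script start', 'script end']
```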


Promises are the solution to callback hell.

A promise is an object that may produce a single value some time in the future: either a resolved value, or a reason why it was rejected.

A promise may be in one of three possible states: fulfilled, rejected, or pending.

All promise instances get a then method, which allows you to react to the promise's eventual value. then method callbacks can also be chained.

Big O notation describes the performance or complexity of an algorithm.

O(1) – Constant time complexity – describes an algorithm (often a single statement) that will always execute in the same time (or space) regardless of the size of the input data set. An example is accessing an element of an array by index.

var arr = [1, 2, 3, 4, 5];
arr[2]; // => 3

O(N) – Linear time complexity – describes an algorithm (usually a loop) whose performance will grow linearly and in direct proportion to the size of the input data set. For example, if the array has 10 items, we have to print 10 times. If it has 1,000 items, we have to print 1,000 times.

// if we use a for loop to print out the values of the array
for (var i = 0; i < array.length; i++) {
  console.log(array[i]);
}

O(log N) – Logarithmic time complexity – describes an algorithm where you have a large set of data and halve the dataset at each iteration until you find the result you're looking for. An example of this is finding a word in a dictionary (binary search). Sorting a deck of cards (merge sort) would be O(N log N).

Other examples:

Example 1:
for (var i = 1; i < n; i = i * 2) {
  console.log(i);
}

Example 2:
for (var i = n; i >= 1; i = i / 2) {
  console.log(i);
}

O(N²) – Quadratic time complexity – represents an algorithm whose performance is directly proportional to the square of the size of the input data set. This is common with algorithms that involve nested iterations over the data set. Deeper nested iterations will result in O(N³), O(N⁴), etc. Examples include checking for duplicates in a deck of cards, bubble sort, selection sort, or insertion sort.
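The duplicate check mentioned above can be sketched as a nested loop that compares every pair of elements, which is what makes it quadratic:

```javascript
// O(N^2) duplicate check: for each element, scan every later element.
function hasDuplicates(arr) {
  for (let i = 0; i < arr.length; i++) {
    for (let j = i + 1; j < arr.length; j++) {
      if (arr[i] === arr[j]) return true;
    }
  }
  return false;
}

hasDuplicates([1, 2, 3, 2]); // => true
hasDuplicates([1, 2, 3]);    // => false
```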

O(2^N) – Exponential time complexity – denotes an algorithm whose work roughly doubles with each addition to the input data set. An example of an O(2^N) function is the naive recursive calculation of Fibonacci numbers. Another example is trying to break a password by testing every possible combination (assuming a numerical password of length N).
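The Fibonacci example looks like this in its naive recursive form; each call spawns two more calls, so the call tree roughly doubles in size as n grows:

```javascript
// Naive recursive Fibonacci: O(2^N) time, because fib(n) recomputes
// the same subproblems over and over in two recursive branches.
function fib(n) {
  if (n < 2) return n;
  return fib(n - 1) + fib(n - 2);
}

fib(10); // => 55
```

Memoizing the results (caching fib(k) after computing it once) brings this down to O(N), which is why this function is a classic example of the cost of redundant recursion.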

Amortized time:

If you do an operation, say, a million times, you don't really care about the worst case or the best case of that operation – what you care about is how much time is taken in total when you repeat the operation a million times.

So it doesn't matter if the operation is very slow once in a while, as long as "once in a while" is rare enough for the slowness to be diluted away (the cost is "amortized"). Essentially, amortized time means "average time taken per operation, if you do many operations".
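The classic example is appending to a dynamic array. The sketch below is illustrative (it is not how JavaScript arrays are specified to work internally): the backing store doubles when full, so most pushes are O(1) and the occasional O(N) copy is spread across the N cheap pushes that preceded it, giving O(1) amortized per push.

```javascript
// Illustrative dynamic array with capacity doubling.
class DynamicArray {
  constructor() {
    this.capacity = 1;
    this.length = 0;
    this.store = new Array(this.capacity);
  }
  push(value) {
    if (this.length === this.capacity) {
      // Rare, expensive step: O(N) copy into a store twice the size.
      this.capacity *= 2;
      const bigger = new Array(this.capacity);
      for (let i = 0; i < this.length; i++) bigger[i] = this.store[i];
      this.store = bigger;
    }
    // Common case: O(1) write.
    this.store[this.length++] = value;
  }
}

const a = new DynamicArray();
for (let i = 0; i < 10; i++) a.push(i);
// Capacity grew 1 -> 2 -> 4 -> 8 -> 16: only 4 resizes across 10 pushes.
```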

Sometimes we want to optimize for using less memory instead of (or in addition to) using less time. Talking about memory cost (or "space complexity") is very similar to talking about time cost, and we use the same Big O notation.
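To make the distinction concrete, here are two functions that both sum the numbers 1 through n in O(N) time, but with different space costs (the function names are just illustrative):

```javascript
// O(1) space: a single accumulator, no matter how large n is.
function sumIterative(n) {
  let total = 0;
  for (let i = 1; i <= n; i++) total += i;
  return total;
}

// O(N) space: builds an intermediate array of n numbers before summing.
function sumWithArray(n) {
  const nums = [];
  for (let i = 1; i <= n; i++) nums.push(i);
  return nums.reduce((acc, x) => acc + x, 0);
}

sumIterative(100); // => 5050
```

Both give the same answer, but the second allocates memory proportional to the input size, which is exactly what space complexity measures.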