PM2 is a process manager for NodeJS. PM2 will restart your node processes if they crash and keep track of the number of restarts. You can also connect your processes to keymetrics.io so you can monitor remotely.

Installing PM2

Install PM2 globally via NPM to use it.

$ npm install -g pm2

http://www.ronniesan.com/using-pm2-to-manage-your-nodejs-processes/
Wed, 13 Apr 2016 00:14:51 GMT

The App name column identifies the process that is running. By default, it will use the name of the script you started. You can customize it by doing the following:

$ pm2 start /sites/my-site/server.js --name my-site

If you make a change to the script, you can restart it using the app name or the id, like so:

$ pm2 restart my-site
$ pm2 restart 0

Or you can watch the current directory and its sub-directories for changes...

$ cd /sites/my-site
$ pm2 start server.js --name=my-site --watch

Then whenever a file changes, the process will restart.

To stop watching, you need to explicitly stop the process with the watch argument.

$ pm2 stop my-site --watch

Restarting processes on server reboot is really easy, too:

$ pm2 startup
$ pm2 save

For full documentation on PM2 and to learn about all the other cool stuff you can do with it, visit the keymetrics site at http://pm2.keymetrics.io/.

http://www.ronniesan.com/using-lru-cache-in-nodejs/
Fri, 29 Jan 2016 02:53:34 GMT

First of all, what is a cache? A cache is a place where frequently-accessed data is stored so that it can be quickly retrieved with little overhead. A cache is useful when you have a database-driven website that gets high amounts of traffic, as it can help reduce the load on your database server by bypassing it completely to retrieve certain data.

The way caching works is very simple. When a piece of data is needed, your code first checks to see if it exists in your cache. If the data you need does not exist in the cache, it will then get the data from the database. A copy of that data is then placed in the cache so it will be available there the next time you ask for that particular piece of data. That copy of the data will sometimes have a TTL (time to live) expiration. This ensures that data in the cache does not get outdated as updates are made to the original record in the database.

A cache is usually stored in memory so that it can be accessed quickly and with less technical overhead. You can pull data from RAM much faster than from a database, but the data will not be persistent (will not be saved if the process is stopped). This makes sense for caching since we want to access data faster, but there is a saved copy of that data in the database.

When you hear the term "stored in memory" it means the value is stored in the server's RAM. Storing something in the server's RAM can be as simple as setting a variable equal to a value. Since a running NodeJS process/script stores variable data in RAM, you are storing that data in memory.

LRU cache stands for "least recently used" cache. This puts a limit on the number of items you store in the cache so that your server does not run out of memory as more and more items are stored. As the limit is reached, the LRU cache will start to remove the "least recently used" items in the cache.

Let's learn how to use the NodeJS LRU cache module in an app. For the purposes of this tutorial, I will skip the details of accessing the data from the database.

Installing The LRU Cache Node Module

To begin, we will install the LRU cache module via NPM. Navigate to your app's root directory (the same one with the package.json file) in console and type the following command:

$ npm install --save lru-cache

This will install the lru-cache module into the node_modules folder for your app.

Importing the Module

Next, we will import the module into our application and create our cache, setting a few options...
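Creating the cache might look something like the sketch below (this uses the lru-cache API of that era; option names have changed in newer major versions of the module):

```javascript
// A sketch of creating an LRU cache with the lru-cache module
var LRU = require('lru-cache');

var cache = LRU({
  max: 100,               // hold at most 100 items at any given time
  maxAge: 1000 * 60 * 60  // each item expires after 1 hour (value is in milliseconds)
});
```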

We only want a maximum of 100 items to be in our cache at any given time and each item can only stay in the cache for 1 hour.

Storing and Retrieving Data

Now we can start using our cache to store and retrieve data. It makes sense to create a reusable library when we set up our cache so we don't have to write the logic every time we access data, but for the purposes of demonstration, we won't get into that detail here.

function getData(table, id) {
  // Create a key to identify the data in the cache
  var cache_key = table + ':' + id;
  // Check if the data is in the cache
  var data = cache.get(cache_key);
  // If the key was not in the cache, data is undefined
  if (!data) {
    // Get the data from the database
    data = getDataFromDatabase(table, id);
    // Store the data in the cache
    cache.set(cache_key, data);
  }
  return data;
}
// Retrieve a piece of data
var user = getData('users', '12345');

It's as simple as that. With this little bit of code, we have effectively reduced the amount of strain on our database server.

http://www.ronniesan.com/a-better-way-to-reference-dependencies-in-nodejs/
Fri, 11 Dec 2015 07:45:14 GMT

Let's say you're writing a NodeJS application and you've created a library or module that you want to use in different places throughout your app. You don't want to register it with NPM because it isn't useful for anyone else since it's specific to your app. You also don't want to throw it in your node_modules folder because you need to track changes in git and you have the node_modules folder included in your .gitignore file.

What most people do is require that code using a relative path:

var my_module = require('../../lib/my_module.js');

The path changes as you start requiring the module in different sections of your application. Then you move a file around and realize you have missing dependencies because you forgot to update the paths.

There's an easy fix to this problem.

In your package.json file, you can add a property in the scripts section called postinstall that runs a bit of code after npm install is run. Add the following postinstall code to create a symlink to a folder in the root of your project (in the code below I use a directory named lib).
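One way this can be done (a sketch, not the post's original code; the ln command assumes a Unix-like system) is to symlink the lib folder into node_modules:

```json
{
  "scripts": {
    "postinstall": "ln -nsf ../lib ./node_modules/lib"
  }
}
```

The ../lib target is resolved relative to the node_modules directory, so the link points back to the lib folder in your project root.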

When I clone my project I just run npm install and the symlink will be created for me.

Now I can require all of my custom dependencies relative to the root folder of my project no matter what directory I'm in. If I move a file using the dependency, the path to my dependency will always resolve correctly since NodeJS looks in the node_modules folder at the root of your project first.

var my_module = require('lib/my_module.js');

http://www.ronniesan.com/dont-forget-your-return-statements-in-nodejs/
Fri, 04 Dec 2015 14:40:42 GMT

Because NodeJS is non-blocking, you might be running code that you never intended to run. This is why return statements are important in your code. It's a simple mistake that many developers make when transitioning to NodeJS from a different language.
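An endpoint like the one described might look like the sketch below. The handler shape and the getPerson helper are illustrative stand-ins (no web framework is assumed), not the post's original code:

```javascript
// Illustrative stand-in: getPerson simulates a database lookup
function getPerson(id, callback) {
  if (id !== '12345') {
    return callback(new Error('person not found'));
  }
  callback(null, { id: id, name: 'Ronnie' });
}

// res.json here stands in for sending an HTTP response
function handler(req, res) {
  getPerson(req.params.id, function (err, person) {
    if (err) {
      res.json({ error: err.message });
      // Missing "return" here, so execution falls through...
    }
    res.json(person); // ...and this line still runs after an error
  });
}
```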

In the code above, we have an endpoint that grabs a person from a database. If an error occurs in our getPerson call, we return some JSON with the error message.

The problem with this code is that the script does not stop processing when we return the error message. Because we did not use a return statement in our code, the code below the if block will still execute.

Adding that return statement will stop the rest of the script from executing after returning a response.
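A fixed version might look like this (again a sketch; getPerson is an illustrative stand-in for the database call):

```javascript
// Illustrative stand-in for the database lookup
function getPerson(id, callback) {
  if (id !== '12345') {
    return callback(new Error('person not found'));
  }
  callback(null, { id: id, name: 'Ronnie' });
}

function handler(req, res) {
  getPerson(req.params.id, function (err, person) {
    if (err) {
      // The return stops the callback here when there is an error
      return res.json({ error: err.message });
    }
    res.json(person); // only runs when there is no error
  });
}
```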

http://www.ronniesan.com/using-json-parse-to-coerce-values/
Mon, 30 Nov 2015 23:26:49 GMT

The problem with posting values to an endpoint directly from an online form is that they are sent as strings. This means you have to coerce the values to the proper type after they are submitted. An easy way to do this is by using a try...catch block and JSON.parse on the submitted values.
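A coercion helper along these lines (a sketch; the function name is illustrative) parses each submitted value and falls back to the original string when parsing fails:

```javascript
// Coerce a posted string value to its proper type where possible
function coerce(value) {
  try {
    // "123" becomes the number 123, "true" becomes the boolean true
    return JSON.parse(value);
  } catch (e) {
    // Anything that isn't valid JSON (like "02134") stays a string
    return value;
  }
}
```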

If a value isn't valid JSON, the catch block returns it unchanged as a string. This is important for things like a zipcode that might begin with a 0, which would otherwise lose its leading zero if coerced to a number.

http://www.ronniesan.com/not-all-nodejs-code-is-non-blocking/
Sun, 29 Nov 2015 07:36:57 GMT

Even though NodeJS is asynchronous in nature, you have to understand that the code you write is not always non-blocking. Any function that is CPU-intensive or that waits for a return value before proceeding will block the current event loop. While the event loop is blocked, it cannot accept any new connections or process any ongoing requests.

A common piece of code that is blocking is the JSON.parse method. While most parsed strings are parsed relatively quickly, a large JSON string (several MB in size) could take up to a couple seconds to parse. At a small scale, this isn't such a big deal, but at a scale of several requests per second, the system can begin getting bogged down.

You can tell your code is blocking when it waits for a return value before proceeding. Take the code below, for instance:
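A sketch of such a script (the payload size is illustrative): build a JSON string several MB in size, then parse it synchronously before logging Done.

```javascript
// Build a large array and serialize it into a JSON string several MB in size
var items = [];
for (var i = 0; i < 1000000; i++) {
  items.push({ id: i, name: 'item-' + i });
}
var json = JSON.stringify(items);

// JSON.parse is synchronous: the event loop is blocked until it finishes
var parsed = JSON.parse(json);

console.log('Done.');
```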

If you run the script above, you'll notice it takes a couple seconds for the word Done. to be logged to the console. This is because it's waiting for the return value before continuing on.

Sometimes using code that blocks the event loop is unavoidable. The best way to handle this kind of code is to spin up a child process or use a worker script.

I'm also a big fan of lodash, but it might be a good idea to use the async library instead for complex loops or array/collection mapping.

http://www.ronniesan.com/viewing-a-dynamically-generated-reql-query/
Sun, 15 Nov 2015 14:40:00 GMT

RethinkDB is perfect for building dynamic queries based off any number of special conditions or input. The difficult part is trying to debug a dynamic query when something goes wrong. An easy way to see your final query is to convert the entire thing to a string and log it to the console before you run it.

Below is a typical example of how a query might be dynamically built in an application. Towards the bottom, you'll see that a simple .toString() appended to the end of the query when console logging will output the dynamically built query to the console.
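A dynamically built query along those lines might look like this sketch (the table name, filter conditions, and opts input are all illustrative, and the rethinkdb driver is assumed to be installed):

```javascript
var r = require('rethinkdb');

// Build the query up from whatever conditions apply to this request
function buildQuery(opts) {
  var query = r.table('users');
  if (opts.active !== undefined) {
    query = query.filter({ active: opts.active });
  }
  if (opts.limit) {
    query = query.limit(opts.limit);
  }
  return query;
}

var query = buildQuery({ active: true, limit: 10 });

// .toString() prints the final ReQL query instead of running it
console.log(query.toString());
```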

This will output your query in its final state before you run it. You can copy and paste it from the console into Sublime Text (or whatever editor you are using) and prettify it there. All the variables will have been replaced with their values, so it's much easier to debug now.