TL;DR: In this article, we are going to learn what tools we should take advantage of when developing NPM packages. We will start from scratch: we will create a GitHub repository to host our package, and then we will look into interesting and important topics. For example, we will talk about IDEs, configure ESLint in our project, publish the package on NPM, and even integrate a continuous integration tool. The code developed throughout this article can be found in this GitHub repository.

What NPM Package Will We Build

After following all the steps shown in this article, we will have our own package published in the official NPM registry. The features that this package will support (and how to build them) are not the focus of this article. There are plenty of great tutorials out there that can teach us how to develop in Node.js. The focus here is on the processes and the tools that we can use to build great packages.

Nevertheless, to give a heads up, we are going to build and publish a NPM package that masks strings of raw digits as US phone numbers. For example, if we pass 1234567890 to the package, it will return (123) 456-7890.

Throughout the article, we will cover topics like choosing an IDE, linting, automated tests and coverage, publishing on NPM, and continuous integration.

Note that this article won't lecture about Git. If you are not familiar with Git, you will still be able to follow this article. However, every developer should learn how to properly use Git and GitHub. So, if needed, stop reading and go learn Git (and install it too, of course :D). You can come back later.

Creating the GitHub Repository

Great, we already decided where we will keep our source code safe. It's time to create the repository to start working on it. If we head to the Create a new repository web page on GitHub, we will see a form that asks for three things: repository name, description, and visibility. As we are building a module that handles masks, let's answer these questions as follows:

Repository name: masks-js

Description: A NPM package that exports functions to mask values.

Visibility: Public

After that, GitHub gives us options to initialize the repository with a README file, to add a .gitignore file, and to add a license to our module. We will use all three options as follows:

Initialize this repository with a README: checked. Less work for us later.

Add .gitignore: Node. This preloads the file with entries (like node_modules/) that Node.js projects usually want Git to ignore.

Add a license: Again, less work later. Let's set this combo box to MIT License.

Done! We can hit the Create repository button to finish the process.

Cloning the GitHub Repository

After creating the repository (which should be instantaneous), GitHub will redirect us to our repository's webpage. There, we can find a button called Clone or download that gives us the URL that we will need. Let's copy this URL and open a terminal. In this terminal, let's choose an appropriate directory to host the root directory of our project (e.g. ~/git), and then let's clone the repository.

The code snippet below shows the commands that have to be used to clone the repository:
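Assuming we are cloning over HTTPS (with `<OUR-GITHUB-USER>` standing in for our GitHub username), the commands look like this:

```bash
# choose a directory to hold the project
mkdir -p ~/git
cd ~/git

# clone the repository and move into its root
git clone https://github.com/<OUR-GITHUB-USER>/masks-js.git
cd masks-js
```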

The last command will put our terminal in the project root. There, if we list the existing content, we will see four items:

A directory called .git that Git uses to keep track of our code's version history locally. Most probably, we will never touch this directory and its content manually.

A file called .gitignore where we keep entries that identify items that we do not want Git to version. For example, in the near future, we will make Git ignore files generated by our IDE.

A file called LICENSE. We don't have to touch this file; it contains predefined content applying the MIT License to our code/package.

A file called README.md that contains just the name of our package (masks-js) and its description.

Ignoring Files on Git and NPM

During the next sections, we will create some artifacts that we don't want to send to GitHub or to NPM. For example, our IDE will add some configuration files to our project root that we want Git to ignore. Another thing that we want Git to ignore is the ./lib directory that we will create when publishing our package. This directory will only be shared on the NPM package itself (i.e. for developers downloading the package through NPM). Therefore, let's update .gitignore as follows:

# leave everything else untouched
.idea/
.vscode/
lib/

As we don't want NPM to ignore ./lib, let's create the .npmignore file. In this file we will add the following configuration:

.nyc_output/
coverage/
node_modules/
.idea/
.vscode/

This will make NPM ignore these five folders, but not ./lib.

Note that we are just removing folders that are not important to developers that want to use our package.

IDEs (Integrated Development Environments)

Developing good software, arguably, starts with a good IDE. Among other things, IDEs can help us refactor our code, be more productive (mainly if we know their shortcuts), and debug our code. They also usually point out possible problems before we compile and/or run our code. Therefore, this is a topic that cannot be put aside.

In the Node.js/NPM environment, there is a good number of IDEs available. A few of them are paid and a lot are free. However, in this author's opinion, there are only two IDEs that are really relevant: WebStorm and Visual Studio Code.

WebStorm: This is a full-fledged IDE that provides great tools and has great support for everything related to JavaScript (e.g. TypeScript, HTML, CSS, SCSS, Angular, Git, etc.). If it does not support some feature by default, it probably does so through plugins. The biggest disadvantage of this IDE is that it's paid. However, WebStorm is so good at what it does that it's worth the price.

Visual Studio Code: This is another full-fledged IDE. It also comes with great support for Node.js and related technologies, just like WebStorm does. This IDE, in contrast to WebStorm, is free and open source. If you are wondering about the differences between them, there are a few resources out there that compare both. For example, there is this article on Medium and this discussion on Reddit.

Other options, although famous, cannot really be considered IDEs. That is, they can be considered IDEs only if they are correctly configured with a bunch of plugins. However, why waste time on this kind of configuration when we can choose a good IDE that is ready to help us? If you are still interested in seeing what other "IDEs" are available, there are resources out there that show more options and their differences.

What is important in this section is that we understand that we do need an IDE and choose one. This will help us a lot during the development lifecycle of our package.

NPM Package Development

Now that we have chosen our IDE, let's open our project and start configuring it. Throughout the next sections, we are going to create our project structure and configure tools that will help us produce high-quality code.

NPM Init

First things first. As our goal is to create and publish a NPM package, we need to initialize our project as one. Luckily, this process is straightforward. NPM, through its CLI (Command Line Interface), provides two great ways to configure a project as a NPM package. The first one, triggered by npm init, will ask a bunch of questions and produce the package.json file for us. The second one, triggered by npm init -y, will not ask any question and produce the package.json file with default values.

We will stick with the second option, npm init -y, to get our file as fast as possible. Then, we will edit the package.json content manually to look like this:
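A sketch of what the edited file might look like, reconstructed from the properties the article mentions (fields like author and keywords are ours to fill):

```json
{
  "name": "masks-js",
  "version": "0.0.1",
  "description": "A NPM package that exports functions to mask values.",
  "main": "lib/index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "repository": {
    "type": "git",
    "url": "git+https://github.com/brunokrebs/masks-js.git"
  },
  "author": "",
  "license": "MIT",
  "bugs": {
    "url": "https://github.com/brunokrebs/masks-js/issues"
  },
  "homepage": "https://github.com/brunokrebs/masks-js#readme"
}
```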

Important: the JSON snippet above contains three URLs that point to https://github.com/brunokrebs/masks-js. We need to replace them with the URL of our repository on GitHub.

Two properties in the file above may catch our attention. The main property now points to lib/index.js and the version property labels our code as being on version 0.0.1. Let's not worry about them now; we will discuss these properties in the following sections.

Semantic Versioning

In this section, we are not going to change anything in our project. The focus here is to talk about how to label new releases of our package. In the NPM and Node.js landscape, the most used strategy is by far Semantic Versioning. What makes this strategy so special is that it has a well-defined schema that makes it easy to identify what versions are interoperable.

Semantic Versioning, also known as SemVer, uses the following schema: MAJOR.MINOR.PATCH. As we can see, any version is divided into three parts:

MAJOR: A number that we increment when we make incompatible API changes.

MINOR: A number that we increment when we add features in a backwards-compatible manner.

PATCH: A number that we increment when we make backwards-compatible bug fixes.

That is, if we have a problem with our code and fix it simply by changing an if statement, we have to increment the PATCH part: 1.0.0 => 1.0.1. However, if we need to add a new function (without changing anything else) to handle this new scenario, then we increment the MINOR part: 1.0.0 => 1.1.0. Lastly, if the bug is so big that it requires a whole lot of refactoring and API changes, then we increment the MAJOR part: 1.0.0 => 2.0.0.

EditorConfig

EditorConfig is a small configuration file that we put in the project root to define how IDEs and text editors must format our files. Many IDEs support EditorConfig out of the box (including WebStorm and Visual Studio Code). The ones that don't usually have a plugin that can be installed.

At the time of writing, EditorConfig supports only a small (but useful) set of properties. We will use most of them, but two are worth mentioning:

indent_style: Through this property, we define if we want our code to be indented with tabs or spaces.

charset: We use this property to state what charset (e.g. UTF-8) we want our files encoded into.

To set up EditorConfig in our project, we need to create a file called .editorconfig in the project root. In it, we define how we want IDEs to handle our files:
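A sketch of a .editorconfig that matches the description below (specific values like indent_size are assumptions):

```ini
# top-most EditorConfig file
root = true

[*]
charset = utf-8
indent_style = space
indent_size = 2
end_of_line = lf
insert_final_newline = true
trim_trailing_whitespace = true
```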

Note: EditorConfig can handle multiple configuration blocks. In the example above, we added a single block defining that all files ([*]) must be encoded in UTF-8, indented with spaces, and so on. However, we could have defined that we wanted XML files ([*.xml]) to be indented with tabs, for example.

Although subtle, EditorConfig is an important step toward producing high-quality code. More often than not, more than one developer will work on a piece of software, be it a NPM package or anything else. Having EditorConfig in place will minimize the chances of a developer messing up our code style and the encoding of our files.

ES6+: Developing with Modern JavaScript

JavaScript, as everybody knows, has gained mass adoption over the last few years. Node.js was primarily responsible for this adoption, and it brought many backend developers with it. This triggered a huge evolution of the language. This evolution, although great, is not fully supported by every platform. There are many JavaScript engines (and many different versions of these engines) in the market ready to run code, but most of them do not support the latest JavaScript features.

This rich environment created one big challenge for the community: how do we support different engines and their versions while using JavaScript's most recent features? One possible answer to this question is Babel. Babel, as stated by its official website, is a JavaScript compiler that allows developers to use next generation JavaScript today.

Note that Babel is one alternative. There are others, like TypeScript, for example.

Using Babel is straightforward. We just have to install this library as a development dependency and create a file called .babelrc to hold its configuration:
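Assuming the Babel 6 packages that were current when this article was written (newer Babel releases live under the @babel/ scope instead), we would install them with npm i -D babel-cli babel-preset-env and fill .babelrc with:

```json
{
  "presets": ["env"]
}
```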

With this file in place, we can configure a NPM script to make Babel convert modern JavaScript into code supported by most environments. To do that, let's open the ./package.json file and add to it a script called build:

{
  ...
  "scripts": {
    "build": "babel ./src -d ./lib",
    ...
  },
  ...
}

When we issue npm run build, Babel will take the source code found in the ./src directory (which can be written in modern JavaScript) and transform it into ECMAScript 5 (the most widely supported version of JavaScript). To see this in action, let's create the aforementioned ./src directory in the project root and add a script called index.js to it. To this script, let's add the following code:
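The exact sample is not critical; a minimal stand-in that uses the ES6+ features the next paragraph mentions (const and a template literal) could be:

```javascript
// ./src/index.js — a hypothetical warm-up script using ES6+ features
const sayHi = (name) => `Hi, ${name}`;

console.log(sayHi('John')); // → Hi, John
```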

Although short, this script contains code that is not supported by ECMAScript 5. For example, there is no const in this version, nor does it accept Hi, ${name} as a template string. Trying to run this code on an old engine would result in an error. Therefore, let's use Babel to compile it:

npm run build

After asking NPM to run the build script, we will be able to see that Babel created the ./lib directory with index.js in it. Instead of our code above, this script contains the following:
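Babel's exact output varies with its version and presets, but the compiled ./lib/index.js looks roughly like this:

```javascript
'use strict';

// ES5-compatible version of the ES6+ script, as generated by Babel:
// const and the template literal are gone
var sayHi = function sayHi(name) {
  return 'Hi, ' + name;
};

console.log(sayHi('John'));
```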

Linting NPM Packages

Another important tool to have around when developing software is a linting tool. Linting is the process of statically analyzing code for common errors. Linting tools, therefore, are libraries that specialize in this task. In the JavaScript world, there are at least three popular choices: ESLint, JSHint, and JSLint. We can use any of these three libraries to lint our JavaScript code, but we have to choose one.

There are many strategies that we can follow to decide which tool we should use: from a simple random decision to a decision based on a thorough analysis. However, to speed things up, let's take advantage of a fast (but still good) strategy: let's base our decision on data. The following list shows how many times each package was downloaded from NPM in Nov/2017, how many stars they have on GitHub, and what their search volumes are in the US:

ESLint was downloaded 10 million times from NPM, has 9.6 thousand stars on GitHub, and is searched around 1300 times per month in the US.

JSLint was downloaded 94 thousand times from NPM, has 7.5 thousand stars on GitHub, and is searched around 750 times per month in the US.

JSHint was downloaded 2 million times from NPM, has 3 thousand stars on GitHub, and is searched around 750 times per month in the US.

Following the strategy of basing our decision on data results, without a doubt, in choosing ESLint as the winner. The numbers don't lie: ESLint is the most popular linting tool in the JavaScript landscape. So let's configure it in our project.

Installing and configuring ESLint is easy. We have to instruct NPM to install it for us, then we can use the --init option provided by ESLint to generate a configuration file:
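Assuming we install ESLint locally (as a development dependency), this boils down to two commands:

```bash
npm i -D eslint
./node_modules/.bin/eslint --init
```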

The last command will trigger a series of questions. Let's answer them as follows:

How would you like to configure ESLint? Use a popular style guide

Which style guide do you want to follow? Airbnb

Do you use React? No

What format do you want your config file to be in? JSON

This will generate a small file called .eslintrc.json with the following content:

{
  "extends": "airbnb-base"
}

What is nice about ESLint is that it enables us to adhere to popular style guides (in this case, the Airbnb JavaScript Style Guide). There are other popular styles available to JavaScript developers, and we could even create our own. However, to play it safe, we will stick to an existing and popular choice.

Great, it sounds good to have a tool that helps us avoid common mistakes and keeps our code style consistent, but how do we use it? It's simple: we configure it in our build process and we make our IDE aware of it. This way, we get alerts while using the IDE to develop, and we guarantee that no developer, unaware of ESLint, generates a new release with inconsistencies.

To add ESLint to our build process, we can create a new script that executes ESLint and make it run in the build script:
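One way to wire this up (the script names and the lint target are our choice) is:

```json
{
  ...
  "scripts": {
    "build": "npm run lint && babel ./src -d ./lib",
    "lint": "eslint ./src"
  },
  ...
}
```

With this in place, npm run build refuses to produce ./lib if the linter reports problems.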

Automated Tests

One of the most important topics in software development is testing. Developing high-quality code without automated tests is impossible. That is, we could write code that executes flawlessly without writing a single line of automated tests. However, this code would still not be considered as having high standards.

Why? Simple. Imagine a situation where we wrote some code that contains no bugs. One day, another developer decides that it's time to improve this code by adding some nice new feature. This feature, however, needs to reuse some pre-existing code and change it a little. How, without automated tests, is this developer supposed to test the new version? Manual testing is an alternative, but an arduous and error-prone one. That's why we invented automated tests.

The goal of our NPM package is to return a masked value based on an inputted string. This kind of package does not have external dependencies (like a RESTful API), nor will it be rendered in an interface (like a web browser). Therefore, writing only unit tests to guarantee that our functions do what they are supposed to do will be enough.

Cool, we now know what type of tests we will write. What is still undecided is what library we will use to write these tests. Since the data strategy is doing well, let's use it again. After a little research on Google, we find out that there are three great candidates:

In this case, the numbers were pretty similar. But Mocha, with more stars on GitHub and around three times more downloads on NPM during 2017, looks like the winner. We will probably be supported by a great community and have access to a lot of resources if we choose Mocha. So let's configure it in our project.

First, we need to install Mocha as a development dependency:

npm i -D mocha

Then, we need to replace the test script in our package.json file with the following one:
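A sketch of the replacement (the --require flag makes Mocha pipe test files through Babel first, which is what lets us use ES6+ syntax in our tests; with the Babel 6 packages assumed earlier this is babel-core/register, while newer Babel versions use @babel/register):

```json
{
  ...
  "scripts": {
    ...
    "test": "mocha --require babel-core/register"
  },
  ...
}
```

This assumes our tests live in Mocha's default ./test directory (e.g. ./test/index.js).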

If we issue npm test in the project root, we will see that Mocha manages to run our tests properly, even though we use modern syntax like import and arrow functions (() => {}).

If we are using a good IDE, we will probably be warned that there are no describe nor it functions available in the ./test/index.js file. This happens because ESLint is not aware of these functions. To make ESLint recognize Mocha's functions, we need to make a small change to the .eslintrc.json file. We need to add a new property called env and add mocha to it:
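After the change, the file looks like this:

```json
{
  "extends": "airbnb-base",
  "env": {
    "mocha": true
  }
}
```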

Coding the NPM Package

Hurray! We finally got to what matters: the code. We can create NPM packages without most of the tools shown in this article, but code is indispensable. No code, no NPM package. Although code is so important, it's not the focus of this article. So, to keep things short and easy to grasp, let's create just a very small prototype.

We will create and export only one function that returns a masked US phone number. Even for a specific and precise functionality like this, there are many scenarios to cover. But again, we will keep our focus on the tools and techniques we can use to produce high-quality code, not on the coding and testing tasks themselves.

Enough said, let's work. First, let's replace the content of the ./src/index.js file with the following:
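The original implementation is not reproduced here, so the following is a hypothetical sketch (the function name and the edge-case behavior are assumptions) of a function that turns 1234567890 into (123) 456-7890:

```javascript
// ./src/index.js — hypothetical sketch of the masking function
const maskUSPhone = (value) => {
  // guard against missing input
  if (value === null || value === undefined) return null;
  // keep only digits, then rebuild the value as (AAA) BBB-CCCC
  const digits = String(value).replace(/\D/g, '');
  return `(${digits.slice(0, 3)}) ${digits.slice(3, 6)}-${digits.slice(6, 10)}`;
};

// in the real file this would be exported with: export default maskUSPhone;
console.log(maskUSPhone('1234567890')); // → (123) 456-7890
```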

Test Coverage

Feels good to have our code in place with some tests to prove its functionality, but how confident are we of our code and our tests? Are we sure that our tests are covering all the scenarios that we thought about? It's hard to affirm that even in a small package like ours. So, what can we do? The answer is simple, we can use a test coverage tool to see how much of our code we are covering with tests.

Test samples, like those shown in the previous section, exist to help us prove that our code handles all the scenarios that we thought about. Test coverage tools help the other way around: they show whether we have enough test samples to cover all the scenarios that came to our mind when typing the code. Ok, we are convinced that we can take advantage of a test coverage tool, but which one?
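The article settles on Istanbul, used through its command-line client, nyc. A sketch of the setup: install it with npm i -D nyc, then prefix the existing test script with nyc (the Mocha flags shown are assumptions based on the Babel setup above):

```json
{
  ...
  "scripts": {
    ...
    "test": "nyc mocha --require babel-core/register"
  },
  ...
}
```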

Running npm test now will make Istanbul analyze how our tests are covering our source code.

Cool, integrating Istanbul into our project was easy. But can Istanbul do more than just saying that we are covering X percent of our code? Sure! Istanbul can show which lines are covered and which lines are not. To get this information, we are going to configure a reporter called lcov on Istanbul. This reporter will generate test data in two formats: one that is machine readable (the lcov format), and one that is human readable (HTML in this case).

To configure lcov on Istanbul, we can simply add the following property to our package.json file:

{
  ...
  "nyc": {
    "reporter": [
      "lcov",
      "text"
    ]
  }
}

Note that we configured both lcov and text because we still want Istanbul to keep showing that nice summary that we saw before.

Running npm test now will generate, besides that colorful summary on the terminal, a directory called coverage in the project root. If we inspect this directory, we will see that it contains two things: an lcov.info file with some characters inside that look meaningless (they actually show what lines were executed and how many times), and another directory called lcov-report with an index.html file inside. This is where we will get more data about which lines our tests are covering and which lines are being ignored.

To see the report contained by the lcov-report directory in a browser, let's use a tool like http-server. In our project root, we can use it as follows:
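For example, assuming a recent NPM version that ships with npx:

```bash
npx http-server coverage/lcov-report
```

Then we can open the URL it prints (usually http://localhost:8080) in a browser.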

Publishing the NPM Package

After installing and checking our code coverage with Istanbul, we figure out that we forgot to cover cases where no value (null or undefined) is passed to our function. Let's fix this by adding new test samples to the ./test/index.js file:

If we ask Istanbul now (npm test), we will see that we managed to add enough scenarios to cover all our source lines of code. This is not proof that our package contains no bugs, but it's enough to make us confident to publish its initial version. So let's do it.

Publishing NPM packages looks like a very simple process. As described in the official documentation, all we need is to create a user (if we still don't have one) and then issue npm publish in the project root, right? Well, not so fast. Indeed, it is not hard to publish a NPM package, but we always want to distribute an ES5 version of our package for maximum compatibility. We could leave this as a manual process (that is, expect the developer to run npm run build before publishing a new version), but this is too error-prone.

What we want instead is to automatically tie the build script to publish. Luckily for us, when NPM is publishing a new version of a package, it checks the package.json file to see if there is a script called prepublishOnly. If NPM finds this script, it runs whatever command is inside it. Therefore, what we have to do is to configure prepublishOnly in our package.json file as follows:

{
  ...
  "scripts": {
    ...
    "prepublishOnly": "npm run build"
  },
  ...
}

Hurray! Looks like we are ready to publish our package. Let's run npm publish and make it available to the world. Note that, before publishing, we might need to create a NPM user and log in through the NPM CLI (npm login).

It's important to note that the name property on package.json is the name that our package will get after we publish it. If someone else tries to publish a package with the same name as ours, they will get an error and will have to choose another name. (Hint: I left the masks-js namespace available on NPM to see who will be the first one to finish this tutorial)

Continuous Integration

Well, well. We have published the first version of our NPM package. This is amazing. Looks like all we need to do to publish a new version is to write some code, cover it with tests, and issue npm publish. But, can we do better? Of course! We can use a continuous integration tool to automate the NPM publishing process.

In this case, we will use Travis CI, one of the most popular and OSS-friendly (Open Source Software friendly) continuous integration tools around. This tool is tightly integrated with GitHub and, as such, configuring it in our project is straightforward.

First, we need to head to our profile on Travis CI and turn on the switch shown to the left of our project's name.

Then, back into our project root, we need to create a file called .travis.yml with the following properties:

language: node_js
node_js:
  - node
before_deploy:
  - npm run build

Note that, from now on, we will count on Travis to generate builds for us. This means that we have to remove the prepublishOnly script from the package.json file.

The properties in the .travis.yml file will tell Travis that our repository contains a node_js project. Besides that, Travis will also use this file to identify which Node.js versions it should use to build our package. In our case, we tell Travis to use only the latest version (node_js: node). We could also add other Node.js versions in there, but as we are using Babel to generate ES5 compatible JavaScript, this is not necessary.

To let Travis publish new versions on our behalf, we need an NPM authentication token, which we can find in our NPM account settings. In this case, we are interested in copying the 1a14bf9b-7c33-303c-b2f8-38e15c31dfee value. After that, we have to issue travis setup npm --org back in our project root. This will make the Travis CLI ask six questions, such as our NPM e-mail address, the API key that we just copied, and a few deployment preferences.

Aside: Securing Node.js APIs with Auth0

In the following sections, we are going to learn how to use Auth0 to secure Node.js APIs written with Express.

Creating the Express API

Let's start by defining our Node.js API. With Express and Node.js we can do this in two simple steps. The first one is to use NPM to install three dependencies: npm i express body-parser cors. The second one is to create a Node.js script with the following code:
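A sketch of such a script (the file name index.js comes from how we run it below; the sample contact data is made up):

```javascript
// index.js — a hypothetical sketch of the Express API described above
const express = require('express');
const bodyParser = require('body-parser');
const cors = require('cors');

const app = express();
app.use(bodyParser.json());
app.use(cors());

// the contacts array acts as an in-memory database
const contacts = [
  { name: 'Ada Lovelace', phone: '(123) 456-7890' },
];

// endpoint to retrieve all contacts
app.get('/contacts', (req, res) => res.send(contacts));

// endpoint to add a new contact
app.post('/contacts', (req, res) => {
  contacts.push(req.body);
  res.sendStatus(201);
});

app.listen(3000, () => console.log('listening on port 3000'));
```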

The code above creates the Express application and adds two middleware functions to it: body-parser to parse JSON requests, and cors to signal that the app accepts requests from any origin. The app also registers two endpoints on Express to deal with POST and GET requests. Both endpoints use the contacts array as some sort of in-memory database.

We can run and test our application by issuing node index in the project root and then by submitting requests to it. For example, with cURL, we can send a GET request by issuing curl localhost:3000/contacts. This command will output the items in the contacts array.

Registering the API at Auth0

After creating our application, we can focus on securing it. Let's start by registering an API on Auth0 to represent our app. To do this, let's head to the API section of our management dashboard (we can create a free account if needed) and click on "Create API". On the dialog that appears, we can name our API as "Contacts API" (the name isn't really important) and identify it as https://contacts.mycompany.com/ (we will use this value later).

After creating it, we have to go to the "Scopes" tab of the API and define the desired scopes. For this sample, we will define two scopes: read:contacts and add:contacts. They will represent two different operations (read and add) over the same entity (contacts).

Securing Express with Auth0

Now that we have registered the API in our Auth0 account, let's secure the Express API with Auth0. Let's start by installing three dependencies with NPM: npm i express-jwt jwks-rsa express-jwt-authz. Then, let's create a file called auth0.js and use these dependencies:
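A sketch of what auth0.js might contain (the environment variable names AUTH0_DOMAIN and AUTH0_AUDIENCE are our choice, and express-jwt's API has changed across major versions, so treat this as an era-appropriate sketch rather than a drop-in file):

```javascript
// auth0.js — hypothetical sketch of the Auth0 middleware factory
const jwt = require('express-jwt');
const jwksRsa = require('jwks-rsa');
const jwtAuthz = require('express-jwt-authz');

// validates that the request carries an access_token signed by our Auth0 tenant
const checkJwt = jwt({
  secret: jwksRsa.expressJwtSecret({
    cache: true,
    rateLimit: true,
    jwksRequestsPerMinute: 5,
    jwksUri: `https://${process.env.AUTH0_DOMAIN}/.well-known/jwks.json`,
  }),
  audience: process.env.AUTH0_AUDIENCE,
  issuer: `https://${process.env.AUTH0_DOMAIN}/`,
  algorithms: ['RS256'],
});

// returns middleware that checks both the token and the required scopes
module.exports = (scopes) => [checkJwt, jwtAuthz(scopes)];
```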

The goal of this script is to export an Express middleware that guarantees that requests have an access_token issued by a trustworthy party, in this case Auth0. The middleware also accepts an array of scopes. When filtering requests, this middleware will check that these scopes exist in the access_token. Note that this script expects to find two environment variables: one holding our Auth0 domain and one holding the identifier (audience) of the API that we registered.

In this case, we have replaced the previous definition of our endpoints to use the new middleware. We also restricted their access to users that contain the right combination of scopes. That is, to get contacts users must have the read:contacts scope and to create new records they must have the add:contacts scope.
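The replaced endpoint definitions might then look like this, assuming the auth0.js module exports a function that takes the array of required scopes:

```javascript
const auth0 = require('./auth0');

// only tokens carrying the read:contacts scope can list contacts
app.get('/contacts', auth0(['read:contacts']), (req, res) => res.send(contacts));

// only tokens carrying the add:contacts scope can create contacts
app.post('/contacts', auth0(['add:contacts']), (req, res) => {
  contacts.push(req.body);
  res.sendStatus(201);
});
```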

Running the application now is slightly different, as we need to set the environment variables:
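For example, assuming the middleware reads AUTH0_DOMAIN and AUTH0_AUDIENCE (the variable names are our choice; the audience is the identifier that we registered earlier):

```bash
AUTH0_DOMAIN=<OUR-TENANT>.auth0.com \
AUTH0_AUDIENCE=https://contacts.mycompany.com/ \
node index
```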

To consume the secured API from a frontend, we also need to register an Application on Auth0. In the Applications section of the dashboard, we can click on "Create Application". On the popup shown, let's set the name of this new application as "Contacts Application" and choose "Single Page Web App" as the application type. After hitting the "Create" button, we have to go to the "Settings" tab and set http://auth0.digituz.com.br/callback in the "Allowed Callback URLs" field.

Back in the frontend application, we then configure the following properties:

clientID: We have to copy this value from the "Client ID" field of the "Settings" tab of "Contacts Application".

domain: We can also copy this value from the "Settings" tab of "Contacts Application".

audience: We have to set this property to meet the identifier of the "Contacts API" that we created earlier.

scope: This property will define the authority that the access_token will get access to in the backend API. For example: read:contacts or both read:contacts add:contacts.

Then we can hit the "Sign In with Auth0" button.

After signing in, we can use the application to submit requests to our secured Node.js API. For example, if we issue a GET request to http://localhost:3000/contacts/, the Angular app will include the access_token in the Authorization header and our API will respond with a list of contacts.

Conclusion

In this article, we have covered a good amount of topics and tools that will help us develop NPM packages. We have talked about Semantic Versioning, configured EditorConfig, set up Babel to use ES6+ syntax, written automated tests, and so on. Setting up these tools, and being mindful while developing code, will make us better developers. With these tools to back us up, we will feel even more confident to release new versions of our packages. So, now that we have learned about these topics, it's time to get our hands dirty and contribute back to the OSS (Open Source Software) world. Have fun!