Key Takeaways

Consider the overhead of issuing and clearing the different timers in your application.

CPU profilers give you useful information but won't tell you the whole story.

Be wary of high-level ECMAScript features, especially if you are not using the latest JavaScript engine or a transpiler.

Control your dependency tree and benchmark your dependencies.

In order to improve the performance of an application that involves IO, you should understand how your CPU cycles are spent and, more importantly, what is preventing higher degrees of parallelism in your application.

While working on improving the overall performance of the DataStax Node.js driver for Apache Cassandra, I gained some insights that I share in this article, summarizing the most significant areas that can cause throughput degradation in your application.

Even though the optimizing compiler will make your code run significantly faster, in an IO-intensive application most of the performance improvements revolve around how to reorder the instructions and use less expensive calls to allow more operations per second, as we will see in the following sections.

Benchmarks

To discover optimizations that could impact the largest number of users, it’s important to define benchmarks using workloads for the common execution paths, simulating real-world usage.

Start by measuring the throughput and latencies of your API entry points. You can also benchmark the performance of individual internal methods if you need more detailed information. Use process.hrtime() to get high-resolution real time and measure the duration of an execution.
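As a minimal sketch, you can measure the duration of a method call with process.hrtime(); doWork() here is a placeholder for whatever you want to benchmark:

```javascript
// Placeholder for the code under test.
function doWork() {
  let total = 0;
  for (let i = 0; i < 1e6; i++) {
    total += i;
  }
  return total;
}

// Capture a high-resolution starting point, run the code,
// then pass the starting point back to get the elapsed time.
const start = process.hrtime();
doWork();
const [seconds, nanoseconds] = process.hrtime(start);
const elapsedMs = seconds * 1e3 + nanoseconds / 1e6;
console.log(`doWork() took ${elapsedMs.toFixed(3)} ms`);
```

From a single measured call you can later build throughput numbers (operations per second) and, eventually, a latency distribution.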

You should try to create some limited but realistic benchmarks as early as possible in the project lifetime. Start small by measuring throughput of a method call and later you can add more comprehensive information like the latency distribution.

CPU Profiling

There are several CPU profilers; the one Node.js provides out of the box is good enough for most cases. The built-in Node.js profiler takes advantage of the profiler inside V8, sampling the stack at regular intervals during the execution. You can generate a V8 tick file by running node with the --prof flag.

Then, you can process the profiling session output to aggregate the information and convert it into something readable by humans, using the --prof-process flag:

$ node --prof-process isolate-0xnnnnnnnnnnnn-v8.log > processed.txt

Opening the processed text file in an editor will give you information divided into sections.

Look for the "Summary" section in the file, which will look something like this:

[Summary]:

   ticks  total  nonlib   name
   20109  41.2%   45.7%   JavaScript
   23548  48.3%   53.5%   C++
     805   1.7%    1.8%   GC
    4774   9.8%           Shared libraries
     356   0.7%           Unaccounted

The values represent how many of the gathered samples occurred in JavaScript / C++ code / the garbage collector, and will vary depending on the code you are profiling. You can then navigate to the corresponding subsection of the file (e.g. [JavaScript], [C++], ...) to see the details of the samples, ordered by number of occurrences.

There is an additional section in the processed file of the profiling session, [Bottom up (heavy) profile], that is especially useful. It provides information about the primary callers of each function, in a tree-like structure. Take the following snippet for example:

  223  32%   LazyCompile: *function1 lib/file1.js:223:20
  221  99%     LazyCompile: ~function2 lib/file2.js:70:57
  221 100%       LazyCompile: *function3 /lib/file3.js:58:74

The percentage shows the share of a particular caller in the total amount of its parent's calls. An asterisk (*) before a function name means that time is being spent in an optimized function, while a tilde (~) means the function is not optimized.

In the example, 99% of the calls to function1 were made by function2, and function3 is responsible for 100% of the calls to function2, according to the profiling sample.

CPU profiling sessions and flame graphs are useful tools to understand what is in the stack most of the time and which methods are spending CPU time, in order to spot low-hanging fruit. But it's important to understand that it won't tell you the whole story: you could be preventing higher degrees of parallelism in your application and the asynchronous IO operations could make it hard to identify.

System calls

Libuv exposes a platform-independent API that Node.js uses to perform non-blocking IO, and your application's IO (sockets, file system, ...) ultimately translates into system calls.

There is a significant cost in scheduling those system calls. You should try to minimize the number of syscalls by grouping / batching writes.

When using a socket or a file stream, instead of issuing a write every time, you can buffer and flush the data from time to time.

You can use a write queue to process and group your writes. The logic for a write queue implementation should be something like:

While there are items to write and we are within the window size:
  Push the buffer to the “to-write list”.
Concatenate all the buffers in the list and write them to the wire.

You can define a window size based either on the total buffer length or on the amount of time that has passed since the first item was queued. Defining a window size is a tradeoff between the latency of a single write and the average latency of all writes. You should also consider the maximum number of write requests to be grouped and the overhead of generating each write request.

You would generally want to flush buffers in the order of kilobytes. We found a sweet spot around 8 kilobytes, but your mileage may vary. You can check out the source of the client driver for a complete implementation of a write queue.
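A minimal sketch of the idea, assuming a socket-like object with a write() method; the WriteQueue name, the constructor options and the flush() method are illustrative, not the driver's actual API:

```javascript
// Buffer small writes and flush them as a single larger write
// once a size window or a time window is reached.
class WriteQueue {
  constructor(socket, { windowBytes = 8 * 1024, windowMs = 1 } = {}) {
    this.socket = socket;
    this.windowBytes = windowBytes;
    this.windowMs = windowMs;
    this.buffers = [];      // the "to-write list"
    this.totalLength = 0;
    this.timer = null;
  }

  push(buffer) {
    this.buffers.push(buffer);
    this.totalLength += buffer.length;
    if (this.totalLength >= this.windowBytes) {
      // Size window reached: flush immediately.
      this.flush();
    } else if (this.timer === null) {
      // Otherwise flush when the time window elapses.
      this.timer = setTimeout(() => this.flush(), this.windowMs);
    }
  }

  flush() {
    if (this.timer !== null) {
      clearTimeout(this.timer);
      this.timer = null;
    }
    if (this.totalLength === 0) {
      return;
    }
    // Concatenate all queued buffers and issue a single write,
    // resulting in one syscall instead of one per item.
    const payload = Buffer.concat(this.buffers, this.totalLength);
    this.buffers = [];
    this.totalLength = 0;
    this.socket.write(payload);
  }
}
```

A real implementation would also handle write errors and backpressure from the socket; the sketch only shows the grouping logic.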

Grouping or batching writes will translate into higher throughput thanks to fewer system calls.

Node.js timers

Node.js timers, which have the same API as window timers in the Web API, are very useful: they are easy to schedule and deschedule, and they are used extensively across the entire ecosystem.

As such, it’s likely that there is a large number of timeouts scheduled at any given time in an application.

Internally, Node.js groups timers with the same duration into buckets (linked lists keyed by duration). When the key exists (a timer with the same duration already exists), appending the new timer to its bucket is an O(1) operation.

When a key does not exist, a bucket is created and the timer is appended to it.

With that in mind, you have to make sure you reuse the existing buckets, trying to avoid removing a whole bucket and creating a new one. For example, if you are using sliding delays you should create the new timeout (setTimeout()) before removing (clearTimeout()) the previous one.
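For example, a sliding idle timeout can be refreshed like this (the names onActivity, onIdle and IDLE_DELAY are illustrative):

```javascript
const IDLE_DELAY = 5000;
let idleTimeout = null;

function onIdle() {
  console.log('connection idle');
}

// Called on every activity to slide the idle timeout forward.
function onActivity() {
  const previous = idleTimeout;
  // Schedule the replacement first: the bucket for IDLE_DELAY is
  // still non-empty, so it is reused instead of being destroyed
  // and recreated.
  idleTimeout = setTimeout(onIdle, IDLE_DELAY);
  if (previous !== null) {
    clearTimeout(previous);
  }
}
```

Reversing the order (clearTimeout() first, then setTimeout()) could empty the bucket for that duration and force Node.js to recreate it on the very next call.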

ECMAScript features

You should take special care with new language features introduced in ES2015 and ESNext, which can be significantly slower than their ECMAScript 5 counterparts. The six-speed project website tracks the current performance of these features on the different JavaScript engines. Additionally, you can microbenchmark different approaches when you can't find conclusive results in existing benchmarks.
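A minimal microbenchmark sketch, comparing a classic for loop with Array.prototype.reduce() for summing an array; the numbers you get depend on the engine version, so always measure on the runtime you ship:

```javascript
const data = Array.from({ length: 1e5 }, (_, i) => i);

// Run fn repeatedly, print the total elapsed time and
// return the result of the last run.
function timeIt(label, fn, iterations = 100) {
  const start = process.hrtime();
  let result;
  for (let i = 0; i < iterations; i++) {
    result = fn();
  }
  const [s, ns] = process.hrtime(start);
  console.log(`${label}: ${(s * 1e3 + ns / 1e6).toFixed(2)} ms`);
  return result;
}

const forSum = timeIt('for loop', () => {
  let total = 0;
  for (let i = 0; i < data.length; i++) {
    total += data[i];
  }
  return total;
});

const reduceSum = timeIt('reduce', () => data.reduce((acc, x) => acc + x, 0));
```

Keep in mind that microbenchmarks like this one only compare isolated operations; verify any conclusion against your real workload.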

The V8 team is working to improve the performance of new language features to eventually reach parity with their naive ES5 counterparts. Their work on optimizing features introduced with ES2015 and beyond is coordinated via a performance plan, where the V8 team gathers areas that need improvement along with design documents proposing how to tackle those issues.

You can follow the progress of the V8 implementation on their blog, but consider that it can take a long while for those improvements to reach a Long-term Support (LTS) version of Node.js: which V8 version lands in a major version of Node.js is normally decided before cutting the branch from master, according to the LTS plan. You may have to wait 6-12 months for a new major/minor version of the V8 engine to be included in the Node.js runtime.

Dependencies

The Node.js runtime provides a complete library for IO but, as the ECMAScript specification defines very few built-in types, you sometimes have to rely on external packages to perform other basic tasks.

There is no guarantee that published packages work correctly and efficiently, even popular modules. The Node.js ecosystem is huge, and often those third-party modules involve just a few methods that you could implement yourself.

You should balance between reinventing the wheel and controlling the performance impact of your dependencies.

Avoid adding a new dependency whenever possible. Don’t trust your dependencies, period. An exception to this rule is when the project you depend on publishes reliable benchmarks, like the bluebird library does.

In our case, async was impacting the request latency. We used async.series(), async.waterfall() and async.whilst() extensively across our codebase. It was really hard to identify it as a culprit of poor performance, as a control flow library is a cross-cutting concern. The async utility is one of the most popular modules and, thanks to that, its issues were publicly identified. There are even drop-in replacements like neo-async, which runs significantly faster and publishes benchmarks.

Wrapping up

Even though some of the optimizations described here are common to other platforms, some are specific to the Node.js ecosystem and to how the JavaScript engine and core libraries work.

In the case of our client driver, where we've applied these optimizations, the result was a more than 2x increase in throughput according to our benchmarks.

Considering that on Node.js our code runs on a single thread, how our application spends CPU cycles and the order of the instructions really matter, allowing us to support higher degrees of parallelism and improving overall throughput.

About the Author

Jorge Bay is the lead engineer for the Node.js and C# client drivers for Apache Cassandra and DSE at DataStax. When he is not writing his own bio, he enjoys solving problems and building server-side solutions. Jorge has over 15 years of professional software development experience and he implemented the community Node.js driver for Apache Cassandra that was the foundation for the official DataStax driver.
