Node.js, "too many open files", and ulimit

Here’s what I learned when I ran into this error hosting a Node.js app using WebSockets on Ubuntu.

But my app doesn’t read or write files to the file system!

The word ‘files’ is misleading. The real problem is that too many ‘file descriptors’ are open. A file descriptor can represent many things, including a handle to an input or output resource, such as a network connection. If you are serving plain HTTP requests with HTTP responses, these file descriptors open and close quickly: a connection from a client opens, the request is responded to, and the connection closes. You might never reach the limit and see this error. However, if you open a WebSocket connection for each user who stays on the site, you’ll find the count of file descriptors going up and up as visitors stay on your site. A WebSocket connection stays open much longer than an HTTP request/response connection, so the network connections (and their open file descriptors) accumulate.

Visualizing your open file descriptors

You can get the total count of open file descriptors with:

$ lsof | wc -l
5268

(lsof lists open files, wc -l counts the lines)

If you want to narrow this down to just one Node.js process’ open file descriptors, add a filter. First, find your Node.js process’ PID.
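A quick sketch of that filtering step (this assumes your process shows up under the name `node`; the PID will of course differ on your machine):

```shell
# Find the PID of your Node.js process; -n returns the newest match
# (assumes the process is named "node" -- adjust if you use a process manager)
PID=$(pgrep -n node)

# List only that process's open file descriptors...
lsof -p "$PID"

# ...or just count them
lsof -p "$PID" | wc -l
```

This is much faster than running `lsof` over the whole system and grepping afterwards, since `lsof -p` only inspects the one process.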

These are all the open file descriptors for my Node.js process (the chat_after demo from this socketio-demos repo). See the last 3 lines with a TYPE of IPv6? Those are the file descriptors for 3 open WebSocket connections from 3 web browsers connected to a Node.js chatroom running on my M1330 laptop.

adam-m1330.local:http-alt->Adams-MacBook.local:64982 is connecting from a MacBook in the same network

localhost:http-alt->localhost:38218 and the line after it are two browser tabs on the same laptop as the server

Linux and Open File (Descriptor) Limits

And like all things that can be counted, there are limits in place to prevent systems from overloading themselves and crashing.

User specific file limits

Linux puts limits on the number of files a user (like the one who executes the Node.js process) can have open at once. Low limits can be good to keep one user from hogging too many resources on a server shared by many users. However, if you are deploying servers dedicated to running Node.js processes, you can safely raise this limit and give Node.js access to more resources.

You can see the limit with the command ulimit -n:

$ ulimit -n
1024

To edit this limit, add new lines to the file /etc/security/limits.conf.

(if you want to see all the other user-specific limits you can configure in this file, run ulimit -a)

* soft nofile 10000
* hard nofile 10000

(the * means the limit applies to all users; you can put a specific username in that column instead)

(the difference between hard and soft limits isn’t important in the context of a Node.js server, but I like this explanation if you are curious about it.)
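You can see both values from the shell itself. A small sketch using the standard bash ulimit flags:

```shell
# Soft limit: the value actually enforced right now
ulimit -Sn

# Hard limit: the ceiling a non-root user can raise the soft limit to
ulimit -Hn

# A non-root user may raise their own soft limit, but only up to the
# hard limit -- this fails if 4096 exceeds the hard limit shown above
ulimit -n 4096
```

Plain `ulimit -n` with no flag reports the soft limit, which is why it is the number that matters for the "too many open files" error.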

The exact number is up to you. Consider:

How many simultaneous WebSocket connections do you need to support on this server for this user?

How many can your application actually handle?

Are there other users on this machine that need some capacity reserved for them as well?

Log out and back in (or restart the machine), then run ulimit -n to see if the new value has taken effect.

Operating system limits

Ubuntu has an operating system wide limit on file descriptors as well. You can view it with the command sysctl -a | grep fs.file-max:

$ sysctl -a | grep fs.file-max
fs.file-max = 308115

If you need to raise this as well, edit the file /etc/sysctl.conf and add a new line with the new value (higher than the current one), like this:

fs.file-max = 500000
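A few related commands worth knowing here (the `sudo` step requires root; `/proc/sys/fs/file-nr` is Linux-specific):

```shell
# Query the single key directly instead of grepping the full sysctl list
sysctl fs.file-max

# Apply the changes in /etc/sysctl.conf without rebooting
sudo sysctl -p

# See current usage: allocated handles, unused handles, and the maximum
cat /proc/sys/fs/file-nr
```

Comparing the first number from file-nr against fs.file-max tells you how close the whole system is to the ceiling.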

Again, consider how high a limit you actually need, and what your server can handle.