
I recently had problems with servers running Java application servers: they suddenly began to show strange errors like “broken pipe” or exhausted resources. This is often due to the high number of files a modern server keeps open, especially compared with the default limit on Linux systems, which still stands at 1024.

Let’s see how to check the number of open files on our system and how to resolve this problem, or better, prevent it.

Check the open files of a process

Step # 1: Find out the program’s PID

Let’s check for a tomcat process:

# ps aux | grep tomcat

Output (trimmed to just the PID column):

12390
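
If all we need is the PID, pgrep can print it directly (assuming the process command line contains the string tomcat):

# pgrep -f tomcat

This also avoids the classic pitfall of grep matching its own process in the ps output.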

Step # 2: List the files opened by PID 12390

Use the lsof command or the /proc/PID file system to display the list of open file descriptors:

# lsof -p 12390 | wc -l

or

# cd /proc/12390/fd
# ls -1 | wc -l
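
If the application spawns several processes under a dedicated user, lsof can also count the open files across all of them at once (assuming here a user named tomcat):

# lsof -u tomcat | wc -l

Keep in mind that lsof lists more than plain file descriptors (memory-mapped files, the current working directory, a header line and so on), so its count is normally a bit higher than the one taken from /proc/PID/fd.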

At this point we can see the total number of files opened by that PID; if the count is close to 1024, we are going to have problems.
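
Note that 1024 is only the usual default; to see the limit that actually applies to that specific process, recent kernels (2.6.24 and later) expose it in /proc, reusing the same PID 12390 as above:

# grep 'Max open files' /proc/12390/limits

The two numbers printed are the soft and hard limits for that process.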

Tuning file descriptor limits on Linux

Linux limits the number of file descriptors that any one process may open; the default limit is 1024 per process. This limit can prevent optimum performance both of benchmarking clients (such as httperf and apachebench) and of the web servers themselves (Apache is not affected, since it uses a process per connection, but single-process web servers such as Zeus use a file descriptor per connection, and so can easily fall foul of the default limit).

The open file limit is one of the limits that can be tuned with the ulimit command. The command ulimit -aS displays the current (soft) limits, and ulimit -aH displays the hard limits (above which the limits cannot be increased without tuning kernel parameters in /proc).
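
To query just the open file limit instead of the full list, ulimit also accepts the -n flag; the values below are illustrative and will vary from system to system:

# ulimit -Sn
1024
# ulimit -Hn
4096

A non-root shell can raise its soft limit up to the hard limit, for example with ulimit -n 4096; going beyond the hard limit requires root privileges (or an entry in /etc/security/limits.conf on systems using PAM).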

The following is an example of the output of ulimit -aH. You can see that the current shell (and its children) is restricted to 1024 open file descriptors.