I'm working on an old legacy application, and I commonly come across certain settings that no one around can explain.

Apparently at some point, some processes in the application were hitting the maximum number of file descriptors allowed per process, and the team at the time decided to increase the limit by adding the following to the init files of their shells (.kshrc):
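(The snippet itself isn't reproduced in the question as quoted. Judging by the "squared" remark in the comment below, it was presumably something along these lines; this is a reconstruction, not the verbatim original:)

```shell
# Presumed .kshrc addition (a guess, not the original file):
# raise the per-process file descriptor limit from the 256 default.
ulimit -n 65536   # 65536 = 256 squared
```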

Sounds like they went a bit overboard: they squared the number of allowed FDs!
– SamB, Sep 4 '12 at 17:41

That sounds reasonable. We've run into this before on Solaris too; 256 is just too small as a default for modern systems. A non-forking server can easily peak at two hundred concurrent clients if the connections are being held open but idle for any length of time.
– Nicholas Wilson, Jan 28 '13 at 11:27

2 Answers

To see the number of file descriptors in use by a running process, run pfiles on the process id.
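For instance (a sketch, not from the original answer; the Linux /proc variant is an assumption added here for comparison, and it has the advantage of not stopping the process):

```shell
# Solaris: pfiles prints one line per open descriptor, so count them:
#   pfiles $$ | grep -c '^ *[0-9][0-9]*:'
# Linux equivalent, reading /proc instead of halting the process:
nfds=$(ls /proc/$$/fd | wc -l)
echo "open fds for this shell: $nfds"
```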

There can be a performance impact from raising the number of fd's available to a process, depending on the software and how it is written. Programs may use the maximum number of fd's to size data structures such as select(3C) bitmask arrays, or perform operations such as calling close() in a loop over every possible fd (though software written for Solaris can use the fdwalk(3C) function to visit only the open fd's instead of iterating up to the maximum possible value).

Just to note, there can be a security impact too. Many servers are vulnerable to arbitrary code execution (ACE) if given more than FD_SETSIZE descriptors (usually 1024). A small sample of affected applications: securityfocus.com/archive/1/388201/30/0 So only raise the soft limit above 1024 for specific applications you really trust, not system-wide.
– Nicholas Wilson, May 17 '13 at 15:32

I have seen an issue where we needed to restrict the application's shell to only 256 file descriptors. The application was very old and was apparently querying the maximum number of fd's and trying to store that number in a variable of type 'unsigned char', which can only hold values up to 255, so the count overflowed and the application dumped core. So for this particular application we had to restrict it to only 256 fd's.

Unlike alanc, I don't really believe there is any measurable performance impact from setting this very high, as you suggest. The reason not to do so is more along the lines of preventing rogue processes from consuming too many resources.

Lastly, alanc is right that the pfiles command will tell you the number of fd's currently in use by a given process. However, remember that pfiles temporarily halts the process in order to inspect it. I've seen processes crash as a result of pfiles being run against them ... though I admit those might have been corner cases that you will never run into with your applications. Unfortunately, I don't know of a safe way to look up the current number of fd's in use by a process. My recommendation: always monitor that the process still exists after you've run pfiles against it.