Why do you need so many files open simultaneously?
– sbooth Jul 2 '10 at 15:19

Not that it should matter, but are you testing this on the server edition or the desktop edition of OS X? I can imagine that the Apple folks decided to limit how many files a desktop app can open, since opening many is usually a server-oriented task...
– Evan Teran Jul 2 '10 at 22:12

6 Answers

1st. LOL. Apparently you have found a bug in Mac OS X's stdio. If I fix your program up (add error handling, etc.) and also replace fopen() with the open() syscall, I can easily reach the limit of 10000 (which is 240 fds below my 10.6.3 OPEN_MAX limit of 10240).

2nd. RTFM: man setrlimit. The case of max open files has to be treated specially with respect to OPEN_MAX.
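As a sketch of that special case (the helper name `raise_fd_limit` and the fallback OPEN_MAX value are my own assumptions for illustration): on macOS the soft RLIMIT_NOFILE cannot be raised above OPEN_MAX even though rlim_max may report RLIM_INFINITY, so the requested value should be clamped before calling setrlimit():

```c
#include <sys/resource.h>
#include <limits.h>

#ifndef OPEN_MAX            /* defined on macOS; absent on Linux */
#define OPEN_MAX 10240      /* the macOS 10.6 value, assumed here */
#endif

/* Raise (or lower) the soft fd limit, clamping to both OPEN_MAX and
   the hard limit so setrlimit() does not fail on macOS. Returns 0 on
   success, -1 on error. */
int raise_fd_limit(rlim_t want) {
    struct rlimit rl;
    if (getrlimit(RLIMIT_NOFILE, &rl) != 0)
        return -1;
    if (want > OPEN_MAX)
        want = OPEN_MAX;
    if (want > rl.rlim_max)
        want = rl.rlim_max;
    rl.rlim_cur = want;
    return setrlimit(RLIMIT_NOFILE, &rl);
}
```

On Linux the OPEN_MAX clamp is a no-op, so the same code runs unchanged there.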

Thx for the answer. Are you serious when you say it could be a bug in stdio on Mac OS X, or is it a joke? So is the only solution to use the syscall instead of the standard C function?
– acemtp Jul 2 '10 at 21:43

@acemtp: limitation is probably a better word. The standard only requires libc to guarantee that you can open 8 files at a time (including stdin/stdout/stderr!). It would be an unusual limitation, but not unheard of.
– Evan Teran Jul 2 '10 at 22:11
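The 8-stream guarantee mentioned above is exposed by the C standard as the FOPEN_MAX macro in stdio.h, which must be at least 8. A trivial sketch (the helper name is made up):

```c
#include <stdio.h>

/* FOPEN_MAX is the number of streams the implementation guarantees
   can be open simultaneously; the C standard requires it to be >= 8.
   Real libcs usually allow far more than this minimum. */
int min_guaranteed_streams(void) {
    return FOPEN_MAX;
}
```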


@acemtp, @evan: well, stdio on Linux has no problem coping with whatever I throw at it, and I personally would qualify that as a bug. 8 files at once?? stdin, stdout, stderr: 3 are busy already. An application log file + trace file leaves only 3 free... Silly, and a bug, if you ask me.
– Dummy00001 Jul 3 '10 at 8:41

The whole problem here is your printf() function. When you call printf(), you are initializing internal data structures to a certain size. Then you call setrlimit() to try to adjust those sizes. That function fails because you have already been using those internal structures with your printf(). If you use two rlimit structures (one for before and one for after), and don't print them until after calling setrlimit(), you will find that you can change the limits of the current process even in a command-line program. The maximum value is 10240.

This may be a hard limitation of your libc. Some versions of Solaris have a similar limitation because they store the fd as an unsigned char in the FILE struct. If this is the case for your libc as well, you may not be able to do what you want.

As far as I know, things like setrlimit() only affect how many files you can open with open() (fopen() is almost certainly implemented in terms of open()). So if this limitation is at the libc level, you will need an alternate solution.

Of course, you could always skip fopen() and instead use the open() system call, available on just about every variant of Unix.

The downside is that you have to use write() and read() instead of fwrite() and fread(); the raw syscalls don't do things like buffering (that is all done in your libc, not by the OS itself), so this could end up being a performance bottleneck.
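For example, a hypothetical helper that writes a buffer using the raw syscalls (the name and error handling are my own): note there is no user-space buffering here, so every write() is a kernel round trip.

```c
#include <fcntl.h>
#include <unistd.h>

/* Write an entire buffer to `path` using open(2)/write(2)/close(2)
   instead of fopen()/fwrite(). Loops because write() may write fewer
   bytes than requested. Returns the byte count, or -1 on error. */
ssize_t write_whole_file(const char *path, const void *buf, size_t len) {
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return -1;
    const char *p = buf;
    size_t done = 0;
    while (done < len) {
        ssize_t n = write(fd, p + done, len - done);
        if (n < 0) {
            close(fd);
            return -1;
        }
        done += (size_t)n;
    }
    close(fd);
    return (ssize_t)done;
}
```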

Can you describe the scenario that requires 400 files open **simultaneously**? I am not saying that there is no case where that is needed. But if you describe your use case more clearly, then perhaps we can recommend a better solution.

libc limit: Yes. See my comment. Changing the program to use open() instead of fopen() fixes the problem. On Linux, btw, it works like a charm after making the obvious fix of replacing the 10000 with rlp.rlim_max (but on Mac OS X even that is different, since the OPEN_MAX cap has to be checked too). A scenario where you need 400 fds... I maintain a specialized network server which also backs data to disk. Seeing 2K sockets and open files in use isn't uncommon.
– Dummy00001 Jul 2 '10 at 20:44

@Dummy00001: ok, that is certainly a scenario, but having acemtp describe exactly what he is trying to do could still help :-P. But it looks like we have found the nature of the problem.
– Evan Teran Jul 2 '10 at 21:10

Mac OS doesn't let us change the limit as easily as many other Unix-based operating systems do. We have to create two files

/Library/LaunchDaemons/limit.maxfiles.plist
/Library/LaunchDaemons/limit.maxproc.plist
describing the max proc and max file limits. The ownership of the files needs to be changed to 'root:wheel'.

This alone doesn't solve the problem: by default, recent versions of macOS enable System Integrity Protection (managed with 'csrutil'), and we need to disable it. To do that, we reboot the Mac into recovery mode and disable csrutil from the terminal there.

Now we can change the max open file handle limit from the terminal itself (even in normal boot mode).