You need to size MaxClients / ServerLimit to your system. The "5-10 settings for Min/Max Servers" you mention are basically irrelevant -- those just control how many idle spare servers Apache keeps around.

To set MaxClients appropriately, look at the typical high-water mark for your httpd (or apache2) processes, then divide your available memory by that figure. It's best to drop down a little from there to give the rest of the system room to breathe. Since you've got 4GB of RAM and 190MB processes, your ServerLimit value should be 21 at most -- probably 20 or 19.
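
As a rough sketch of what that works out to (assuming the ~190MB-per-process figure holds on your box; redo the division with your own numbers):

    # prefork MPM sizing -- illustrative values only
    # 4096MB RAM / ~190MB per process ~= 21, minus a little headroom for the OS
    <IfModule mpm_prefork_module>
        ServerLimit    20
        MaxClients     20
    </IfModule>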

Now, it may be that 190MB is atypical. You can set the ServerLimit higher, based on a different estimate of typical usage, but then you're basically gambling that you'll never have a spike; if one does happen, your system will run out of memory.

If you can find a way to constrain your per-worker memory usage, that's going to be a win. I'm betting this is a case of PHP Ate My RAM. Can you code your app to live within a lower memory_limit? If you can't do that, you need a different model under which to run your PHP. If you can't do that either, you need to buy more RAM.
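
For example (a sketch only -- the right cap is whatever your app can actually live within):

    ; php.ini -- cap per-request memory so one script can't balloon a worker
    memory_limit = 32M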

If this is indeed a "PHP Ate My RAM" situation (and the eating is relatively confined -- staying within a child rather than hitting the shared pool) you may also get some benefit from setting an aggressive (low) MaxRequestsPerChild -- older (fatter) daemons will be sacrificed to the RAM-freeing gods... Note that this is best viewed as a temporary solution, because killing off and restarting Apache daemons during high-load periods can put a hurting on your server...
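
Something like the following, say (500 is a guess you'd tune, not a recommendation):

    # Recycle each child after it serves this many requests, so a bloated
    # process gives its memory back reasonably quickly
    MaxRequestsPerChild 500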
– voretaq7♦ Mar 8 '11 at 17:06

OK, I will set memory_limit to 16MB in my scripts; I want to see if any of them needs more than 16MB (I really doubt it)
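
Per script, that would look something like the following, though setting it once in php.ini is less repetitive:

    <?php
    // Cap this script's memory; PHP aborts the request if the cap is exceeded
    ini_set('memory_limit', '16M');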
– dynamic Mar 8 '11 at 17:07

@yes123: sounds like a good plan. Watch for PHP memory-limit messages in the error log.
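
Something along these lines should find them (log path assumed -- yours may differ):

    # PHP logs "Allowed memory size of N bytes exhausted" when a script hits the cap
    grep "Allowed memory size" /var/log/apache2/error.log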
– mattdm Mar 8 '11 at 17:16

Current usage is way less than 5MB per PHP script. As I said above, I doubt this is the problem; I think it's just an apache2 settings problem
– dynamic Mar 8 '11 at 17:17

@yes123: are you sure you're measuring the PHP memory usage correctly? Note that memory_limit is a setting in the core php.ini file. If it's not PHP, something else out of the ordinary is causing your Apache processes to consume a lot of RAM. The general advice remains: if you can't limit or constrain whatever that is, you still need to set your ServerLimit to match.
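
One quick way to measure it from inside a request (a sketch using PHP's standard memory_get_peak_usage()):

    <?php
    // Log the request's true peak allocation, including PHP's allocator overhead
    error_log('peak memory: ' . memory_get_peak_usage(true) . ' bytes');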
– mattdm Mar 8 '11 at 17:28

Apache's prefork MPM self-manages servers. It will always start with StartServers daemons, and will never keep fewer than MinSpareServers idle children once it gets going. It will also eventually retire/kill off servers in excess of MaxSpareServers if they're idle long enough (I don't recall what "long enough" is in this context, nor if/how it can be modified).
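
In config terms those are these three directives (the values shown are just Apache's stock prefork defaults):

    <IfModule mpm_prefork_module>
        StartServers       5
        MinSpareServers    5
        MaxSpareServers   10
    </IfModule>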

ServerLimit sets the maximum number of Apache daemons that can be running at any given time -- this is why, in your situation, you can have hundreds of sleeping Apache processes (they got spawned to service a flood of requests and haven't been idle long enough to be killed by the parent process yet).

Personally I think 1250 is a pretty high value for ServerLimit/MaxClients -- 250 may be a more reasonable number (though this may result in the occasional 503/Server Busy error if you get a massive flood of requests: if that becomes a chronic issue you can increase the number or add more servers to handle the load).
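
That more conservative sizing would look something like this (250 being the suggestion above, not a magic number):

    <IfModule mpm_prefork_module>
        ServerLimit   250
        MaxClients    250
    </IfModule>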

On my previous server, with only 2GB, I had ServerLimit above 1000 the whole time without any problems... With this new server, with double the RAM, I can't really lower that value. With 250 the Apache error log would be spammed with "... consider raising the MaxClients setting ..."
– dynamic Mar 8 '11 at 16:46

Did you ever have a spike of N simultaneous requests before? And are we even sure that's what happened? Looking more closely at your top output I don't see that many Apache processes (restrict top to the www-data user and see what you get -- 200 or so processes on an idle Linux box is not uncommon; I doubt they're all httpd :-)
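
For example (assuming Debian/Ubuntu's www-data user):

    # List only www-data's processes, fattest first
    ps -u www-data -o pid,rss,comm --sort=-rss
    # or watch them interactively:
    top -u www-data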
– voretaq7♦ Mar 8 '11 at 16:48

I don't think I had these spikes before. Google is pwning my server, requesting more than 20 pages/minute as I can see from the log (the top screenshot was taken right after a hard reboot)
– dynamic Mar 8 '11 at 16:50

I've taken another top screenshot; see the first post. (Now, 30 minutes later, you can see there are only apache2 processes.)
– dynamic Mar 8 '11 at 16:54

Ahh, now that's a problem -- two, actually. Crash-wise, 1200 Apache processes is definitely too high a limit for your hardware (1200 * 11MB (RSZ) = ~13GB: way more than your physical RAM). Workload-wise, you need to hunt down why you're getting so many requests: you may need to tweak your crawl rate in robots.txt, but check your Apache logs to see what else is going on too...
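
A robots.txt tweak would look something like this (a sketch -- note that Googlebot historically ignores Crawl-delay, so Google's rate is throttled from Webmaster Tools instead, but other well-behaved crawlers honor it):

    User-agent: *
    Crawl-delay: 10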
– voretaq7♦ Mar 8 '11 at 16:59

Turn off KeepAlives and set MaxClients to 150. The most likely reason you have 260 processes just sitting there is that Apache is dutifully holding browser connections open, because KeepAlive On is set in your Apache config file.
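
In config terms, that suggestion is simply:

    # Stop holding idle browser connections open; cap the worker pool
    KeepAlive Off
    MaxClients 150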

Hmmm, if it's still that high with a few hundred connections sitting idle, you might want to consider putting Varnish or Nginx on port 80 and reverse-proxying to your Apache processes. Let either of those servers handle the TCP connections and just pass requests through to Apache. I'd do a quick test with KeepAlive off first, though, just to make sure that would actually decrease the number of idle Apaches. If it does, then the reverse-proxy setup is worth doing.
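
A minimal nginx front-end sketch (assumes you'd move Apache to port 8080 -- adjust to your layout):

    # nginx holds the slow/idle client connections; Apache workers are freed
    # as soon as they hand a response back to the proxy
    server {
        listen 80;
        location / {
            proxy_pass;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }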
– kashani Mar 8 '11 at 19:22

@kashani: I actually lowered ServerLimit, MaxClients, and MaxRequestsPerChild a bit. Let's wait and see if this happens again; then I will disable KeepAlive
– dynamic Mar 8 '11 at 19:43