There is no immediate problem -- any connection attempts over the
MaxClients limit will normally be queued, up to a number based on the
ListenBacklog directive. Once a child process is freed at the end of
another request, the queued connection will be served.
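
For illustration, here is a minimal httpd.conf fragment combining the
two directives (the values are arbitrary examples, not
recommendations; note also that the operating system may silently cap
the backlog at its own limit):

  MaxClients     50     # hard limit on concurrent child processes
  ListenBacklog  511    # maximum queue length of pending connections

Connections arriving while all 50 children are busy simply wait in the
listen queue until a child becomes free.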

Nevertheless it is reported as an error, because clients are being put
in the queue rather than served immediately, even though they never
receive an error response. You can let this condition persist as a
trade-off between available system resources and response time, but
sooner or later you will need to get more RAM so you can start more
child processes. The best approach is to avoid reaching this condition
at all; if you reach it often, you should start to worry about it.

It's important to understand how much real memory a child occupies.
When the operating system supports it, your children can share memory
between them, but you must take action to allow the sharing to happen;
we discussed this in a previous article whose main topic was shared
memory. If you do this, the chances are that your MaxClients can be
set even higher. But it seems that it's not so simple to calculate the
absolute number, because the amount of shared memory changes over a
child's life. If you come up with a solution, please let us know! If
the shared memory stayed the same size throughout the child's life, we
could derive a much better formula.
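
One plausible form under that assumption (Total_RAM, Max_Process_Size
and Shared_RAM_per_Child are illustrative quantities here, not Apache
directives) would be:

                    Total_RAM - Shared_RAM_per_Child
  MaxClients = ---------------------------------------
               Max_Process_Size - Shared_RAM_per_Child

since the shared portion is paid for only once, while each additional
child adds only its unshared portion on top of it.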

The MaxRequestsPerChild directive sets the limit on the number of
requests that an individual child server process will handle. After
MaxRequestsPerChild requests, the child process will die. If
MaxRequestsPerChild is 0, then the process will live forever.

Setting MaxRequestsPerChild to a non-zero limit mitigates some memory
leakage problems caused by sloppy programming practices, where a child
process consumes a little more memory after each request; because the
child is eventually killed, the memory it leaked is returned to the
system.
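
For example (the value 500 is an illustration, not a recommendation):

  MaxRequestsPerChild 500   # recycle each child after 500 requests;
                            # 0 would let children live forever

Each time a child reaches the limit it exits, its leaked memory is
returned to the operating system, and Apache spawns a fresh child in
its place.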