Controlling the queue with ThreadPoolExecutor

The previous page showed a skeleton of a simple server built on
ThreadPoolExecutor. We used a common paradigm, in which one thread sits in a loop
accepting connections; each accepted connection is then farmed off to be
executed by the next available thread. Now, one problem that can occur is that
a large volume of incoming connections arrives faster than the available threads can
process them. In this case, the connections waiting to be processed will
be queued. But we haven't put any bound on the queue, so in
the worst case they will simply continue to pile up. If connections aren't being
processed fast enough because the server is overloaded or malfunctioning, then we're
not going to help matters by piling up
an endless backlog of connections that the server has no realistic chance of
processing. At some point, we need to accept that the server is busy and drop further
connections until things have calmed down.

To achieve this goal, we need to:

use a queue with some maximum capacity;

handle rejected execution: add a piece of code to
deal with what happens when an incoming job won't fit in the queue.
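The second step can be sketched with a RejectedExecutionHandler passed to the
ThreadPoolExecutor constructor. The class and method names below are ours, and the
policy shown (count the rejected job and drop it) is just one possible way of saying
"the server is busy"; the pool parameters are deliberately tiny so the rejection is
easy to provoke:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class RejectionDemo {

    // Submits 'jobs' blocking jobs to a pool with one thread and a
    // two-slot queue; returns how many of them were rejected.
    static int countRejections(int jobs) throws InterruptedException {
        AtomicInteger rejected = new AtomicInteger();
        CountDownLatch finish = new CountDownLatch(1);
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<Runnable>(2),
                // Our handler: when a job won't fit in the queue,
                // count it and drop it ("the server is busy").
                (r, executor) -> rejected.incrementAndGet());
        for (int i = 0; i < jobs; i++) {
            pool.execute(() -> {
                try { finish.await(); } catch (InterruptedException ignored) {}
            });
        }
        finish.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return rejected.get();
    }

    public static void main(String[] args) throws InterruptedException {
        // One job running plus two queued fit; the remaining two are rejected.
        System.out.println(countRejections(5));  // prints 2
    }
}
```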

Specifying a queue with maximum capacity

In our initial example, for convenience, we just used the Executors
helper class to construct a thread pool with default options. However, if we construct a
ThreadPoolExecutor object directly via its constructor, we can specify various additional parameters
including the implementation of BlockingQueue that we wish to
use as the job queue. In this case, we can use an ArrayBlockingQueue
or LinkedBlockingQueue with a maximum capacity. The queue is declared to
take objects of type Runnable, since this is what the thread pool deals with:
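The original listing is not reproduced here, but a construction consistent with the
parameters discussed in the next paragraph might look like the following sketch. The
class name and the queue capacity of 100 are illustrative choices of ours; the core
size of 4, maximum of 10 threads and 20-second idle timeout come from the discussion
below:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedPoolExample {

    public static ThreadPoolExecutor createPool() {
        // Bounded job queue: at most 100 waiting Runnables
        // (the capacity of 100 is an illustrative choice).
        BlockingQueue<Runnable> queue = new ArrayBlockingQueue<Runnable>(100);
        // Core size 4, maximum 10 threads, idle threads killed
        // after 20 seconds.
        return new ThreadPoolExecutor(4, 10, 20L, TimeUnit.SECONDS, queue);
    }

    public static void main(String[] args) {
        ThreadPoolExecutor pool = createPool();
        System.out.println(pool.getQueue().remainingCapacity());  // prints 100
        pool.shutdown();
    }
}
```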

Note that a side effect of specifying our own queue is that we must also specify the
maximum number of threads (10 in this case) and the time-to-live of
idle threads (20 seconds in this case). As the number of simultaneous connections grows,
the thread pool will automatically expand the number of threads up to this maximum.
When the number of connections (and hence threads needed) decreases, the thread pool
will "kill" each spare thread once it has been sitting idle for 20 seconds, until we're
down to our "core" size of 4 threads (the first parameter to the constructor).

If you specify your own job queue, be careful not to post jobs "manually"
to the queue (using the regular queue methods).
If you do so, the thread pool doesn't know that the job has arrived, so it
won't necessarily start a thread to pick it up. Always submit jobs via
ThreadPoolExecutor.execute(), even though it's "your own" queue.
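The pitfall can be demonstrated concretely (the class and method names below are
ours). A job offered directly to the queue of a freshly constructed pool never runs,
because the pool has not yet started any worker thread to take it:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

public class ManualQueuePitfall {

    // Offers a job straight to the pool's queue (bypassing execute())
    // and reports whether it ever ran.
    static boolean ranAfterManualOffer() throws InterruptedException {
        BlockingQueue<Runnable> queue = new ArrayBlockingQueue<>(10);
        ThreadPoolExecutor pool =
                new ThreadPoolExecutor(4, 10, 20L, TimeUnit.SECONDS, queue);
        AtomicBoolean ran = new AtomicBoolean(false);
        // WRONG: a fresh pool has started no worker threads yet, so
        // nothing ever takes this job off the queue.
        queue.offer(() -> ran.set(true));
        Thread.sleep(500);   // give it ample time anyway
        pool.shutdownNow();  // discard the stranded job
        return ran.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(ranAfterManualOffer());  // prints false
    }
}
```

Had the job been submitted via pool.execute() instead, the pool would have started a
core thread to run it.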