16.2.2.8 memcached Thread Support

If you enable the thread implementation when building
memcached from source,
memcached uses multiple threads in addition
to the libevent system to handle requests.

When enabled, the threading implementation operates as follows:

Threading is handled by wrapping functions within the code
with locks, providing basic protection against multiple
threads updating the same global structures at the same time.

Each thread uses its own instance of
libevent (its own event base) to help improve performance.

TCP/IP connections are handled with a single thread
listening on the TCP/IP socket. Each connection is then
distributed to one of the active threads on a simple
round-robin basis. Each connection then operates solely
within this thread while the connection remains open.

For UDP connections, all the threads listen to a single UDP
socket for incoming requests. Threads that are currently
dealing with another request ignore the incoming packet. One
of the remaining, nonbusy threads reads the request and
sends the response. This implementation can lead to
increased CPU load as threads wake from sleep to potentially
process the request.

Using threads can increase performance on servers that have
multiple CPU cores available, as the requests to update the hash
table can be spread between the individual threads. To minimize
overhead from the locking mechanism employed, experiment with
different thread counts (set with the -t command-line option) to
achieve the best performance based on the number and type of
requests within your given workload.