What determines a practical limit on the number of server blocks in Nginx configuration?

I am trying to figure out whether an Nginx configuration consisting solely of simple server blocks is feasible. Each block serves one subdomain and redirects it to another URL. Of course, the maximum in a specific context depends on the parameters, so I am more interested in the factors that determine a practical limit.

For example, is the extra cost of an additional server block (in terms of memory overhead) always constant? Is the cost of dispatching a request to a specific server block constant, or does it grow with the number of server blocks in the configuration?

How many of these can I have, say, per gigabyte of memory per core, or other relevant parameters?
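For concreteness, each block would look roughly like this (the names and target are placeholders):

```nginx
# One of many near-identical blocks: each serves a single
# subdomain and redirects it to some other URL.
server {
    listen 80;
    server_name foo.example.com;
    return 301 https://target.example.net/foo;
}
```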

Thank you.

My answer:

The biggest factor affecting how many `server_name` entries you can reasonably have is your CPU’s cache size (and speed, of course).

First, nginx stores all of the `server_name` values that you define in up to three hash tables (one for exact names, one for wildcard names starting with an asterisk, and one for wildcard names ending with an asterisk) per IP/port pair that nginx listens on. These tables are sized in multiples of the CPU’s cache line size, so that nginx can match the server name for an incoming request entirely from CPU cache, without having to go to the (relatively) much slower RAM at all.

Out of the box, nginx allows these hash tables to hold up to 512 entries (`server_names_hash_max_size`), in buckets whose size defaults to the CPU’s cache line size, typically 32 or 64 bytes (`server_names_hash_bucket_size`). With 32-byte buckets that comes to at most 16 KiB, which easily fits into a CPU’s L1 cache, or at least the L2 cache. And even if you need to expand it, it ought to still be small enough to fit in the cache most of the time.
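If nginx refuses to start because it cannot build the hash, these two directives in the `http` block control the table’s dimensions (the values below are illustrative, not recommendations):

```nginx
http {
    # Maximum number of hash-table entries (default: 512).
    server_names_hash_max_size 1024;

    # Bucket size; defaults to the CPU cache line size,
    # typically 32 or 64 bytes. If you raise it, keep it
    # a multiple of the cache line size.
    server_names_hash_bucket_size 64;

    # ... server blocks ...
}
```

Prefer increasing `max_size` first; a larger bucket size makes each lookup scan more memory.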

This strategy suggests that you should strive to keep the list of names to a minimum.

For instance, even though matching a wildcard entry such as .example.com is “slower” per lookup, it may be faster on average than matching against several hundred explicitly defined subdomains of example.com.
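As a sketch, hundreds of explicit per-subdomain blocks could collapse into one wildcard block plus a `map` lookup on `$host` (the names and targets here are assumptions):

```nginx
# Map each requested hostname to its redirect target.
# map performs its own hash lookup, and "default"
# catches any subdomain not listed.
map $host $redirect_target {
    default            https://www.example.com/;
    foo.example.com    https://target-one.example.net/;
    bar.example.com    https://target-two.example.net/;
}

server {
    listen 80;
    # Matches example.com and any subdomain of it.
    server_name .example.com;
    return 301 $redirect_target;
}
```

The `map` is itself backed by a hash table (tunable via `map_hash_max_size` and `map_hash_bucket_size`), but this keeps the server-name hash down to a single entry per listen socket.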