nginx cache mounted on tmpfs getting full

I am using nginx as a webserver, version nginx/1.10.2. For faster access
we have mounted the nginx cache for different applications on RAM (tmpfs).
But even after leaving a generous buffer, the cache fills up from time to
time. A few details for your reference: the maximum size given in the
nginx conf file is 500G, while at mount time we gave the tmpfs 600G of
space, i.e. a 100G buffer. Still it fills up to 100%.
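
For reference, here is a minimal sketch of this kind of setup. The paths,
zone name and size, and the inactive time are illustrative assumptions;
only the 500G max_size and the 600G tmpfs size come from the description
above:

    # /etc/fstab entry: a 600G tmpfs mount for the cache (path is illustrative)
    # tmpfs  /var/cache/nginx  tmpfs  size=600g  0  0

    # nginx.conf: cap the cache at 500G, leaving a 100G buffer on the mount
    proxy_cache_path /var/cache/nginx/media levels=1:2
                     keys_zone=media_cache:512m
                     max_size=500g inactive=7d use_temp_path=off;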

On 06/01/2017 05:40, omkar_jadhav_20 wrote:
> Hi,
>
> I am using nginx as a webserver, version nginx/1.10.2. For faster
> access we have mounted the nginx cache for different applications on
> RAM (tmpfs). But even after leaving a generous buffer, the cache
> fills up from time to time. A few details for your reference: the
> maximum size given in the nginx conf file is 500G, while at mount
> time we gave the tmpfs 600G of space, i.e. a 100G buffer. Still it
> fills up to 100%.

Do you actually have enough RAM / swap to cover these requirements? It
looks like you'd need about 800G of RAM/swap space to make this work?

If you do then I don't know enough about how nginx works to advise,
sorry :-)

Yes, I have 1.5T of RAM and around 200G of swap space. We have observed
that swap is not being used in this case. But it seems that either the OS
is not reclaiming memory fast enough, or nginx is not keeping the cache
within the limit set in its configuration file.
Could you please suggest a possible solution for this issue? The cache
keeps growing past max_size even though it is set in the nginx config.

Can someone please respond and suggest the best way to manage the cache
in a heavily utilized, cache-dependent environment? As the number of
nginx requests increases and the cache hit ratio becomes significant, the
tmpfs cache grows past the max_size limit set in nginx.conf and fills the
entire mount. This in turn increases the server load considerably.

I'm curious: why are you using tmpfs for your cache store? With fast local storage being so cheap, why not devote a few TB to your cache?

When I look at the TechEmpower benchmarks I see that OpenResty (an nginx build that comes with lots of Lua value-add) can serve 440,000 JSON responses per second with 3 ms latency. That's on five-year-old E7-4850 Westmere hardware at 2.0 GHz, with 10G NICs. The minimum latency to get a packet from nginx through the kernel stack and onto the wire is about 4 µs for a NIC of that vintage, dropping to 2 µs with OpenOnload (Solarflare's kernel bypass).

As ippolitiv suggests, your cache already has room for 1.6M items, which is a huge amount. What kind of hit rate are you seeing for your cache?

One way to manage cache size is to cache only popular items: if you set proxy_cache_min_uses to 4, then only objects that are requested at least four times get cached, which will increase your hit rate and reduce the space needed for the cache. See the sketch below.
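
A minimal sketch of that directive in context; the location, upstream,
and cache zone names are illustrative assumptions, not taken from this
thread:

    location /media/ {
        proxy_pass  http://backend;   # hypothetical upstream
        proxy_cache media_cache;      # zone defined by a proxy_cache_path directive

        # cache an object only once it has been requested 4 times
        proxy_cache_min_uses 4;
    }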

So that's a total of 1TB of memory allocated to caches. Do you have that
much spare on your server? Linux will allocate *up to* the specified
amount *as long as it's spare*. It would be worth checking your server to
ensure that 1TB of memory really is spare before blaming nginx.

We have used tmpfs to mount the cache of a frequently used media
application. We have also mounted the caches of the remaining applications
on disk, where we are facing the same issue of max_size being breached.
Please find the details below for your reference:

Just to be pedantic: it's counterintuitive, but in general tmpfs is not faster than local storage for the use case of caching static content for web servers.
Sounds weird? Here's why:

tmpfs is a file system view of all of the system's virtual memory - that is, both physical memory and swap space.

If you use local storage for your cache store, then every time a file is requested for the first time it gets read from local storage and written to the page cache; subsequent requests are served from the page cache. The OS manages memory allocation so as to maximize the size of the page cache (subject to config settings such as swappiness, min_free_kbytes, and the watermark settings for kswapd). The fact that you are using a cache suggests that you have expensive back-end queries that your cache sits in front of, so the cost of reading a cached file from disk << the cost of recreating dynamic content. With real-world web systems the probability distribution of requests for different resources is never uniform; there are clusters of popular resources.

Any popular requests that get served from your disk cache will in fact be served from the page cache (memory), so there is no reason to interfere with the OS. In general it's likely that you have less physical memory than disk space, so using tmpfs for your nginx cache could mean that you're serving cached files from swap space.
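
To make that concrete, the disk-backed alternative is simply pointing
proxy_cache_path at local storage and letting the page cache keep the hot
objects in memory. The path and sizes below are illustrative assumptions:

    # cache on local disk; frequently hit files are served from the OS page cache
    proxy_cache_path /data/nginx_cache levels=1:2
                     keys_zone=media_cache:512m
                     max_size=2000g inactive=30d use_temp_path=off;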

Allowing the OS to do its job (with the page cache) means that you already get tmpfs-like latencies for the popular resources, which is what you want in order to maximize performance across your entire site. This is another example of why, with questions of web performance, it's usually better to test theories than to rely on logical reasoning.