Apologies in advance for the newbie question: I am very new to nginx, but I would like to extend some functionality in an existing module.
I am currently using ngx_http_enhanced_memcached_module, and it meets most of my needs.
However, I currently need to populate the cache manually, via PUT requests.
My nginx configuration looks as follows:
```
upstream memcached_upstream {
    server 127.0.0.1:11211;
    keepalive 20;
}

upstream backendstorage_upstream {
    server 10.0.0.10:9000;
}

server {
    location / {
        error_page 404 502 504 = @fallback;
        error_page 405 =200 $uri;
        set $enhanced_memcached_key "$request_uri";
        ...
        enhanced_memcached_pass memcached_upstream;
    }

    location @fallback {
        proxy_pass http://backendstorage_upstream;
    }
}
```
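For reference, the manual population I mention above goes through a location like the following (a sketch; `enhanced_memcached_allow_put` is taken from the module's README, and the `/cache_put` location name is made up):

```
# Sketch of the manual-population endpoint (location name is hypothetical).
location /cache_put {
    set $enhanced_memcached_key "$request_uri";
    # allow storing data with HTTP PUT (directive from the module's README)
    enhanced_memcached_allow_put on;
    enhanced_memcached_pass memcached_upstream;
}
```

The cache is then filled with something like `curl -X PUT --data-binary @file http://host/cache_put/some/key`.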
I started thinking of embedding the STORE functionality inside the module itself. The high-level idea: whenever we get a cache miss, fetch the response from `backendstorage_upstream` via `proxy_pass`, and then send that response back to `memcached_upstream` via a PUT.
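In configuration terms, the idea would reduce the setup above to something like this (purely hypothetical; `enhanced_memcached_proxy_pass` does not exist and is only my proposed name):

```
location / {
    set $enhanced_memcached_key "$request_uri";
    enhanced_memcached_pass memcached_upstream;
    # hypothetical: on a cache miss, fetch from the backend and
    # transparently PUT the response back into memcached
    enhanced_memcached_proxy_pass http://backendstorage_upstream;
}
```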
This sounds really simple; however, I have two questions based on my understanding of nginx internals:
1. `proxy_pass` installs a content handler that runs in NGX_HTTP_CONTENT_PHASE, and according to the documentation only one such handler can be active in a location.
The only way I can see to hijack the response from `proxy_pass` is to wrap it in a custom directive, e.g. `enhanced_memcached_proxy_pass`.
In that custom handler, the original code from the proxy module would run first, and then, at one of the final stages, a custom callback (e.g. an input filter) would access the response from the backend and send its content to memcached.
Given the size of the proxy module and its callback-based implementation, my first thought is that I could easily end up merging the two modules.
Is there any other, easier way to access the response produced by a location?
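To make question 1 concrete, the hook I have in mind looks roughly like the standard nginx body-filter chain (C-flavoured pseudocode; `accumulate_for_memcached` is a hypothetical helper):

```
/* pseudocode: a body filter that can observe the proxied response */
static ngx_http_output_body_filter_pt  ngx_http_next_body_filter;

static ngx_int_t
my_store_body_filter(ngx_http_request_t *r, ngx_chain_t *in)
{
    /* copy/accumulate the buffers in `in` for a later PUT to memcached */
    accumulate_for_memcached(r, in);          /* hypothetical helper */

    /* pass the response through to the client unchanged */
    return ngx_http_next_body_filter(r, in);
}

/* registered in the module's postconfiguration callback: */
ngx_http_next_body_filter = ngx_http_top_body_filter;
ngx_http_top_body_filter  = my_store_body_filter;
```

But I am not sure whether a separate filter module is preferable to wrapping `proxy_pass`.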
2. If I do need to implement something like `enhanced_memcached_proxy_pass`, is there a recommended (asynchronous) way to send the PUT to an external service?
My understanding is that any time spent on blocking operations inside the event pipeline stalls the worker and can delay the client (of course, here I would like to send the response to the client first and only then store it).
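For example, I was wondering whether a background subrequest is the intended tool here (C-flavoured pseudocode; `/internal_put` is a made-up internal location, and NGX_HTTP_SUBREQUEST_BACKGROUND exists only in nginx >= 1.13.10 as far as I can tell):

```
/* pseudocode: fire-and-forget subrequest to an internal PUT location */
ngx_http_request_t  *sr;
ngx_str_t            uri = ngx_string("/internal_put");

/* the background flag detaches the subrequest from the client response,
 * so the client should not be delayed by the store operation */
ngx_http_subrequest(r, &uri, NULL, &sr, NULL,
                    NGX_HTTP_SUBREQUEST_BACKGROUND);
```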
Thank you for any help!
Maciej