The proxy_cache_lock logic means that when NGINX receives several requests simultaneously for the same uncached resource, it sends only one of them upstream; the rest wait until the first one returns and is inserted into the cache (the wait time is configured with proxy_cache_lock_timeout).
However, if the cache element exists but has expired and NGINX receives several requests simultaneously, all of them are proxied to the upstream.
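For reference, a minimal sketch of the lock behavior for new cache elements (directive names are from the NGINX docs; the zone name, cache path, and upstream address are placeholders I made up for illustration):

    # Hypothetical minimal example of proxy_cache_lock for NEW cache elements.
    proxy_cache_path /var/cache/nginx keys_zone=one:10m;

    server {
        listen 80;
        location / {
            proxy_cache one;
            proxy_cache_lock on;           # only one request populates a new element
            proxy_cache_lock_timeout 15s;  # the others wait up to this long, then pass through
            proxy_pass http://backend;     # placeholder upstream
        }
    }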
Question: how can I configure NGINX to apply the same proxy_cache_lock logic when the cache element exists but has expired?
I checked proxy_cache_use_stale, but it is not what I'm looking for: it returns the expired cache entry while updating, whereas I need the request to wait until the answer returns from the upstream.
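To be concrete, the variant I tried and rejected was roughly the following (a sketch; proxy_cache_background_update requires NGINX 1.11.10+, and the upstream address is a placeholder):

    # With these settings an expired entry is served STALE to clients while a
    # single request refreshes it -- clients do NOT wait for the fresh answer.
    location / {
        proxy_cache one;
        proxy_cache_use_stale updating;    # serve expired entry while one request updates it
        proxy_cache_background_update on;  # (1.11.10+) refresh via a background subrequest
        proxy_pass http://backend;         # placeholder upstream
    }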
This is my current NGINX configuration file:
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    access_log /var/log/nginx/access.log main;

    proxy_cache_path @MY_CACHE_PATH@; # this obviously has the path in my file
    proxy_cache_use_stale updating;
    proxy_cache_lock on;
    proxy_cache_lock_timeout 15s;
    proxy_cache_valid 404 10s;
    proxy_cache_valid 502 0s;
    proxy_cache one;

    server {
        listen 80;
        proxy_read_timeout 15s;
        proxy_connect_timeout 5s;
        include locations.conf;
    }
}
I managed to achieve this behavior by changing the NGINX source code, but I wonder whether it can be achieved through configuration alone.
This is expected behavior according to upstream. As Maxim noted there, the documentation says:
When enabled, only one request at a time will be allowed to populate a new cache element identified according to the proxy_cache_key directive by passing a request to a proxied server.
Now, note "a new cache element": the lock does not apply while updating existing cache elements; that is simply not implemented.
That said, could you share the code changes that work for you?