We have an environment with k8s + Rancher 2 (3 nodes) and an external nginx that only forwards connections to the k8s cluster, set up according to this documentation: https://rancher.com/docs/rancher/v2.x/en/installation/k8s-install/
In one application running in this environment, a POST that takes around 3 to 4 minutes to complete is interrupted after 60 seconds with the message "504 Gateway Time-Out". I've tried to change the timeout with the annotations below, but to no avail:
Ingress of the application:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api-loteamento-spring-hml
  annotations:
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
    nginx.ingress.kubernetes.io/server-snippet: "keepalive_timeout 3600s;client_body_timeout 3600s;client_header_timeout 3600s;"
  labels:
    run: api-loteamento-spring-hml
spec:
  rules:
  - host: hml-api-loteamento-sp.gruposfa.bla.bla
    http:
      paths:
      - backend:
          serviceName: api-loteamento-spring-hml
          servicePort: 80
I have also tried creating a global ConfigMap with the parameters below, also without success:
[rancher@srv-rcnode01 ssl]$ kubectl get pods -n ingress-nginx
NAME READY STATUS RESTARTS AGE
default-http-backend-67cf578fc4-lcz82 1/1 Running 1 38d
nginx-ingress-controller-7jcng 1/1 Running 11 225d
nginx-ingress-controller-8zxbf 1/1 Running 8 225d
nginx-ingress-controller-l527g 1/1 Running 8 225d
[rancher@srv-rcnode01 ssl]$ kubectl get pod nginx-ingress-controller-8zxbf -n ingress-nginx -o yaml |grep configmap
- --configmap=$(POD_NAMESPACE)/nginx-configuration
- --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
- --udp-services-configmap=$(POD_NAMESPACE)/udp-services
[rancher@srv-rcnode01 ~]$ cat global-configmap.yaml
apiVersion: v1
data:
  client-body-timeout: "360"
  client-header-timeout: "360"
  proxy-connect-timeout: "360"
  proxy-read-timeout: "360"
  proxy-send-timeout: "360"
kind: ConfigMap
metadata:
  name: nginx-configuration
And applied it with:
kubectl apply -f global-configmap.yaml
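(Since the controller is started with --configmap=$(POD_NAMESPACE)/nginx-configuration, the ConfigMap needs to exist in the controller's namespace; assuming that namespace is ingress-nginx, as in the pod listing above, it can be checked with:)
kubectl -n ingress-nginx get configmap nginx-configuration -o yaml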
Exec'ing into the ingress pods and checking nginx.conf, I can see that the annotation values are applied inside the application's server block:
[rancher@srv-rcnode01 ~]$ kubectl -n ingress-nginx exec --stdin --tty nginx-ingress-controller-8zxbf -- /bin/bash
And viewed nginx.conf:
keepalive_timeout 3600s;client_body_timeout 3600s;client_header_timeout 3600s;
# Custom headers to proxied server
proxy_connect_timeout 3600s;
proxy_send_timeout 3600s;
proxy_read_timeout 3600s;
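(For reference, the same values can also be checked without an interactive shell, assuming grep is available in the controller image; the pod name is just the one from the listing above:)
kubectl -n ingress-nginx exec nginx-ingress-controller-8zxbf -- grep -E "proxy_(connect|read|send)_timeout" /etc/nginx/nginx.conf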
What I also noticed is that, at the beginning of the nginx.conf file, the "server" configuration block still has the default 60-second timeout values:
# Custom headers to proxied server
proxy_connect_timeout 5s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
My question is whether these default values could be causing the problem, and how I can change them in Kubernetes.
Has anyone been through this or something similar and can point me in the right direction?
Thanks!!
What you want to achieve is described in the NGINX Ingress documentation under Custom Configuration, which recommends using a ConfigMap:
$ cat configmap.yaml
apiVersion: v1
data:
  proxy-connect-timeout: "10"
  proxy-read-timeout: "120"
  proxy-send-timeout: "120"
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/docs/examples/customization/custom-configuration/configmap.yaml \
| kubectl apply -f -
If the ConfigMap is updated, NGINX will be reloaded with the new configuration.
After that, you should see entries like the following in the Ingress controller pod's logs:
8 controller.go:137] Configuration changes detected, backend reload required.
8 controller.go:153] Backend successfully reloaded.
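Those entries can be found with something like the following (the pod name is just an example taken from your listing):
kubectl -n ingress-nginx logs nginx-ingress-controller-8zxbf | grep -i reload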
However, please keep in mind that very high timeout values are not recommended for Nginx. This is noted in the Nginx Ingress documentation for proxy-connect-timeout:
Sets the timeout for establishing a connection with a proxied server. It should be noted that this timeout cannot usually exceed 75 seconds.
Additional information:
1. Sometimes, when the Nginx Ingress controller cannot load a new configuration, you will find a log entry like the one below:
controller.go:149] Unexpected failure reloading the backend: Invalid PID number "" in "/tmp/nginx/pid"
To fix it, you just need to restart the Ingress controller pod.
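For example (assuming the controller pods are managed by a DaemonSet or Deployment, so the deleted pod is recreated automatically):
kubectl -n ingress-nginx delete pod nginx-ingress-controller-8zxbf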
2. If you don't like the default settings, you can create a custom configuration template (the controller's nginx.conf template is written in Go's template language).
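A rough sketch of how that is wired in, based on the ingress-nginx custom-template example (the ConfigMap and volume names below are illustrative): put your modified nginx.tmpl into a ConfigMap,
kubectl -n ingress-nginx create configmap nginx-template --from-file=nginx.tmpl
then mount it over the controller's template directory, under the controller container in the DaemonSet/Deployment spec:
  volumeMounts:
  - mountPath: /etc/nginx/template
    name: nginx-template-volume
    readOnly: true
and at the pod spec level:
  volumes:
  - name: nginx-template-volume
    configMap:
      name: nginx-template
      items:
      - key: nginx.tmpl
        path: nginx.tmpl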
To change the timeouts only for this Ingress, and not for the entire controller, use annotations as in the example below.
Source: Advanced Configuration with Annotations
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-loteamento-spring-hml
  annotations:
    # for NGINX Inc.'s nginx-ingress controller
    nginx.org/proxy-connect-timeout: "3600s"
    nginx.org/proxy-read-timeout: "3600s"
    nginx.org/proxy-send-timeout: "3600s"
    # for the default ingress-nginx controller
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
  labels:
    run: api-loteamento-spring-hml
spec:
  ingressClassName: yourIngressClass
  rules:
  - host: hml-api-loteamento-sp.gruposfa.bla.bla
    http:
      paths:
      - path: /
        pathType: ImplementationSpecific
        backend:
          service:
            name: api-loteamento-spring-hml
            port:
              number: 80
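If you are not sure what to put in ingressClassName, the available classes on the cluster can be listed with (on clusters that support networking.k8s.io/v1):
kubectl get ingressclass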