proxy timeout annotations have no effect on nginx #2007

Closed
eagleusb opened this issue Jan 31, 2018 · 44 comments

@eagleusb

eagleusb commented Jan 31, 2018

NGINX Ingress controller version: 0.10.2 / quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.10.2

Kubernetes version (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.1", GitCommit:"3a1c9449a956b6026f075fa3134ff92f7d55f812", GitTreeState:"clean", BuildDate:"2018-01-04T11:52:23Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.2", GitCommit:"5fa2db2bd46ac79e5e00a4e6ed24191080aa463b", GitTreeState:"clean", BuildDate:"2018-01-18T09:42:01Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration: Bare metal / On premise
  • OS (e.g. from /etc/os-release): Debian GNU/Linux 9 (stretch)
  • Kernel (e.g. uname -a): 4.9.0-5-amd64 #1 SMP Debian 4.9.65-3+deb9u2 (2018-01-04) x86_64 GNU/Linux
  • Install tools: kubeadm
  • Others: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.10.2

What happened:

The NGINX Ingress Controller v0.10.2 configuration doesn't reflect the per-Ingress proxy timeout annotations.

This Ingress definition doesn't work as expected:

---
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: ing-manh-telnet-client
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/proxy‑connect‑timeout: 30
    nginx.ingress.kubernetes.io/proxy‑read‑timeout: 1800
    nginx.ingress.kubernetes.io/proxy‑send‑timeout: 1800
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  tls:
    - hosts:
      - "manh-telnet.ls.domain.io"
      secretName: "tls-certs-domainio"
  rules:
    - host: "manh-telnet.ls.domain.io"
      http:
        paths:
        - path: "/"
          backend:
            serviceName: svc-manh-telnet-client
            servicePort: http

The actual vhost:

            # Custom headers to proxied server

            proxy_connect_timeout                   30s;
            proxy_send_timeout                      180s;
            proxy_read_timeout                      180s;

What you expected to happen:

The expected vhost:

            # Custom headers to proxied server

            proxy_connect_timeout                   30s;
            proxy_send_timeout                      1800s;
            proxy_read_timeout                      1800s;

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know:

@garagatyi

I have the same issue with quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0-beta.17. Instead of the custom timeouts, nginx.conf contains the default ones (60s) in the location block.

@garagatyi

garagatyi commented Feb 2, 2018

My particular test uses this Ingress config (I needed the timeouts only, but added the others just for the test case):

```
- apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    name: che-ingress
    annotations:
      ingress.kubernetes.io/rewrite-target: /
      nginx.ingress.kubernetes.io/proxy-connect-timeout: "3600"
      nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
      nginx.ingress.kubernetes.io/upstream-fail-timeout: "30"
      nginx.ingress.kubernetes.io/add-base-url: "true"
      nginx.ingress.kubernetes.io/affinity: "cookie"
  spec:
    rules:
    - host: 192.168.99.100.nip.io
      http:
        paths:
        - backend:
            serviceName: che-host
            servicePort: 8080
```

Which generates upstream:

    upstream che-che-host-8080 {
        # Load balance algorithm; empty for round robin, which is the default

        least_conn;

        keepalive 32;

        server 172.17.0.6:8080 max_fails=0 fail_timeout=0;

    }

And server:

    server {
        server_name 192.168.99.100.nip.io ;

        listen 80;

        listen [::]:80;

        set $proxy_upstream_name "-";

        location / {

            set $proxy_upstream_name "che-che-host-8080";

            set $namespace      "che";
            set $ingress_name   "che-ingress3";
            set $service_name   "";

            port_in_redirect off;

            client_max_body_size                    "1m";

            proxy_set_header Host                   $best_http_host;

            # Pass the extracted client certificate to the backend

            proxy_set_header ssl-client-cert        "";
            proxy_set_header ssl-client-verify      "";
            proxy_set_header ssl-client-dn          "";

            # Allow websocket connections
            proxy_set_header                        Upgrade           $http_upgrade;
            proxy_set_header                        Connection        $connection_upgrade;

            proxy_set_header X-Real-IP              $the_real_ip;

            proxy_set_header X-Forwarded-For        $the_real_ip;

            proxy_set_header X-Forwarded-Host       $best_http_host;
            proxy_set_header X-Forwarded-Port       $pass_port;
            proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
            proxy_set_header X-Original-URI         $request_uri;
            proxy_set_header X-Scheme               $pass_access_scheme;

            # Pass the original X-Forwarded-For
            proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;

            # mitigate HTTPoxy Vulnerability
            # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
            proxy_set_header Proxy                  "";

            # Custom headers to proxied server

            proxy_connect_timeout                   5s;
            proxy_send_timeout                      60s;
            proxy_read_timeout                      60s;

            proxy_redirect                          off;
            proxy_buffering                         off;
            proxy_buffer_size                       "4k";
            proxy_buffers                           4 "4k";
            proxy_request_buffering                 "on";

            proxy_http_version                      1.1;

            proxy_cookie_domain                     off;
            proxy_cookie_path                       off;

            # In case of errors try the next upstream server before returning an error
            proxy_next_upstream                     error timeout invalid_header http_502 http_503 http_504;

            proxy_pass http://che-che-host-8080;

        }

    }

So it looks like none of the annotations take effect.

@akaGelo

akaGelo commented Feb 3, 2018

You have an incorrect "-" character (a non-ASCII hyphen) in your annotation keys.
In my configuration nginx.ingress.kubernetes.io/proxy-read-timeout is written with regular hyphens and it works on the same version (0.10.2).

(screenshot attached in the original comment, not reproduced here)

@eagleusb
Author

eagleusb commented Feb 3, 2018

I don't get the point; the annotations were tested one by one, and - is an acceptable character in YAML. Can you elaborate a bit more about an incorrect - in my annotations?

@SaaldjorMike
Contributor

The hyphens are not normal hyphens (-), they just look like them.

What @akaGelo is trying to say is that if you use your browser's search and search for -, some of them will not be highlighted. Those are the hyphens which are not the correct character.
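
For illustration, here is the annotations block from the top of this issue retyped with plain ASCII hyphens and the values quoted as strings (a sketch, not a verified config):

metadata:
  name: ing-manh-telnet-client
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # plain ASCII "-" characters; values quoted so they stay strings
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "30"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "1800"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "1800"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"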

@eagleusb
Author

eagleusb commented Feb 4, 2018

Oh, now that seems very obvious! Thanks guys, that was a pretty simple mistake. I'll look into the official documentation to see whether we can improve that by using the same character type, so that copy/paste works.

@garagatyi Maybe you have the same problem? You should also update your ingress revision.

@garagatyi

@gooodmorningopenstack I have an older nginx controller, not wrong characters. The thing is, I can't control the version of the controller, so I have to allow users to redefine the controller annotations (it may not even be an nginx controller at all). But thanks for your suggestion!

@mau21mau

mau21mau commented Feb 17, 2018

I'm having the same issue. Unlike the author's, my ingress doesn't have any special hyphens.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-routes
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/proxy-read-timeout: 1200
    nginx.ingress.kubernetes.io/proxy-send-timeout: 1200
spec:
  tls:
  - secretName: nginxsecret
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: frontend
          servicePort: 8000
      - path: /cron/*
        backend:
          serviceName: esg
          servicePort: 8000

      - path: /task/*
        backend:
          serviceName: esg
          servicePort: 8000

      - path: /api/connections/update/*
        backend:
          serviceName: esg
          servicePort: 8000

      - path: /api/drive/scansheet/*
        backend:
          serviceName: esg
          servicePort: 8000

@chris-mccoy

I ran into this as well. I'm assuming an integer is required for timeouts? I was using "5m" because the Nginx docs seemed to show that I could. I changed it to 300 and things worked great after that.

@aledbf
Member

aledbf commented Feb 21, 2018

Closing. As @akaGelo commented, you have an issue with the -. It is my fault: I am sure you copy/pasted from the docs (a good thing), but in order to make the table readable the character used there was different.
Please check #2111

@aledbf aledbf closed this as completed Feb 21, 2018
@gae123

gae123 commented Mar 21, 2018

I had the same problem and discovered that the following do not work:

nginx.ingress.kubernetes.io/proxy‑read‑timeout: 1800
nginx.ingress.kubernetes.io/proxy‑read‑timeout: 1800s
nginx.ingress.kubernetes.io/proxy‑read‑timeout: "1800s"

What does work is:

nginx.ingress.kubernetes.io/proxy‑read‑timeout: "1800"

@theRemix

@gae123 that's not working for me

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
  namespace: my-app
  annotations:
    nginx.org/websocket-services: "my-app"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "14400"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "14400"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "14400"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: "gce"
spec: ...

still getting timed out after 30s

2018/03/31 16:55:07 Client 0xc420058b80 connected
2018/03/31 16:55:37 error: websocket: close 1006 (abnormal closure): unexpected EOF
2018/03/31 16:55:37 Client 0xc420058b80 disconnected

2018/03/31 16:58:19 Client 0xc420138e80 connected
2018/03/31 16:58:49 error: websocket: close 1006 (abnormal closure): unexpected EOF
2018/03/31 16:58:49 Client 0xc420138e80 disconnected

@aledbf
Member

aledbf commented Mar 31, 2018

kubernetes.io/ingress.class: "gce"

It seems you are using the GCE ingress controller. These annotations only work with nginx.

@Tim-Schwalbe

Tim-Schwalbe commented Apr 3, 2018

This is not working for me. Does anyone see an issue?
I am using the nginx helm chart: nginx-ingress-0.8.9

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: zalenium
  namespace: zalenium
  annotations:
    kubernetes.io/ingress.class: nginx
    ingress.kubernetes.io/auth-type: basic
    ingress.kubernetes.io/auth-secret: zalenium-basic-auth
    ingress.kubernetes.io/auth-realm: "Authentication Required"
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-methods: "*"
    nginx.ingress.kubernetes.io/cors-allow-origin: "*"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: 3600
    nginx.ingress.kubernetes.io/proxy-send-timeout: 3600
    nginx.ingress.kubernetes.io/proxy-read-timeout: 3600
spec:
  rules:
  - host: "test.whatever"
    http:
      paths:
      - path: /
        backend:
          serviceName: zalenium
          servicePort: 4444

@pdeveltere

+1

nginx.ingress.kubernetes.io/proxy-connect-timeout with a number value is not working for me either

@worldsayshi

worldsayshi commented Jun 14, 2018

After way too much trial and error and frustration, some tips that might work for others who end up here:

  • nginx.ingress.kubernetes.io/proxy-connect-timeout did not work for me. Nothing changed in the nginx configuration in the ingress controller and no errors were shown. Removing the initial nginx. prefix did work, ending up with these annotations:
    ingress.kubernetes.io/proxy-connect-timeout: "600"
    ingress.kubernetes.io/proxy-read-timeout: "600"
    ingress.kubernetes.io/proxy-send-timeout: "600"
    ingress.kubernetes.io/send-timeout: "600"
  • If you want to inspect what the end result, the nginx.conf, looks like, you can get it from the ingress controller pod. To access the ingress controller pod with kubectl you need to specify the namespace when running commands, since the controller doesn't live in the default namespace. Like this:
$ kubectl get pods --all-namespaces
...
$ kubectl -n kube-system exec nginx-ingress-controller-138430828-pqb7q cat /etc/nginx/nginx.conf | tee nginx.test-ingress-export.conf

@parml

parml commented Jun 27, 2018

@Tim-Schwalbe I am using the helm chart as well, although a different version. It only worked with ConfigMaps.

Here are the steps that helped me. You need the name of the pod running the controller.
Say nginx-ingress-controller-1234abcd

Make sure you're running images from quay.io:
$ kubectl describe pod nginx-ingress-controller-1234abcd | grep Image:
quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.15.0
If it doesn't start with quay.io, the following steps may not be relevant.

Determine the name of the ConfigMap it reads all those properties from:
$ kubectl describe pod nginx-ingress-controller-1234abcd | grep configmap=
--configmap=default/nginx-ingress-controller

That means it reads from a ConfigMap named nginx-ingress-controller in the default namespace. Append such a ConfigMap to your Ingress YAML file:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-controller
data:
  proxy-read-timeout: "234"
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: lb-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: app-service
          servicePort: 8080

Properties you can add to the ConfigMap are compiled in the table here: https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/configmap.md

The result in /etc/nginx/nginx.conf:
proxy_read_timeout 234s;

I hope that was helpful.
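
As a side note (not part of the steps above): if you installed the controller with the Helm chart, the same ConfigMap entries can usually be set through chart values instead of editing the ConfigMap by hand. A minimal values.yaml sketch, assuming the chart renders controller.config into the controller's ConfigMap (as the nginx-ingress / ingress-nginx charts do):

controller:
  config:
    # these keys end up in the controller's ConfigMap, same as proxy-read-timeout above
    proxy-read-timeout: "234"
    proxy-send-timeout: "234"

Then run helm upgrade with these values and check /etc/nginx/nginx.conf again.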

@Tim-Schwalbe

Tim-Schwalbe commented Jul 12, 2018

Hi,
is this working for gRPC and HTTP/2 with this image of the ingress: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.14.0 and higher?

This is a different nginx controller, and its documentation says the setting also applies the timeout to gRPC:
https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/customization

But here I cannot find any mention of gRPC:
https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md

Is this just not implemented?

@aledbf
Member

aledbf commented Jul 12, 2018

@yivo

yivo commented Dec 28, 2018

Doesn't work for me.

(screenshot of the Ingress annotations; the keys begin with ingress.kubernetes.io/ instead of nginx.ingress.kubernetes.io/)

@arianitu

arianitu commented Jan 3, 2019

I had to use the string version instead of the number version, any idea why this is?

This breaks:

nginx.ingress.kubernetes.io/proxy-read-timeout: 300

This works:

nginx.ingress.kubernetes.io/proxy-read-timeout: "300"

@arianitu

arianitu commented Jan 3, 2019

@yivo you're missing the beginning of the annotation; you need nginx. in front of ingress.

So instead of

ingress.kubernetes.io/proxy-read-timeout

you should have

nginx.ingress.kubernetes.io/proxy-read-timeout

@aledbf
Member

aledbf commented Jan 3, 2019

I had to use the string version instead of the number version, any idea why this is?

From the first tip in the docs https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/

Annotation keys and values can only be strings. Other types, such as boolean or numeric values must be quoted, i.e. "true", "false", "100".
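
In YAML terms the difference is just quoting; a minimal sketch (not tied to any particular manifest in this thread):

    # unquoted: parsed as a YAML integer, which is not a valid annotation value (rejected or dropped)
    nginx.ingress.kubernetes.io/proxy-read-timeout: 300
    # quoted: parsed as a string, which is what annotation values must be
    nginx.ingress.kubernetes.io/proxy-read-timeout: "300"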

@arianitu

arianitu commented Jan 3, 2019

@aledbf Oh, how weird is that. Thanks.

@doubleyewdee

Bumping this: we hit a worse version of this problem when moving from 1.12.1 to 1.12.4. Apparently now, if you have these invalid values (not specified as strings), all of your annotations are discarded. It seems like kubectl apply with invalid annotations shouldn't silently accept and discard these values.

@dadrian

dadrian commented Apr 22, 2019

I had this problem on 0.18; upgrading to the latest version fixed it, using "normal" annotations (nginx.ingress.*).

@manikesh

What is the approved solution for this? For me it's the same: I'm getting CLIENT_DISCONNECTED at exactly 60 seconds. I have tried all the options mentioned in this thread but none of them work. Any solid clue to get it fixed?

@jypma

jypma commented Sep 2, 2019

To turn this around: is anyone actually able to have something communicate across a kubernetes cluster boundary with >60s idle time between packets? Perhaps using something other than nginx?

@aakashrshah

After adding

nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
it throws a 502

Proxy Error
The proxy server received an invalid response from an upstream server.
The proxy server could not handle the request

Reason: Error reading from remote server

I'm using a reverse proxy to connect to an API on an Apache server image hosted on a K8s cluster.

@dannyburke1

Any updates on when this might be fixed, or a version it is patched in? I've also tried everything, running version 0.20.0, and am having no luck.

@kvaps
Member

kvaps commented Jun 23, 2020

@dannyburke1 the solution described in #2007 (comment) is working fine in the current release

@dannyburke1

Hey @kvaps thanks for your response.

When copying/pasting that (in vim) and applying it, it says that it can't be applied due to the hyphens being used:

name part must consist of alphanumeric characters, '-', '_' or '.', and must start and end with an alphanumeric character (e.g. 'MyName', or 'my.name', or '123-abc', regex used for validation is '([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9]')

If I replace the hyphens then I can apply it, but unfortunately it isn't being propagated down to the nginx.conf file.

@jontro

jontro commented Jun 24, 2020

I'm seeing nginx connection resets at the exact 60s mark with the ingress-nginx controller.

Using the following annotations on the gRPC service:

    nginx.ingress.kubernetes.io/server-snippet: "keepalive_timeout 600s; grpc_read_timeout 3600s; grpc_send_timeout 3600s;"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "1800"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "1800"

nginx-ingress ConfigMap:

  keep-alive: '3600'
  upstream-keepalive-timeout: '3600'

I'm setting up a bidirectional gRPC stream.

It looks like nginx is doing the connection reset, judging by the nlb metrics.

@dannyburke1

I'm not sure of the exact fix here, but redeploying the ingress and updating nginx seems to have sorted it for me.

@jontro

jontro commented Jul 10, 2020

Adding client_body_timeout was the key fix for me here. This needs to be put in the documentation somewhere, since it was hard to find:

    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
    nginx.ingress.kubernetes.io/server-snippet: "keepalive_timeout 3600s; grpc_read_timeout 3600s; grpc_send_timeout 3600s;client_body_timeout 3600s;"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "1800"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "1800"

@fransoaardi

@jontro
This definitely should be included on the front page of the documentation.
This helped me (and saved me) a lot. Thanks.

@bzon

bzon commented Nov 29, 2020

How do I set the timeouts in milliseconds?

@VanitySoft

@bzon it is already in milli format

@simonracz

I assume many people had the same issue that I had.

I copy/pasted that line from somewhere, probably from the docs. Now, that DASH is NOT a correct dash. My editor draws them the same way, but my terminal doesn't. That's how I noticed.

@seyal84

seyal84 commented Feb 25, 2022

I'm really surprised to see that everyone is proposing solutions but there is not a single final solution proposed and used by everyone. Why is it such a mess with nginx? The 504 error is bugging us a lot as well.

@gklasen

gklasen commented Jun 16, 2022

@seyal84 did you figure out your issue? I am in the exact same situation atm.

@zolzaya

zolzaya commented Jul 3, 2023

any updates???

@rufreakde

rufreakde commented Aug 9, 2023

Okay so this:

    nginx.ingress.kubernetes.io/proxy-connect-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-next-upstream-timeout: "600"

works, but only after restarting the complete nginx controller deployment! It is somehow not picking up the Ingress change! Seems like a bug to me.

I used k9s to shell into the pod and executed:

cat /etc/nginx/nginx.conf | grep proxy_.*_timeout

@ozbillwang

ozbillwang commented Nov 21, 2023

cat /etc/nginx/nginx.conf | grep proxy_.*_timeout

@rufreakde

What do you mean by "after restarting the complete nginx controller deployment"?

Do you mean I have to delete the ingress-nginx helm chart and helm install it again? But that will remove the existing load balancer and re-create a new one, which will impact all applications using it.

helm list
NAME         	NAMESPACE    	REVISION	UPDATED                              	STATUS  	CHART              	APP VERSION
ingress-nginx	ingress-basic	1       	2023-08-10 10:35:18.235538 +1100 AEDT	deployed	ingress-nginx-4.4.2	1.5.1

I tried to upgrade the helm release and restart its pod, but mine still shows the default values:

kk exec -ti ingress-nginx-controller-7d5fb757db-f66kp -- bash
bash-5.1$ cat /etc/nginx/nginx.conf | grep proxy_.*_timeout|sort -u
			proxy_connect_timeout                   5s;
			proxy_next_upstream_timeout             0;
			proxy_read_timeout                      60s;
			proxy_send_timeout                      60s;

And we need different settings for different applications, but they are currently using the same IngressClass.
