proxy-read-timeout annotations getting ignored after v1.11.1 upgrade from 1.10.1 #11850
This issue is currently awaiting triage. If ingress-nginx contributors determine this is a relevant issue, they will accept it by applying the appropriate triage label. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
The information you have provided is incomplete, as most of the important questions from the template are unanswered. The little information you did provide cannot be used to analyze the problem, since no reader would be able to recreate your environment or the tests you performed. (The information is also not formatted in markdown.) You can help by answering the questions in the new bug-report template, and by adding complete, detailed, precise, real-use-as-is information from your tests, such as relevant command output. Once triaging shows the bug details in the data available here, we can re-apply the bug label. You can also check the changelog and release notes for relevance to your use case.

/remove-kind bug
This PR is related to the gRPC timeouts: #11258
Hi @varunthakur2480, could you try to get …
This seems to be sorted after we changed the annotation values to quoted strings. See the tip in the docs:

> !!! tip
> Annotation keys and values can only be strings. Other types, such as boolean or numeric values, must be quoted, i.e. "true", "false", "100".

Still worth highlighting that old annotations without quotes and with the "s" suffix still work in older versions.
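Following that tip, a corrected form of the timeout annotations would quote the values as plain numbers and drop the `s` suffix. This is a sketch based on the values in this issue; that v1.11.1's annotation validation rejects the unquoted `600s` form is the working assumption here, not something stated outright in the thread:

```yaml
nginx.ingress.kubernetes.io/proxy-connect-timeout: "600"
nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
nginx.ingress.kubernetes.io/proxy-next-upstream-timeout: "600"
```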
Closing the issue, as it seems resolved.
@longwuyuan: Closing this issue.
What happened:
```
2024/08/22 10:51:43 [error] 41#41: *3030 upstream timed out (110: Operation timed out) while reading response header from upstream, client: 10.124.70.10, server: xxxx-gateway-xxxx.l7.dev2.xx.gcp.xxx.net, request: "POST /rbs.gbm.xxx.web_service_core.gateway.structured_document.MdxStructuredDocumentService/QueryViewPaginated HTTP/2.0", upstream: "grpc://100.71.1.170:5000", host: "xxx-gateway-xxx.l7.dev2.xxx.gcp.xxx.net:443"
```

Application logs - https://sxxxxxl/KkgESm6Lp3cdbWJDA

```
Retrying client request due to: [Status(StatusCode="Unknown", Detail="Stream removed", DebugException="Grpc.Core.Internal.CoreErrorDetailException: {"created":"@1724323903.302000000","description":"Error received from peer ipv4:10.124.66.63:443","file":"......\src\core\lib\surface\call.cc","file_line":953,"grpc_message":"Stream removed","grpc_status":2}")]. Retry number [1/10]
```
What you expected to happen:
The client should not have timed out. It looks like something changed between 1.10.1 and v1.11.1, after which the client-side annotations are no longer honoured:
```yaml
nginx.ingress.kubernetes.io/ssl-redirect: "true"
nginx.ingress.kubernetes.io/proxy-body-size: "500m"
nginx.ingress.kubernetes.io/proxy-buffer-size: "16k"
nginx.ingress.kubernetes.io/proxy-connect-timeout: 600s
nginx.ingress.kubernetes.io/proxy-read-timeout: 600s
```
NGINX Ingress controller version (exec into the pod and run `nginx-ingress-controller --version`): 1.11.1
Kubernetes version (use `kubectl version`): 1.28

Environment:
Cloud provider or hardware configuration: GCP
OS (e.g. from /etc/os-release): Container-Optimized OS
Kernel (e.g. `uname -a`): 6.1.85

Install tools:
Please mention how/where the cluster was created (kubeadm/kops/minikube/kind etc.): Terraform + kustomization + helm

Basic cluster related info:
```
$ kubectl version
Client Version: v1.29.3
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.29.7-gke.100800
```
```
$ kubectl get nodes -o wide
gke-xxx-xxx-xxx-6-n2-16-2023071905-0e2b7d00-y2lc   Ready   2d22h   v1.29.7-gke.1008000   10.124.64.160   Container-Optimized OS from Google   6.1.85+   containerd://1.7.15
```
How was the ingress-nginx-controller installed:
An additional ConfigMap has been added to address "Alpine 3.17 images causes SSL Error 'unsafe legacy renegotiation disabled'" (dotnet/dotnet-docker#4332), and the config is up to date with Alpine 3.20.
Current State of the controller:
```
$ kubectl describe ingressclasses
Name:         nginx
Labels:       app.kubernetes.io/component=controller
              app.kubernetes.io/instance=ingress-nginx
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=ingress-nginx
              app.kubernetes.io/part-of=ingress-nginx
              app.kubernetes.io/version=1.10.1
              helm.sh/chart=ingress-nginx-4.10.1
              kustomize.toolkit.fluxcd.io/name=gke-cluster-services
              kustomize.toolkit.fluxcd.io/namespace=ddd-flux-system
Annotations:  ingressclass.kubernetes.io/is-default-class: true
              nwm.io/contact: *[email protected]
Controller:   k8s.io/ingress-nginx
Events:
```
Current state of ingress object, if applicable:
```
kubectl -n <appnamespace> get all,ing -o wide
kubectl -n <appnamespace> describe ing <ingressname>
```
Others: `kubectl describe ...` of any custom configmap(s) created and in use

How to reproduce this issue:
Deploy an ingress with the following annotations:

```yaml
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    meta.helm.sh/release-name: mdx
    meta.helm.sh/release-namespace: dev2-e2-tst1-mdx-mdx-demo2
    nginx.ingress.kubernetes.io/backend-protocol: GRPC
    nginx.ingress.kubernetes.io/limit-connections: "1000"
    nginx.ingress.kubernetes.io/proxy-body-size: 500m
    nginx.ingress.kubernetes.io/proxy-buffer-size: 16k
    nginx.ingress.kubernetes.io/proxy-connect-timeout: 600s
    nginx.ingress.kubernetes.io/proxy-next-upstream-timeout: 600s
    nginx.ingress.kubernetes.io/proxy-read-timeout: 600s
    nginx.ingress.kubernetes.io/proxy-send-timeout: 600s
```

Then run regression tasks and long-running queries.
Anything else we need to know:
No issues were seen on version 1.10.1, whereas 1.11.1 consistently times out at 60 seconds.
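Worth noting: that consistent 60-second cutoff matches nginx's stock `proxy_read_timeout` default, which would be consistent with the annotations being ignored and the defaults applying. An illustrative sketch of the generated config in that case (not output from this cluster; values are nginx/controller defaults, not taken from the issue):

```nginx
# defaults applied when the per-ingress annotations are not honoured
proxy_connect_timeout 5s;
proxy_read_timeout    60s;   # matches the observed 60-second timeout
proxy_send_timeout    60s;
```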