Memory Issues with ingress-nginx Helm Chart Version 4.11.2 #11987
This issue is currently awaiting triage. If Ingress contributors determine this is a relevant issue, they will accept it by applying the appropriate triage label.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/remove-kind

The best course of action would be to trace the process and gather those details. Since this cannot be reproduced at will on minikube, there is no action that others can take. What have you debugged so far? Please look at the release notes and changelog for v1.11.2 and see which of the features you use, if any, are related. There could be changes that are causing retries or zombies, so you need to trace the process on the container and the host.
@longwuyuan: Those labels are not set on the issue.
/kind support

We have only k8s events:

Process nginx (pid: 3860819) triggered an OOM kill on process nginx (pid: 3128340, oom_score: 2086003, oom_score_adj: 936). The process had reached 2097152 pages in size. This OOM kill was invoked by a cgroup, containerID: 910ca10fd008793d586b37a3704bcf1dee6656d3c151fb77ce353fbc76647d68.

I0917 22:49:06.256254 7 sigterm.go:47] "Exiting" code=0
We can ack that that is all you have and so you seek support. But you also need to ack that some data is needed for others to take action. Since this cannot be reproduced on a kind cluster or a minikube cluster, you are sort of stuck with the action of tracing the process, now or later or anytime. Your trace should look for signs and suspects of memory consumption, that is, in-container OS processes and threads. Some people who faced the same issue used strace/ptrace-type tools. You can search the issues for strace, OOM, etc.
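A minimal sketch of the tracing suggested above. The pod label, container name (`controller`), and debug image are assumptions based on common ingress-nginx deployments; substitute the names from your own cluster:

```shell
# List controller pods (label selector is an assumption; adjust for your install).
kubectl -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx

# Watch pod memory usage over time to see the growth before the OOM kill.
kubectl -n ingress-nginx top pod

# Attach strace to the nginx master process via an ephemeral debug container
# (the image and container name here are illustrative assumptions).
kubectl -n ingress-nginx debug -it <pod-name> --image=nicolaka/netshoot --target=controller -- \
  sh -c 'strace -f -p "$(pgrep -o nginx)" -e trace=memory'
```

`-e trace=memory` limits the output to memory-related syscalls (brk, mmap, etc.), which is usually enough to spot runaway allocation patterns without drowning in noise.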
/remove-kind bug
We are experiencing significant memory issues after upgrading to the ingress-nginx Helm chart version 4.11.2. The memory usage has increased substantially, leading to performance degradation and instability in our applications.
Process nginx (pid: 3860819) triggered an OOM kill on process nginx (pid: 3128340, oom_score: 2086003, oom_score_adj: 936). The process had reached 2097152 pages in size.
This OOM kill was invoked by a cgroup, containerID: 910ca10fd008793d586b37a3704bcf1dee6656d3c151fb77ce353fbc76647d68.
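For scale, the page count in that message can be converted to bytes. Assuming the usual 4 KiB page size (verify with `getconf PAGESIZE` on the host), the process had grown to about 8 GiB when it was killed:

```python
# Convert the OOM report's page count to a human-readable size,
# assuming the common 4 KiB page size (an assumption; check the host).
PAGE_SIZE = 4096          # bytes per page
pages = 2097152           # from the OOM kill message

size_bytes = pages * PAGE_SIZE
size_gib = size_bytes / 2**30
print(f"{size_gib:.0f} GiB")  # prints "8 GiB"
```

That is well above both the old 4 GB and the new 6 GB limit mentioned below, which is consistent with the cgroup invoking the OOM killer.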
Before the upgrade we had a 4 GB memory limit set; after the upgrade we increased it to 6 GB, but OOM kills still stopped all the nginx pods.
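For reference, a minimal sketch of how the limit above would be raised through the chart's `controller.resources` values block (the numbers are illustrative, not a recommendation):

```yaml
# values.yaml fragment for the ingress-nginx Helm chart (illustrative numbers).
controller:
  resources:
    requests:
      memory: 2Gi
    limits:
      memory: 8Gi
```

Applied with `helm upgrade ingress-nginx ingress-nginx/ingress-nginx -f values.yaml`. Note that raising the limit only buys time if the underlying memory growth is unbounded; the tracing suggested earlier is still needed to find the cause.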