Nginx configuration for maxOpenFiles should generate based on container allocated resource quotas instead of Node/Instance Spec #1827
Comments
Why did you arrive at this conclusion? This is just a limit, not the number of resources consumed by the ingress controller.
I am not having a problem with the default, but how does the nginx ingress template generate the nginx config when you allocate a specific amount of CPU/memory to a container or a pod?
When you hit the memory limit, nginx will segfault, and that causes the pod to restart.
Please reopen if you have more questions.
Thanks, @aledbf. However, my question is how it can auto-scale, e.g. spin up a new instance and deploy the new service to the new node instead of an existing one. A restart won't solve the problem if the nginx ingress and other services are running on the same node.
You can use the autoscale feature (https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/), adding a nodeAffinity filter in the deployment, to achieve this.
Sure, much appreciated, will give it a go 👍
Hi @aledbf, looking at the horizontal-pod-autoscaler docs: https://github.com/kubernetes/community/blob/master/contributors/design-proposals/autoscaling/horizontal-pod-autoscaler.md#next-steps
@aar6ncai, you can use this reference to scale based on custom metrics.
The nginx templator generates the config based on the Node (instance) spec:
ingress-nginx/internal/ingress/controller/nginx.go
Line 558 in e02697e
AWS Instance Type: m4.xlarge (4 vCPU, 16 GiB memory, 2 vCores)
PRETTY_NAME="Debian GNU/Linux 9 (stretch)"
NAME="Debian GNU/Linux"
VERSION_ID="9"
VERSION="9 (stretch)"
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
uname -a:
Linux ingress-nginx-535084222-12q1w 4.4.102-k8s #1 SMP Sun Nov 26 23:32:43 UTC 2017 x86_64 GNU/Linux
NGINX Ingress controller version: latest 0.9 stable
Kubernetes version (kubectl version): GitVersion:"v1.8.3"
What happened:
The nginx config is generated based on the instance spec.
This can cause nginx ingress LB performance issues when many pods are running on one particular node.
Any recommendation or best practice for deploying an LB in k8s?
Perhaps keep the LB deployed independently on each node.
What you expected to happen:
worker_processes, worker_connections, worker_rlimit_nofile, and other related configuration should be generated based on the container spec (the pod's allocated resource quota), not the node spec.