
Nginx configuration for maxOpenFiles should generate based on container allocated resource quotas instead of Node/Instance Spec #1827

Closed
aar6ncai opened this issue Dec 15, 2017 · 9 comments


aar6ncai commented Dec 15, 2017

The nginx template generates the config based on the node (instance) spec:

daemon off;

worker_processes 4;
pid /run/nginx.pid;

worker_rlimit_nofile 261120;

worker_shutdown_timeout 10s ;

events {
    multi_accept        on;
    worker_connections  16384;
    use                 epoll;
}
# ps -ef | grep nginx
root         1     0  0 10:23 ?        00:00:00 /usr/bin/dumb-init /nginx-ingress-controller --default-backend-service=kube-system/nginx-default-backend --configmap=kube-system/ingress-nginx

Maximum number of open files permitted:

# cat /proc/sys/fs/file-max
2097152
# ulimit -a
time(seconds)        unlimited
file(blocks)         unlimited
data(kbytes)         unlimited
stack(kbytes)        8192
coredump(blocks)     unlimited
memory(kbytes)       unlimited
locked memory(kbytes) 64
process              1048576
nofiles              1048576
vmemory(kbytes)      unlimited
locks                unlimited
rtprio               0
The relevant logic in the controller:

wp, err := strconv.Atoi(cfg.WorkerProcesses)
glog.V(3).Infof("number of worker processes: %v", wp)
if err != nil {
    wp = 1
}
maxOpenFiles := (sysctlFSFileMax() / wp) - 1024
glog.V(3).Infof("maximum number of open file descriptors : %v", sysctlFSFileMax())
if maxOpenFiles < 1024 {
    // this means the value of RLIMIT_NOFILE is too low.
    maxOpenFiles = 1024
}

maxOpenFiles := (sysctlFSFileMax() / wp) - 1024

With the values on this node: (1048576 / 4) - 1024 = 261120
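
For comparison, here is a minimal standalone sketch of that calculation. It assumes sysctlFSFileMax() effectively returns the process hard limit on open files (getrlimit(RLIMIT_NOFILE)), which matches the 1048576 figure above rather than fs.file-max of 2097152; the names and hard-coded values below are illustrative, not the controller's actual code.

package main

import (
	"fmt"
	"syscall"
)

func main() {
	// Read the hard RLIMIT_NOFILE of the current process (assumption:
	// this is the value the controller's sysctlFSFileMax() reports).
	var rl syscall.Rlimit
	if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &rl); err != nil {
		panic(err)
	}

	// Same formula as the controller snippet above, with the worker
	// count of the m4.xlarge node in this report.
	workerProcesses := 4
	maxOpenFiles := int(rl.Max)/workerProcesses - 1024
	if maxOpenFiles < 1024 {
		maxOpenFiles = 1024
	}
	fmt.Printf("RLIMIT_NOFILE=%d -> maxOpenFiles=%d\n", rl.Max, maxOpenFiles)
}

With RLIMIT_NOFILE at 1048576 this prints maxOpenFiles=261120, the worker_rlimit_nofile value seen in the generated config.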
Environment:
  • Cloud provider or hardware configuration: AWS, instance type m4.xlarge (4 vCPU, 16 GiB memory, 2 cores)
  • OS (e.g. from /etc/os-release):
    PRETTY_NAME="Debian GNU/Linux 9 (stretch)"
    NAME="Debian GNU/Linux"
    VERSION_ID="9"
    VERSION="9 (stretch)"
    ID=debian
    HOME_URL="https://www.debian.org/"
    SUPPORT_URL="https://www.debian.org/support"
    BUG_REPORT_URL="https://bugs.debian.org/"
  • Kernel (e.g. uname -a): Linux ingress-nginx-535084222-12q1w 4.4.102-k8s #1 SMP Sun Nov 26 23:32:43 UTC 2017 x86_64 GNU/Linux
  • Install tools: HELM
  • Others:

NGINX Ingress controller version:
latest 0.9 stable

Kubernetes version (use kubectl version):
GitVersion:"v1.8.3"

What happened:

The nginx config is generated based on the instance spec.

This would cause an nginx ingress LB performance issue when many pods are running on one particular node.

Is there any recommendation or best practice for deploying the LB in k8s? Perhaps keep the LB deployed independently on each node.

What you expected to happen:

worker_processes, worker_connections, worker_rlimit_nofile, and other related configuration should be generated based on the container spec.
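
As a rough illustration of that request, here is a minimal sketch of deriving a worker count from the container's own CPU quota instead of the node's CPU count. It assumes cgroup v1 CFS quota files at their usual paths and is not how the controller currently works; the file paths, function names, and fallback behaviour are all illustrative.

package main

import (
	"fmt"
	"os"
	"runtime"
	"strconv"
	"strings"
)

// readCgroupInt reads a single integer from a cgroup file, returning
// ok=false if the file is missing or unparsable (e.g. no cgroup limit).
func readCgroupInt(path string) (int64, bool) {
	b, err := os.ReadFile(path)
	if err != nil {
		return 0, false
	}
	v, err := strconv.ParseInt(strings.TrimSpace(string(b)), 10, 64)
	if err != nil {
		return 0, false
	}
	return v, true
}

// containerWorkerProcesses derives worker_processes from the container's
// cgroup v1 CPU quota, falling back to the node CPU count when no limit
// is set (quota == -1 means "unlimited").
func containerWorkerProcesses() int {
	quota, okQ := readCgroupInt("/sys/fs/cgroup/cpu/cpu.cfs_quota_us")
	period, okP := readCgroupInt("/sys/fs/cgroup/cpu/cpu.cfs_period_us")
	if okQ && okP && quota > 0 && period > 0 {
		if w := int(quota / period); w >= 1 {
			return w
		}
		return 1
	}
	return runtime.NumCPU()
}

func main() {
	fmt.Println("worker_processes", containerWorkerProcesses())
}

A container limited to 2 CPUs (quota 200000, period 100000) would get worker_processes 2 here, regardless of how many CPUs the underlying instance has.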

aar6ncai changed the title from "Nginx configuration for maxOpenFiles should generate based on container resource quote instead of Node Spec" to "Nginx configuration for maxOpenFiles should generate based on container allocated resource quotas instead of Node/Instance Spec" on Dec 15, 2017

aledbf commented Dec 15, 2017

This would cause an nginx ingress LB performance issue when many pods are running on one particular node.

Why do you arrive at this conclusion?

This is just a limit, not the amount of resources consumed by the ingress controller.
If we don't adjust the default (1024), the performance of nginx is unacceptable.


aar6ncai commented Dec 15, 2017

I am not having a problem with the default; my question is how the nginx ingress template generates the nginx config when you allocate a given amount of CPU/memory to a container or pod.
This could be a feature request.


aledbf commented Dec 15, 2017

how the nginx ingress template generates the nginx config when you allocate a given amount of CPU/memory to a container or pod

When you hit the memory limit, nginx will segfault, and that produces a restart of the pod.
In the case of CPU limits, you are just limited in how much of the resource you can use.
Please check https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#how-pods-with-resource-limits-are-run

aledbf closed this as completed Dec 15, 2017

aledbf commented Dec 15, 2017

Please reopen if you have more questions.

@aar6ncai

Thanks @aledbf. However, my question is how it can auto-scale, e.g. spin up a new instance and schedule the new deployment onto the new node instead of an existing one, as a restart won't solve the problem if the nginx ingress and other services are running on the same node.


aledbf commented Dec 17, 2017

my question is how it can auto-scale, e.g. spin up a new instance and schedule the new deployment onto the new node instead of an existing one

You can use the autoscaling feature (https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/), adding a nodeAffinity rule to the deployment, to achieve this.

@aar6ncai

Sure, much appreciated, will give it a go 👍


aar6ncai commented Dec 18, 2017

Hi @aledbf, looking at the horizontal-pod-autoscaler docs (https://github.com/kubernetes/community/blob/master/contributors/design-proposals/autoscaling/horizontal-pod-autoscaler.md#next-steps):
it does not support autoscaling based on custom metrics such as the open-file limit and other system-generated metrics (only CPU for now).

@hadroncollider-q

@aar6ncai, you can use this reference to scale based on custom metrics:
https://docs.bitnami.com/kubernetes/how-to/configure-autoscaling-custom-metrics/
