UDP TX drops every 10min when rate exceeds 15-20Kbps #2338
@liggetm please check the pod logs, searching for "reloading" and checking the timestamp against the drop in traffic, to see if that's the issue.
@aledbf thanks for coming back to me. I didn't see any reloading at the same timestamp, but I did see alerts when I increased the logging to verbose.
Looking at the documentation, it appears the default is 16384 worker_connections per worker process (at 1 worker process per CPU). I'll try increasing worker_connections, but I don't fully understand what it means in relation to UDP, given it's connectionless.
@liggetm please check the generated nginx.conf, searching for the value of worker_connections. You can adjust it via the corresponding setting in the configuration configmap.
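One way to run that check (a sketch; the namespace, label, and pod name are assumptions for a typical deployment, so adjust to yours):

```bash
# Find the ingress controller pod (namespace and label are assumptions)
kubectl -n ingress-nginx get pods -l app=ingress-nginx

# Inspect the generated nginx.conf for the effective worker settings
kubectl -n ingress-nginx exec <pod-name> -- \
  grep -E 'worker_(processes|connections|rlimit_nofile)' /etc/nginx/nginx.conf
```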
Thanks @aledbf - my config shows
Yes. Edit: worker_rlimit_nofile is per worker process
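If the connection limit does turn out to be the bottleneck, it can be raised through the controller's configuration configmap. A minimal sketch, assuming a configmap named nginx-configuration in the ingress-nginx namespace; the key name is taken from current ingress-nginx docs (max-worker-connections) and may differ in 0.12, so verify against the docs for your controller version:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration   # name/namespace are assumptions; match your deployment
  namespace: ingress-nginx
data:
  # Raises the per-worker connection limit; key name per current ingress-nginx
  # docs (max-worker-connections); verify for your controller version.
  max-worker-connections: "65536"
```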
Can we close this?
Yes, thanks @aledbf for all your help!
NGINX Ingress controller version:
0.12.0
(from quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.12.0)
Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.4", GitCommit:"7243c69eb523aa4377bce883e7c0dd76b84709a1", GitTreeState:"clean", BuildDate:"2017-03-07T23:53:09Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"269f928217957e7126dc87e6adfa82242bfe5b1e", GitTreeState:"clean", BuildDate:"2017-07-03T15:31:10Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Environment:
Cloud provider or hardware configuration:
Bare metal (on an HP Elitedesk 800 G3, i7, 32GB, 250GB SSD)
OS (e.g. from /etc/os-release):
CentOS Atomic Host 1803
Kernel (e.g. uname -a):
Linux atomic80 3.10.0-693.21.1.el7.x86_64 #1 SMP Wed Mar 7 19:03:37 UTC 2018 x86_64 x86_64
Install tools:
Installed via Ansible scripts from https://github.com/kubernetes/contrib/tree/master/ansible
Others:
None I can think of, other than that I can replicate this same issue in version 0.9.0.
What happened:
UDP TX traffic from the ingress controller drops every 10 minutes for approximately 90 seconds, even though UDP RX traffic remains constant. This appears to happen at UDP rates above 15-20 Kbps.
What you expected to happen:
No drops in traffic, and no real difference between RX and TX rates when using UDP, regardless of rate.
How to reproduce it (as minimally and precisely as possible):
Configure a valid upstream UDP host, then configure the ingress controller to point to the upstream host via a Kubernetes service, using a hostPort. My config snippets:
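A minimal sketch of how such a setup is typically wired in ingress-nginx, assuming the stock --udp-services-configmap mechanism; the names, namespaces, and port 5005 below are illustrative assumptions, not the author's actual values:

```yaml
# ConfigMap consumed via the controller's --udp-services-configmap flag;
# each entry maps an exposed UDP port to a namespace/service:port backend.
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-services             # name is an assumption; must match the flag value
  namespace: ingress-nginx
data:
  "5005": "default/udp-echo:5005"   # port and service are illustrative
---
# Backing Service for the UDP workload (names/ports illustrative).
apiVersion: v1
kind: Service
metadata:
  name: udp-echo
  namespace: default
spec:
  selector:
    app: udp-echo
  ports:
    - name: udp
      protocol: UDP
      port: 5005
      targetPort: 5005
```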
Anything else we need to know:
I've uploaded an image from Grafana showing the TX drops over the course of an hour: https://imagebin.ca/v/3y7wU7cilhwF