helm chart should support setting internalTrafficPolicy service value #10798
This issue is currently awaiting triage. If Ingress contributors determine this is a relevant issue, they will accept it by applying the `triage/accepted` label and provide further guidance.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/assign @Gacko
Hello @abasalaev! Could you please elaborate on your use case? I can not see how `internalTrafficPolicy` would be useful here.

Ingress NGINX has no control over the configuration of the services it's routing traffic to, only over the service routing traffic to itself. And this is all we can change: the way traffic is being routed to the Ingress NGINX Controller pods. Having `internalTrafficPolicy` set on the controller service would only affect traffic originating from inside the cluster.

Even though I'd still like to have more information on a use case then, I'd understand if you were asking for only implementing `internalTrafficPolicy` for the controller service.

Marco
/triage needs-information

/close
@Gacko: Closing this issue.

In response to this:

> /close
If I may necropost (sorry), there are valid reasons for setting local mode. When using iptables mode for kube-proxy together with Topology Aware Routing (TAR), one might want to ensure that traffic entering a node hosting ingress-nginx remains on that node; with the default of `internalTrafficPolicy: Cluster`, kube-proxy may still forward it to a controller pod on another node.

So here there's a 50% chance of traffic entering this node actually going to another one for the ingress (i.e. this is inbound to the controller, before it can decide how to route to a backend): that's an additional, unnecessary hop, potentially incurring latency and/or cost. If we change the ingress-nginx controller Service's `internalTrafficPolicy` to `Local`, traffic entering a node stays on it.
There are tradeoffs regarding concentration risk/availability, but they can all be mitigated in various ways and are orthogonal to the issue at hand. The point is that this is a reasonable option to expose for folks who understand the consequences. Can we please add it?
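For concreteness, the behavior argued for above corresponds to a controller Service shaped roughly like the following. This is a sketch, not the chart's rendered output; the resource name, namespace, labels, and ports are illustrative assumptions.

```yaml
# Illustrative controller Service; names/ports are assumptions, not chart output.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: https
      port: 443
      targetPort: 443
  # Keep cluster-internal traffic on the node it arrives on instead of
  # load-balancing it across controller pods on other nodes.
  internalTrafficPolicy: Local
```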
According to the docs, `internalTrafficPolicy` only applies to traffic originating from pods inside the cluster, not to traffic arriving from an external load balancer.
Sorry to contradict you @Gacko, but having tested this and checked the iptables rules, setting `internalTrafficPolicy: Local` does change how this traffic is handled on the node.

We want it set so that traffic from the load balancer hits the node hosting nginx and then stays on that host in order to talk to nginx; with the default setting of `Cluster`, it can be forwarded to another node.

Other folks might not understand this, but try it for yourself and observe, I am not joking. Please can we now have the option in the chart so I don't need to maintain a fork forever? Thanks.
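As an aside, until the chart exposes such a value, a post-install patch avoids maintaining a fork. A sketch, assuming the default release name and namespace:

```yaml
# itp-local.yaml -- strategic merge patch for the controller Service.
# Apply with (names are assumptions):
#   kubectl patch svc ingress-nginx-controller -n ingress-nginx --patch-file itp-local.yaml
spec:
  internalTrafficPolicy: Local
```

Note that Helm may revert this on the next upgrade, so it is a stopgap rather than a fix.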
This is not how `internalTrafficPolicy` is documented to behave.
Sorry, but based on your reasoning I still assume Ingress NGINX is not the root cause. So as long as it hasn't been clarified that all your components are working in conformance with the Kubernetes API, and the behavior you're observing is therefore Kubernetes API compliant, we don't want to implement a field that, even if implemented correctly, might break Ingress NGINX in a user's environment/network if misconfigured. Additionally, and apart from what you're observing, I still don't see a valid use case for this.
I'm not the author of kube-proxy etc.; I am an end user making observations about the system based on settings. This setting creates the environment I desire. What would you like me to check, exactly? The iptables rules change to what is desired based on this single change. I can't really put it any other way. Looks like you're not going to be helpful, so we will fork and move on. Thanks anyway.
From my understanding, the behavior you're expecting (having traffic coming from outside your cluster, e.g. load balancer traffic, stay on the same node and get directed to pods on this node instead of being distributed to other pods on different nodes inside the same cluster) can be achieved by setting `externalTrafficPolicy: Local`. At least, from my understanding, this is what `externalTrafficPolicy` is meant for.
You're telling us that traffic coming from outside your cluster is still getting distributed across your cluster. This is clearly the responsibility of `externalTrafficPolicy`. Of course I can also set up a cluster and see if I can observe the same behavior you're describing. I didn't do this for this particular issue, but in my day-to-day job I used to support Ingress NGINX in a lot of different environments (everything from tiny clusters to huge clusters, different cloud providers, different Kubernetes versions), and I never observed the issue you're describing and trying to change the Ingress NGINX chart for.
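The distinction being argued over can be summarized on a single Service. The comments restate the standard Kubernetes semantics of the two fields; the Service name and selector are illustrative.

```yaml
# Illustrative Service contrasting the two traffic-policy fields.
apiVersion: v1
kind: Service
metadata:
  name: example
spec:
  type: LoadBalancer
  selector:
    app: example
  ports:
    - port: 80
  # Governs traffic arriving from outside the cluster via the
  # LoadBalancer/NodePort path; Local keeps it on the receiving node
  # and preserves the client source IP.
  externalTrafficPolicy: Local
  # Governs traffic from clients inside the cluster hitting the ClusterIP;
  # Local restricts routing to endpoints on the client's own node.
  internalTrafficPolicy: Local
```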
Also, are you taking into account that `externalTrafficPolicy` defaults to `Cluster` and therefore distributes external traffic across nodes anyway?
Here's the KEP for |
And here's what the Kubernetes docs are telling about |
Hi @Gacko. This request doesn't relate to `externalTrafficPolicy` at all. My case was the following: there is a special type of node that always includes two pods, ingress-nginx and app-server. ingress-nginx proxies requests to the app-server. If I enable `internalTrafficPolicy: Local` on the app-server's Service, the ingress-nginx pod on a node only sends traffic to the app-server pod on that same node.

Hope it helps
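The use case above can be sketched as a backend Service plus the `service-upstream` annotation, so nginx sends to the Service's ClusterIP and kube-proxy keeps the hop node-local. Resource names, ports, and the hostname are illustrative assumptions.

```yaml
# Backend Service: kube-proxy only routes to endpoints on the caller's node.
apiVersion: v1
kind: Service
metadata:
  name: app-server
spec:
  selector:
    app: app-server
  ports:
    - port: 8080
  internalTrafficPolicy: Local
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app
  annotations:
    # Make nginx use the Service ClusterIP instead of individual pod
    # endpoints, so internalTrafficPolicy applies to the nginx -> backend hop.
    nginx.ingress.kubernetes.io/service-upstream: "true"
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-server
                port:
                  number: 8080
```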
Hi @abasalaev, given that you can alter the manifests and charts in your own fork, would you agree that this is a feasible, practical solution compared to altering the project itself? We don't have the resources, and hence we don't have any interest in supporting/maintaining features/capabilities that meet this kind of criteria. We are actually deprecating multiple popular and much-used features like TCP/UDP forwarding.
Hi @longwuyuan, I just shared this case and am not insisting on including this change. I appreciate your efforts.
For this use case you need to set `internalTrafficPolicy` on the Service.
This is a new value, see https://kubernetes.io/docs/concepts/services-networking/service-traffic-policy/
It should work in conjunction with https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/nginx-configuration/annotations.md#service-upstream
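If the chart were to expose this, a values override might look like the following. The key name is hypothetical, since this issue is precisely the request to add it; check your chart version's values before relying on it.

```yaml
# values.yaml -- hypothetical key, not necessarily present in the chart.
controller:
  service:
    internalTrafficPolicy: Local
```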