
Advertise service VIP from the node only if there is at least one service endpoint pod on the node #262

Closed
murali-reddy opened this issue Dec 27, 2017 · 12 comments

Comments

@murali-reddy
Member

Opening an issue for the slack conversation:

https://kubernetes.slack.com/archives/C8DCQGTSB/p1513691858000406

Advertise service VIP (cluster IP, external IP, and LoadBalancer IP when supported in kube-router) from the node only if there is at least one service endpoint pod on the node.

This will nicely complement #254. While #254 helps preserve the client IP, which is essential when providing L4 ingress, this issue will avoid an extra network hop.

@andrewsykim
Collaborator

Going to take a stab at writing some tests for the routes controller before I tackle this. If anyone else is interested in this issue, please go for it :)

@wrigtim

wrigtim commented Apr 9, 2018

+1 from me. Useful when you have some applications that are less friendly with their traffic profile (e.g. exceptionally high-rate UDP).

@murali-reddy
Member Author

@andrewsykim it's already addressed, right?

@andrewsykim
Collaborator

Yup, you have to set service.Spec.ExternalTrafficPolicy=Local as per #334. We'll also want #350 implemented so we have faster convergence for BGP routes.
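For reference, this is a one-line addition to the Service spec. A minimal sketch (the service name, selector, and ports below are made up for illustration):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app            # hypothetical service name
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # only nodes with a local endpoint carry external traffic
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```

Note that externalTrafficPolicy only applies to NodePort/LoadBalancer services, which is relevant to the ClusterIP question below.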

@murali-reddy
Member Author

murali-reddy commented Apr 9, 2018

Thanks for confirming. Closing the issue.

@wrigtim kube-router already has support for ExternalTrafficPolicy=Local, with extra smarts to advertise from the node only if there is at least one service endpoint.

@wrigtim

wrigtim commented Apr 9, 2018

@murali-reddy ...but we can't do the external traffic policy with ClusterIP? There's still an extra hop we could take out here?

@murali-reddy
Member Author

@wrigtim somehow the documentation is not in sync with the changes that went in for this feature. I will add documentation.

You can add the kube-router-specific annotation kube-router.io/service.local=true to any service, which results in the same behaviour as externalTrafficPolicy=Local irrespective of service type.
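For example, annotating a plain ClusterIP service. A minimal sketch (the service name, selector, and port are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app            # hypothetical service name
  annotations:
    kube-router.io/service.local: "true"   # same behaviour as externalTrafficPolicy=Local
spec:
  type: ClusterIP
  selector:
    app: my-app
  ports:
    - port: 80
```

With the annotation set, kube-router advertises the service VIP only from nodes that host an endpoint pod, even for service types where externalTrafficPolicy does not apply.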

@wrigtim

wrigtim commented Apr 9, 2018

Ah fantastic. Appreciate the rapid response - will test it out...

@murali-reddy murali-reddy reopened this Apr 12, 2018
@murali-reddy
Member Author

@jdconti pointed out missing functionality so reopening the issue.

https://kubernetes.slack.com/archives/C8DCQGTSB/p1523553155000833

@uablrek
Contributor

uablrek commented Sep 11, 2018

Please make this "feature" optional. It's fine if you always have one service for each external VIP. The problem is applications that expose multiple services on different ports but with the same external VIP, e.g. http and sip (just a made-up example).

ECMP works at the L3 level; it routes IP addresses, not ports. So now the condition is not only "at least one endpoint" but:

"At least one endpoint for every service that exposes the common VIP"

This can quickly become messy. The pretty neat code for checking the existence of an endpoint becomes a nasty set operation spanning multiple services. The case where you end up with an empty set, because an endpoint for service A is on one node and one for service B is on another, is also a severe problem.
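The set operation described above can be sketched as follows. This is not kube-router's actual code; the node and service names are made up, and the shared VIP is implied by grouping the services together:

```python
def nodes_advertising_vip(services_sharing_vip):
    """Return the nodes that may advertise a shared VIP.

    A node qualifies only if it hosts at least one endpoint for *every*
    service exposing that VIP, i.e. the intersection of the per-service
    endpoint-node sets.
    """
    node_sets = [set(nodes) for nodes in services_sharing_vip.values()]
    return set.intersection(*node_sets) if node_sets else set()


# Service A (http) has endpoints on node1/node2; service B (sip) only on node3.
# The intersection is empty, so no node can safely advertise the VIP.
services = {
    "app-http": ["node1", "node2"],
    "app-sip": ["node3"],
}
print(nodes_advertising_vip(services))  # empty set
```

The empty-intersection case is exactly the "severe problem" above: every service has endpoints somewhere, yet no single node satisfies the condition for the shared VIP.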

MetalLB has encountered this problem and made some serious restrictions; see item 3 in this list. I tried, failed, and wrote an issue that is now closed but explains the problem.

IMO applications must have the possibility to use both externalTrafficPolicy=local and multiple services for a common VIP, but...

then the application must take upon itself the responsibility to make sure (or as sure as k8s allows) that an endpoint exists where traffic enters the cluster. This is not really hard.

Or use a cloud provider that has an L4 load-balancer (or director) in front of the cluster so it can control traffic at the port level.

@andrewsykim
Collaborator

andrewsykim commented Sep 12, 2018

@uablrek maybe I'm missing some context here, so apologies in advance. I'm not sure we need to accommodate the odd exception of people using the same VIP for multiple services. If that is your case, then I recommend using an ingress controller, which was built for that exact purpose.

then the application must take upon itself the responsibility to make sure (or as sure as k8s allows) that an endpoint exists where traffic enters the cluster. This is not really hard.

This is surprisingly non-trivial IMO. You can't always assume applications are network/routing-aware; at least in kube-router I would not expect so. This is part of the reason why service meshes/sidecar proxies are becoming popular. kube-router (and I would think MetalLB) are not replacements for those.

@aauren
Collaborator

aauren commented Apr 24, 2020

I believe this is already implemented using the kube-router.io/service.local annotation or by setting externalTrafficPolicy on the service. Feel free to re-open if this is not the case or if I missed something here.

@aauren aauren closed this as completed Apr 24, 2020