advertise service VIP from the node only if there is at least one service endpoint pod on the node #262
Comments
Going to take a stab at writing some tests for the routes controller before I tackle this; if anyone else is interested in this issue, please go for it :)
+1 from me. Useful when you have some applications that are less friendly with their traffic profile (e.g. exceptionally high-rate UDP).
@andrewsykim it's already addressed, right?
Thanks for confirming. Closing the issue. @wrigtim kube-router already has the support for
@murali-reddy ...but can't do external policy with ClusterIP? There's still an extra hop we can take out here?
@wrigtim somehow the documentation is not in sync with the changes that went in for this feature. I will add documentation. You could add a kube-router-specific annotation to any service
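For illustration, here is a minimal Go sketch of how a controller could gate this behaviour per service. The annotation key `kube-router.io/service.local` is my assumption of the key being referred to above; verify it against the kube-router documentation for your version before relying on it.

```go
package main

import (
	v1 "k8s.io/api/core/v1"
)

// Illustrative sketch only. The annotation key is an assumption -- verify
// the exact key against the kube-router docs for the version you run.
const svcLocalAnnotation = "kube-router.io/service.local"

// shouldAdvertiseOnlyFromLocalNodes reports whether a service's VIP should
// be advertised only from nodes that host one of its endpoint pods.
func shouldAdvertiseOnlyFromLocalNodes(svc *v1.Service) bool {
	// externalTrafficPolicy=Local already requests node-local handling.
	if string(svc.Spec.ExternalTrafficPolicy) == "Local" {
		return true
	}
	// Otherwise, fall back to the per-service opt-in annotation.
	return svc.Annotations[svcLocalAnnotation] == "true"
}
```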
Ah fantastic. Appreciate the rapid response - will test it out...
@jdconti pointed out missing functionality so reopening the issue. https://kubernetes.slack.com/archives/C8DCQGTSB/p1523553155000833
Please make this "feature" optional. If you always have one service for each external VIP it's fine. The problem is for applications that expose multiple services with different ports but with the same external VIP, e.g. http and sip (just a made-up example). ECMP works at the L3 level; it routes IP addresses, not ports. So now it is not only "at least one endpoint" but "at least one endpoint for every service that exposes the common VIP". This can quickly become messy; a sketch of the required computation follows below.

The pretty neat code for checking the existence of an endpoint becomes a nasty set operation spanning multiple services. The case when you end up with an empty set, because an endpoint for service A is on one node and for service B on another, is also a severe problem. MetalLB has encountered this problem and made some serious restrictions. See item 3 in this list. I tried, failed, and wrote an issue that is now closed but explains the problem.

IMO applications must have the possibility to use both externalTrafficPolicy=local and multiple services for a common VIP, but then the application must take upon itself the responsibility to make sure (or as sure as k8s allows) that an endpoint exists where traffic enters the cluster. This is not really hard. Or use a cloud provider that has an L4 load-balancer (or director) in front of the cluster, so it can control traffic at the port level.
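To make the set problem concrete, here is a rough sketch (my own illustration, not kube-router's actual code; all names are hypothetical) of the per-node computation once several services share one VIP: a node may advertise the VIP only if it has at least one endpoint for every one of those services.

```go
package main

// nodesAdvertisingVIP computes which nodes may advertise a shared VIP.
// servicesSharingVIP lists the services behind one external VIP;
// nodesWithEndpoints maps service name -> set of node names that have at
// least one ready endpoint pod for that service.
func nodesAdvertisingVIP(servicesSharingVIP []string, nodesWithEndpoints map[string]map[string]bool) map[string]bool {
	if len(servicesSharingVIP) == 0 {
		return nil
	}
	// Start from the endpoint nodes of the first service...
	result := map[string]bool{}
	for node := range nodesWithEndpoints[servicesSharingVIP[0]] {
		result[node] = true
	}
	// ...and intersect with every other service sharing the VIP.
	for _, svc := range servicesSharingVIP[1:] {
		for node := range result {
			if !nodesWithEndpoints[svc][node] {
				delete(result, node) // node lacks an endpoint for svc
			}
		}
	}
	// An empty result means no node may advertise the VIP at all -- the
	// "empty set" failure mode described above.
	return result
}
```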
@uablrek maybe I'm missing some context here, so apologies in advance. I'm not sure we need to accommodate the odd exception of people using the same VIP for multiple services. If that is your case, then I recommend using an ingress controller, which was built for that exact purpose.
This is surprisingly non-trivial IMO. You can't always assume applications are network/routing aware; at least in kube-router I would not expect so. This is part of the reason why service mesh/sidecar proxies are becoming popular. kube-router (and I would think MetalLB) are not replacements for those.
I believe it is already implemented using the
Opening an issue for the Slack conversation:
https://kubernetes.slack.com/archives/C8DCQGTSB/p1513691858000406
Advertise the service VIP (cluster IP, external IP, and load balancer IP when supported in kube-router) from the node only if there is at least one service endpoint pod on the node.
This will nicely complement #254. While #254 helps preserve the client IP, which is essential when providing L4 ingress, this issue will prevent the extra hop.
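As a sketch of the basic check this issue asks for (illustrative only, using client-go; not the eventual kube-router implementation): advertise the VIP from a node only when the service's Endpoints object lists at least one ready address on that node.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// hasLocalEndpoint reports whether the named service has at least one
// ready endpoint address on the given node, by inspecting the service's
// Endpoints object. Illustrative sketch only.
func hasLocalEndpoint(ctx context.Context, client kubernetes.Interface, namespace, service, nodeName string) (bool, error) {
	ep, err := client.CoreV1().Endpoints(namespace).Get(ctx, service, metav1.GetOptions{})
	if err != nil {
		return false, fmt.Errorf("get endpoints %s/%s: %w", namespace, service, err)
	}
	for _, subset := range ep.Subsets {
		for _, addr := range subset.Addresses { // ready addresses only
			if addr.NodeName != nil && *addr.NodeName == nodeName {
				return true, nil
			}
		}
	}
	// No local endpoint: the node should withdraw (or not announce) the VIP.
	return false, nil
}
```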