Discuss etcd load balancing within Options for Highly Available Topology #40028
@natereid72 Can you explain what page isn't right, and what about that page should be improved? I can see that https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/ exists, but I'm not sure what improvement you're proposing. I think the advice in https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/ is accurate. You might be suggesting that we explain the pros and cons of using a managed load balancer in front of the cluster, versus having each API server be aware of the individual etcd instances. Is that what you had in mind? (If so, I think I'd like to see an evergreen - i.e., maintained - blog article about that) /language en
Use of an external load balancer is not required to run Kubernetes, but it is an option. The document should recommend either client-side load balancing or an external load balancer. If giving a recommendation is not possible, there should be a section that lists the pros and cons of each option.
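For concreteness, here is what the two options look like from the client's point of view. This is a minimal Go sketch using the etcd clientv3 package; the endpoint addresses are hypothetical. With client-side load balancing, the client is handed every member address and balances requests itself; with an external LB, it would be handed the load balancer's single address instead.

```go
package main

import (
	"context"
	"log"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	// Client-side load balancing: the client knows every etcd member
	// and balances gRPC requests across them itself; no external LB.
	// (A TLS-enabled cluster would also need the Config's TLS field set.)
	cli, err := clientv3.New(clientv3.Config{
		Endpoints: []string{ // hypothetical member addresses
			"https://etcd-0.example.internal:2379",
			"https://etcd-1.example.internal:2379",
			"https://etcd-2.example.internal:2379",
		},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// The external-LB alternative would instead list a single
	// endpoint: the load balancer's address.
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	if _, err := cli.Get(ctx, "health-check-key"); err != nil {
		log.Fatal(err)
	}
}
```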
Yes to both. I am not aware of the tradeoffs between a client-side vs. an external/managed LB. If someone who is could add that to the existing page, it would be very helpful. Perhaps the pros and cons would be best detailed in this doc: Options for Highly Available Topology
/retitle Discuss etcd load balancing within Options for Highly Available Topology
/sig scalability
Ignore that for now. It may have just been a temporary unrelated symptom.
@sftim @natereid72 I think the explanations and pros and cons listed on this page in the gRPC documentation provide answers to your questions: https://grpc.io/blog/grpc-load-balancing/
Thanks @sftim. On first take of that doc, I read it as leading to the conclusion that the right choice for a K8s config is to rely on the client-side architecture. So perhaps just removing the proxy LB architecture reference from the K8s docs altogether is the right path?
Using a load balancer is still a viable option (this is almost a tenet of non-abstract scalable architecture design). For example, we might not trust Kubernetes to kill etcd nodes, but we might allow a load balancer to send that shutdown signal.
I think this would require the assumption that etcd is being managed by Kubernetes, no? Of course, the etcd cluster can be a StatefulSet or a Deployment, run as static pods, and/or sit completely outside the kube-apiserver's purview. From what I understand of etcd client-side LB, this is handled either way. So linking the gRPC LB doc doesn't clear it up for me.
No: some kinds of load balancer can send a signal to fence or shut down unhealthy targets, even when you don't use Kubernetes. You can run etcd on cloud compute, with a load balancer in front, without any Kubernetes at all. And then, if you want to, you can point one or more kube-apiservers at that load balancer.

(If you'd like to discuss different ways to run Kubernetes and its components, https://discuss.kubernetes.io/ is a good place to have that conversation.)

So, I think we have an opportunity to cover the more unusual cases enough that readers don't see them as unviable or prohibited. At the same time, it's helpful to steer readers towards the most common architectures. A typical reader just wants to set up a cluster, rather than learn how to architect control planes for special scenarios.
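To make the fencing point concrete: an external LB decides where to route (and what to fence) by probing each member, for instance via etcd's HTTP /health endpoint. Below is a rough Go sketch of such a probe; the member address is hypothetical, a real LB would use its own health-check machinery, and a TLS-enabled cluster would need client certificates configured on the probe.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// healthy reports whether an etcd member answers its /health endpoint,
// roughly the way an external load balancer's health probe would check it.
func healthy(memberURL string) bool {
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get(memberURL + "/health")
	if err != nil {
		return false
	}
	defer resp.Body.Close()

	// etcd reports health as the JSON string "true".
	var body struct {
		Health string `json:"health"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&body); err != nil {
		return false
	}
	return body.Health == "true"
}

func main() {
	// Hypothetical member address.
	if !healthy("http://etcd-0.example.internal:2379") {
		fmt.Println("member unhealthy: an LB would stop routing to it (or fence it)")
	}
}
```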
Fair enough, I misread that originally. I agree that this is a pro in the external-LB case, though I think the etcd client's client-side LB has this functionality as well.
I'm aware of this; it's how I've pointed kube-apiserver at an external etcd cluster. It's actually what prompted my original post here, when I read about using an external LB in the docs.
Thanks for that reference; I wasn't aware of it. That's great to know about, and I'll definitely use it for topics like this going forward.
Yes, I think this is where I was landing on it, thanks much.
@baumann-t I inadvertently addressed my reply to your post to Tim, above. I think including the gRPC link would suffice. I will admit that after considering that info, and after configuring an HAProxy setup for the etcd cluster, it seems the Kubernetes use case of the etcd client talking to an etcd cluster is best served by client-side LB. I'm still not certain it ever makes sense to use a managed/external LB in this scenario.
I wrote this blog post covering my thoughts on this topic. I'm happy to close this issue if there's no further need to clarify the K8s docs.
I thought that kube-apiserver utilized gRPC client-side load balancing. I see that Operating etcd clusters for Kubernetes mentions using a load balancer in front of the etcd cluster, with a single etcd address (the LB's) supplied to the control plane (see here).
Is a load balancer in front of the etcd cluster required for load balancing between etcd and kube-apiserver, or is there some other benefit to it? Some explanation of why one would consider that option would be useful.
I can see that not having to update the kube-apiserver etcd config when adding or removing etcd nodes is one possible benefit, but I don't know what the cons of an external load balancer in front of etcd might be vs. client-side LB; a sketch of the client-side alternative follows below.
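On the add/remove-nodes point: the etcd Go client (which kube-apiserver builds on) can also refresh its endpoint list from the cluster's own membership, so client-side LB doesn't necessarily mean hand-editing endpoints either. A minimal sketch, with hypothetical addresses:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		// Seed with the members known at startup (hypothetical address).
		Endpoints: []string{"https://etcd-0.example.internal:2379"},
		// Periodically refresh the endpoint list from cluster membership,
		// so members added or removed later are picked up automatically.
		AutoSyncInterval: 30 * time.Second,
		DialTimeout:      5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// A one-off refresh can also be triggered explicitly.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	if err := cli.Sync(ctx); err != nil {
		log.Fatal(err)
	}
	fmt.Println("current endpoints:", cli.Endpoints())
}
```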