From 7736683e521738fd0973e16242c1afe3bfbeb6a0 Mon Sep 17 00:00:00 2001
From: Tim Bannister
Date: Tue, 22 Sep 2020 12:39:41 +0100
Subject: [PATCH] Apply suggestions from code review

---
 .../docs/setup/best-practices/multiple-zones.md | 16 ++++++++++------
 1 file changed, 10 insertions(+), 6 deletions(-)

diff --git a/content/en/docs/setup/best-practices/multiple-zones.md b/content/en/docs/setup/best-practices/multiple-zones.md
index 4c571668bf9d8..d613a06a05919 100644
--- a/content/en/docs/setup/best-practices/multiple-zones.md
+++ b/content/en/docs/setup/best-practices/multiple-zones.md
@@ -28,7 +28,7 @@ one zone also impairs services in another zone.
 
 ## Control plane behavior
 
-All [control plane components](https://kubernetes.io/docs/concepts/overview/components/#control-plane-components)
+All [control plane components](/docs/concepts/overview/components/#control-plane-components)
 support running as a pool of interchangable resources, replicated per
 component.
 
@@ -50,9 +50,9 @@ a third-party load balancing solution with health checking.
 ## Node behavior
 
 Kubernetes automatically spreads the Pods for a
-{{< glossary_tooltip text="Deployment" term_id="deployment" >}} or
-{{< glossary_tooltip text="ReplicaSet" term_id="replica-set" >}}
-or service across different nodes in a cluster. This spreading helps
+workload resource (such as a {{< glossary_tooltip text="Deployment" term_id="deployment" >}}
+or {{< glossary_tooltip text="StatefulSet" term_id="statefulset" >}})
+across different nodes in a cluster. This spreading helps
 reduce the impact of failures.
 
 When nodes start up, the kubelet on each node automatically adds
@@ -126,12 +126,16 @@ of different failure zones, does vary depending on exactly how your cluster is s
 When you set up your cluster, you might also need to consider whether and how
 your setup can restore service if all of the failure zones in a region go
 off-line at the same time. For example, do you rely on there being at least
-one running node so that cluster-critical Pods can perform repair work?
+one running node? Make sure that any cluster-critical repair work does not rely
+on there being at least one healthy node in your cluster. For example: if all nodes
+are unhealthy, you might need to run a repair Job with a special
+{{< glossary_tooltip text="toleration" term_id="toleration" >}} so that the repair
+can complete enough to bring at least one node into service.
 
 Kubernetes doesn't come with an answer for this challenge; however, it's something to consider.
 
 ## {{% heading "whatsnext" %}}
 
-If you want to learn more about how the scheduler places Pods in your cluster,
+To learn how the scheduler places Pods in a cluster,
 honoring the configured constraints, visit
 [Scheduling and Eviction](/docs/concepts/scheduling-eviction/).
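
The last hunk above adds a note about running a repair Job with a special toleration when every node in the cluster is unhealthy. A minimal sketch of what such a Job might look like follows; the Job name, container image, and the choice of tolerated taints are illustrative assumptions, not something this patch specifies.

```yaml
# Illustrative sketch only; adapt the image, command, and tolerations
# to whatever your repair workflow actually needs.
apiVersion: batch/v1
kind: Job
metadata:
  name: cluster-repair          # hypothetical name
spec:
  backoffLimit: 3
  template:
    spec:
      restartPolicy: Never
      # Tolerate the taints that the node lifecycle controller places on
      # unhealthy nodes, so this Pod can still be scheduled onto one of them.
      tolerations:
      - key: "node.kubernetes.io/not-ready"
        operator: "Exists"
      - key: "node.kubernetes.io/unreachable"
        operator: "Exists"
      containers:
      - name: repair
        image: example.com/cluster-repair:1.0   # placeholder image
        command: ["/bin/sh", "-c", "echo 'run repair steps here'"]
```

A toleration with `operator: Exists` and no `effect` matches every effect for that key, which is what lets the Pod be placed on, and keep running on, a node that has already been tainted as not ready or unreachable.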