Apply suggestions from code review
sftim authored Sep 22, 2020
1 parent 4c2ebdc commit 7736683
Showing 1 changed file with 10 additions and 6 deletions.
content/en/docs/setup/best-practices/multiple-zones.md
@@ -28,7 +28,7 @@ one zone also impairs services in another zone.

## Control plane behavior

-All [control plane components](https://kubernetes.io/docs/concepts/overview/components/#control-plane-components)
+All [control plane components](/docs/concepts/overview/components/#control-plane-components)
support running as a pool of interchangeable resources, replicated per
component.

@@ -50,9 +50,9 @@ a third-party load balancing solution with health checking.
## Node behavior

Kubernetes automatically spreads the Pods for a
{{< glossary_tooltip text="Deployment" term_id="deployment" >}} or
{{< glossary_tooltip text="ReplicaSet" term_id="replica-set" >}}
or service across different nodes in a cluster. This spreading helps
workload resources (such as {{< glossary_tooltip text="Deployment" term_id="deployment" >}}
or {{< glossary_tooltip text="StatefulSet" term_id="statefulset" >}})
across different nodes in a cluster. This spreading helps
reduce the impact of failures.

When nodes start up, the kubelet on each node automatically adds
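
The spreading this hunk describes is the scheduler's default behavior. A workload can also make that spreading explicit with a Pod topology spread constraint. Below is a minimal sketch, not part of this commit's file, assuming a Deployment labeled `app: my-app` and nodes carrying the standard `topology.kubernetes.io/zone` label; the name and image are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                     # placeholder name, for illustration only
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      # Keep the difference in Pod counts between any two zones at 1 or less;
      # fall back to scheduling anyway if the constraint cannot be met.
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: ScheduleAnyway
        labelSelector:
          matchLabels:
            app: my-app
      containers:
      - name: app
        image: registry.example/my-app:1.0   # placeholder image
```

With `whenUnsatisfiable: ScheduleAnyway` the constraint is a soft preference; `DoNotSchedule` would make it a hard requirement.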
@@ -126,12 +126,16 @@ of different failure zones, does vary depending on exactly how your cluster is set up.
When you set up your cluster, you might also need to consider whether and how
your setup can restore service if all of the failure zones in a region go
-off-line at the same time. For example, do you rely on there being at least
-one running node so that cluster-critical Pods can perform repair work?
+off-line at the same time.
+Make sure that any cluster-critical repair work does not rely
+on there being at least one healthy node in your cluster. For example: if all nodes
+are unhealthy, you might need to run a repair Job with a special
+{{< glossary_tooltip text="toleration" term_id="toleration" >}} so that the repair
+can complete enough to bring at least one node into service.

Kubernetes doesn't come with an answer for this challenge; however, it's
something to consider.

## {{% heading "whatsnext" %}}

-If you want to learn more about how the scheduler places Pods in your cluster,
+To learn how the scheduler places Pods in a cluster, honoring the configured constraints,
visit [Scheduling and Eviction](/docs/concepts/scheduling-eviction/).
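
The diff above mentions running a repair Job with a special toleration but does not show one. Below is a minimal sketch, assuming the repair Pod must tolerate the `node.kubernetes.io/not-ready` and `node.kubernetes.io/unreachable` taints that the node lifecycle controller places on unhealthy nodes; the Job name, image, and command are placeholders:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: node-repair                # placeholder name
spec:
  template:
    spec:
      # Tolerate the taints that the node lifecycle controller adds to
      # unhealthy nodes, so this Pod can still be scheduled when no node
      # is Ready.
      tolerations:
      - key: node.kubernetes.io/not-ready
        operator: Exists
      - key: node.kubernetes.io/unreachable
        operator: Exists
      containers:
      - name: repair
        image: registry.example/node-repair:1.0   # placeholder image
        command: ["/repair.sh"]                   # placeholder command
      restartPolicy: OnFailure
```

Leaving out `effect` makes each toleration match every effect for its key, so the Pod can be scheduled onto, and keep running on, a node that is marked unhealthy.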
