# Adding documentation for Topology Aware Hints #27032
@@ -0,0 +1,156 @@
---
reviewers:
- robscott
title: Topology Aware Hints
content_type: concept
weight: 10
---

<!-- overview -->

{{< feature-state for_k8s_version="v1.21" state="alpha" >}}

_Topology Aware Hints_ enable topology aware routing by including suggestions
for how clients should consume endpoints. This approach adds metadata so that
consumers of EndpointSlice (or Endpoints) objects can route traffic closer to
where it originated. For example, you can route traffic within a locality to
reduce costs and improve performance.

<!-- body -->

## Motivation

Kubernetes clusters are increasingly deployed in multi-zone environments.
_Topology Aware Hints_ provide a mechanism to help keep traffic within the zone
it originated from. This concept is commonly referred to as "Topology Aware
Routing". When calculating the endpoints for a Service, the EndpointSlice
controller considers the topology (region and zone) of each endpoint and
populates the hints field to allocate each endpoint to a zone. These hints are
then consumed by components like kube-proxy as they configure how requests are
routed.


## Using Topology Aware Hints

You can enable Topology Aware Hints for a Service by setting the
`service.kubernetes.io/topology-aware-hints` annotation to `auto`. This tells
the EndpointSlice controller to set topology hints if it is deemed safe.
Importantly, this does not guarantee that hints will always be set.

> **Review discussion:**
> * "Auto" or "auto"? It should be "Auto" probably.
> * Unfortunately it is "auto"; should I try to fix that in upstream? https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/endpointslice/utils.go#L395
> * Probably, and now you get to handle both!
> * Covered by kubernetes/kubernetes#100728
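
For example, a Service that opts in to Topology Aware Hints could look like the
following minimal sketch (the Service name, selector, and ports are
illustrative, not part of the original documentation):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-svc
  annotations:
    service.kubernetes.io/topology-aware-hints: "auto"
spec:
  selector:
    app: example          # illustrative selector
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 8080      # illustrative target port
```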

## How it Works

> **Review discussion:**
> * Optional improvement: I think it'd really help to have a diagram showing the difference in traffic (or similar) for a Service with or without hints. BTW, we can use Mermaid for diagrams so you don't need to use graphics software.
> * Agree, this is a good idea, will try to set aside some time for this. Do you have any examples of diagrams that have been done well in k8s docs? Is Mermaid optional?
> * https://kubernetes.io/docs/concepts/services-networking/ingress/ is my usual example of diagrams in Mermaid. https://kubernetes.io/docs/concepts/overview/components/ shows our "house style"; it uses icons from https://github.com/kubernetes/community/tree/master/icons
> * Actually I notice there's no icon for EndpointSlice yet.
> * It's OK to add the diagram later, e.g. when this is scheduled to go beta.
> * Been overloaded this week so I don't think I'll get time to add this in for this release, but will keep it in mind for beta; agree that it would be helpful.

The functionality enabling this feature is split into two components: the
EndpointSlice controller and kube-proxy. This section provides a high level
overview of how each component implements this feature.

### EndpointSlice controller

The EndpointSlice controller is responsible for setting hints on EndpointSlices
when this feature is enabled. The controller allocates a proportional amount of
endpoints to each zone. This proportion is based on the
[allocatable](/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable)
CPU cores for nodes running in that zone. For example, if one zone had 2 CPU
cores and another zone only had 1 CPU core, the controller would allocate twice
as many endpoints to the zone with 2 CPU cores.

The following example shows what an EndpointSlice looks like when hints have
been populated:

```yaml
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: example-hints
  labels:
    kubernetes.io/service-name: example-svc
addressType: IPv4
ports:
  - name: http
    protocol: TCP
    port: 80
endpoints:
  - addresses:
      - "10.1.2.3"
    conditions:
      ready: true
    hostname: pod-1
    zone: zone-a
    hints:
      forZones:
        - name: "zone-a"
```

### kube-proxy

kube-proxy filters the endpoints it routes to based on the hints set by the
EndpointSlice controller. In most cases, this means that kube-proxy will route
traffic to endpoints in the same zone. Sometimes the controller allocates
endpoints from a different zone to ensure a more even distribution of endpoints
between zones. This would result in some traffic being routed to other zones.

## Safeguards

The Kubernetes control plane and the kube-proxy on each node apply some
safeguard rules before using Topology Aware Hints. If these don't check out,
kube-proxy selects endpoints from anywhere in your cluster, regardless of the
zone.

1. **Insufficient number of endpoints:** If there are fewer endpoints than zones
   in a cluster, the controller will not assign any hints.

2. **Impossible to achieve balanced allocation:** In some cases, it will be
   impossible to achieve a balanced allocation of endpoints among zones. For
   example, if zone-a is twice as large as zone-b, but there are only 2
   endpoints, the endpoint allocated to zone-a may receive twice as much traffic
   as the endpoint in zone-b. The controller will not assign hints if it can't
   get this "expected overload" value below an acceptable threshold for each
   zone. Importantly, this is not based on real-time feedback. It is still
   possible for individual endpoints to become overloaded.

3. **One or more Nodes has insufficient information:** If any node does not have
   a `topology.kubernetes.io/zone` label or is not reporting a value for
   allocatable CPU, the control plane does not set any topology-aware endpoint
   hints and so kube-proxy does not filter endpoints by zone (see the example
   Node metadata after this list).

   > **Review discussion:**
   > * Is this true even when a single node may be missing topology labels for a short period of time? In other words, during that short period of time, would kube-proxy flap between topology-aware hints and all endpoints?
   > * Yes, that is what it is for now.
   > * Yep, the current approach is very quick to bail out. Would like to try to find some kind of acceptable threshold for missing data, but that's been hard to quantify so far.

4. **One or more endpoints does not have a zone hint:** When this happens,
   kube-proxy assumes that a transition from or to Topology Aware Hints is
   underway. Filtering endpoints for a Service in this state would be dangerous,
   so kube-proxy falls back to using all endpoints.

5. **A zone is not represented in hints:** If kube-proxy is unable to find at
   least one endpoint with a hint targeting the zone it is running in, it will
   fall back to using endpoints from all zones. This is most likely to happen as
   a new zone is being added to a cluster.

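To make safeguard 3 concrete, the hint calculation relies on each Node carrying
a zone label and reporting allocatable CPU. The following sketch shows only the
fields the controller reads; the node name and CPU value are illustrative, and
`status.allocatable` is reported by the kubelet rather than set by you:

```yaml
apiVersion: v1
kind: Node
metadata:
  name: node-a-1                            # illustrative node name
  labels:
    topology.kubernetes.io/zone: zone-a     # required for the zone hint calculation
status:
  allocatable:
    cpu: "4"                                # allocatable CPU informs the per-zone proportions
```
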
## Constraints

* Topology Aware Hints are not used when either `externalTrafficPolicy` or
  `internalTrafficPolicy` is set to `Local` on a Service. It is possible to use
  both features in the same cluster on different Services, just not on the same
  Service (see the sketch after this list).

* This approach will not work well for Services that have a large proportion of
  traffic originating from a subset of zones. Instead this assumes that incoming
  traffic will be roughly proportional to the capacity of the Nodes in each
  zone.

  > **Review discussion:**
  > * The APPROACH might, but the implementation will need to adapt to handle it, right?
  > * For an alpha feature, it's OK to document current shortcomings without making a commitment about future improvements to address them (we don't want to make promises unless we're sure of delivering on them).
  > * This is worth further discussion, but the only ways I can think to adapt to this would be new metrics that could enable feedback-based balancing or extra configuration.
  > * What I meant is that the implementation could evolve to consider feedback and the ...

* The EndpointSlice controller ignores unready nodes as it calculates the
  proportions of each zone. This could have unintended consequences if a large
  portion of nodes are unready.

* The EndpointSlice controller does not take into account {{< glossary_tooltip
  text="tolerations" term_id="toleration" >}} when calculating the
  proportions of each zone. If the Pods backing a Service are limited to a
  subset of Nodes in the cluster, this will not be taken into account.

* This may not work well with autoscaling. For example, if a lot of traffic is
  originating from a single zone, only the endpoints allocated to that zone will
  be handling that traffic. That could result in the {{< glossary_tooltip
  text="Horizontal Pod Autoscaler" term_id="horizontal-pod-autoscaler" >}}
  either not picking up on this event, or newly added pods starting in a
  different zone.

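As an illustration of the first constraint above, a Service like the following
would not get topology aware hints, because `externalTrafficPolicy: Local`
takes precedence. This is a minimal sketch; the name, selector, and ports are
illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-local-svc                                 # illustrative name
  annotations:
    service.kubernetes.io/topology-aware-hints: "auto"    # has no effect for this Service
spec:
  type: NodePort                      # externalTrafficPolicy requires NodePort or LoadBalancer
  externalTrafficPolicy: Local        # Topology Aware Hints are not used with this setting
  selector:
    app: example
  ports:
  - name: http
    protocol: TCP
    port: 80
```
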
## {{% heading "whatsnext" %}}

* Read about [enabling Topology Aware Hints](/docs/tasks/administer-cluster/enabling-topology-aware-hints)
* Read [Connecting Applications with Services](/docs/concepts/services-networking/connect-applications-service/)

@@ -0,0 +1,39 @@
---
reviewers:
- robscott
title: Enabling Topology Aware Hints
content_type: task
min-kubernetes-server-version: 1.21
---

<!-- overview -->
{{< feature-state for_k8s_version="v1.21" state="alpha" >}}

_Topology Aware Hints_ enable topology aware routing with topology hints
included in EndpointSlices. This approach tries to keep traffic close to where
it originated from. This can result in decreased costs and improved performance.

## {{% heading "prerequisites" %}}

{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}

The following prerequisites are needed in order to enable topology aware hints:

* {{< glossary_tooltip text="Kube-proxy" term_id="kube-proxy" >}} running in
  iptables mode or IPVS mode
* Ensure that you have not disabled EndpointSlices

## Enable Topology Aware Hints

To enable topology aware hints, enable the `TopologyAwareHints` [feature
gate](/docs/reference/command-line-tools-reference/feature-gates/) for the
kube-apiserver, kube-controller-manager, and kube-proxy:

```
--feature-gates="TopologyAwareHints=true"
```

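As one hedged example of wiring this up in a test environment: if you use
[kind](https://kind.sigs.k8s.io/) to run a local cluster, its cluster
configuration can pass a feature gate to the cluster components. This sketch
assumes kind's `featureGates` setting and an illustrative file name; it is not
part of the documented task, so check the kind documentation for your version:

```yaml
# kind-config.yaml (illustrative file name)
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
featureGates:
  # Intended to enable the gate on all cluster components
  TopologyAwareHints: true
```

You could then create the cluster with `kind create cluster --config kind-config.yaml`.
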
## {{% heading "whatsnext" %}}

* Read about [Topology Aware Hints](/docs/concepts/services-networking/topology-aware-hints) for Services
* Read [Connecting Applications with Services](/docs/concepts/services-networking/connect-applications-service/)

> **Review discussion:**
> * Sorry if this was covered in the PR review, I was unable to review it due to other priorities :( Do we "drop annotations" like we do with fields when the feature gate is off, or is the annotation just ignored in this case?
> * Ignored.
> * Yep, the annotation is a way for users to opt into this feature on a Service, so dropping that opt-in would not be good here.