diff --git a/config/config-feature-flags.yaml b/config/config-feature-flags.yaml index 8056ae23b69..3288331422f 100644 --- a/config/config-feature-flags.yaml +++ b/config/config-feature-flags.yaml @@ -26,8 +26,8 @@ data: # # The default behaviour is for Tekton to create Affinity Assistants # - # See more in the workspace documentation about Affinity Assistant - # https://github.com/tektoncd/pipeline/blob/main/docs/workspaces.md#affinity-assistant-and-specifying-workspace-order-in-a-pipeline + # See more in the Affinity Assistant documentation + # https://github.com/tektoncd/pipeline/blob/main/docs/affinityassistants.md # or https://github.com/tektoncd/pipeline/pull/2630 for more info. disable-affinity-assistant: "false" # Setting this flag will determine how PipelineRun Pods are scheduled with Affinity Assistant. @@ -39,7 +39,8 @@ data: # and only allows one pipelinerun to run on a node at a time. # Setting it to "disabled" will not apply any coschedule policy. # - # TODO: add links to documentation and migration strategy + # See more in the Affinity Assistant documentation + # https://github.com/tektoncd/pipeline/blob/main/docs/affinityassistants.md # NOTE: this feature is still under development and not yet functional. coschedule: "workspaces" # Setting this flag to "true" will prevent Tekton scanning attached
diff --git a/docs/additional-configs.md b/docs/additional-configs.md index 202fa6e8a07..1ee1693ebe8 100644 --- a/docs/additional-configs.md +++ b/docs/additional-configs.md @@ -193,7 +193,7 @@ that are running while the change occurs. The flags in this ConfigMap are as follows: -- `disable-affinity-assistant` - set this flag to `true` to disable the [Affinity Assistant](./workspaces.md#specifying-workspace-order-in-a-pipeline-and-affinity-assistants) +- `disable-affinity-assistant` - set this flag to `true` to disable the [Affinity Assistant](./affinityassistants.md) that is used to provide Node Affinity for `TaskRun` pods that share workspace volume. The Affinity Assistant is incompatible with other affinity rules configured for `TaskRun` pods. @@ -206,6 +206,13 @@ The flags in this ConfigMap are as follows: node in the cluster must have an appropriate label matching `topologyKey`. If some or all nodes are missing the specified `topologyKey` label, it can lead to unintended behavior. +- `coschedule`: setting this flag determines how PipelineRun Pods are scheduled with the [Affinity Assistant](./affinityassistants.md). +Acceptable values are "workspaces" (default), "pipelineruns", "isolate-pipelinerun", or "disabled". +Setting it to "workspaces" will schedule all the taskruns sharing the same PVC-based workspace in a pipelinerun to the same node. +Setting it to "pipelineruns" will schedule all the taskruns in a pipelinerun to the same node. +Setting it to "isolate-pipelinerun" will schedule all the taskruns in a pipelinerun to the same node, +and only allow one pipelinerun to run on a node at a time. Setting it to "disabled" will not apply any coschedule policy. + + - `await-sidecar-readiness`: set this flag to `"false"` to allow the Tekton controller to start a TasksRun's first step immediately without waiting for sidecar containers to be running first.
Using this option should decrease the time it takes for a TaskRun to start running, and will allow TaskRun @@ -291,6 +298,7 @@ Features currently in "alpha" are: | [Trusted Resources](./trusted-resources.md) | [TEP-0091](https://github.com/tektoncd/community/blob/main/teps/0091-trusted-resources.md) | N/A | `trusted-resources-verification-no-match-policy` | | [Larger Results via Sidecar Logs](#enabling-larger-results-using-sidecar-logs) | [TEP-0127](https://github.com/tektoncd/community/blob/main/teps/0127-larger-results-via-sidecar-logs.md) | [v0.43.0](https://github.com/tektoncd/pipeline/releases/tag/v0.43.0) | `results-from` | | [Configure Default Resolver](./resolution.md#configuring-built-in-resolvers) | [TEP-0133](https://github.com/tektoncd/community/blob/main/teps/0133-configure-default-resolver.md) | N/A | | +| [Coschedule](./affinityassistants.md) | [TEP-0135](https://github.com/tektoncd/community/blob/main/teps/0135-coscheduling-pipelinerun-pods.md) | N/A | `coschedule` | ### Beta Features
diff --git a/docs/affinityassistants.md b/docs/affinityassistants.md new file mode 100644 index 00000000000..8bd9244cfa3 --- /dev/null +++ b/docs/affinityassistants.md @@ -0,0 +1,113 @@ + + +# Affinity Assistants +The Affinity Assistant is a feature that coschedules `PipelineRun` `pods` to the same node +based on [kubernetes pod affinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity) so that the taskruns can execute in parallel while sharing a volume. +Available Affinity Assistant Modes are **coschedule workspaces**, **coschedule pipelineruns**, +**isolate pipelinerun** and **disabled**. + +> :seedling: **coschedule pipelineruns** and **isolate pipelinerun** modes are [**alpha features**](./additional-configs.md#alpha-features). > **coschedule workspaces** is a **stable feature**. + +* **coschedule workspaces** - When a `PersistentVolumeClaim` is used as volume source for a `Workspace` in a `PipelineRun`, +all `TaskRun` pods within the `PipelineRun` that share the `Workspace` will be scheduled to the same Node. + +**Note:** Only one pvc-backed workspace can be mounted to each TaskRun in this mode. + +* **coschedule pipelineruns** - All `TaskRun` pods within the `PipelineRun` will be scheduled to the same Node. + +* **isolate pipelinerun** - All `TaskRun` pods within the `PipelineRun` will be scheduled to the same Node, +and only one PipelineRun is allowed to run on a node at a time. + +* **disabled** - The Affinity Assistant is disabled. No pod coscheduling behavior. + +Because the Affinity Assistant relies on pod affinity, it is incompatible with other affinity rules +configured for the `TaskRun` pods (i.e. other affinity rules specified in a custom [PodTemplate](pipelineruns.md#specifying-a-pod-template) will be overwritten by the Affinity Assistant). +If the `PipelineRun` has a custom [PodTemplate](pipelineruns.md#specifying-a-pod-template) configured, the `NodeSelector` and `Tolerations` fields will also be set on the Affinity Assistant pod. The Affinity Assistant +is deleted when the `PipelineRun` is completed. + +Currently, the Affinity Assistant Modes can be configured by the `disable-affinity-assistant` and `coschedule` feature flags. In 9 months, the `disable-affinity-assistant` feature flag will be deprecated, and the Affinity Assistant Modes will be determined solely by the `coschedule` feature flag.
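For illustration only, a minimal sketch of selecting a mode via these two flags. The flag names and values are taken from `config-feature-flags.yaml` above; the ConfigMap name `feature-flags` and the `tekton-pipelines` namespace are assumptions based on a default installation:

```yaml
# Sketch only: flag names/values come from config-feature-flags.yaml in this change;
# the ConfigMap name and namespace assume a default Tekton Pipelines installation.
apiVersion: v1
kind: ConfigMap
metadata:
  name: feature-flags
  namespace: tekton-pipelines
data:
  # Leave the legacy flag at its default while both flags are still present.
  disable-affinity-assistant: "false"
  # One of: "workspaces" (default), "pipelineruns", "isolate-pipelinerun", "disabled".
  coschedule: "workspaces"
```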
+ +The following chart summarizes the Affinity Assistant Modes with different combinations of the `disable-affinity-assistant` and `coschedule` feature flags during migration (when both feature flags are present) and after the migration (when only the `coschedule` flag is present).
+
+| disable-affinity-assistant | coschedule | behavior during migration | behavior after migration |
+| -------------------------- | ---------- | ------------------------- | ------------------------ |
+| false (default) | disabled | N/A: invalid | disabled |
+| false (default) | workspaces (default) | coschedule workspaces | coschedule workspaces |
+| false (default) | pipelineruns | N/A: invalid | coschedule pipelineruns |
+| false (default) | isolate-pipelinerun | N/A: invalid | isolate pipelinerun |
+| true | disabled | disabled | disabled |
+| true | workspaces (default) | disabled | coschedule workspaces |
+| true | pipelineruns | coschedule pipelineruns | coschedule pipelineruns |
+| true | isolate-pipelinerun | isolate pipelinerun | isolate pipelinerun |
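For example, per the chart above, opting into **coschedule pipelineruns** while both flags are still present requires explicitly setting the legacy flag to `"true"`. A sketch of the relevant `data` fields only (field names as in the ConfigMap above):

```yaml
data:
  # During migration, the new modes only take effect when the legacy flag is "true".
  disable-affinity-assistant: "true"
  # Schedule all TaskRun pods of a PipelineRun to the same node.
  coschedule: "pipelineruns"
```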
+ +**Note:** If you previously accepted the default behavior (`disable-affinity-assistant`: `false`) but now want one of the new features, set `disable-affinity-assistant` to "true" and then turn on the new behavior by setting the `coschedule` flag. If you previously disabled the affinity assistant and want one of the new features, just set the `coschedule` flag accordingly. + +**Note:** The Affinity Assistant uses [Inter-pod affinity and anti-affinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity), +which requires a substantial amount of processing and can slow down scheduling in large clusters +significantly. We do not recommend using the affinity assistant in clusters larger than several hundred nodes. + +**Note:** Pod anti-affinity requires nodes to be consistently labelled, in other words every +node in the cluster must have an appropriate label matching `topologyKey`. If some or all nodes +are missing the specified `topologyKey` label, it can lead to unintended behavior. + +**Note:** Any time during the execution of a `pipelineRun`, if the node with a placeholder Affinity Assistant pod and +the `taskRun` pods sharing a `workspace` is `cordoned` or disabled for scheduling anything new (`tainted`), the +`pipelineRun` controller deletes the placeholder pod. The `taskRun` pods on a `cordoned` node continue running +until completion. The deletion of a placeholder pod triggers the creation of a new placeholder pod on any available node +such that the rest of the `pipelineRun` can continue without any disruption until it finishes. \ No newline at end of file
diff --git a/docs/workspaces.md b/docs/workspaces.md index 15b4619ed1e..c2722d55d95 100644 --- a/docs/workspaces.md +++ b/docs/workspaces.md @@ -364,28 +364,7 @@ write to or read from that `Workspace`. Use the `runAfter` field in your `Pipeli to define when a `Task` should be executed. For more information, see the [`runAfter` documentation](pipelines.md#using-the-runafter-parameter). When a `PersistentVolumeClaim` is used as volume source for a `Workspace` in a `PipelineRun`, -an Affinity Assistant will be created. The Affinity Assistant acts as a placeholder for `TaskRun` pods -sharing the same `Workspace`. All `TaskRun` pods within the `PipelineRun` that share the `Workspace` -will be scheduled to the same Node as the Affinity Assistant pod. This means that Affinity Assistant is incompatible -with e.g. other affinity rules configured for the `TaskRun` pods. If the `PipelineRun` has a custom -[PodTemplate](pipelineruns.md#specifying-a-pod-template) configured, the `NodeSelector` and `Tolerations` fields -will also be set on the Affinity Assistant pod. The Affinity Assistant -is deleted when the `PipelineRun` is completed. The Affinity Assistant can be disabled by setting the -[disable-affinity-assistant](install.md#customizing-basic-execution-parameters) feature gate to `true`. - -**Note:** Affinity Assistant use [Inter-pod affinity and anti-affinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity) -that require substantial amount of processing which can slow down scheduling in large clusters -significantly. We do not recommend using them in clusters larger than several hundred nodes -
-**Note:** Pod anti-affinity requires nodes to be consistently labelled, in other words every -node in the cluster must have an appropriate label matching `topologyKey`.
If some or all nodes -are missing the specified `topologyKey` label, it can lead to unintended behavior. - -**Note:** Any time during the execution of a `pipelineRun`, if the node with a placeholder Affinity Assistant pod and -the `taskRun` pods sharing a `workspace` is `cordoned` or disabled for scheduling anything new (`tainted`), the -`pipelineRun` controller deletes the placeholder pod. The `taskRun` pods on a `cordoned` node continues running -until completion. The deletion of a placeholder pod triggers creating a new placeholder pod on any available node -such that the rest of the `pipelineRun` can continue without any disruption until it finishes. +an Affinity Assistant will be created. For more information, see the [`Affinity Assistants` documentation](affinityassistants.md). #### Specifying `Workspaces` in `PipelineRuns`
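To make the coscheduling behavior concrete, here is a minimal, hypothetical `PipelineRun` in which two `TaskRuns` share one PVC-backed workspace; under the default **coschedule workspaces** mode their pods would land on the same node. All names, images, and sizes below are illustrative only, and the `tekton.dev/v1` apiVersion assumes a recent Tekton release:

```yaml
# Hypothetical example: both TaskRuns mount the "shared-data" workspace backed by
# a PVC created from volumeClaimTemplate, so the Affinity Assistant coschedules
# their pods onto the same node under the default "coschedule workspaces" mode.
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  generateName: affinity-assistant-demo-
spec:
  pipelineSpec:
    workspaces:
      - name: shared-data
    tasks:
      - name: write
        workspaces:
          - name: shared-data
            workspace: shared-data
        taskSpec:
          workspaces:
            - name: shared-data
          steps:
            - name: write-message
              image: alpine
              script: |
                echo "hello from the first task" > $(workspaces.shared-data.path)/message
      - name: read
        runAfter: ["write"]
        workspaces:
          - name: shared-data
            workspace: shared-data
        taskSpec:
          workspaces:
            - name: shared-data
          steps:
            - name: read-message
              image: alpine
              script: |
                cat $(workspaces.shared-data.path)/message
  workspaces:
    - name: shared-data
      volumeClaimTemplate:
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 16Mi
```

With `coschedule: "pipelineruns"` or `"isolate-pipelinerun"`, the same placement would apply to every `TaskRun` in the `PipelineRun`, whether or not it mounts the workspace.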