
Pod Disruption Budgets per Replicaset in a Rollout #2419

Closed
ssanders1449 opened this issue Nov 19, 2022 · 1 comment
Labels
enhancement New feature or request

Comments

@ssanders1449
Contributor

Summary

Currently, Pod Disruption Budgets (PDBs) do not work well with Rollouts, for the following reasons:

  1. A PDB targeting a Rollout does not support maxUnavailable or percentages. You can only specify minAvailable, and it must be an integer, not a percentage. See https://kubernetes.io/docs/tasks/run-application/configure-pdb/#arbitrary-controllers-and-selectors
  2. It operates on all pods (both canary and stable), so we can't have separate PDBs for the stable and canary ReplicaSets.

My proposal is to add a 'pdb' section to the Rollout specification. Whenever the Argo Rollouts controller creates/deletes a ReplicaSet, it would also create/delete a PDB with the same selectors (including rollouts-pod-template-hash). This way, we would have a separate PDB per ReplicaSet. Since a ReplicaSet is a built-in controller, the PDB would support both 'maxUnavailable' and percentages.
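To illustrate, the proposed 'pdb' section might look something like this. This is a hypothetical shape for a field that does not exist in the Rollout API; the field name and placement are assumptions for discussion only:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: example-rollout
spec:
  replicas: 10
  strategy:
    canary: {}
  # Hypothetical proposed field (not part of the Rollout API today).
  # The controller would create one PDB per ReplicaSet it manages,
  # copying the ReplicaSet's selector (including rollouts-pod-template-hash).
  pdb:
    maxUnavailable: 10%
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: app
        image: nginx:1.25
```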

Use Cases

Set a PDB with 'maxUnavailable: 10%', so that when a cluster scales down, no more than 10% of the pods in each ReplicaSet will be evicted at a time.

Message from the maintainers:

Impacted by this bug? Give it a 👍. We prioritize the issues with the most 👍.

@ssanders1449
Contributor Author

The premise behind this request was wrong. We just tested a Rollout with a PDB that specified 'maxUnavailable: 10%', and the PDB worked perfectly. We even tested with separate PDBs for the 'stable' and 'canary' ReplicaSets (by adding labels via stableMetadata and canaryMetadata), and both PDBs worked properly with 'maxUnavailable: 10%'.
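The working setup described above might be sketched as follows. The label key/value ('role: stable') and resource names are illustrative assumptions, not taken from the issue; the mechanism is the Rollout's stableMetadata/canaryMetadata attaching labels that a per-ReplicaSet PDB can select on:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: example-rollout
spec:
  replicas: 10
  strategy:
    canary:
      # Labels injected onto stable and canary pods, respectively,
      # so each PDB below can target one group of pods.
      stableMetadata:
        labels:
          role: stable
      canaryMetadata:
        labels:
          role: canary
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: app
        image: nginx:1.25
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: example-stable-pdb
spec:
  # Percentages work because the pods' owning controller (a ReplicaSet)
  # implements the scale subresource.
  maxUnavailable: 10%
  selector:
    matchLabels:
      app: example
      role: stable
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: example-canary-pdb
spec:
  maxUnavailable: 10%
  selector:
    matchLabels:
      app: example
      role: canary
```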

It looks like the relevant reference is https://kubernetes.io/docs/tasks/run-application/configure-pdb/#identify-an-application-to-protect

From Kubernetes 1.15, PDBs support custom controllers where the scale subresource is enabled.
