Kubespray 3.0 discussion #6400

Closed
EppO opened this issue Jul 14, 2020 · 63 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@EppO
Contributor

EppO commented Jul 14, 2020

What would you like to be added:

Kubeadm control plane mode

kubeadm join is the recommended way to bring up non-first control plane nodes and worker nodes. We should set kubeadm_control_plane to true by default. I'm not sure it makes sense to keep the legacy "kubeadm init everywhere" use case around. Are there any edge cases with the control plane mode?
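For illustration, the flip itself would be a one-line inventory change; a minimal sketch, assuming the usual group_vars layout (the file path is illustrative, the variable name is the one discussed here):

  # inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml (illustrative path)
  # Join secondary control-plane nodes with `kubeadm join --control-plane`
  # instead of re-running `kubeadm init` on each of them.
  kubeadm_control_plane: true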

Use etcdadm to manage external etcd cluster

There is a valid use case for an "external" etcd cluster not managed by kubeadm, especially when etcd is not deployed on the control plane nodes. Currently, the etcd setup is fairly manual, fragile (for instance during upgrades), and hard to debug. https://github.com/kubernetes-sigs/etcdadm is supposed to make etcd management easier. In the long run, kubeadm will eventually use etcdadm under the hood. It would be a good idea to implement it for the "external" etcd use case as well. Moreover, adding support for a BYO etcd cluster (#6398) should be fairly easy if we go down that path.
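As a rough sketch of what driving etcdadm from an etcd role could look like (task names, the binary path and the address lookup are illustrative assumptions, not an existing Kubespray role; etcdadm's init/join subcommands are real):

  - name: Initialize the first etcd member with etcdadm
    command: /usr/local/bin/etcdadm init
    when: inventory_hostname == groups['etcd'][0]

  - name: Join the remaining etcd members to the first one
    command: >-
      /usr/local/bin/etcdadm join
      https://{{ hostvars[groups['etcd'][0]].ansible_default_ipv4.address }}:2379
    when: inventory_hostname != groups['etcd'][0]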

Review CI matrix

  • Use a DAG to simplify the matrix, as GitLab supports it. Having a CI matrix per platform (Ubuntu, RH/CentOS, Debian, ...) would make it clearer for end-users what is officially supported (see the sketch after this list).
  • Improve Molecule test coverage across supported OSes. Most likely some role rewrites are required to make the roles independent and easier to test in isolation.
  • Use rules to run only the relevant CI jobs (just markdown lint for docs, just the specific tests for a network plugin change, ...) in order to speed up the feedback loop.
  • Separate provisioning/setup from validation. That would allow us to add Node conformance tests to the CI pipeline.
  • Ensure all playbooks are tested properly (cluster.yml, recover-control-plane.yml, remove-node.yml, reset.yml, scale.yml, upgrade-cluster.yml).
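A hedged sketch of the two GitLab CI features mentioned in the list above (job names, stages and file globs are illustrative, not the actual Kubespray pipeline):

  # .gitlab-ci.yml fragment (illustrative)
  markdown-lint:
    stage: lint
    rules:
      - changes:
          - "**/*.md"                    # docs-only changes trigger only the docs jobs

  deploy-ubuntu20-calico:
    stage: deploy
    needs: ["molecule-containerd"]       # DAG: start as soon as the needed job finishes
    rules:
      - changes:
          - "roles/network_plugin/**/*"
          - "roles/container-engine/**/*"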

Switch cgroup driver default to systemd

Kubespray officially supports only systemd-based Linux distros. We should not have two cgroup managers (see kubernetes/kubeadm#1394 (comment) for technical details).
This is a backward-incompatible change, so maybe default to it for new installs but keep the current setting for upgrades?
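For context, the switch itself is a kubelet-level setting; a minimal sketch of the upstream KubeletConfiguration fragment kubeadm passes to the kubelet (how Kubespray would template it, and the matching SystemdCgroup setting on the container runtime side, are left open here):

  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  cgroupDriver: systemd   # must match the container runtime's cgroup driver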

Remove docker requirements

There are still some hardcoded docker commands in the code (network plugins, etcd, node role, ...). One of Kubespray's goals is to "Deploy a Production Ready Kubernetes Cluster", so for security purposes it should NOT ship a container engine capable of building new container images by default. Containerd would be a more secure default setting. In order to make that transition, we need to use crictl where docker is used today.
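Illustrative only (not the actual Kubespray tasks): once the kubelet talks to a CRI runtime, the hardcoded docker calls have crictl equivalents, for example:

  - name: List running containers (was "docker ps")
    command: crictl ps
    changed_when: false

  - name: Pre-pull an image (was "docker pull")
    command: "crictl pull {{ image_name }}"   # image_name is a hypothetical variable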

Why is this needed:
We need to address technical debt. The code base is large, and some areas are old and unmaintained. I'd like to take the opportunity of the next major release to slim the code base down as much as possible and make the CI more agile so we get quicker feedback.

/cc @floryut, @Miouge1, @mattymo, @LuckySB

@EppO EppO added the kind/feature Categorizes issue or PR as related to a new feature. label Jul 14, 2020
@floryut
Member

floryut commented Jul 16, 2020

Remove docker requirements

There are still some hardcoded docker commands in the code (network plugins, etcd, node role, ...). One of Kubespray's goals is to "Deploy a Production Ready Kubernetes Cluster", so for security purposes it should NOT ship a container engine capable of building new container images by default. Containerd would be a more secure default setting. In order to make that transition, we need to use crictl where docker is used today.

I'm all in for that. A PR was raised a long time ago to set containerd as the default runtime (but was dropped as too much work and too breaking a change); that would allow us to get rid of a lot of default docker commands and at the same time move toward something more CRI-oriented.

@Miouge1
Contributor

Miouge1 commented Jul 16, 2020

RELEASE.md says:

Kubespray doesn't follow semver. [...] Breaking changes, if any introduced by changed defaults or non-contrib ansible roles' playbooks, shall be described in the release notes.

AFAIK we already made non-backwards-compatible changes in the v2.x series of Kubespray (when moving to kubeadm, for instance). The "production ready" part is largely about providing a path for people to move from v2.X to v2.(X+1).

What I'm saying is that we can do breaking changes (like changing default container engine) as long as they are accepted by the community and well documented.

@EppO I thought non-kubeadm was removed in #3811; are there other things that need clean-up? kubeadm has been the only supported deployment method since v2.9.

For the GitLab CI rules: and only:changes, last I checked GitLab CI (via Failfast) is unaware of the target branch and therefore doesn't know what to compare against; the fallback mechanism explained here is problematic for PRs with multiple commits.
Another area to consider is that Prow has support for such features (see run_if_changed in https://github.com/kubernetes/test-infra/blob/master/prow/jobs.md)

For conformance tests, there is sonobuoy_enabled: true available and I think it's enabled on 2 CI jobs currently: config and output

@MarkusTeufelberger has some very valuable input on role design and Molecule, and has raised a couple of issues around it. Examples: #4622 #3961

@EppO
Contributor Author

EppO commented Jul 16, 2020

RELEASE.md says:

Kubespray doesn't follow semver. [...] Breaking changes, if any introduced by changed defaults or non-contrib ansible roles' playbooks, shall be described in the release notes.

AFAIK we already made non-backwards-compatible changes in the v2.x series of Kubespray (when moving to kubeadm, for instance). The "production ready" part is largely about providing a path for people to move from v2.X to v2.(X+1).

Good to know. I was more worried about end-users who may not know this and end up breaking some production clusters while trying to upgrade, hence a 3.0 proposal that is more explicit about that kind of breaking change.

@EppO I thought non-kubeadm was removed in #3811; are there other things that need clean-up? kubeadm has been the only supported deployment method since v2.9.

I missed it because I hadn't changed my inventory in a while and some deprecated options were still there. I think it would also be beneficial for end-users to list the deprecated inventory options for each release. I guess I'm not the only one with some old settings :)

For the GitLab CI rules: and only:changes, last I checked GitLab CI (via Failfast) is unaware of the target branch and therefore doesn't know what to compare against; the fallback mechanism explained here is problematic for PRs with multiple commits.
Another area to consider is that Prow has support for such features (see run_if_changed in https://github.com/kubernetes/test-infra/blob/master/prow/jobs.md)

I hear you. We can't use pipelines for merge requests because we don't create the merge request in GitLab, so that's a dead end. But I'm convinced we should architect the CI around better change detection to get a quicker feedback loop; if Prow is an option, we should look at it.

For conformance tests, there is sonobuoy_enabled: true available and I think it's enabled on 2 CI jobs currently: config and output

I guess we have some work to do in that area then :)

The maximum supported Kubernetes version is 1.16.99, but the server version is v1.18.5. Sonobuoy will continue but unexpected results may occur.

Ideally we should run conformance tests regularly to test various setup combinations and not wait until release time to run the full conformance tests. That's why I was suggesting separating them from the install/upgrade use cases.

@EppO
Contributor Author

EppO commented Jul 30, 2020

etcd_kubeadm_enabled: false

What about etcd? Should we change that default to true? It makes etcd upgrades impossible outside of Kubernetes upgrades; AFAIK kubeadm still doesn't support upgrading etcd independently of the Kubernetes components.
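For readers less familiar with the two modes, this is roughly what the choice maps to in kubeadm's ClusterConfiguration (upstream kubeadm API; endpoints and certificate paths are illustrative):

  # etcd_kubeadm_enabled: true -> stacked etcd, managed by kubeadm
  apiVersion: kubeadm.k8s.io/v1beta2
  kind: ClusterConfiguration
  etcd:
    local: {}
  ---
  # etcd_kubeadm_enabled: false -> external etcd, managed by Kubespray (or BYO)
  apiVersion: kubeadm.k8s.io/v1beta2
  kind: ClusterConfiguration
  etcd:
    external:
      endpoints:
        - https://10.0.0.10:2379
      caFile: /etc/ssl/etcd/ssl/ca.pem
      certFile: /etc/ssl/etcd/ssl/node.pem
      keyFile: /etc/ssl/etcd/ssl/node-key.pem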

@EppO
Contributor Author

EppO commented Jul 31, 2020

  • Add CI job to test scale playbook

@floryut
Member

floryut commented Jul 31, 2020

  • Add CI job to test scale playbook

I also thought about that; scale and remove need some love from the CI.

@hafe
Contributor

hafe commented Aug 4, 2020

Flip the default of the kubeadm_control_plane var to true and remove "experimental" from the code?

@hafe
Contributor

hafe commented Aug 4, 2020

Setting etcd_kubeadm_enabled: true makes all etcdctl-related use cases stop working.
There is also no backup procedure for kubeadm-managed etcd.
I started looking at it.
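A sketch of what a backup task for kubeadm-managed etcd could look like (not an existing Kubespray playbook; the certificate paths are the kubeadm defaults, the snapshot destination is arbitrary):

  - name: Snapshot kubeadm-managed etcd
    command: >-
      etcdctl snapshot save /var/backups/etcd-snapshot.db
      --endpoints=https://127.0.0.1:2379
      --cacert=/etc/kubernetes/pki/etcd/ca.crt
      --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt
      --key=/etc/kubernetes/pki/etcd/healthcheck-client.key
    environment:
      ETCDCTL_API: "3"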

@EppO
Contributor Author

EppO commented Aug 4, 2020

Flip the default of the kubeadm_control_plane var to true and remove "experimental" from the code?

That's actually what I was referring to with "Drop non-kubeadm deployment", but I mixed up two different use cases: since 2.9 Kubespray always uses kubeadm to provision the cluster, but by default it doesn't use kubeadm join on the non-first control plane nodes (just another run of kubeadm init).
I think the join model is the right way forward.
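In practice the join model means that only the first control plane node runs kubeadm init (with --upload-certs), and every other control plane node runs something like the following instead of a second init (all values are placeholders; the flags are upstream kubeadm):

  - name: Join an additional control-plane node
    command: >-
      kubeadm join {{ api_endpoint }}:6443
      --token {{ join_token }}
      --discovery-token-ca-cert-hash sha256:{{ ca_cert_hash }}
      --control-plane
      --certificate-key {{ certificate_key }}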

@MarkusTeufelberger
Contributor

Personally I'd like to drop a few features that are relatively exotic or easy to work around/implement yourself, such as downloading binaries and rsync'ing them around instead of just fetching them on each node. This could really simplify the download role.

Another, bigger architectural change could be turning Kubespray into a collection (maybe even adding some roles to https://github.com/ansible-collections/community.kubernetes eventually and/or using them here?) and in general switching to Ansible 2.10.
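For what it's worth, turning the repo into a collection would start from a galaxy.yml at the top level; a hedged sketch (namespace, name and versions are illustrative, not an agreed naming):

  namespace: kubernetes_sigs
  name: kubespray
  version: 3.0.0
  readme: README.md
  authors:
    - Kubespray maintainers
  dependencies:
    community.kubernetes: ">=1.0.0"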

@EppO
Contributor Author

EppO commented Aug 5, 2020

Personally I'd like to drop a few features that are relatively exotic or easy to work around/implement yourself, such as downloading binaries and rsync'ing them around instead of just fetching them on each node. This could really simplify the download role.

I'd prefer to rely on the distro package manager where applicable instead of downloading everything, but if you have a better design for the download role, feel free to submit a PR.

Another, bigger architectural change could be turning Kubespray into a collection (maybe even adding some roles to https://github.com/ansible-collections/community.kubernetes eventually and/or using them here?) and in general switching to Ansible 2.10.

Ansible 2.10 is not released yet, and we need to be careful about which Ansible version is available on each supported distro.
Regarding the usage of Kubespray, I know @Miouge1 wanted to promote the container image use case, where you build your own custom image with your inventory and custom playbooks. That definitely makes sense in a CI pipeline.

@hafe
Contributor

hafe commented Aug 14, 2020

Reducing scope and configurability of Kubespray would be nice.
List of features that could be removed:

  • canal
  • ???

@EppO
Contributor Author

EppO commented Sep 11, 2020

The more I think about it, the more I'm convinced Kubespray should only provision Kubernetes clusters on top of kubeadm, so we should only support the following two use cases on the etcd front:

  • BYO etcd (either by using etcdadm or other means, out of scope of kubespray)
  • etcd managed by kubeadm

That means removing the etcd_deployment_type modes Kubespray supports today. We would still test the BYO etcd use case in the CI, though.

@hafe
Contributor

hafe commented Sep 11, 2020

The more I think about it, the more I'm convinced Kubespray should only provision Kubernetes clusters on top of kubeadm, so we should only support the following two use cases on the etcd front:

  • BYO etcd (either by using etcdadm or other means, out of scope of kubespray)
  • etcd managed by kubeadm

That means removing the etcd_deployment_type modes Kubespray supports today. We would still test the BYO etcd use case in the CI, though.

We could formulate the same as some kind of design statement on how Kubespray embraces, uses, and extends kubeadm rather than working around it.

@holmesb
Contributor

holmesb commented Oct 6, 2020

We need to address technical debt. The code base is large, and some areas are old and unmaintained. I'd like to take the opportunity of the next major release to slim the code base down as much as possible and make the CI more agile so we get quicker feedback.

Helm 3.x has been released since Kubespray 2.x. It no longer requires a Tiller pod and is integrated with Kubernetes RBAC. I think it would be better for Kubespray to refocus on its core competency: deploying production Kubernetes. The most widely used plugins (CNI/CSI) can be included in this. But apps that have a decent Helm chart should now be deployed using that. Helm vs Ansible for deploying apps to Kubernetes is a no-brainer: thanks to its state, Helm is truly declarative, Ansible is not. For example, uninstall a Helm release and your app is removed from Kubernetes; undefine an addon in Kubespray (e.g. cert_manager_enabled=false) and it remains. Most Helm charts are better maintained than the addons in this project. I get the desire for Kubespray to be a one-stop shop, so we could either replace the addons with simple README guidance explaining how to install the former addons using Helm, or, if workable, install the Helm client and version-pinned Helm charts using Kubespray.

That would significantly simplify this project and reduce the maintenance burden.

@hafe
Contributor

hafe commented Nov 3, 2020

I think we are very close to being able to use kubeadm-managed etcd as the default.
What do you think about that?

@jseguillon
Contributor

jseguillon commented Nov 9, 2020

Maybe we could deal with Helm apps in a separate GitHub project?

This project would only focus on:

  • pinning app versions to a given Kubernetes version if needed
  • structuring some playbooks and roles to deploy easily with a Kubespray-structured inventory (i.e. run Helm on masters)
  • maybe deploying Helm?

The attached CI would not require a Kubespray deployment: an inventory plus any Kubernetes cluster should be enough. This would save people from rewriting their own Helm addon playbooks and roles.

EDIT: I first mentioned the dashboard as a Helm chart; bad example, it is plain YAML, so I removed it. By the way, we may think about moving the dashboard out of Kubespray's scope in favor of Helm :)
EDIT 2: after searching a bit, it seems there is no Helm chart for the dashboard.

@ahmadalli

ahmadalli commented Jan 5, 2021

Ansible 2.10 is not released yet, and we need to be careful about which Ansible version is available on each supported distro.

It's been released for a few months now, and it'd be wonderful if we could have Kubespray as an Ansible collection.

@cristiklein
Contributor

I think it would be better for Kubespray to refocus on its core competency: deploying production Kubernetes.

I'm wondering if everyone has the same view on what a "production Kubernetes" is. The way I see it, people expect to get the following (ordered from "minimum" to "maximum" expectations):

  1. "Post-kubeadm", basically just containerd, kubelet, etcd, apiserver and scheduler.
  2. CNIs.
  3. "Load-balancer" providers. Notice that Kubernetes is slowly moving towards out-of-tree cloud providers, so these can also be deployed from kubectl or via Helm Charts. I perceive MetalLB a "bare-metal cloud provider".
  4. Storage providers, e.g., native, Ceph, Rook, local storage, etc.
  5. Scripts to deploy secure infrastructure with right firewall rules and right cloud provider roles for the things above.
  6. Log forwarders, e.g., fluentd.
  7. Ingress controllers, e.g., nginx, Ambassador, Traefik
  8. Cert-manager
  9. External DNS
  10. Security hardening: Falco. Note that the preferred way to deploy Falco is directly on the host, whereas the Helm chart feels more like a fallback solution. So Falco would fit better as an Ansible role.
  11. Gatekeeper/OPA. Note that these require the kube-system namespace to be labeled. It feels more "natural" to me to have this labeling done in the lower layers.

I am split on where to put the demarcation line for "production Kubernetes". On one hand, it would be nice to have Kubespray be the one-stop shop for all of the above. Maintainability could be sustained either by regularly rendering Helm charts into an Ansible files folder or by delegating to Helm directly. In the latter case, Helm could be installed on the controlling machine, installed on the Kubernetes master, or launched as a Pod via kubectl.

On the other hand, I do agree that kubespray needs to stay focused, reduce CI/CD pipeline response time and maintenance burden.

Is there a shared view within the kubespray community on what a "production Kubernetes" is?

Hopefully a "helper" question: What is a Kubernetes addon and should be managed by the Kubernetes control plane vs. what is an app on top of Kubernetes?

@MarkusTeufelberger
Contributor

I would also add monitoring (Prometheus) and tracing (Jæger) to number 6 in your list, by the way, as well as some log viewing/analyzing stack (Loki or Kibana). Probably also some CD mechanism like Flux (https://toolkit.fluxcd.io/) so we don't mess with deploying Kubernetes state via Ansible.

Another feature of "production" is likely updating/upgrading/adding/removing each of these components in a way that keeps the actual workload of the cluster as unaffected as possible. A lot of these things are a mix of programs that run in the cluster itself and stuff that wants to be installed on the host (often without even providing proper packages or repositories upstream, only statically compiled Go binaries). This might also need some design/solution on the Kubespray side.

@holmesb
Contributor

holmesb commented Jan 14, 2021

You could make the case to increase scope infinitely. Perhaps in the v3 timeframe the goal should be to support the status quo using Helm 3. Convert the existing addons to Helm releases. In terms of code, yes, install Helm on the controller machine like @cristiklein suggests, and then for each addon either store a values.yaml or create one at runtime from a template, and run "helm upgrade --install" in an Ansible task to deploy it. We can't currently remove an addon, @MarkusTeufelberger, so again I'd suggest that should be out of scope for v3 (although we can document how to uninstall Helm charts).

That in itself would be a massive reduction in complexity. The extra apps would become declarative: our content for each one would be little more than a values file. The Helm chart maintainers would do the heavy lifting.
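A minimal sketch of that approach, assuming the helm binary is on the controller machine and the chart repository has already been added (release name, chart and values path are illustrative):

  - name: Deploy cert-manager from its upstream chart
    command: >-
      helm upgrade --install cert-manager jetstack/cert-manager
      --namespace cert-manager --create-namespace
      --values {{ playbook_dir }}/values/cert-manager.yml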

I think the scope discussion can be had at a later stage. In any case, it's irrelevant for companies like us who will continue to use Helm, not Kubespray, for anything that has a Helm chart. For us, Kubespray is one stage in a pipeline, so we'd prefer if all the "extras", including installing the helm binary on the controller machine, were kept optional.

@champtar
Contributor

For anyone who has a bit of time: Kubespray Helm integration can start today ;)

@Miouge1
Contributor

Miouge1 commented Jan 15, 2021

@EppO my preferred way to consume Kubespray is to use its Docker image. It makes it easy to reproduce and manage dependencies. I simply mount the inventory with docker run -v $PWD/inventory:/inventory.ini rather than baking it into the image. Wayyyy back in the day, Kubespray had a Python wrapper; I think that was interesting to hide a bit of the Ansible details from users. But on the other hand, it looks like Kubespray is used by lots of Ansible enthusiasts.

@cristiklein the thing is that different orgs or teams will have different requirements for each component (CNI, CRI, ingress, logs, ...). To take ingress as an example, you could use nginx or Traefik or Ambassador or all of them at once.

I think the approach taken so far has been to give a starting point for each component and allow users to take over when their needs go beyond the defaults. The nginx_ingress: enabled option in Kubespray is rarely used in production deployments. Usually you want more control over the management of your ingress (even if you use nginx).

Finally "production" means different things for different orgs.

My opinion is that Kubespray should focus on the core Kubernetes components, things that are used by most people, and drop the settings that drive complexity but are used only by a small portion of the community. If those less popular settings are critical to some people, then they should probably get involved (either themselves or by sponsoring somebody to represent their interests).

@jseguillon yeah, there is definitely an opportunity for an "install all the addons in k8s" type of project separate from Kubespray. Regardless of how you install Kubernetes (Kubespray, kops, AKS, EKS, GKE), you will want to install stuff afterwards (log management, monitoring, security hardening, operators, ...). Like @MarkusTeufelberger mentioned, Flux is a popular option, but I've also seen simple shell scripts work well (think kubectl apply -f mydir).

@cristiklein
Contributor

@MarkusTeufelberger I agree that Prometheus and Flux are nice addons. However, I feel that log and metrics viewing/analyzing tools (e.g., Loki, Kibana, Elasticsearch, OpenDistro, Thanos and Grafana) should be treated as applications, since they are often stateful, require careful/tedious maintenance, and need to be scaled carefully with the incoming log/metrics workload.

@Miouge1 @holmesb @champtar To steer discussions, I created an initial draft of a Helm-based addons deployment for kubespray. PTAL: master...cristiklein:helm-addons

The umbrella chart could become a separate sub-project that is consumed by Kubespray (via a git submodule). Either way, the user is free to use only that part of Kubespray. I think it achieves "batteries included but removable, and feel free to choose between NiMH, Li-ion or an AC adapter".
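For readers not familiar with the pattern, an umbrella chart is just a Chart.yaml whose dependencies pull in the individual addon charts behind enable/disable conditions; a hedged sketch (chart names, versions and repositories are illustrative, not the content of the linked branch):

  apiVersion: v2
  name: kubespray-addons
  version: 0.1.0
  dependencies:
    - name: ingress-nginx
      version: "3.19.0"
      repository: https://kubernetes.github.io/ingress-nginx
      condition: ingress-nginx.enabled
    - name: cert-manager
      version: "v1.1.0"
      repository: https://charts.jetstack.io
      condition: cert-manager.enabled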

Let me know what you think.

@hafe
Contributor

hafe commented Jan 15, 2021

Production ready to me means (among other things) that things have been tested in CI. Therefore I find it very hard to understand, e.g., why multiple Kubernetes versions are supported by a specific version of Kubespray. This in turn adds complexity to the Ansible roles. As said before, if you need an older version of k8s, use the corresponding release branch!

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Jun 15, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 13, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Oct 13, 2022
@olivierlemasle
Member

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Oct 26, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 24, 2023
@oomichi
Contributor

oomichi commented Feb 21, 2023

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 21, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 22, 2023
@h3ct0rjs

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 29, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 21, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Feb 20, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot k8s-ci-robot closed this as not planned (won't fix, can't repro, duplicate, stale) Mar 21, 2024
@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@ertemplin

/remove-lifecycle stale
