
Test failing with "old version instances remain" for machinepools in Runtime SDK test #8718

Closed
chrischdi opened this issue May 23, 2023 · 6 comments
Labels
kind/flake Categorizes issue or PR as related to a flaky test. triage/accepted Indicates an issue or PR is ready to be actively worked on.

Comments

chrischdi (Member) commented May 23, 2023

Which jobs are flaking?

Over the last 14 days:

  • 8x periodic-cluster-api-e2e-release-1-4
  • 6x periodic-cluster-api-e2e-main
  • 4x periodic-cluster-api-e2e-mink8s-release-1-4
  • 4x periodic-cluster-api-e2e-mink8s-main
  • 2x periodic-cluster-api-e2e-dualstack-ipv6-main
  • 1x periodic-cluster-api-e2e-workload-upgrade-1-21-1-22-release-1-4
  • 1x periodic-cluster-api-e2e-workload-upgrade-1-26-1-27-release-1-4

Which tests are flaking?

Over the last 14 days:

  • 24x capi-e2e [It] When upgrading a workload cluster using ClusterClass with RuntimeSDK [PR-Informing] [ClusterClass] Should create, upgrade and delete a workload cluster
  • 2x capi-e2e [It] When upgrading a workload cluster using ClusterClass and testing K8S conformance [Conformance] [K8s-Upgrade] [ClusterClass] Should create and upgrade a workload cluster and eventually run kubetest

Since when has it been flaking?

It appears to have been flaking since before the beginning of 2023.

Testgrid link

No response

Reason for failure (if possible)

To be analysed.

Anything else we need to know?

Link to triage to see if it is fixed: link

Label(s) to be applied

/kind flake

@k8s-ci-robot k8s-ci-robot added kind/flake Categorizes issue or PR as related to a flaky test. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels May 23, 2023
chrischdi (Member, Author)

/triage accepted

@k8s-ci-robot k8s-ci-robot added triage/accepted Indicates an issue or PR is ready to be actively worked on. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels May 23, 2023
killianmuldoon (Contributor)

There have been a couple of recent changes in the MachinePool controller - I wonder if any of them made upgrades take longer and increased the occurrence of this flake.

chrischdi (Member, Author) commented May 24, 2023

As stated above, the issue seems to have existed for more than 5 months, so it must stem from non-recent changes (in the triage link above you can go back in time :-) ).

But from quickly paging through it: yes, the occurrence does seem to have increased.

killianmuldoon (Contributor) commented May 26, 2023

This is looking good over the last couple of days; the fix was backported to 1.3 / 1.4 today. Let's see how it performs over the weekend.

killianmuldoon (Contributor)

Looking good 🙂

/close

k8s-ci-robot (Contributor)

@killianmuldoon: Closing this issue.

In response to this:

Looking good 🙂

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
