Test failing with "old version instances remain" for machinepools in Runtime SDK test #8718
Comments
/triage accepted
There have been a couple of changes in the MachinePool controller lately; I wonder if any of those made an upgrade take longer and increased the occurrence of this flake.
As stated above, the issue seems to have existed for more than 5 months, so it must predate the recent changes. (In the triage link above you can go back in time :-).) But from quickly hopping through it: yes, the occurrence does seem to have increased.
This is looking good over the last couple of days, and the fix was backported to 1.3 / 1.4 today. Let's see how it performs over the weekend.
Looking good 🙂
/close
@killianmuldoon: Closing this issue. In response to this:

> Looking good 🙂 /close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Which jobs are flaking?
Over the last 14 days:
periodic-cluster-api-e2e-release-1-4
periodic-cluster-api-e2e-main
periodic-cluster-api-e2e-mink8s-release-1-4
periodic-cluster-api-e2e-mink8s-main
periodic-cluster-api-e2e-dualstack-ipv6-main
periodic-cluster-api-e2e-workload-upgrade-1-21-1-22-release-1-4
periodic-cluster-api-e2e-workload-upgrade-1-26-1-27-release-1-4
Which tests are flaking?
Over the last 14 days:
capi-e2e [It] When upgrading a workload cluster using ClusterClass with RuntimeSDK [PR-Informing] [ClusterClass] Should create, upgrade and delete a workload cluster
capi-e2e [It] When upgrading a workload cluster using ClusterClass and testing K8S conformance [Conformance] [K8s-Upgrade] [ClusterClass] Should create and upgrade a workload cluster and eventually run kubetest
Since when has it been flaking?
Appears to have been flaking since before the beginning of 2023.
Testgrid link
No response
Reason for failure (if possible)
To be analysed.
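For background on the failure message: after an upgrade, the e2e framework waits for every instance backing a MachinePool to report the new Kubernetes version, and fails with "old version instances remain" if some instances do not converge within the timeout. Below is a minimal Go sketch of that kind of polling check; the `Node` type and the `countOldVersionInstances` / `waitForInstancesToBeUpgraded` helpers are illustrative assumptions, not the actual test framework API.

```go
package main

import (
	"fmt"
	"time"
)

// Node is a simplified, hypothetical stand-in for the Kubernetes Node
// objects backing a MachinePool; only the kubelet version matters here.
type Node struct {
	Name           string
	KubeletVersion string
}

// countOldVersionInstances returns how many instances still report a
// kubelet version other than the upgrade target.
func countOldVersionInstances(nodes []Node, targetVersion string) int {
	count := 0
	for _, n := range nodes {
		if n.KubeletVersion != targetVersion {
			count++
		}
	}
	return count
}

// waitForInstancesToBeUpgraded polls until every instance reports the
// target version or the timeout expires, failing with a message of the
// same shape as the one seen in this flake.
func waitForInstancesToBeUpgraded(listNodes func() []Node, targetVersion string, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		remaining := countOldVersionInstances(listNodes(), targetVersion)
		if remaining == 0 {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("%d old version instances remain", remaining)
		}
		time.Sleep(interval)
	}
}

func main() {
	// Simulated MachinePool where one instance has not picked up the upgrade.
	nodes := []Node{
		{Name: "mp-node-0", KubeletVersion: "v1.27.0"},
		{Name: "mp-node-1", KubeletVersion: "v1.26.0"},
	}
	err := waitForInstancesToBeUpgraded(func() []Node { return nodes }, "v1.27.0", 2*time.Second, 500*time.Millisecond)
	fmt.Println(err) // prints: 1 old version instances remain
}
```

Under this model, a slower-than-expected MachinePool rollout (as speculated in the comments above) would exhaust the timeout while some instances still report the old version, producing exactly this flake.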
Anything else we need to know?
Link to triage to see if it is fixed: link
Label(s) to be applied
/kind flake