Framework: Kind cluster as secondary management cluster? #7613
Comments
Thanks for opening the issue. This sounds like a good idea to me in general. It should speed up a lot of provider tests and make them simpler regarding the preload. I only see one edge case: today it's possible to use a pre-existing Kubernetes cluster instead of a kind cluster (in the core Cluster API tests this is exposed via a dedicated option), so I would suggest adding an additional field to cover that case as well.
/triage accepted
@fabriziopandini: Guidelines: please ensure that the issue body includes answers to the following questions:
For more details on the requirements of such an issue, please see here and ensure that they are met. If this request no longer meets these requirements, the label can be removed. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
This issue has not been updated in over 1 year, and should be re-triaged. You can:
For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/
/remove-triage accepted
I would still like to explore this; I just didn't get around to it.
P.S. I will give developers a choice between the current behaviour and the new behaviour, but we can also figure this out in the PR.
/priority important-longterm |
/unassign @lentzi90
I might have some bandwidth to take a stab at this one.
User Story
As an infrastructure provider developer, I would like to use a less resource-intensive and time-consuming secondary management cluster in clusterctl upgrade tests, to avoid resource congestion and long run times.
Detailed Description
Currently the clusterctl upgrade test makes use of a secondary management cluster. My understanding is that this is done so the test can be treated like all other tests, even though it starts out with older controller versions. If we used the bootstrap cluster directly, it would affect all other tests that may run in parallel with it.
Unfortunately, this secondary management cluster can be an issue for providers. The bootstrap cluster is a lightweight kind cluster that makes it easy to load any images that may be needed for the test. But the secondary management cluster will be a "normal provider cluster", meaning that it 1) may require far more resources (VMs vs. containers) and 2) cannot easily load the images.
Taking CAPO as an example, the secondary management cluster would consist of 3 VMs (1 bastion, 1 control plane, and 1 worker). Add to this the actual workload cluster with 1 more bastion, 1 control plane, and 2 workers when scaled: a total of 7 VMs for this one test, and that is not counting other cloud resources like load balancers. We also need a way to get the controller image to the secondary management cluster, since we cannot use `kind load` in this case. I would like to reuse something like `CreateKindBootstrapClusterAndLoadImages` for the secondary management cluster instead, basically replacing this with this.
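For context, the image preloading that kind makes trivial is normally declared in the e2e config that the test framework consumes. A minimal sketch of such an entry, assuming the standard cluster-api e2e config layout; the image name here is hypothetical:

```yaml
# Sketch: e2e config snippet (image name is hypothetical).
# With a kind-based secondary management cluster, entries like this can be
# preloaded into the nodes via `kind load`; with a provider-based cluster
# of VMs, the image instead has to be published to a registry that the
# nodes can reach.
images:
  - name: example.com/capo-controller:dev
    loadBehavior: mustLoad
```

With `loadBehavior: mustLoad` the test fails if the image cannot be loaded, whereas `tryLoad` silently skips images that are not available locally.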
Anything else you would like to add:
It is very possible that I'm missing some reason why the secondary management cluster is created as a workload cluster from the provider. Please enlighten me in that case! 🙂
/kind feature