MGMT-3837 Add cluster deployment controller #990
Conversation
- Currently runs in parallel to the existing controller; the next step will be to remove the existing controller.
- Added roles to reconcile cluster deployments.
- Manually added hive.openshift.io_clusterdeployments.yaml; it needs to be updated for each version that we want to support.
Tested with a simple dummy deployment (not all fields are set):

```yaml
apiVersion: hive.openshift.io/v1
kind: ClusterDeployment
metadata:
  name: mycluster
  namespace: assisted-installer
spec:
  baseDomain: hive.example.com
  clusterName: mycluster
  platform:
    agentBareMetal:
      apiVIP: 1.2.3.8
      apiVIPDNSName: bla.com
      ingressVIP: 1.2.3.9
      vipDHCPAllocation: false
  provisioning:
    imageSetRef:
      name: openshift-v4.7.0
    installConfigSecretRef:
      name: mycluster-install-config
    sshPrivateKeySecretRef:
      name: mycluster-ssh-key
  pullSecretRef:
    name: pull-secret
```
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: filanov, RazRegev. The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files. Approvers can indicate their approval by writing in a comment.

@filanov: The following test failed.

Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
```
k8s.io/client-go => k8s.io/client-go v0.19.0
sigs.k8s.io/cluster-api-provider-aws => github.com/openshift/cluster-api-provider-aws v0.2.1-0.20200506073438-9d49428ff837
sigs.k8s.io/cluster-api-provider-azure => github.com/openshift/cluster-api-provider-azure v0.1.0-alpha.3.0.20200120114645-8a9592f1f87b
sigs.k8s.io/cluster-api-provider-openstack => github.com/openshift/cluster-api-provider-openstack v0.0.0-20200526112135-319a35b2e38e
```
Is there a way to not have to maintain all of these versions in our app?
This feels similar to the problem I was trying to avoid in openshift/installer#4330
I understand that this is probably more necessary than installconfig validation, but this all feels very fragile. What happens if we forget to update one of these refs?
Sorry for commenting after merge, but I found this because it seems we need go 1.15 for something related to this line:
```
github.com/openshift/hive => github.com/dgoodwin/hive v0.0.0-20210121160047-23364b143670
```
Our build container is already on golang v1.15; the real issue is probably a local IDE working with a different golang version.

The line related to dgoodwin will be removed once his PR is merged into hive.

As for the other replace statements, I didn't find a way to avoid them; hive has them as well. Do you know how to avoid it?
Not without splitting the API we require into a separate module or sub-module as suggested in openshift/installer#4330
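As a rough illustration of the sub-module idea, the consumer's go.mod could then depend only on the API types. This is a hypothetical sketch: the `github.com/openshift/hive/apis` module path is an assumption about how hive might split its API, not a published module at the time of this discussion.

```
// go.mod sketch -- hypothetical, assumes hive publishes an apis-only sub-module.
module github.com/openshift/assisted-service

go 1.15

require (
	// Only the API types, not hive's runtime dependencies,
	// so the consumer would no longer inherit hive's replace directives.
	github.com/openshift/hive/apis v0.0.0 // hypothetical sub-module
)
```

The benefit would be that the long list of `replace` directives above stays inside hive's own module instead of leaking into every consumer.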
Actually, that's a good idea, but they produce the same binary, so how would it work?
It would need a change on the hive side so that we could require just the apis and not all of the runtime dependencies.
Let's talk about it in our next meeting.
There is a need to deploy clusters with customized settings, which requires including additional manifests at deploy time. Add the possibility to specify an ASSETS_EXTRA_FOLDER; additional manifests placed there will be copied into the manifests before install. Signed-off-by: Yolanda Robla <[email protected]>
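A minimal sketch of how such an ASSETS_EXTRA_FOLDER could be merged into the install manifests, assuming a POSIX shell; the directory names and file layout here are illustrative, and the actual deploy scripts may differ:

```shell
set -e

# Hypothetical paths: a user-provided extra folder and the install manifests dir.
ASSETS_EXTRA_FOLDER="extra-manifests"
MANIFESTS_DIR="manifests"
mkdir -p "$ASSETS_EXTRA_FOLDER" "$MANIFESTS_DIR"

# Simulate a user dropping a custom manifest into the extra folder.
cat > "$ASSETS_EXTRA_FOLDER/99-custom.yaml" <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-settings
EOF

# Copy any extra manifests into the manifests directory before install.
if [ -d "$ASSETS_EXTRA_FOLDER" ]; then
  cp "$ASSETS_EXTRA_FOLDER"/*.yaml "$MANIFESTS_DIR"/
fi
```

The copy happens before the installer consumes the manifests directory, so the extra manifests are applied as part of the normal install flow.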