
[workspace-controller] CRDs are invalid on kubernetes >=1.18 #16791

Open
amisevsk opened this issue Apr 28, 2020 · 10 comments
Assignees
Labels
- area/devworkspace-operator
- engine/devworkspace: Issues related to Che configured to use the devworkspace controller as workspace engine.
- kind/bug: Outline of a bug - must adhere to the bug report template.
- lifecycle/frozen: Indicates that an issue or PR should not be auto-closed due to staleness.
- severity/P3: Lower priority than a P2. Optional work that might get done, or not. See also help wanted issues.

Comments

@amisevsk
Contributor

Is your task related to a problem? Please describe.

The Components and WorkspaceRoutings custom resources in the Che workspace controller are invalid starting with Kubernetes 1.18:

```
$ kc apply -f deploy/crds/
customresourcedefinition.apiextensions.k8s.io/workspaces.workspace.che.eclipse.org unchanged
Error from server (Invalid): error when creating "deploy/crds/workspace.che.eclipse.org_components_crd.yaml": CustomResourceDefinition.apiextensions.k8s.io "components.workspace.che.eclipse.org" is invalid: [spec.validation.openAPIV3Schema.properties[status].properties[componentDescriptions].items.properties[podAdditions].properties[containers].items.properties[ports].items.properties[protocol].default: Required value: this property is in x-kubernetes-list-map-keys, so it must have a default or be a required property, spec.validation.openAPIV3Schema.properties[status].properties[componentDescriptions].items.properties[podAdditions].properties[initContainers].items.properties[ports].items.properties[protocol].default: Required value: this property is in x-kubernetes-list-map-keys, so it must have a default or be a required property]
Error from server (Invalid): error when creating "deploy/crds/workspace.che.eclipse.org_workspaceroutings_crd.yaml": CustomResourceDefinition.apiextensions.k8s.io "workspaceroutings.workspace.che.eclipse.org" is invalid: [spec.validation.openAPIV3Schema.properties[status].properties[podAdditions].properties[initContainers].items.properties[ports].items.properties[protocol].default: Required value: this property is in x-kubernetes-list-map-keys, so it must have a default or be a required property, spec.validation.openAPIV3Schema.properties[status].properties[podAdditions].properties[containers].items.properties[ports].items.properties[protocol].default: Required value: this property is in x-kubernetes-list-map-keys, so it must have a default or be a required property]
```

Kubernetes 1.18 added validation of the keys used for `listType=map`: every key must either be required or have a default. The component and workspacerouting subresources embed PodSpec, which violates this requirement (Container.Ports defines protocol as a map key, but ContainerPort.Protocol is optional and has no default in the generated schema). See this comment on the PR for some additional detail.
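For illustration, the failing fragment of the generated CRD schema looks roughly like this (paths abbreviated, and the fragment is a sketch rather than the exact generated output). Adding a default for `protocol` is what satisfies the new validation:

```yaml
# Sketch of the relevant fragment of the generated CRD schema (paths abbreviated).
# The ports list is a listType=map keyed on containerPort and protocol, but the
# generated schema leaves protocol optional with no default, which k8s >=1.18 rejects.
ports:
  type: array
  x-kubernetes-list-type: map
  x-kubernetes-list-map-keys:
    - containerPort
    - protocol
  items:
    type: object
    properties:
      containerPort:
        type: integer
      protocol:
        type: string
        default: TCP   # adding this default makes the schema valid on 1.18+
```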

Additional context

Confirmed this with operator-sdk v0.17.0 and kubectl:

```
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-16T11:56:40Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:50:46Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
```
@amisevsk amisevsk added kind/bug Outline of a bug - must adhere to the bug report template. severity/P1 Has a major impact to usage or development of the system. engine/devworkspace Issues related to Che configured to use the devworkspace controller as workspace engine. labels Apr 28, 2020
@amisevsk
Contributor Author

cc: @davidfestal if you have any suggestions on this one.

@amisevsk
Contributor Author

As a workaround, I'm able to deploy the controller with

```
minikube start --kubernetes-version v1.17.0 [additional options]
```

@amisevsk
Contributor Author

amisevsk commented May 7, 2020

This issue blocks deployment on OpenShift 4.5

@sleshchenko
Member

I tried to work around this issue with the following changes: sleshchenko/devworkspace-operator@b8a8e9b. With them I got a working CloudShell with OpenShift OAuth on OpenShift 4.5.0-0.ci-2020-05-07-094159.

We should probably commit these changes, or alternatively copy PodSpec from the k8s API and fix the kubebuilder annotations, until this is fixed on the K8s API side.
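A minimal sketch of the "copy PodSpec from the k8s API and fix the kubebuilder annotations" option, assuming a copied ContainerPort type. This is illustrative only, not the actual patch; the `defaultProtocol` helper merely simulates the API-server defaulting that the marker would produce:

```go
package main

import "fmt"

// Protocol mirrors k8s.io/api/core/v1.Protocol.
type Protocol string

const ProtocolTCP Protocol = "TCP"

// ContainerPort is a copy of the k8s API type with a kubebuilder marker
// added so the generated CRD schema defaults protocol, satisfying the
// k8s >= 1.18 rule that every list-map key is required or defaulted.
type ContainerPort struct {
	ContainerPort int32 `json:"containerPort"`

	// +kubebuilder:default=TCP
	// +optional
	Protocol Protocol `json:"protocol,omitempty"`
}

// defaultProtocol mimics what the API server does once the schema
// carries the default: an unset protocol becomes TCP.
func defaultProtocol(p *ContainerPort) {
	if p.Protocol == "" {
		p.Protocol = ProtocolTCP
	}
}

func main() {
	port := ContainerPort{ContainerPort: 8080}
	defaultProtocol(&port)
	fmt.Println(port.Protocol) // prints TCP
}
```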

@amisevsk
Contributor Author

amisevsk commented May 7, 2020

Workaround PR: devfile/devworkspace-operator#69, though I wouldn't consider its merging to actually "resolve" this issue.

@sleshchenko
Member

> Workaround PR: devfile/devworkspace-operator#69, though I wouldn't consider its merging to actually "resolve" this issue.

Let's merge the PR to unblock the controller on OpenShift 4.5.
Do you think it makes sense to open an issue on the Kubernetes side to make PodSpec consistent with the CRD requirements, if one doesn't exist already?

@amisevsk
Contributor Author

> Do you think it makes sense to open an issue on the Kubernetes side to make PodSpec consistent with the CRD requirements, if one doesn't exist already?

Probably, but I'm not sure how to phrase the issue :)

@sleshchenko sleshchenko added severity/P3 Lower priority than a P2. Optional work that might get done, or not. See also help wanted issues. and removed severity/P1 Has a major impact to usage or development of the system. labels Oct 1, 2020
@sleshchenko
Member

This is still relevant, but we have a workaround for it, and propagating the fix to the OpenShift side can wait.

@che-bot
Contributor

che-bot commented Apr 6, 2021

Issues go stale after 180 days of inactivity. lifecycle/stale issues rot after an additional 7 days of inactivity and eventually close.

Mark the issue as fresh with /remove-lifecycle stale in a new comment.

If this issue is safe to close now please do so.

Moderators: Add lifecycle/frozen label to avoid stale mode.

@che-bot che-bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 6, 2021
@che-bot che-bot closed this as completed Apr 15, 2021
@amisevsk amisevsk reopened this Apr 15, 2021
@amisevsk amisevsk added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Apr 15, 2021
@sleshchenko
Member

https://github.com/devfile/devworkspace-operator/pull/478/files#diff-983170fb73220bb6bc10df0d85249e45f79577391c04edead6e33c39713d9b8aR749 makes me think we can resolve this now, after dropping patching.

@sleshchenko sleshchenko self-assigned this Jul 8, 2021