
cli: refactor upgrade execute to upgrade apply #1160

Merged 5 commits into main on Feb 15, 2023
Conversation

@derpsteb (Member) commented Feb 9, 2023

Proposed change(s)

  • Replace the existing upgrade execute command with an extended command called upgrade apply. The new command complements the upgrade check command by applying the upgrades configured in the config file. It extends the existing functionality by also enabling Kubernetes and Helm upgrades. The upgrades are controlled via the config values image, microserviceVersion, and kubernetesVersion.
  • upgrade apply checks that the configured version C is a valid upgrade of the installed version I. That means it skips the cases C==I, C<I, and C.Minor>I.Minor+1.
  • upgrade apply also checks that no Kubernetes or image upgrade is currently in progress. If one is, the upgrade is skipped and a message is printed.
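The version gate described above can be sketched roughly as follows. This is a minimal illustration, not the PR's actual code: `version` and `isValidUpgrade` are hypothetical names, the patch level is omitted, and the sketch only considers upgrades within the same major version.

```go
package main

import "fmt"

// version is a hypothetical, simplified stand-in for the versions
// compared by `upgrade apply` (patch level omitted for brevity).
type version struct {
	Major, Minor int
}

// isValidUpgrade sketches the skip rules described above: reject
// C == I, C < I, and minor-version jumps beyond I.Minor+1.
func isValidUpgrade(installed, configured version) bool {
	if configured.Major != installed.Major {
		return false // cross-major upgrades are out of scope in this sketch
	}
	if configured.Minor <= installed.Minor {
		return false // C == I or C < I: nothing to do, or a downgrade
	}
	if configured.Minor > installed.Minor+1 {
		return false // skipping a minor version is not supported
	}
	return true
}

func main() {
	installed := version{Major: 1, Minor: 24}
	fmt.Println(isValidUpgrade(installed, version{1, 25})) // true: next minor
	fmt.Println(isValidUpgrade(installed, version{1, 24})) // false: equal
	fmt.Println(isValidUpgrade(installed, version{1, 26})) // false: skips 1.25
}
```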

Checklist

  • Update docs
  • Add labels (e.g., for changelog category)
  • Link to Milestone

@derpsteb derpsteb added the feature This introduces new functionality label Feb 9, 2023
@derpsteb derpsteb added this to the v2.6.0 milestone Feb 9, 2023
@edgelesssys edgelesssys deleted a comment from netlify bot Feb 9, 2023
@derpsteb (Member, Author) commented Feb 9, 2023

This currently depends on having access to the operators' NodeVersion type inside the CLI to do unstructured.Unstructured <-> NodeVersion conversions. These are desirable because they save us from parsing the map[string]any inside unstructured.Unstructured by hand.

However, we can't depend on the types without adding a replace statement to the go.mod file and messing with the build Dockerfile. Since I feel it makes sense to have the API in internal/ and reference it from there inside the operator, I will try to move the code from operators/constellation-node-operator/api/v1alpha1 to constellation/internal/api. I will have to figure out how not to break the kubebuilder code generation. But feel free to take a look at the existing code, as it should not be affected by this change.

Maybe the replace statement is also an option. Will have to check.
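The conversion idea can be illustrated with a stdlib-only sketch: a JSON round trip from the `map[string]any` payload into a typed struct, which is essentially what `runtime.DefaultUnstructuredConverter` in k8s.io/apimachinery does (with extra validation and caching). The `nodeVersionSpec` fields below are illustrative stand-ins, not the operator's real type.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// nodeVersionSpec is a hypothetical, trimmed-down stand-in for the
// operator's NodeVersion type; the real definition lives in
// operators/constellation-node-operator/api/v1alpha1.
type nodeVersionSpec struct {
	ImageReference           string `json:"imageReference"`
	KubernetesClusterVersion string `json:"kubernetesClusterVersion"`
}

// fromUnstructured converts the map[string]any payload of an
// unstructured object into a typed struct via a JSON round trip.
func fromUnstructured(obj map[string]any, out any) error {
	raw, err := json.Marshal(obj)
	if err != nil {
		return err
	}
	return json.Unmarshal(raw, out)
}

func main() {
	// Shape of the data as it might come back from the dynamic client.
	un := map[string]any{
		"imageReference":           "v2.6.0",
		"kubernetesClusterVersion": "1.25.0",
	}
	var spec nodeVersionSpec
	if err := fromUnstructured(un, &spec); err != nil {
		panic(err)
	}
	fmt.Println(spec.ImageReference) // v2.6.0
}
```

With the typed struct, a renamed or removed field surfaces as a compile error at every use site instead of a silently missing map key.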

@daniel-weisse (Member) left a comment


I'm unsure about using the NodeVersion struct from the operator directly, since it causes a lot of issues with our go.mod file.
The data type is not that complex; the only issue I could see is parsing the status and conditions fields properly. Do you know if we can access them even in the unstructured object?
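For reference, digging status.conditions out of a raw unstructured object by hand looks roughly like the stdlib-only sketch below (apimachinery's `unstructured.NestedSlice` helper wraps similar type assertions). Each assertion is a point that fails silently at runtime if the API shape changes, which is the fragility a typed conversion avoids.

```go
package main

import "fmt"

// statusConditions extracts the status.conditions slice from a raw
// unstructured object by hand. Every type assertion here must match
// the current shape of the operator API.
func statusConditions(obj map[string]any) ([]map[string]any, bool) {
	status, ok := obj["status"].(map[string]any)
	if !ok {
		return nil, false
	}
	raw, ok := status["conditions"].([]any)
	if !ok {
		return nil, false
	}
	conds := make([]map[string]any, 0, len(raw))
	for _, c := range raw {
		m, ok := c.(map[string]any)
		if !ok {
			return nil, false
		}
		conds = append(conds, m)
	}
	return conds, true
}

func main() {
	// Hypothetical payload shape; condition names are illustrative.
	obj := map[string]any{
		"status": map[string]any{
			"conditions": []any{
				map[string]any{"type": "Outdated", "status": "True"},
			},
		},
	}
	conds, ok := statusConditions(obj)
	fmt.Println(ok, conds[0]["type"]) // true Outdated
}
```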

Review comments (outdated, resolved): cli/internal/cloudcmd/upgrade.go, cli/internal/cmd/upgradeapply.go
@derpsteb (Member, Author) commented

They are accessible and can be parsed; I originally built it that way. But that parsing logic will break silently once we modify the operator API, whereas with the conversion we will at least get compilation errors. I would like to invest a little more time to see if there is a good way to do this. If not, I will go back to the Unstructured-based implementation.

@daniel-weisse (Member) left a comment


Was able to upgrade my cluster using constellation upgrade apply from v2.5.0 to ref/main/stream/nightly/v2.6.0-pre.0.20230210173318-77bd537fb412, while also upgrading Kubernetes version from 1.24 to 1.25.

I think it was introduced in a different PR, but I feel the current UX of upgrading Kubernetes versions is a bit unwieldy.
Constellation does not allow a user to choose which patch version of Kubernetes to install; we always ship the latest for a given release. Yet a user needs to specify one when upgrading from one minor release to another (or when choosing a non-default version while creating a cluster).
This is especially annoying since the only way of checking which patch versions are available for a given CLI is the upgrade check command (which doesn't list versions older than the current one).

If we stay with the current way of specifying Kubernetes versions in the config, we should at least provide a way to easily check which patch versions a given CLI supports.

Review comments (outdated, resolved): cli/internal/cloudcmd/upgrade.go, cli/internal/cmd/upgradeapply.go
@daniel-weisse (Member) commented

As this PR is quite large, I would like at least one other reviewer to take a look over the changes before merging

@derpsteb (Member, Author) commented

Agree with the point about the patch version; katexochen also raised it. The discussion on how to solve this is ongoing. Current ideas in line with what you suggested are:

  • an interactive component during config generate that lists the versions
  • a subcommand for constellation config to list versions

@malt3 (Contributor) left a comment


I don't have any additional code review comments. Tested this locally and it works as expected.

* introduce handleImageUpgrade & handleServiceUpgrade
* rename cloudUpgrader.Upgrade to UpgradeImage
* remove helm flag
* remove hint about development status
The CLI will have to create similar objects for k8s upgrades.
* `upgrade apply` will try to make the locally configured and the
actual version in the cluster match by applying the necessary
upgrades.
* Skip image or Kubernetes upgrades if one is already
in progress.
* Skip downgrades and versions equal to the running one.
* Move the NodeVersionResourceName constant from the operators
to internal, as it's needed in the CLI.
Also adapt the check-licenses script to allow AGPL for the api module.
@derpsteb derpsteb merged commit f757b5b into main Feb 15, 2023
@derpsteb derpsteb deleted the feat/upgrade-apply branch February 15, 2023 15:44
@3u13r 3u13r mentioned this pull request Feb 22, 2023