
feat(core): Upgrade API kubernetes 1.28 and controller-runtime to 0.16 #5321

Closed · wants to merge 4 commits

Conversation

@gansheer (Contributor) commented Apr 4, 2024

Ref #5211
Ref #5307

Upgrade the Kubernetes API to 1.28 and controller-runtime to 0.16.

Release Note

feat(core): Upgrade API kubernetes 1.28 and controller-runtime to 0.16

options := cache.Options{
ByObject: selectors,
A reviewer (Contributor) commented:

We need to include those selectors somewhere for caching reasons. It seems they are no longer applied anywhere, or am I missing something?

Contributor Author (@gansheer) replied:

You are right, something got lost here. I will add it again.
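
For reference, a minimal sketch of where such selectors end up in controller-runtime 0.16 (this is an illustration, not the PR's actual code): per-type selectors are declared in cache.Options.ByObject on the manager options. The label key/value and the Pod type below are hypothetical stand-ins for whatever the operator really filters on.

package operator

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/labels"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/cache"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// newManager shows where per-type cache selectors live in controller-runtime 0.16.
func newManager() (ctrl.Manager, error) {
	// Hypothetical label selector limiting what the informer cache stores.
	selector := labels.SelectorFromSet(labels.Set{"camel.apache.org/component": "operator"})

	selectors := map[client.Object]cache.ByObject{
		// Only Pods matching the selector are cached; other types use defaults.
		&corev1.Pod{}: {Label: selector},
	}

	return ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		Cache: cache.Options{
			ByObject: selectors,
		},
	})
}

If selectors is built but never passed into ctrl.Options.Cache like this, the filtering silently stops applying, which matches the reviewer's observation above.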

@claudio4j (Contributor) commented:

Is this expected to be merged for Camel K 2.3?
It would be important to consider that OpenShift 4.14 uses Kubernetes API 1.27.

@gansheer force-pushed the feat/5211_5307_upgrades_1_28 branch from 5823203 to f065ed9 on April 4, 2024 12:47
@lburgazzoli (Contributor) commented:

> Is this expected to be merged for Camel K 2.3? It would be important to consider that OpenShift 4.14 uses Kubernetes API 1.27.

According to the kubernetes-version-compatibility documentation, it should still be compatible with Kubernetes API versions N-3 or N-4.

@gansheer (Contributor Author) commented Apr 4, 2024

> Is this expected to be merged for Camel K 2.3? It would be important to consider that OpenShift 4.14 uses Kubernetes API 1.27.

For now this targets main, so post-2.3. Before backporting I need to run the tests on a Kubernetes API 1.27 cluster and on OpenShift 4.14.

> According to the kubernetes-version-compatibility documentation, it should still be compatible with Kubernetes API versions N-3 or N-4.

The tests need to run on an API 1.27 cluster and OpenShift 4.14 because other dependency upgrades that come with this one, like the one to client-go, could have an impact somewhere, and the compatibility is not 100% (see https://github.com/kubernetes/client-go#compatibility-client-go---kubernetes-clusters).

This could be a good opportunity to see if I can add a GitHub Actions workflow to allow on-demand runs against specific kindest/node images.

After this one is fully done, the next step should be the upgrade to API 1.29 and controller-runtime 0.17.

@squakez (Contributor) commented Apr 5, 2024

I don't think we should backport this to 2.3, as it may pose compatibility problems with the existing running platform. I.e., if the user is running 2.3.0, the upgrade to 2.3.1 should work on the very same platform they are running 2.3.0 on.

@@ -201,22 +202,27 @@ func Run(healthPort, monitoringPort int32, leaderElection bool, leaderElectionID
}
}

defaultNamespaces := map[string]cache.Config{
operatorNamespace: {},
A reviewer (Contributor) commented:

You're probably missing the inclusion of the watchNamespace.

Contributor Author (@gansheer) replied, Apr 5, 2024:

It's a little more complex than that. As per this discussion, the CRDs are expected to be installed by the time we configure the cache.

I think I may need to remove &servingv1.Service{} from the selectors, as it is not always installed. I could add &servingv1.Service{} conditionally, but that supposes that any change in the presence of the CRD will mean re-creating the operator pod(s).
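
For illustration, a minimal sketch of how both points could be handled, assuming a single optional watchNamespace and a hypothetical knativeInstalled flag (e.g. the result of a discovery-client lookup for the Knative serving CRD). This is not the PR's actual code; names and the selector are stand-ins.

package operator

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/labels"
	servingv1 "knative.dev/serving/pkg/apis/serving/v1"
	"sigs.k8s.io/controller-runtime/pkg/cache"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// cacheOptions builds the 0.16-style cache configuration: DefaultNamespaces
// replaces the old multi-namespace cache builder, and types whose CRD may be
// absent are added to ByObject only when the CRD is known to exist.
func cacheOptions(operatorNamespace, watchNamespace string, knativeInstalled bool) cache.Options {
	// Hypothetical selector, standing in for the operator's real selectors.
	selector := labels.SelectorFromSet(labels.Set{"camel.apache.org/component": "operator"})

	defaultNamespaces := map[string]cache.Config{
		operatorNamespace: {},
	}
	if watchNamespace != "" && watchNamespace != operatorNamespace {
		// Include the watched namespace alongside the operator's own.
		defaultNamespaces[watchNamespace] = cache.Config{}
	}

	byObject := map[client.Object]cache.ByObject{
		&corev1.Pod{}: {Label: selector},
	}
	if knativeInstalled {
		// Only cache Knative Services when the serving CRD is present; a CRD
		// appearing later would require re-creating the operator pod.
		byObject[&servingv1.Service{}] = cache.ByObject{Label: selector}
	}

	return cache.Options{
		DefaultNamespaces: defaultNamespaces,
		ByObject:          byObject,
	}
}

The trade-off mentioned above is visible here: the knativeInstalled decision is baked in at cache construction, so it can only change with a restart of the operator.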

@gansheer force-pushed the feat/5211_5307_upgrades_1_28 branch from f065ed9 to a0ef631 on April 5, 2024 09:17
@squakez (Contributor) commented Apr 5, 2024

@gansheer may I suggest a different strategy? Whenever we upgrade dependencies, we had better do it one by one instead of in one shot, in order to understand which dependency is really failing. IMO, the first thing to do is to upgrade Kubernetes only. We can bump to 1.28 and later to 1.29. Once we are sure that everything is stable, we can bump any other dependency.

@gansheer closed this Apr 8, 2024