
Provide an option in the clusterctl library to suppress API server warnings #10932

Closed
dlipovetsky opened this issue Jul 24, 2024 · 21 comments · Fixed by #11149
Assignees
Labels
area/clusterctl Issues or PRs related to clusterctl help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. kind/feature Categorizes issue or PR as related to a new feature. priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. triage/accepted Indicates an issue or PR is ready to be actively worked on.

Comments

@dlipovetsky
Contributor

dlipovetsky commented Jul 24, 2024

What would you like to be added (User Story)?

As a developer, I want to suppress API server warnings when I use the clusterctl library.

Detailed Description

As a developer, I want to suppress API server warnings when I use the clusterctl library.

More precisely, I want to suppress warnings when I create the clusterctl client here:

func New(kubeconfig Kubeconfig, configClient config.Client, options ...Option) Client {
	return newClusterClient(kubeconfig, configClient, options...)
}

So that, whenever clusterctl creates a "proxy" client, that client is configured to suppress API server warnings.

Clusterctl creates Kubernetes clients in at least 3 places, and it probably makes sense to suppress warnings in all of them.

  1. c, err = client.New(config, client.Options{Scheme: localScheme})
  2. _, err := client.New(config, client.Options{Scheme: localScheme})
  3. cs, err = kubernetes.NewForConfig(config)
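As a minimal, self-contained sketch of how such an option could be threaded through `New()`: the variadic `Option` shape mirrors clusterctl's existing signature, but `SuppressAPIWarnings` and the `clusterClient` field here are hypothetical names for illustration, not the real API.

```go
package main

import "fmt"

// clusterClient stands in for clusterctl's client implementation;
// the suppressWarnings field is hypothetical.
type clusterClient struct {
	suppressWarnings bool
}

// Option mutates the client during construction, mirroring the
// variadic options parameter of clusterctl's New().
type Option func(*clusterClient)

// SuppressAPIWarnings is a hypothetical option a caller could pass to New();
// the real implementation would configure the rest.Config warning handler.
func SuppressAPIWarnings() Option {
	return func(c *clusterClient) { c.suppressWarnings = true }
}

func New(options ...Option) *clusterClient {
	c := &clusterClient{}
	for _, o := range options {
		o(c)
	}
	return c
}

func main() {
	c := New(SuppressAPIWarnings())
	fmt.Println(c.suppressWarnings) // prints "true"
}
```

Each place that builds a Kubernetes client could then consult the flag when constructing its REST config.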

Anything else you would like to add?

Warnings can appear due to factors outside of my users' control. For example, as of Kubernetes v1.29.0 and newer, users begin seeing new warnings related to finalizer names during every "move" operation. Those warnings will continue to appear until all projects in the Cluster API have finalizers that conform to requirements (see #10914).

clusterctl move output with warnings
> ./clusterctl move --kubeconfig=src.kubeconfig --to-kubeconfig=dst.kubeconfig
Performing move...
Discovering Cluster API objects
Moving Cluster API objects Clusters=1
Moving Cluster API objects ClusterClasses=1
Waiting for all resources to be ready to move
Creating objects in the target cluster
[KubeAPIWarningLogger] metadata.finalizers: "addons.cluster.x-k8s.io": prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers
[KubeAPIWarningLogger] metadata.finalizers: "addons.cluster.x-k8s.io": prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers
[KubeAPIWarningLogger] metadata.finalizers: "addons.cluster.x-k8s.io": prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers
[KubeAPIWarningLogger] metadata.finalizers: "addons.cluster.x-k8s.io": prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers
[KubeAPIWarningLogger] metadata.finalizers: "cluster.cluster.x-k8s.io": prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers
[KubeAPIWarningLogger] metadata.finalizers: "dockercluster.infrastructure.cluster.x-k8s.io": prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers
[KubeAPIWarningLogger] metadata.finalizers: "kubeadm.controlplane.cluster.x-k8s.io": prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers
[KubeAPIWarningLogger] metadata.finalizers: "machine.cluster.x-k8s.io": prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers
[KubeAPIWarningLogger] metadata.finalizers: "machine.cluster.x-k8s.io": prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers
[KubeAPIWarningLogger] metadata.finalizers: "dockermachine.infrastructure.cluster.x-k8s.io": prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers
[KubeAPIWarningLogger] metadata.finalizers: "dockermachine.infrastructure.cluster.x-k8s.io": prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers
Deleting objects from the source cluster 

As a developer, if I choose to suppress warnings, I take responsibility for acknowledging and addressing them as needed.

We may also consider providing this option in the clusterctl CLI, but I think that's a separate topic.

Label(s) to be applied

/kind feature

@k8s-ci-robot k8s-ci-robot added kind/feature Categorizes issue or PR as related to a new feature. needs-priority Indicates an issue lacks a `priority/foo` label and requires one. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Jul 24, 2024
@dlipovetsky
Contributor Author

/area clusterctl

@k8s-ci-robot k8s-ci-robot added the area/clusterctl Issues or PRs related to clusterctl label Jul 24, 2024
@dlipovetsky
Contributor Author

dlipovetsky commented Jul 24, 2024

Some implementation notes for later:

We instantiate the default Proxy implementation in

client.proxy = newProxy(client.kubeconfig)

We have a chance to pass some options at this time.

However, we also instantiate the default Proxy implementation in

p := newProxy(Kubeconfig{Path: kubeconfigPath, Context: ""})

And we don't currently accept options here. We don't even use the InjectProxy pattern we established in the client package.
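One way to close that gap, sketched below with hypothetical names (`ProxyOption`, `WithSuppressedWarnings` — not clusterctl's actual code), is to make newProxy variadic so both call sites can forward warning-handling configuration:

```go
package main

import "fmt"

// Kubeconfig mirrors the shape used at the newProxy call sites.
type Kubeconfig struct {
	Path    string
	Context string
}

// proxy stands in for clusterctl's default Proxy implementation;
// the suppressWarnings field is hypothetical.
type proxy struct {
	kubeconfig       Kubeconfig
	suppressWarnings bool
}

// ProxyOption configures a proxy at construction time.
type ProxyOption func(*proxy)

// WithSuppressedWarnings is hypothetical; the real option would wire a
// warning handler into the rest.Config the proxy builds.
func WithSuppressedWarnings() ProxyOption {
	return func(p *proxy) { p.suppressWarnings = true }
}

// newProxy accepts options, so both instantiation sites can pass them.
func newProxy(kubeconfig Kubeconfig, opts ...ProxyOption) *proxy {
	p := &proxy{kubeconfig: kubeconfig}
	for _, o := range opts {
		o(p)
	}
	return p
}

func main() {
	p := newProxy(Kubeconfig{Path: "kubeconfig", Context: ""}, WithSuppressedWarnings())
	fmt.Println(p.suppressWarnings) // prints "true"
}
```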

@sbueringer
Member

I didn't look into implementation details, but agree with the goal

/triage accepted

@k8s-ci-robot k8s-ci-robot added triage/accepted Indicates an issue or PR is ready to be actively worked on. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Jul 25, 2024
@sbueringer
Member

@dlipovetsky You saw this only during clusterctl move until now?

@sbueringer
Member

Okay also saw it in controller logs. Not sure if I like it :)

@sbueringer
Member

sbueringer commented Jul 25, 2024

@dlipovetsky Do you know if there are any discussions about rolling back this warning?

It will be very, very noisy everywhere, and almost everyone won't be able to just change the name of the finalizer (i.e. they can't do anything about it, usually for years, I think)

@dlipovetsky
Contributor Author

dlipovetsky commented Jul 25, 2024

@dlipovetsky Do you know if there are any discussions about rolling back this warning?

That's a great question. I'll look around and report back.

Started a thread: https://kubernetes.slack.com/archives/C0EG7JC6T/p1721923881095799

@sbueringer
Member

@dlipovetsky Will definitely review your PR in controller-runtime tomorrow, so that we get the ability to suppress warnings ASAP

@dlipovetsky
Contributor Author

It turned out that controller-runtime, when configured to suppress warnings, was not doing so. That's now fixed, and should be in the next controller-runtime release.

@dlipovetsky
Contributor Author

I realize that opinions can reasonably differ on whether to suppress warnings, or even which warnings to suppress. Today, controller-runtime is not that flexible; it can either suppress or surface warnings.

@sbueringer
Member

sbueringer commented Jul 31, 2024

Yeah, not sure what exactly to do; making it configurable in general in the clusterctl library is definitely okay (TBD how exactly)

or even which warnings to suppress

We can make this configurable in the library with your follow-up PR in CR, right?

@fabriziopandini fabriziopandini added the priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. label Jul 31, 2024
@k8s-ci-robot k8s-ci-robot removed the needs-priority Indicates an issue lacks a `priority/foo` label and requires one. label Jul 31, 2024
@dlipovetsky
Contributor Author

We can make this configurable in the library with your follow-up PR in CR, right?

Yes. I just filed kubernetes-sigs/controller-runtime#2896. With this change, clusterctl itself can define config.WarningHandler, or, for more flexibility, allow the clusterctl user to define it.

@sbueringer
Member

sbueringer commented Aug 6, 2024

@dlipovetsky Thx for opening the CR PRs. Now that they are merged, how do you want to handle this in clusterctl?

I think the final implementation will only be available in CR v0.19.x (which is too late for CAPI v1.8.x). Do you need something for CAPI v1.8.x? (If yes, we could go with providing an option to suppress all warnings with CAPI v1.8 and then something more fine-grained with 1.9.)

@dlipovetsky
Contributor Author

Third-party code creates a clusterctl client in two ways. The default client implementation includes a "proxy" implementation. The proxy is what configures the controller-runtime client, which is exclusively used to talk to the Kubernetes API.

flowchart TD
  user["third-party code"]
  ccn["`cmd/clusterctl/client/cluster/New()`"]
  gog["`cmd/clusterctl/client/cluster/GetOwnerGraph()`"]
  np["`cmd/clusterctl/client/cluster/newProxy()`"]

  nc["`cmd/clusterctl/client/cluster/proxy.NewClient()`"]

  lib["`/cmd/clusterctl/client/*`"]

  user --> ccn
  user --> gog
  
  ccn --> np
  gog --> np

  lib --> nc

We want third-party code to configure the controller-runtime client, specifically how to handle warnings returned by the Kubernetes API.

The controller-runtime client de-duplicates and surfaces warnings by default. To customize handling, users must implement a rest.WarningHandler.

I think a scalable solution is for clusterctl to implement a rest.WarningHandler. The handler should be configurable; e.g., users may want to suppress some warnings, like those about finalizers, while surfacing others.

We should give the user the opportunity to configure the warning handler when they call cluster.New() or cluster.GetOwnerGraph().
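A self-contained sketch of such a configurable handler. The `HandleWarningHeader` method signature mirrors `k8s.io/client-go/rest.WarningHandler`; the `FilteringHandler` type, its `SuppressSubstrings` field, and the `Surface` hook are hypothetical names for illustration.

```go
package main

import (
	"fmt"
	"strings"
)

// FilteringHandler suppresses warnings whose text matches a configured
// substring and surfaces everything else. It is a sketch, not clusterctl code.
type FilteringHandler struct {
	// SuppressSubstrings lists fragments of warning text to drop,
	// e.g. "prefer a domain-qualified finalizer name".
	SuppressSubstrings []string
	// Surface is called for warnings that are not suppressed;
	// if nil, warnings are printed to stdout.
	Surface func(text string)
}

// HandleWarningHeader mirrors the rest.WarningHandler interface method.
func (h FilteringHandler) HandleWarningHeader(code int, agent string, text string) {
	for _, s := range h.SuppressSubstrings {
		if strings.Contains(text, s) {
			return // suppress this warning
		}
	}
	if h.Surface != nil {
		h.Surface(text)
		return
	}
	fmt.Printf("[KubeAPIWarningLogger] %s\n", text)
}

func main() {
	h := FilteringHandler{
		SuppressSubstrings: []string{"prefer a domain-qualified finalizer name"},
	}
	// Suppressed: matches the configured substring.
	h.HandleWarningHeader(299, "", `metadata.finalizers: "machine.cluster.x-k8s.io": prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers`)
	// Surfaced: prints "[KubeAPIWarningLogger] some other warning".
	h.HandleWarningHeader(299, "", "some other warning")
}
```

Such a handler could then be set on the rest.Config the proxy builds, with the filter list supplied by the caller of cluster.New() or cluster.GetOwnerGraph().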

@sbueringer
Member

Sounds good to me. So I assume something for CAPI v1.9 is early enough for you? (as the CR version used in 1.8.x doesn't allow this)

Feedback from @fabriziopandini would be also good once he's back from PTO

@dlipovetsky
Contributor Author

Do you need something for CAPI v1.8.x? (if yes, we could go with providing an option to suppress all warnings with CAPI v1.8 and then something more fine-grained with 1.9)

We're suppressing warnings using a very small patch in our downstream fork, and we can carry that patch until the first 1.9 upstream release. However, if there is enough community demand, I could work on a temporary upstream solution for 1.8.x.

@neolit123
Member

neolit123 commented Aug 7, 2024

+1 to enable an option for 1.9.x (as a feature addition).
we can do bugfixes for 1.8.x

@sbueringer
Member

@dlipovetsky If you want, we now should be ready to implement something for v1.9 on main

@sbueringer
Member

/help

@k8s-ci-robot
Contributor

@sbueringer:
This request has been marked as needing help from a contributor.

Guidelines

Please ensure that the issue body includes answers to the following questions:

  • Why are we solving this issue?
  • To address this issue, are there any code changes? If there are code changes, what needs to be done in the code and what places can the assignee treat as reference points?
  • Does this issue have zero to low barrier of entry?
  • How can the assignee reach out to you for help?

For more details on the requirements of such an issue, please see here and ensure that they are met.

If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-help command.

In response to this:

/help

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added the help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. label Aug 28, 2024
@dlipovetsky
Contributor Author

/assign
