fix scale client error handling #19275
Conversation
This is definitely a better approach than #19271
/cc @DirectXMan12
@@ -197,7 +197,10 @@ func (c *namespacedScaleClient) Update(resource schema.GroupResource, scale *aut
		Body(scaleUpdateBytes).
		Do()
	if err := result.Error(); err != nil {
		return nil, fmt.Errorf("could not update the scale for %s %s: %v", resource.String(), scale.Name, err)
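For context, the gist of the error-handling problem in the line above, as I read it (a sketch, not the literal patch): wrapping the result's error in fmt.Errorf flattens the typed API error into a plain string.

```go
// Sketch of the concern (illustrative, not the exact patch).
// rest.Result.Error() returns a structured *apierrors.StatusError when the
// server sent a Status object; fmt.Errorf discards that concrete type.
if err := result.Error(); err != nil {
	// Returning the error unchanged keeps typed checks such as
	// apierrors.IsNotFound(err) and apierrors.IsConflict(err)
	// working in callers.
	return nil, err
}
```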
It might; I have been looking into where that would come from.
How did this get in? I thought the clients are generated...
@sttts ^^
> How did this get in? I thought the clients are generated...
The scale client is a hand-rolled, generic client.
I would not pick this by itself.
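To unpack "hand-rolled, generic": as I understand it (a sketch under that assumption, not code from this PR), a single scale client serves every scalable resource, identified by a GroupResource at call time, rather than one generated typed client per API group.

```go
// Illustrative only; assumes scaleclient = "k8s.io/client-go/scale"
// and schema = "k8s.io/apimachinery/pkg/runtime/schema".
var scales scaleclient.ScalesGetter // e.g. built with scale.NewForConfig

// Any scalable resource can be addressed through the same client.
gr := schema.GroupResource{Group: "apps", Resource: "deployments"}
s, err := scales.Scales("myproject").Get(gr, "frontend") // placeholder namespace/name
if err != nil {
	return err
}
fmt.Printf("current replicas: %d\n", s.Spec.Replicas)
```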
/retest
force-pushed from ee864a5 to ed9f675
/retest
1 similar comment
/retest
Missing permission. Will fix tomorrow.
@@ -18,14 +19,15 @@ import (
)

// NewDeploymentConfigReaper returns a new reaper for deploymentConfigs
func NewDeploymentConfigReaper(appsClient appsclient.Interface, kc kclientset.Interface) kubectl.Reaper {
	return &DeploymentConfigReaper{appsClient: appsClient, kc: kc, pollInterval: kubectl.Interval, timeout: kubectl.Timeout}
func NewDeploymentConfigReaper(appsClient appsclient.Interface, kc kclientset.Interface, scaleClient scaleclient.ScalesGetter) kubectl.Reaper {
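For reference, a hypothetical call site after this signature change (the client variables and names are placeholders; the Stop signature is the kubectl.Reaper interface of that era, to the best of my knowledge):

```go
// Hypothetical wiring of the new constructor (illustrative only);
// assumes "time" is imported and the three clients are already built.
reaper := NewDeploymentConfigReaper(appsClient, kubeClient, scaleClient)

// Assumed interface: kubectl.Reaper.Stop(namespace, name, timeout, gracePeriod).
if err := reaper.Stop("myproject", "frontend", 1*time.Minute, nil); err != nil {
	return err
}
```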
@mfojtik can we finally drop the reapers? We have had GC since 3.6. We should do that upstream as well.
We can't entirely drop the reapers, but I need to start doing this upstream. So upstream first gets rid of them, and then we'll do it after the next rebase. That reminds me I need to sync with @deads2k (be prepared, my list of topics is growing 😉).
> We can't entirely drop the reapers

@soltysh I am interested in those reasons

> but I need to start doing this upstream.

+1
> @soltysh I am interested in those reasons

Reapers, aside from removing a resource, also scale it down, e.g. deployments. That bit of functionality cannot be removed, and I doubt the GC handles it.
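A rough sketch of that scale-down-then-delete sequence (my illustration of the pattern; not the repo's reaper code, and deleteFn stands in for the typed client's Delete call):

```go
// Illustrative sketch of what a reaper does beyond plain deletion.
// Assumes scaleclient = "k8s.io/client-go/scale" and
// schema = "k8s.io/apimachinery/pkg/runtime/schema".
func reap(scales scaleclient.ScalesGetter, namespace, name string,
	deleteFn func(namespace, name string) error) error {
	gr := schema.GroupResource{Group: "apps.openshift.io", Resource: "deploymentconfigs"}

	// Scale the workload to zero first so its pods are torn down...
	s, err := scales.Scales(namespace).Get(gr, name)
	if err != nil {
		return err
	}
	s.Spec.Replicas = 0
	if _, err := scales.Scales(namespace).Update(gr, s); err != nil {
		return err
	}

	// ...then remove the object itself. A plain delete that leans on GC
	// alone would skip this explicit scale-down step.
	return deleteFn(namespace, name)
}
```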
@soltysh isn't the resource marked as deleted so it can't create new subresources? No need for scale down...
That's something I'm not 100% sure of, and as usual I'd prefer to double-check it.
Controllers are not allowed to operate on resources with deletionTimestamp != nil, to avoid racing with GC.
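In code, that convention is usually just an early exit at the top of the sync path (a common pattern, not quoted from this repo):

```go
// Common controller guard (illustrative): skip objects that are already
// being deleted so the controller never races the garbage collector.
// Assumes metav1 = "k8s.io/apimachinery/pkg/apis/meta/v1".
func shouldSync(obj metav1.Object) bool {
	return obj.GetDeletionTimestamp() == nil
}
```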
Yup.
I am quite positive, but double-checking doesn't hurt :)
@deads2k I've briefly looked at upstream RBAC, but I don't see statefulsets/scale there, which would suggest this is wiring the scaler differently even for upstream resources. If so, do we want to diverge?
It suggests that they don't test it and we do. Which seems more likely? :)
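For concreteness, the kind of RBAC rule being discussed would look roughly like this (a hypothetical rule expressed with the rbacv1 types; not taken from either repo):

```go
// Hypothetical RBAC rule for the scale subresource (illustrative only).
// Assumes rbacv1 = "k8s.io/api/rbac/v1".
rule := rbacv1.PolicyRule{
	APIGroups: []string{"apps"},
	Resources: []string{"statefulsets/scale"},
	Verbs:     []string{"get", "update", "patch"},
}
```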
/retest
1 similar comment
/retest
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: deads2k, mfojtik. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
@mfojtik for your consideration.
61790 is the target. It fixes the scale error handling. It can't come in cleanly without 60455.
62114 is where we achieved stability upstream. I'm not completely sure whether we're stable or flaky without it. We'll need it for 3.11 regardless.