
fix scale client error handling #19275

Merged
merged 6 commits on Apr 11, 2018

Conversation

deads2k
Contributor

@deads2k deads2k commented Apr 9, 2018

@mfojtik for your consideration.

61790 is the target. It fixes the scale error handling. It can't come in cleanly without 60455.

62114 is where we achieved stability upstream. I'm not completely sure whether we're stable or flaky without it. We'll need it for 3.11 regardless.

@openshift-ci-robot openshift-ci-robot added approved Indicates a PR has been approved by an approver from all required OWNERS files. size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. labels Apr 9, 2018
Contributor

@soltysh soltysh left a comment

This is definitely a better approach than #19271

@mfojtik
Contributor

mfojtik commented Apr 9, 2018

/cc @DirectXMan12

@@ -197,7 +197,10 @@ func (c *namespacedScaleClient) Update(resource schema.GroupResource, scale *aut
		Body(scaleUpdateBytes).
		Do()
	if err := result.Error(); err != nil {
		return nil, fmt.Errorf("could not update the scale for %s %s: %v", resource.String(), scale.Name, err)
Contributor

@deads2k @tnozicka isn't this "just enough" of a fix to stop the flakiness of the scaler and leave us with the rest in the 1.10 world?

Contributor

It might; I have been looking into where that would come from.

Contributor

How did this get in? I thought the clients were generated...

Contributor

@sttts ^^

Contributor Author

How did this get in? I thought the clients were generated...

The scale client is a hand-rolled, generic client.

I would not pick this by itself.
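For context, a minimal sketch (illustrative only, not the upstream change itself) of why the error handling at this call site matters: wrapping the REST client's error with fmt.Errorf discards the typed API status, so callers can no longer detect conditions such as NotFound or Conflict.

```go
package main

import (
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

func main() {
	// A typed NotFound error, as the REST client would return it.
	original := apierrors.NewNotFound(
		schema.GroupResource{Group: "apps", Resource: "deployments"}, "frontend")

	// The pattern shown in the diff above: wrap it into a plain string error.
	wrapped := fmt.Errorf("could not get the scale for deployments frontend: %v", original)

	fmt.Println(apierrors.IsNotFound(original)) // true:  callers can react to NotFound
	fmt.Println(apierrors.IsNotFound(wrapped))  // false: the status information is gone
}
```

Preserving the structured error is what lets higher-level retry logic distinguish a conflict or a missing object from a genuine failure, which is the kind of distinction the upstream fix is about.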

@deads2k
Contributor Author

deads2k commented Apr 9, 2018

/retest

@deads2k deads2k force-pushed the scale-01-tidy branch 2 times, most recently from ee864a5 to ed9f675 on April 9, 2018 18:11
@deads2k
Contributor Author

deads2k commented Apr 9, 2018

/retest

1 similar comment
@deads2k
Contributor Author

deads2k commented Apr 9, 2018

/retest

@deads2k
Contributor Author

deads2k commented Apr 10, 2018

Missing permission. Will fix tomorrow

@@ -18,14 +19,15 @@ import (
)

// NewDeploymentConfigReaper returns a new reaper for deploymentConfigs
func NewDeploymentConfigReaper(appsClient appsclient.Interface, kc kclientset.Interface) kubectl.Reaper {
	return &DeploymentConfigReaper{appsClient: appsClient, kc: kc, pollInterval: kubectl.Interval, timeout: kubectl.Timeout}
func NewDeploymentConfigReaper(appsClient appsclient.Interface, kc kclientset.Interface, scaleClient scaleclient.ScalesGetter) kubectl.Reaper {
Contributor

@mfojtik can we finally drop the reapers? We have had GC since 3.6. We should do that upstream as well.

Contributor

We can't entirely drop the reapers, but I need to start doing this upstream. So upstream first gets rid of them, and then we'll do it after the next rebase. That reminds me I need to sync with @deads2k (be prepared, my list of topics is growing 😉).

Contributor

We can't entirely drop the reapers

@soltysh I am interested in those reasons

but I need to start doing this upstream.

+1

Contributor

@soltysh soltysh Apr 10, 2018

@soltysh I am interested in those reasons

Reapers, aside from removing a resource, also scale it down, e.g. deployments. That bit of functionality cannot be removed, and I doubt the GC handles that.
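As a minimal sketch of that scale-down step (illustrative names, assuming a scaleclient.ScalesGetter like the one this PR threads into NewDeploymentConfigReaper; not the actual reaper code):

```go
package reaper

import (
	"k8s.io/apimachinery/pkg/runtime/schema"
	scaleclient "k8s.io/client-go/scale"
)

// scaleToZero reads the /scale subresource and writes it back with zero
// replicas — the scale-down step described above.
func scaleToZero(scales scaleclient.ScalesGetter, namespace, name string, gr schema.GroupResource) error {
	scale, err := scales.Scales(namespace).Get(gr, name)
	if err != nil {
		return err
	}
	scale.Spec.Replicas = 0
	_, err = scales.Scales(namespace).Update(gr, scale)
	return err
}
```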

Contributor

@tnozicka tnozicka Apr 10, 2018

@soltysh isn't the resource marked as deleted so it can't create new subresources? No need for scale down...

Contributor

That's something I'm not 100% sure of, and as usual I'd prefer to double-check it.

Contributor

Controllers are not allowed to operate on resources with deletionTimestamp != nil to avoid racing with GC
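That convention, as a minimal sketch (illustrative, not code from this PR): a controller's sync path bails out once the object carries a deletionTimestamp, leaving teardown to the garbage collector.

```go
package controller

import (
	appsv1 "k8s.io/api/apps/v1"
)

// shouldSync reports whether the controller may still act on the object.
// A non-nil deletionTimestamp means the object is pending deletion, and
// creating or scaling dependents at that point would race with the GC.
func shouldSync(d *appsv1.Deployment) bool {
	return d.DeletionTimestamp == nil
}
```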

Contributor

Yup.

Contributor

I am quite positive, but double-checking doesn't hurt :)

@tnozicka
Contributor

@deads2k I've briefly looked at upstream RBAC, but I don't see statefulsets/scale there, which would suggest this is wiring the scaler differently even for upstream resources - if so, do we want to diverge?

@deads2k
Contributor Author

deads2k commented Apr 10, 2018

@deads2k I've briefly looked at upstream RBAC, but I don't see statefulsets/scale there, which would suggest this is wiring the scaler differently even for upstream resources

It suggests that they don't test it and we do. Which seems more likely? :)
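For reference, the kind of RBAC entry being discussed, as a hypothetical sketch (the real policy wiring may differ): without a rule covering the scale subresource, calls through the scale client come back Forbidden.

```go
package main

import (
	"fmt"

	rbacv1 "k8s.io/api/rbac/v1"
)

func main() {
	// Hypothetical rule for illustration: grants access to the scale
	// subresource of StatefulSets, the sort of entry being looked for upstream.
	rule := rbacv1.PolicyRule{
		APIGroups: []string{"apps"},
		Resources: []string{"statefulsets/scale"},
		Verbs:     []string{"get", "update", "patch"},
	}
	fmt.Printf("%+v\n", rule)
}
```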

@deads2k
Contributor Author

deads2k commented Apr 10, 2018

/retest

1 similar comment
@mfojtik
Contributor

mfojtik commented Apr 11, 2018

/retest

@mfojtik
Contributor

mfojtik commented Apr 11, 2018

/lgtm

@openshift-ci-robot openshift-ci-robot added the lgtm Indicates that a PR is ready to be merged. label Apr 11, 2018
@openshift-ci-robot

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: deads2k, mfojtik

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
