Enable HorizontalPodAutoscaler Support for DeploymentConfigs #5310
Conversation
Once #5286 lands, I'll rebase on top of that. @ncdc @smarterclayton this PR currently follows upstream Kube in that it updates the replicas field on the actual deployment controller. It occurs to me that this does not appear to do much in Origin (since future deployments seem to be based on the Scale of the previous deployment). Should I update it so that it affects the latest deployment itself (like the scale command in Origin)?
@DirectXMan12 please use separate commits for Godeps modifications, and make sure the commit message starts with UPSTREAM. Thanks!
Answer is yes.
@@ -297,3 +297,16 @@ func DefaultOpenShiftUserAgent() string {
	version = seg[0]
	return fmt.Sprintf("%s/%s (%s/%s) openshift/%s", path.Base(os.Args[0]), version, runtime.GOOS, runtime.GOARCH, commit)
}

type HybridClient struct {
add var _ = unversioned.ScaleNamespacer(&HybridClient{}) to clarify and enforce the intent
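For context, the compile-time assertion pattern being requested looks like the sketch below. The interface and struct here are simplified stand-ins for unversioned.ScaleNamespacer and this PR's HybridClient, so the exact shapes are assumptions rather than the real client code.

```go
package main

import "fmt"

// ScaleNamespacer is a simplified stand-in for the real
// unversioned.ScaleNamespacer interface from the Kubernetes client.
type ScaleNamespacer interface {
	Scales(namespace string) fmt.Stringer // placeholder return type
}

// HybridClient is a stand-in for the type added in this PR.
type HybridClient struct{}

// Scales satisfies the stand-in interface.
func (c *HybridClient) Scales(namespace string) fmt.Stringer { return nil }

// Compile-time assertion: the build breaks if *HybridClient ever stops
// satisfying ScaleNamespacer, which documents and enforces the intent.
var _ = ScaleNamespacer(&HybridClient{})

func main() {}
```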
Nix this. It's been rejected on principle multiple times.
Nix what?
@deads2k joint client?
@deads2k joint client?
Yes. HybridClient should not exist. You should specify what you want in your API.
Ok, I think that our Client should include the ExtensionsClient and then internally mix the clients like we already do for scale and delete.
See how the Kubernetes client is currently implemented: https://github.com/kubernetes/kubernetes/blob/071d21257fdfd439a9286fffbd5972c85643cd49/pkg/client/unversioned/client.go#L128-l134
We should do the same for our client
See how the Kubernetes client is currently implemented: https://github.com/kubernetes/kubernetes/blob/071d21257fdfd439a9286fffbd5972c85643cd49/pkg/client/unversioned/client.go#L128-l134
We should do the same for our client
This is what they do
// Client is the implementation of a Kubernetes client.
type Client struct {
*RESTClient
*ExtensionsClient
// TODO: remove this when we re-structure pkg/client.
*DiscoveryClient
}
I don't think that this is a good thing. Having a single interface like that encourages non-specificity and it does not scale well as you add different clients. I think that a reasonable division is on the APIGroup boundary. Using different clients for different APIGroups will help keep the dependency structure clear, prevent a mega-client problem, and allow a clear expectation as APIGroups are enabled and disabled.
@smarterclayton as I recall, you were not a fan of combined Clients.
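To make the alternative concrete, here is a minimal sketch of the per-APIGroup division being advocated. The interface and method names are hypothetical illustrations, not existing OpenShift or Kubernetes APIs.

```go
package sketch

// KubeScaler and OriginScaler are hypothetical per-group client
// interfaces: each API group gets its own narrow client instead of
// everything being reachable through one embedded mega-client.
type KubeScaler interface {
	ScaleReplicationController(namespace, name string, replicas int) error
}

type OriginScaler interface {
	ScaleDeploymentConfig(namespace, name string, replicas int) error
}

// A consumer then states exactly which clients it needs ("specify what
// you want in your API"), which keeps dependencies explicit per group.
type AutoscalerDeps struct {
	Kube   KubeScaler
	Origin OriginScaler
}
```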
I am not; we should be providing an implementation of ScaleNamespacer and ScaleInterface that multiplexes to clients based on known kinds.
Or even better, use "IsOriginResource" to select the Origin kinds, and default all others to Kube.
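As a rough sketch of the multiplexing idea under discussion, a delegating scale client could look something like the following. The Scale struct and interface shapes are simplified stand-ins for the real client types, and IsOriginKind is a hypothetical placeholder for an "IsOriginResource"-style check, so none of this is the code that eventually merged.

```go
package delegating

// Scale and ScaleInterface are simplified stand-ins for the real types
// in the Kubernetes client; only the shape needed for illustration is kept.
type Scale struct {
	Replicas int
}

type ScaleInterface interface {
	Get(kind, name string) (*Scale, error)
	Update(kind string, scale *Scale) (*Scale, error)
}

// IsOriginKind stands in for an "IsOriginResource"-style check.
func IsOriginKind(kind string) bool {
	return kind == "DeploymentConfig"
}

// delegatingScales multiplexes scale calls on the kind: Origin kinds go
// to the Origin client, everything else defaults to the Kube client.
type delegatingScales struct {
	origin ScaleInterface
	kube   ScaleInterface
}

func (s *delegatingScales) Get(kind, name string) (*Scale, error) {
	if IsOriginKind(kind) {
		return s.origin.Get(kind, name)
	}
	return s.kube.Get(kind, name)
}

func (s *delegatingScales) Update(kind string, scale *Scale) (*Scale, error) {
	if IsOriginKind(kind) {
		return s.origin.Update(kind, scale)
	}
	return s.kube.Update(kind, scale)
}
```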
Can you rebase on top of the latest master?
)
}

func (*DeploymentConfig) IsAnAPIObject() {}
func (*DeploymentConfigList) IsAnAPIObject() {}
func (*DeploymentConfigRollback) IsAnAPIObject() {}
func (*Scale) IsAnAPIObject() {}
Since this is going to be a dc subresource I would go with DeploymentConfigScale.
Upstream didn't do that. They just call it scale, and it's a subresource under both ReplicationController and Deployment.
Force-pushed from 9ee049b to 5c3f06e.
I rebased and made the changes WRT the scale client. The change to have it affect the current deployment should be up shortly.
Force-pushed from 5c3f06e to ff4ef9c.
Ok, scale client affects current deployment now.
	Scale *ScaleREST
}

func NewStorage(s storage.Interface, rcNamespacer kclient.ReplicationControllersNamespacer) DeploymentConfigStorage {
Godoc
Force-pushed from edcd5ed to af348ec.
It also occurs to me that currently, the HPA expects heapster to be at "kube-system/heapster". Where should/does it live for OpenShift (and should I modify the kube code so that it's not in a constant, or do we just want to carry a patch to the constant)?
That's hard coded? I would expect it to be configurable.
It's ok to upstream patch this first in this PR and get the follow-up in.
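As a hedged sketch of the configurability idea being discussed (the constant and type names below are assumptions for illustration, not verified identifiers from the upstream HPA metrics code), the change amounts to replacing a hard-coded location with something the caller can set:

```go
package metrics

// Upstream hard-codes where Heapster lives, roughly like this
// (names are assumed for illustration):
const (
	heapsterNamespace = "kube-system"
	heapsterService   = "heapster"
)

// HeapsterLocation is one possible shape for making the location
// configurable instead of carrying a downstream patch to the constants.
type HeapsterLocation struct {
	Namespace string
	Service   string
}

// DefaultHeapsterLocation preserves the current upstream behavior.
func DefaultHeapsterLocation() HeapsterLocation {
	return HeapsterLocation{Namespace: heapsterNamespace, Service: heapsterService}
}
```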
Force-pushed from 8ff972b to d0860f3.
oldReplicas := controller.Spec.Replicas
controller.Spec.Replicas = scale.Spec.Replicas
if _, err = r.rcNamespacer.ReplicationControllers(deploymentConfig.Namespace).Update(controller); err != nil {
Add TODO to handle scale down by scaling down old deployment RCs in preference to new ones. If we merge like this, we should make an issue to fix it.
We shouldn't be scaling at all unless there's only one RC in business, correct?
Since this is going to race against the controller, you cannot get into a state where the new RC is being created but the old RC has been completely scaled down.
We shouldn't be scaling at all unless there's only one RC in business, correct?
I'm reasonably certain that it's possible for HPA to read a DC on thread A, DCController starts a new RC and scales down the old one on thread B, HPA reads the old RC on thread A, HPA scales the old RC on thread A.
Once that happens, the Get will indicate that you have too many replicas running, the Update is called again, and you scale down the new RC.
Ah, I see what you're saying. The next cycle through should see that we have two RCs running, though, so it won't succeed on the second scale:
- [HPA] get dc/scale --> need to scale up
- HPA calls update, races with DC controller due to out of date info between [get dc] and [get rc], scales old RC
- HPA waits for next cycle
- Either we're done deploying and HPA scales up the new RC or we're not done scaling, HPA does a get, calculates, and then does an update, which bounces since we're not done deploying.
lgtm
Can you link the follow-up issue for races here and set it to p1?
continuous-integration/openshift-jenkins/merge SUCCESS (https://ci.openshift.redhat.com/jenkins/job/test_pull_requests_origin/6693/) (Image: devenv-rhel7_2637)
[Test]ing while waiting on the merge queue
created #5597
gofmt
please run all
Force-pushed from cca8151 to e0e5c29.
This commit introduces the Scale subresource for DeploymentConfigs. The "Spec" field reflects the state of the most recent deployment, or, if none is present, the state of the DeploymentConfig template. The "Status" field reflects the state of all deployments for the given DeploymentConfig. This is roughly equivalent to how the Scale resource for upstream Deployments works.
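Based on that description, a minimal sketch of what the subresource shape might look like is below; the layout mirrors the upstream extensions Scale and is an assumption, not the merged Origin definition (TypeMeta/ObjectMeta omitted for brevity).

```go
package api

// Scale sketches the DeploymentConfig Scale subresource described above.
type Scale struct {
	// Spec is the desired replica count, taken from the most recent
	// deployment, or from the DeploymentConfig template if none exists yet.
	Spec ScaleSpec
	// Status reports the actual state across all deployments (RCs)
	// belonging to the DeploymentConfig.
	Status ScaleStatus
}

// ScaleSpec describes the desired attributes of the scale subresource.
type ScaleSpec struct {
	Replicas int
}

// ScaleStatus represents the current replica count and the selector
// shared by the deployments being counted.
type ScaleStatus struct {
	Replicas int
	Selector map[string]string
}
```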
This commit makes the HorizontalPodAutoscaler controller use the DelegatingScaleNamespacer so that it can reach both Kubernetes objects with Scale subresources and Origin DeploymentConfigs.
Force-pushed from e0e5c29 to 663f1d9.
[test]
Evaluated for origin test up to 27cc70a
continuous-integration/openshift-jenkins/test SUCCESS (https://ci.openshift.redhat.com/jenkins/job/test_pull_requests_origin/6693/)
removed merge. Is it actually too late?
got ok from @smarterclayton [merge]
Evaluated for origin merge up to 27cc70a
This PR adds support for HPAs scaling DeploymentConfigs. A new "HybridClient" struct is introduced to support Namespacers that need to point at resources or subresources that could be in either the Origin API or the Kube API (like Scale).

This commit also imports the Scale kind into the Origin v1 API (but it's still "backed" in the unversioned API by the underlying Kube resource).

Finally, this commit contains a patch to Kube that converts the HPA controller from using a single Client type to using separate Namespacers for its various needs.

TODOs:
- UPSTREAM: PR or UPSTREAM: <carry> format
- Update to document the order in which the deploymentconfig controller does the updates to resources as it creates new RCs and changes its RC's replicas. That information is required to reason about race potential.