Generation and separate status updates for DCs #7149
Conversation
Finally got generation working for both new and old clients. Still needs tests, but this is ready for review @openshift/api-review @ironcladlou
[test][extended:core(client deployments)]
Not sure if changing strategy should always update generation.
Maybe in the classic meaning, I guess. Not in the user-driven meaning.
[test][extended:core(client deployments)]
Not sure if my extended test will run. @smarterclayton, is the string in core() a regexp, or should I pass the whole test name?
@@ -76,7 +76,7 @@ var (
	GroupsToResources = map[string][]string{
		BuildGroupName: {"builds", "buildconfigs", "buildlogs", "buildconfigs/instantiate", "buildconfigs/instantiatebinary", "builds/log", "builds/clone", "buildconfigs/webhooks"},
		ImageGroupName: {"imagestreams", "imagestreammappings", "imagestreamtags", "imagestreamimages", "imagestreamimports"},
-		DeploymentGroupName: {"deployments", "deploymentconfigs", "generatedeploymentconfigs", "deploymentconfigrollbacks", "deploymentconfigs/log", "deploymentconfigs/scale"},
+		DeploymentGroupName: {"deployments", "deploymentconfigs", "generatedeploymentconfigs", "deploymentconfigrollbacks", "deploymentconfigs/log", "deploymentconfigs/scale", "deploymentconfigs/status"},
Not in this group; we grant edit on this. Use OpenshiftStatusGroupName until we manage to finish killing these. We're only keeping them for compatibility with old clients and unreconciled roles.
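To make that suggestion concrete, here is a minimal sketch of what moving the subresource could look like. OpenshiftStatusGroupName is taken from the comment above; the surrounding entries are abbreviated and assumed for illustration, not copied from the real map.

// Sketch only: keep edit rights on deploymentconfigs in the deployment group,
// but grant the status subresource through the status group instead.
GroupsToResources = map[string][]string{
	DeploymentGroupName:      {"deployments", "deploymentconfigs", "deploymentconfigs/scale"},
	OpenshiftStatusGroupName: {"deploymentconfigs/status"},
}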
[test][extended:core(deployment generation)]
It is a regex, and it needs to be escaped correctly.
@smarterclayton the image change controller needs to update both status (latestVersion, details) and spec (resolves container image). Should I add an atomic update for DCs updating both?
Couple of comments
Hm, actually when I am running with this PR, I see the container image being resolved. And I am using a status update in the image controller... :confused:
[test][extended:core(client deployments)]
Yes, like we did for builds. Make sure it's internal only for now.
Not sure about builds; I remember implementing it for imagestreamtags.
@smarterclayton the controller is using an OpenShift client; how can I make the update with the storage being internal? In other words, how can the server know that this update comes from the controller and use the internal storage? Annotations? That would be bad.
An idea would be to allow container image changes in status updates.
That's what I meant.
That's a lot harder. I don't think we can do that easily today.
I also don't like allowing status updates to modify the spec; it defeats the purpose of adding status in the first place. The best solution I can think of right now is to have the image change controller update the container image and the trigger params, detect it in the update hook, and update status details and maybe latestVersion (#7439).
@smarterclayton comments addressed, PTAL
#7455 failure, got merged after the test started :/
[test][extended:core(client deployments)]
@@ -225,11 +225,12 @@ func (o DeployOptions) deploy(config *deployapi.DeploymentConfig, out io.Writer)
		}
	}

	config.Status.LatestVersion++
	_, err = o.osClient.DeploymentConfigs(config.Namespace).Update(config)
	config.Generation++
So upstream they don't want any clients to update generation. I would prefer not to deviate, and it would actually be better, since some of the hacks in the registry hook won't be needed. Any ideas here? I really don't want to add a new spec field[1] and I am leaning towards an annotation: openshift.io/instantiated?
[1] So once we have parity with upstream Deployments, the DeploymentConfigSpec will look like this:
type DeploymentConfigSpec struct {
// Strategy describes how a deployment is executed.
Strategy DeploymentStrategy
// Minimum number of seconds for which a newly created pod should be ready
// without any of its container crashing, for it to be considered available.
// Defaults to 0 (pod will be considered available as soon as it is ready)
MinReadySeconds int32
// The number of old ReplicaSets to retain to allow rollback.
// This is a pointer to distinguish between explicit zero and not specified.
RevisionHistoryLimit *int32
// Indicates that the deployment is paused and will not be processed by the
// deployment controller.
Paused bool
// Triggers determine how updates to a DeploymentConfig result in new deployments. If no triggers
// are defined, a new deployment can only occur as a result of an explicit client update to the
// DeploymentConfig with a new LatestVersion.
Triggers []DeploymentTriggerPolicy
// Replicas is the number of desired replicas.
Replicas int
// Test ensures that this deployment config will have zero replicas except while a deployment is running. This allows the
// deployment config to be used as a continuous deployment test - triggering on images, running the deployment, and then succeeding
// or failing. Post strategy hooks and After actions can be used to integrate successful deployment with an action.
Test bool
// Selector is a label query over pods that should match the Replicas count.
Selector map[string]string
// Template is the object that describes the pod that will be created if
// insufficient replicas are detected.
Template *kapi.PodTemplateSpec
}
As a new user coming to DCs I would probably feel overwhelmed by such a spec. @smarterclayton @ironcladlou do we want 100% parity? I could see us continuing to use ConfigChange triggers only (map it to Pause) and we may not care about MinReadySeconds.
Oh, and I didn't include 1) ProgressDeadlineSeconds, which is still being cooked, and 2) the rollback spec, but the latter should never be set by users anyway.
@deads2k as per our discussion yesterday, we are going to prohibit users from manipulating system labels/annotations ("openshift.io/") at some point, and I find it reasonable. It's going to be a breaking change for …
Wait, what? Who says that?
A separate discussion about how we might try to secure system-managed labels and annotations in the future.
@ironcladlou @smarterclayton @deads2k @liggitt I have updated to use an annotation instead of incrementing the generation. PTAL
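For context, a rough sketch of what the annotation-based approach could look like on the oc deploy --latest path. The key openshift.io/instantiated comes from the earlier comment; the exact key and the final shape of the merged code are assumptions, not quotes from this PR.

// Sketch: instead of bumping config.Generation on the client, mark the config
// with an annotation and let the server-side update hook react to it.
if config.Annotations == nil {
	config.Annotations = map[string]string{}
}
config.Annotations["openshift.io/instantiated"] = "true"
_, err = o.osClient.DeploymentConfigs(config.Namespace).Update(config)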
[test]
#8571 flake [test]
yum again [test]
Webdriver and #8789 [test]
Add generation numbers to deploymentconfigs. Helpful for decoupling latestVersion from oc.
Add a separate call for updating the status of a deploymentconfig. Use it where it makes sense (controllers should be responsible for updating the status). Also make spec updates tolerate status detail updates in case the update originates from the image change controller.
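As an illustration of the separate status call described in the second commit, a controller-side update might look roughly like this. UpdateStatus follows the usual status-subresource client pattern; the method and field names here are assumptions rather than quotes from the final code.

// Sketch: controllers mutate only Status and push it through the status
// subresource, leaving Spec untouched.
dc.Status.LatestVersion++
if _, err := osClient.DeploymentConfigs(dc.Namespace).UpdateStatus(dc); err != nil {
	return err
}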
// TODO: need to ensure status.latestVersion is not set out of order
dc := obj.(*api.DeploymentConfig)
dc.Generation = 1
dc.Status = api.DeploymentConfigStatus{}
So this will break old clients?
You either break those or you let them manipulate the deployment status. Having a preset status.latestVersion is the same as having a config change trigger so I don't think the breakage is big.
Evaluated for origin test up to f12aca3
newStatus := newDc.Status
newDc.Status = oldDc.Status

// If this update comes from the imagechange controller, tolerate status detail updates.
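For completeness, a sketch of how the rest of the update hook might continue once the old status has been preserved: bump the generation only when the spec actually changed, so status-only updates leave it alone. The DeepEqual-based check is an assumption about the implementation, not the exact merged code.

// Sketch: a spec change is what should advance the generation.
if !kapi.Semantic.DeepEqual(oldDc.Spec, newDc.Spec) {
	newDc.Generation = oldDc.Generation + 1
}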
Does the image change controller expect to set both spec and status? If so, comment here, and make sure we have a test to cover that.
Yes, I will address this in #8746 by removing the status update. So the flow on an image change will be: ImageChange controller updates spec ---(watch triggered)---> ConfigController sees a template diff, updates latestVersion ---(watch triggered)---> main dc controller runs the new deployment.
continuous-integration/openshift-jenkins/test SUCCESS (https://ci.openshift.redhat.com/jenkins/job/test_pr_origin/3687/)
LGTM, please watch the queue afterwards in case we start seeing flakes. [merge]
continuous-integration/openshift-jenkins/merge SUCCESS (https://ci.openshift.redhat.com/jenkins/job/merge_pull_requests_origin/5853/) (Image: devenv-rhel7_4144)
Evaluated for origin merge up to f12aca3
This PR adds generation numbers and a separate status call for DCs.
@smarterclayton @ironcladlou @deads2k @liggitt @openshift/api-review
Closes #6337
Closes #6465