
Kubernetes 1.19 Release Cycle #284

Closed · 3 of 4 tasks · damemi opened this issue May 14, 2020 · 29 comments

@damemi (Contributor) commented May 14, 2020:

This outlines the descheduler release plan for the upstream 1.19 release cycle (April 13 – August 4), ending with a rebase onto the upstream 1.19 dependencies soon after 1.19 GA on August 4.

At the start of this cycle, the descheduler was still on 1.17 dependencies so we needed to take some steps to catch up:

  • Create release-1.17 branch
  • Create release-1.18 branch, with updated 1.18 deps
  • Tag v0.18.0 release (this should probably be v0.18.2, since that's the upstream tag matching what we bumped to in Update to k8s 1.18.2 dependencies #280)
  • (Soon after 1.19 GA) Create release-1.19 branch with updated 1.19 deps and tag v0.19.0 release

This will have us caught up so that, at the conclusion of the 1.20 upstream release cycle, we only need to update our deps, branch, and tag.
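
For concreteness, here is a minimal sketch of what the branch-and-tag steps look like in git. The remote name `upstream` and branching from `master` are assumptions, not the project's documented release process:

```sh
# Hypothetical sketch of cutting a release branch and tagging a release.
git fetch upstream
git checkout -b release-1.18 upstream/master
git push upstream release-1.18

# Annotated tag for the release; pushing it is what later triggers the
# image build job (see the discussion further down in this thread).
git tag -a v0.18.0 -m "descheduler v0.18.0"
git push upstream v0.18.0
```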

@damemi (Contributor, author) commented May 14, 2020:

/assign @aveshagarwal @ravisantoshgudimetla @seanmalloy @ingvagabund
This is a summary of the steps we discussed in #273 for our branching/release cycle; it gets us caught up so that, ideally, we only have to do this once per k8s release from now on.

If this looks good, could you all please give a +1 to acknowledge it, or add any comments we should address here? Once we have consensus, we can begin this process.

@damemi damemi changed the title Kubernetes 1.19 Release Cycle [draft] Kubernetes 1.19 Release Cycle May 14, 2020
@seanmalloy (Member) commented May 14, 2020:

I don't think the patch version for descheduler needs to match the k8s version. The patch version for descheduler should be used to keep track of descheduler bug fixes. I recommend releasing v0.18.0 instead of v0.18.2.

@aveshagarwal (Contributor) commented:

What is the benefit of a descheduler 0.17 based on k8s 1.17? I am pretty sure a descheduler 0.18 based on k8s 1.18 would work for 1.17 too without any issue. Or are there any 1.18-specific APIs we use in the descheduler that might not work on k8s 1.17? k8s 1.18 is the latest release, and if we want to map every descheduler release to the corresponding k8s release going forward, that's fine. So why not just start with 1.18? I don't have a strong opinion either way; it's your choice if you have the cycles.

Also, if we are doing the above, should we make the descheduler's support statement in the README explicit, perhaps with some sort of compatibility matrix? Even though there will be a descheduler release for every k8s release, each descheduler release should work fine on multiple releases of k8s. What are your thoughts? I am just hoping that users do not get confused into thinking that descheduler 0.x only works with k8s 1.x.

@aveshagarwal (Contributor) commented:

> I don't think the patch version for descheduler needs to match the k8s version.

I agree.

> The patch version for descheduler should be used to keep track of descheduler bug fixes. I recommend releasing v0.18.0 instead of v0.18.2.

Matching patch releases seems like too much and would not make much sense IMO, as APIs don't change in k8s patch releases.

@aveshagarwal (Contributor) commented:

In general, +1.

@aveshagarwal (Contributor) commented May 15, 2020:

One thing I noticed is that PRs from people who have approval permission (me, @damemi, and I think Ravi) are getting self-approved. Could we look into this so that PRs don't get self-approved, or so that a self-approval requires one more approval? Does that make sense?

@damemi (Contributor, author) commented May 19, 2020:

> What is the benefit of a descheduler 0.17 based on k8s 1.17? I am pretty sure a descheduler 0.18 based on k8s 1.18 would work for 1.17 too without any issue.

Good point; we don't really need to catch ourselves up when we can just start with a v0.18.0 release and keep things matched up from there.

@seanmalloy what do you think? Would you be willing to write up v0.18.0 release notes for us?

@seanmalloy (Member) commented:

> @seanmalloy what do you think? Would you be willing to write up v0.18.0 release notes for us?

@damemi in general this sounds good to me. Yes, I can write up the release notes for v0.18.0. I'll add a comment to this issue with the release notes when I have them ready.

@seanmalloy (Member) commented:

@damemi here are the release notes for v0.18.0. I made some slight changes to the format since the last release. The release notes are also available in this gist.

Docker images are available on Google Container Registry:

```
docker run asia.gcr.io/k8s-artifacts-prod/descheduler/descheduler:v0.18.0 --help
docker run eu.gcr.io/k8s-artifacts-prod/descheduler/descheduler:v0.18.0 --help
docker run us.gcr.io/k8s-artifacts-prod/descheduler/descheduler:v0.18.0 --help
```

New Features 🌈

Bug Fixes 🐛

Others 🏃

@seanmalloy (Member) commented:

> Also, if we are doing the above, should we make the descheduler's support statement in the README explicit, perhaps with some sort of compatibility matrix? Even though there will be a descheduler release for every k8s release, each descheduler release should work fine on multiple releases of k8s. What are your thoughts? I am just hoping that users do not get confused into thinking that descheduler 0.x only works with k8s 1.x.

@aveshagarwal I'll get a PR submitted soon to update the Compatibility Matrix section of the README.

@damemi (Contributor, author) commented May 20, 2020:

@seanmalloy thanks for the great notes. Are those production images already promoted for v0.18?

@seanmalloy (Member) commented May 20, 2020:

> Are those production images already promoted for v0.18?

@damemi no they are not. Someone needs to follow the steps in the release guide. We need to add a step to the release guide about creating release branches.

@damemi (Contributor, author) commented May 20, 2020:

@seanmalloy ok, just checking.

I think we've addressed everything that was brought up, so if there are no more suggestions I'll create the release-1.18 branch at EOD today (to give a last call for anyone to speak up).

Then I will publish the draft v0.18.0 release, which should create the tag. @seanmalloy after that do you want to take responsibility for opening the image promotion PR? (Also, what's the difference in the "manual" and "semi-auto" sections of the release guide, besides pushing a staging image? Do we need to push a staging image?)

Following image promotion we'll publish the release from draft.

@seanmalloy (Member) commented:

@damemi I created PR #288 to update the compatibility matrix and PR #289 to update the release guide with some details on creating the release branches.

> Also, what's the difference in the "manual" and "semi-auto" sections of the release guide?

The container images are built and pushed to the staging registry by an automated prow job with the "semi-auto" process. Using the "manual" process, a person builds and pushes the container images from their laptop to the staging registry. The "manual" process would only need to be used if the automated prow job fails for some reason.
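
For illustration only, the "manual" path boils down to building and pushing the image yourself. This sketch assumes a Dockerfile at the repo root and push access to the staging project; the tag format is copied from the staging image mentioned below, and the real release guide may use make targets or cloud builds instead:

```sh
# Hypothetical manual build-and-push to the staging registry.
docker build -t gcr.io/k8s-staging-descheduler/descheduler:v20200521-v0.18.0 .
docker push gcr.io/k8s-staging-descheduler/descheduler:v20200521-v0.18.0
```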

> @seanmalloy after that do you want to take responsibility for opening the image promotion PR?

Yes, I can take care of that step.

@damemi (Contributor, author) commented May 20, 2020:

Branch release-1.18 is cut here: https://github.com/kubernetes-sigs/descheduler/tree/release-1.18, and I have the 1.18 release draft ready. (Not sure how to share it besides marking it pre-release or just publishing it, but it's just copy-pasted from the notes above.)

@seanmalloy once the image promotions are ready I'll publish the release

@damemi damemi changed the title [draft] Kubernetes 1.19 Release Cycle Kubernetes 1.19 Release Cycle May 20, 2020
@seanmalloy (Member) commented:

@damemi I see the release-1.18 branch, but I don't see a new tag pushed: https://github.com/kubernetes-sigs/descheduler/tags. The newest tag is still v0.10.0. My guess is that creating a draft release does not create a tag. I think you need to create and push the v0.18.0 tag. Pushing the tag will trigger the prow job to build the container image.

@aveshagarwal (Contributor) commented:

@seanmalloy I just pushed the v0.18.0 tag.

@seanmalloy (Member) commented:

Here is the v0.18.0 container image in the staging registry that needs to be promoted: `gcr.io/k8s-staging-descheduler/descheduler:v20200521-v0.18.0`. I'm hoping to submit the PR to promote the image soon.
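
For reference, the image promoter identifies images by sha256 digest rather than by tag. A sketch of looking up the staged image's digest, assuming standard gcloud tooling:

```sh
# Sketch: print the sha256 digest of the staged image, which is what the
# promotion manifest in kubernetes/k8s.io references.
gcloud container images describe \
  gcr.io/k8s-staging-descheduler/descheduler:v20200521-v0.18.0 \
  --format='value(image_summary.digest)'
```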

@seanmalloy (Member) commented:

Pull request for v0.18.0 image promotion: kubernetes/k8s.io#888

@seanmalloy (Member) commented:

@damemi the --help CLI option needs to be removed from the docker run commands that I put in the release notes. You should be able to edit the v0.18.0 draft release to remove the --help CLI option.

@damemi (Contributor, author) commented May 22, 2020:

@seanmalloy thanks for catching that. I see the image promotion has merged, so I've published the release here: https://github.com/kubernetes-sigs/descheduler/releases/tag/v0.18.0 (I also included a note explaining the versioning change).

Now that we're caught up we will do the same process again around 8/4 for the 1.19 GA. Thanks everyone!

@seanmalloy (Member) commented:

@damemi do you think we should define the list of features we would like to target for the v0.19.0 release?

Here is a starter list:

Keep in mind there are only about 2 months until v0.19.0.

@damemi (Contributor, author) commented Jul 9, 2020:

With 1.19 code freeze today, I opened #337 to test bumping our k8s deps to a 1.19 branch (currently beta.2). This is just to test our compatibility now and prepare for a smooth release when it's time; there are still features we're working on for 1.19, and we aren't strictly beholden to today's code freeze.
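
A minimal sketch of such a bump, assuming the usual k8s.io staging modules are the relevant dependencies (the exact module list and the vendoring step are assumptions about this repo, not its actual update script):

```sh
# Hypothetical dependency bump to the 1.19 beta tag; adjust the module
# list to whatever go.mod actually requires.
go get k8s.io/api@v0.19.0-beta.2 \
       k8s.io/apimachinery@v0.19.0-beta.2 \
       k8s.io/client-go@v0.19.0-beta.2
go mod tidy
go mod vendor
```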

@damemi (Contributor, author) commented Jul 29, 2020:

I started working on the release notes for 1.19; if anything's missing or in the wrong spot so far, let me know. Obviously we still have a couple of things in progress, so this is incomplete right now.

## New Features :rainbow:
* #338 Filter pods by namespaces @ingvagabund
* #364 Allow custom priority threshold @lixiang233 
* #386 Bump k8s Modules For k8s 1.19 @seanmalloy 
* #371 Update To Go 1.15.0 @seanmalloy 

## Bug Fixes :bug:
* #285 Change break condition and thresholds validation for lowUtilization @lixiang233
* #310 Skip evicting when no evictable pod found on node @lixiang233
* #312 Pass golint check for pkg/descheduler/strategies/ @lixiang233
* #319 Remove redundant eviction log message and add "Reason" param to EvictPod @damemi
* #330 Fix examples typo @paulfantom
* #336 avoid appending list multiple times in RemoveDuplicates @lixiang233
* #361 remove unnecessary line feed in log messages @jjmengze
* #362 Update version parsing to exclude helm-chart tags @damemi
* #366 Add missing validation in PodAntiAffinity @lixiang233
* #369 Add check for ownerref length in DuplicatePods strategy @damemi
* #374 Promote Namespaces field to a pointer @ingvagabund 

## Others :running:
* #296 Make DeschedulerStrategy.Params a pointer @ingvagabund
* #297 Add verify-gofmt make target @ingvagabund
* #299 Standardize node affinity strategy logs @damemi
* #300 Add more verbose logging to IsEvictable checks @damemi
* #307 Add parent make verify target @damemi
* #321 Add Pod Eviction Reason To Events @seanmalloy
* #322 Move sortPodsBasedOnPriority to pod util @lixiang233
* #325 Add initial GitHub issue templates @seanmalloy
* #327 Update klog to v2 @farah
* #328 Update To Go 1.14.4 @seanmalloy
* #332 Add maxPodsToEvictPerNode to LowNodeUtilization testcase struct @lixiang233
* #333 Support only one sorting strategy in lowNodeUtilization @lixiang233
* #340 Update e2e script to use Kind setup @damemi
* #344 have k8s version configurable when creating cluster through kind @ingvagabund
* #342 Remove Travis CI Configuration @seanmalloy
* #298 Add helm chart @stevehipwell
* #343 Clean e2e test so it's easier to extend it @ingvagabund
* #351 Update Release Documentation For Helm Charts @seanmalloy
* #356 Update Helm release action to work on release branches @damemi
* #359 More Helm Documentation @seanmalloy
* #360 Add NPD+CA autohealing use case to user guide @dharmab
* #337 Rebase k8s dependencies to 1.19-rc.2 @damemi
* #363 Update Container Registry to k8s.gcr.io @seanmalloy
* #370 Bump k8s dependencies to 1.19-rc.4 @damemi 
* #375 Update Maintainer Details In Helm Chart @seanmalloy 
* #372 Redefine IsEvictable to be customizable for a particular strategy @ingvagabund 
* #382 Add table of contents to README @damemi 
* #380 Deprecate node-selector, max-pods-to-evict-per-node and evict-local-storage-pods flags and promote them to policy v1alpha1 fields @ingvagabund
* #387 Add KUBECONFIG Export To Contributing Docs @seanmalloy 
* #385 LowNodeUtilization: use clientset in testing, drop all custom reactors @ingvagabund 

@seanmalloy (Member) commented:

I opened #367 to track updating to Go 1.15. The k8s v1.19 release is going to use Go 1.15. This should probably be done prior to releasing descheduler v0.19.0.

@damemi (Contributor, author) commented Aug 20, 2020:

We talked about it today and decided to plan on tagging the 1.19 release on Monday, August 31.

This is due to upstream 1.19 GA still being planned for Tuesday (8/25), and some of us planning to be OOO next week. Releasing the following Monday will ensure there are people available to catch any big fallout.

This also gives us time to finish up any ongoing work for this release, and prepare/review PRs for the GA bump.

If this sounds good to everyone, please prioritize any last minute PRs. Thanks!

@damemi (Contributor, author) commented Aug 31, 2020:

I updated the release notes draft in #284 (comment). If these look good, we'll publish the 1.19 tag EOD today.

@seanmalloy (Member) commented:

The descheduler v0.19.0 release is done.

/close

@k8s-ci-robot (Contributor) commented:

@seanmalloy: Closing this issue.

In response to this:

> The descheduler v0.19.0 release is done.
>
> /close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
