Kubernetes 1.19 Release Cycle #284
/assign @aveshagarwal @ravisantoshgudimetla @seanmalloy @ingvagabund

If this looks good, could you all please give a +1 to acknowledge it, or add any comments we should address here? Once we have consensus we can begin this process.
I don't think the patch version for descheduler needs to match the k8s patch version. The descheduler patch version should be used to keep track of descheduler bug fixes. I recommend releasing v0.18.0 instead of v0.18.2.
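A minimal sketch of what this versioning scheme implies for tagging, assuming the usual git flow (tag names beyond v0.18.0 are illustrative):

```sh
# Tag the descheduler release independently of the k8s patch level.
# The deps may be at k8s 1.18.2, but the descheduler tag starts at .0.
git checkout release-1.18
git tag -a v0.18.0 -m "descheduler v0.18.0 (built against k8s 1.18.x deps)"
git push origin v0.18.0

# A later descheduler bug-fix release on the same branch would be v0.18.1,
# regardless of which k8s 1.18.x patch the dependencies point at.
```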
What is the benefit of a descheduler 0.17 based on k8s 1.17? I am pretty sure descheduler 0.18 based on k8s 1.18 should work for 1.17 too without any issue, or are there any 1.18-specific APIs we are using in descheduler that might not work on k8s 1.17? I mean, k8s 1.18 is the latest release, and if going forward we want to map every descheduler release to the corresponding k8s release, that's fine. So why not just start with 1.18? I don't have a strong opinion either way; it's your choice if you have the cycles. Also, if we are doing the above, should we also clarify, perhaps make explicit, the descheduler's supportability statement in the README, maybe by providing some sort of matrix showing that even though there is going to be a descheduler release for every k8s release, each descheduler release should work fine on multiple releases of k8s? What are your thoughts? I am just hoping that users do not get confused into thinking descheduler 0.x would only work with k8s 1.x.
I agree.
Matching patch releases seems like too much and might not make much sense IMO, as APIs don't change in k8s patch releases.
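To make the supportability statement concrete, the README matrix suggested above could take a shape like the following (the pairings shown are illustrative, not an official support statement):

| Descheduler | Built against k8s | Expected to also work on |
|-------------|-------------------|--------------------------|
| v0.18.x     | v1.18.x           | v1.17.x                  |
| v0.19.x     | v1.19.x           | v1.18.x                  |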
In general, +1.
One thing I noticed is that PRs by people who have approval permission (me, @damemi, and I think Ravi) are getting self-approved. Could we look into it so that PRs don't get self-approved? Or should self-approval require one more approval? Does that make sense?
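For reference, this behavior is controlled by the Prow approve plugin's `require_self_approval` option; a sketch of the plugin-config change (the exact stanza for this repo is an assumption):

```yaml
# config/prow/plugins.yaml in kubernetes/test-infra (sketch; the
# descheduler stanza shown here is hypothetical)
approve:
  - repos:
      - kubernetes-sigs/descheduler
    # When unset/false, authoring a PR counts as an implicit /approve for
    # approvers; setting this to true requires an explicit /approve instead.
    require_self_approval: true
```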
Good point, we don't really need to catch ourselves up when we can just start with a v0.18.0. @seanmalloy what do you think? Would you be willing to write up the v0.18.0 release notes for us?
@damemi in general this sounds good to me. Yes, I can write up the release notes for v0.18.0. I'll add a comment to this issue with the release notes when I have them ready.
@damemi here are the release notes for v0.18.0. I made some slight changes to the format since the last release. The release notes are also available in this gist. Docker images are available on Google Container Registry:
New Features 🌈
Bug Fixes 🐛
Others 🏃
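For anyone wanting to try the pre-promotion build, pulling from the staging registry would look roughly like this (the staging path follows the usual k8s-staging-&lt;project&gt; convention and is an assumption here):

```sh
# Staging image, available before promotion to the production registry
docker pull gcr.io/k8s-staging-descheduler/descheduler:v0.18.0
```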
@aveshagarwal I'll get a PR submitted soon to update the README with a compatibility matrix.
@seanmalloy thanks for the great notes. Are those production images already promoted for v0.18?
@damemi no, they are not. Someone needs to follow the steps in the release guide. We need to add a step to the release guide about creating release branches.
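A minimal sketch of that branch-creation step, assuming branches are cut from the commits carrying the matching deps (the placeholder SHAs are not real branch points):

```sh
# Cut and publish release branches (sketch; <sha-...> are placeholders)
git checkout -b release-1.17 <sha-with-1.17-deps>
git push origin release-1.17
git checkout -b release-1.18 <sha-with-1.18-deps>
git push origin release-1.18
```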
@seanmalloy ok, just checking. I think we've addressed everything that was brought up, so if there are no more suggestions I'll create the release-1.18 branch and v0.18.0 tag. Then I will publish the draft release. Following image promotion we'll publish the release from the draft.
@damemi I created PR #288 to update the compatibility matrix and PR #289 to update the release guide with some details on creating the release branches.
The container images are built and pushed to the staging registry by an automated prow job with the "semi-auto" process. Using the "manual" process a person builds and pushes the container images from their laptop to the staging registry. The "manual" process would only need to be used if the automated prow job fails for some reason.
Yes, I can take care of that step.
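As an illustration of the "manual" fallback described above (the make target and variable names are assumptions about the build setup, not the project's documented interface):

```sh
# Build and push the image to staging from a workstation (sketch)
VERSION=v0.18.0
REGISTRY=gcr.io/k8s-staging-descheduler
make image VERSION="${VERSION}" REGISTRY="${REGISTRY}"
docker push "${REGISTRY}/descheduler:${VERSION}"
```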
The release-1.18 branch is created. @seanmalloy once the image promotions are ready I'll publish the release.
@damemi I see the release-1.18 branch, but I don't see a v0.18.0 tag yet.
@seanmalloy I just pushed the v0.18.0 tag.
Here is the v0.18.0 container image in the staging registry that needs to be promoted.
Pull request for the v0.18.0 image promotion: kubernetes/k8s.io#888
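For context, promotion PRs against kubernetes/k8s.io add the staging image's digest to the image promoter manifest; the entry would look roughly like this (the digest is a placeholder):

```yaml
# k8s.gcr.io/images/k8s-staging-descheduler/images.yaml (sketch)
- name: descheduler
  dmap:
    # sha256 digest of the staging image to promote; placeholder shown
    "sha256:<staging-image-digest>": ["v0.18.0"]
```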
@damemi the
@seanmalloy thanks for catching that. I see the image promotion has merged, so I've published the release here: https://github.com/kubernetes-sigs/descheduler/releases/tag/v0.18.0 (I also included a note explaining the versioning change). Now that we're caught up, we will do the same process again around 8/4 for the 1.19 GA. Thanks everyone!
@damemi do you think we should define the list of features we would like to target for the v0.19.0 release? Here is a starter list:
Keep in mind there are only about two months until v0.19.0.
With the 1.19 code freeze today, I opened #337 to test bumping our k8s deps to a 1.19 branch (currently
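For reference, once the 1.19 staging tags are published the bump is roughly the following; pre-GA, #337 would pin a pre-release tag or branch instead (module list abbreviated):

```sh
# Bump the main k8s dependency modules and tidy the module graph
go get k8s.io/api@v0.19.0 k8s.io/apimachinery@v0.19.0 k8s.io/client-go@v0.19.0
go mod tidy
```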
Started working on the release notes for 1.19; if anything's missing or in the wrong spot so far, let me know. Obviously we still have a couple of things being worked on, so this is incomplete right now.
I opened #367 to track updating to Go 1.15. The k8s v1.19 release is going to use Go 1.15. This should probably be done prior to releasing descheduler v0.19.0.
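The toolchain bump itself is small; a sketch, assuming Go is pinned in go.mod and in a golang builder image (the Dockerfile detail is an assumption):

```sh
# Update the go directive in go.mod
go mod edit -go=1.15
# Then bump the builder image wherever it is pinned, e.g. in a Dockerfile:
#   FROM golang:1.15 AS builder
```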
We talked about it today and decided to plan on tagging the 1.19 release on Monday, August 31. This is because upstream 1.19 GA is still planned for Tuesday (8/25), and some of us plan to be OOO next week. Releasing the following Monday will ensure there are people available to catch any big fallout. This also gives us time to finish up any ongoing work for this release and prepare/review PRs for the GA bump. If this sounds good to everyone, please prioritize any last-minute PRs. Thanks!
I updated the release notes draft in #284 (comment). If these look good, we'll publish the 1.19 tag EOD today.
The descheduler v0.19.0 release is done.

/close
@seanmalloy: Closing this issue. In response to this:
> This outlines the descheduler release plans for the upstream 1.19 release cycle (April 13 - August 4), ending with a rebase to the upstream 1.19 dependencies soon after 1.19 GA (on August 4).
>
> At the start of this cycle, the descheduler was still on 1.17 dependencies, so we needed to take some steps to catch up:
>
> - `release-1.17` branch
> - `release-1.18` branch, with updated 1.18 deps
> - `v0.18.0` release (this should actually probably be `v0.18.2` since that's the matching upstream tag for what we bumped to in Update to k8s 1.18.2 dependencies #280)
> - `release-1.19` branch with updated 1.19 deps and tag `v0.19.0` release
>
> This will have us caught up so that, at the conclusion of the 1.20 upstream release cycle, we only need to update our deps, branch, and tag.