Add information about what qualifies for a milestone #320
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/remove-lifecycle stale

Let's quantify this as part of the v1.14 release cycle. There are multiple repos that now have a v1.14 milestone; I believe the ones we care about are:
IMO all issues that relate to code, tasks, or process for v1.14 should have an associated milestone, regardless of repo. For PRs, I care only about code that lands in the release, so kubernetes/kubernetes. I am less concerned with their use in the first half of the release, but consider them mandatory in the latter half once we enter the burndown phase (code slush, code freeze, code thaw). This is to help us keep track of all work destined to land in the release.

It might be nice if we could have automation auto-milestone.

ref: #243 (comment) for the summary of our discussion of hopes and dreams on this during a sig-release meeting; there may be others.
/milestone v1.14

/kind documentation

Anyone interested in doc-ing this?
/remove-lifecycle stale

/area release-team

/assign
Thanks for working on this, Jeff! |
I sent over the initial draft to @onlydole of items from a bug-triage perspective |
cc @onlydole |
Talking more with @tpepper, we need to focus on two primary paths:
I'll update this PR with changes more in line with those two focus areas.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle rotten |
Hey @onlydole, are you still actively working on this? |
Howdy, @LappleApple! I've been in a holding pattern on this, and I don't think there's a clear way forward on this issue. I'd like to bring it up at the next release engineering meeting as a topic to see how we can close this out!

/remove-lifecycle stale
Awesome, @onlydole! Go ahead and add to the meeting agenda as your time and schedule allow :) |
/close in favor of #1257
/sig release
/help
rel: kubernetes/community#2408