
gcbmgr container push caused k8s 1.8.15 release to fail #579

Closed
jpbetz opened this issue Jul 11, 2018 · 10 comments
Labels
- area/release-eng: Issues or PRs related to the Release Engineering subproject
- lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
- priority/important-soon: Must be staffed and worked on either currently, or very soon, ideally in time for the next release.
- sig/release: Categorizes an issue or PR as relevant to SIG Release.

Comments

jpbetz commented Jul 11, 2018

Marking this as a P2 since a second attempt to release was successful.

Steps taken:

$ ./gcbmgr stage release-1.8 --nomock --official
$ ./gcbmgr tail b123dd6f-54c4-4324-9246-f67beb24cc96
$ ./gcbmgr release --nomock --official release-1.8 --buildversion=v1.8.15-beta.0.33+c2bd642c70b362

Failed with error: "Pushing staging-k8s.gcr.io/cloud-controller-manager:v1.8.15: .....FAILED"

$ ./gcbmgr release --nomock --official release-1.8 --buildversion=v1.8.15-beta.0.33+c2bd642c70b362
$ ./gcbmgr tail 05b1db46-8b32-483a-b893-d853dfcfee93

...output indicated that if I run "gsutil -m rm -r gs://kubernetes-release/release/v1.8.15" I should be able to reattempt the release.

$ gsutil -m rm -r gs://kubernetes-release/release/v1.8.15

<successful>
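
As a sanity check before retrying (a hedged sketch, not part of the original steps, assuming standard gsutil semantics), listing the release prefix should now match no objects:

$ # Expect gsutil to report that the URL matched no objects once cleanup succeeded:
$ gsutil ls gs://kubernetes-release/release/v1.8.15/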

$ ./gcbmgr release --nomock --official release-1.8 --buildversion=v1.8.15-beta.0.33+c2bd642c70b362
$ ./gcbmgr tail 402f24fa-c7b3-48bf-ac37-841e7bfdff79

...release successful

/cc @listx @ixdy @david-mcmahon

david-mcmahon (Contributor) commented:

These failed pushes are usually due to an out of date gcloud binary. We don't really have a good story on updating the k8s-cloud-builder image on a regular basis.

Update it by running https://github.com/kubernetes/release/blob/master/build/build-k8s-cloud-builder-container.
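
For reference, a minimal sketch of running that script from a fresh checkout (the clone location is illustrative; only the script path comes from the link above, and it assumes the script is executable):

$ git clone https://github.com/kubernetes/release.git
$ cd release
$ # Rebuilds the k8s-cloud-builder image so it picks up a current gcloud:
$ ./build/build-k8s-cloud-builder-container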

jpbetz (Author) commented Jul 17, 2018

Also encountered by @foxish for 1.11.1 - https://gist.github.com/foxish/f0c98aa2a7d832851cbdcc9ea7107f4c

MaciekPytel commented:

Is there something I can do to avoid this issue? Should I just run https://github.com/kubernetes/release/blob/master/build/build-k8s-cloud-builder-container every time before trying to cut a release?

david-mcmahon (Contributor) commented:

@MaciekPytel That doesn't seem unreasonable, though it's certainly less than ideal; having some CI engine do that work would be best. It also appears that the underlying Ubuntu base image used to create that image changed somehow and the Dockerfile no longer builds, so that needs some attention as well.
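
A hedged sketch of what such a scheduled rebuild might run once the Dockerfile is fixed (the image tag and build context are assumptions, not anything the repo ships):

$ # From a nightly cron/CI job: --pull and --no-cache force a fresh base image
$ # and a fresh gcloud install, which is exactly what goes stale between releases.
$ docker build --pull --no-cache -t k8s-cloud-builder:latest .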

david-mcmahon added the priority/important-soon label Jul 18, 2018
fejta-bot commented:

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

k8s-ci-robot added the lifecycle/stale label Oct 16, 2018
dims (Member) commented Oct 23, 2018

@dougm did you run into this issue?

dougm (Member) commented Oct 23, 2018

@dims no, I haven't run into this.

fejta-bot commented:

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Nov 22, 2018
fejta-bot commented:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

k8s-ci-robot (Contributor) commented:

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

justaugustus added the sig/release and area/release-eng labels Dec 9, 2019