
Release lagging (in the case of hotfix releases) #72

Closed

jakirkham opened this issue Mar 8, 2018 · 3 comments

Comments

@jakirkham
Contributor

It looks like singularity releases 2.4.3 and 2.4.4 were published on the same day, the latter hotfixing a security issue. However, the 2.4.4 PR ( conda-forge/singularity-feedstock#2 ) appears to have lagged the 2.4.3 PR ( conda-forge/singularity-feedstock#1 ) by 2 days. I'm not sure of the exact time difference between the two upstream releases, which is likely relevant.

That said, I would think the update script should prefer the latest of two releases and/or do a second pass for hotfixes that closes out recently outdated PRs. The reason is that we don't want to accidentally release problematic versions, or bias ourselves in that direction via the update bot. Admittedly there are likely more than a few subtleties here, but they are worth thinking about. Any thoughts on this?
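Roughly, the "prefer the latest release and close out superseded PRs" idea could look like the sketch below. This is purely illustrative, not the bot's actual code: `reconcile()` is a hypothetical helper, the `print` calls stand in for hypothetical `close_pr()` / `open_bump_pr()` actions, and `packaging.version` is used only to compare version strings.

```python
from packaging.version import Version

def reconcile(open_pr_versions, latest_upstream):
    """Keep only a bump PR for the newest known release."""
    best = max(open_pr_versions + [latest_upstream], key=Version)
    for v in open_pr_versions:
        if Version(v) < Version(best):
            print(f"close superseded PR for {v}")  # stand-in for a close_pr() call
    if best not in open_pr_versions:
        print(f"open PR for {best}")               # stand-in for an open_bump_pr() call

# Example: a 2.4.3 PR is still open when the 2.4.4 hotfix appears upstream.
reconcile(["2.4.3"], "2.4.4")
```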

@CJ-Wright
Member

Interesting.

I think this is a product of when 02 and 03 are run. 02 finds the upstream versions, so if the hotfix was missed there, at least a day of lag is introduced. It is also possible that 03 didn't finish all the feedstocks to be bumped in its one run, which may account for the other day.

My first move would be to run 02 and 03 more often so we catch some of these cases. There is a limit on how fast we can pick things up (until we react to a feed from upstream), but doubling the PR capacity may put us in a better state. Some of this also has to do with the speed at which we can send out the PRs, which is limited by the re-render (although the process is faster for recipes that are already up to date with the rendering).
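To make the two-pass structure concrete, here is a minimal sketch of the idea, assuming 02 collects the newest upstream versions and 03 opens the bump PRs. The data, names, and `print` stand-ins are illustrative only; this is not how 02/03 are actually implemented.

```python
from packaging.version import Version

# Stand-in data: upstream published 2.4.3 and 2.4.4 on the same day, while the
# feedstock recipe is still at 2.4.2.
UPSTREAM = {"singularity": ["2.4.3", "2.4.4"]}
FEEDSTOCKS = {"singularity": "2.4.2"}

def pass_02():
    """02: record the newest version each upstream currently publishes."""
    return {pkg: max(vs, key=Version) for pkg, vs in UPSTREAM.items()}

def pass_03(latest):
    """03: open a bump PR (printed here) for every feedstock behind upstream."""
    for pkg, current in FEEDSTOCKS.items():
        if Version(latest[pkg]) > Version(current):
            print(f"PR: bump {pkg} {current} -> {latest[pkg]}")

# Running the pair of passes more often shrinks the window in which a hotfix
# released between runs can lag behind the bump that was already opened.
pass_03(pass_02())  # -> PR: bump singularity 2.4.2 -> 2.4.4
```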

Another approach would be to send the PRs from the client side with rever, which would auto-PR into a feedstock upon release.

@CJ-Wright
Member

Currently we're running 3 times a day (with a fourth on the way) and we don't seem to be maxing out our time (so we're bumping everything that can be bumped, with no backlog). Would that help fix things?

@CJ-Wright
Member

@jakirkham I'm going to close this if that's ok. We are currently building at 6am, noon, 6pm, and midnight EST. Short of turning this into a web service, #54, I think this is as good as we are going to get. It seems that the bot is getting through all of the day's updates in at least 12 hours. This is becoming faster as we need to re-render fewer recipes now that #59 is done, so the likelihood that we get through all the recipes in one sitting is better.

Feel free to reopen if this issue happens again.

(Note that if we are in an edge case, we can make another worker to pick up the slack)
