
builder/amazon: future proof RequestLimitExceeded errors #4090

Closed
mwhooker opened this issue Nov 1, 2016 · 6 comments

@mwhooker
Contributor

mwhooker commented Nov 1, 2016

We've seen this a couple times recently.

#4030 in particular.

We should be able to get around it by increasing the polling timeout, but I want to fix it in a way that's more user-friendly.

I propose increasing the default MaxRetries to something like 20 and allowing it to be overridden with an env var.

The default MaxRetries aws-sdk-go gives us is 3, which is worryingly low; we currently set it to 11. The retry logic in the SDK is interesting: after 13 retries it will have waited a worst-case total of ~500 seconds.

If it's a throttling error, it will retry for a worst case of ~5 minutes after 8 retries.
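For illustration, here's a minimal sketch of what that could look like against aws-sdk-go (the AWS_MAX_RETRIES variable name and the default of 20 are assumptions from this proposal, not an existing Packer setting):

```go
package common // e.g. builder/amazon/common

import (
	"os"
	"strconv"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
)

// newSession returns a session with a much higher retry ceiling than
// the SDK default of 3. AWS_MAX_RETRIES is a hypothetical override.
func newSession() (*session.Session, error) {
	maxRetries := 20 // proposed default from this issue
	if v := os.Getenv("AWS_MAX_RETRIES"); v != "" {
		if n, err := strconv.Atoi(v); err == nil {
			maxRetries = n
		}
	}
	return session.NewSession(aws.NewConfig().WithMaxRetries(maxRetries))
}
```

Whatever the final knob looks like, it would need to be applied everywhere we create a session (see the next comment).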

@mwhooker
Contributor Author

mwhooker commented Nov 1, 2016

Not sure, but we might need to hit every place we create a session. I'll be touching all of these in #4093.

@mwhooker mwhooker self-assigned this Nov 3, 2016
@mwhooker mwhooker removed this from the v1.0 milestone Mar 2, 2017
@nburglin

@mwhooker Is this still on your radar? It would be nice to override the max number of retries with an env var.

In our scenario, we have several apps that can each be building multiple "micro-service" AMIs in parallel. At times we'll have 15-20 packer builds running from our CI tool, along with other in-house monitoring and clean-up scripts that use the API for various functions.

It's not uncommon for us to get RequestLimitExceeded errors due to all of this activity, so it would be great to simply increase the number of retries and let the exponential backoff keep going. We've made use of AWS_POLL_DELAY_SECONDS, which has helped a lot, but we still hit the request limit occasionally, especially during busy build times prior to a release cut.

@SwampDragons
Contributor

The issue is still open so we still aspire to do it, but it isn't highly prioritized right now. If you want to take a stab at it, we'd welcome a PR.

@dhs-rec

dhs-rec commented May 30, 2018

Still seeing this issue in 1.2.3:

amazon-ebs: Error creating AMI: RequestLimitExceeded: Request limit exceeded.

It's annoying to run into this when all the provisioning work has already been finished.

We also hit this problem in the Ruby scripts we use for working with AWS. Since the AWS SDK clients use exponential backoff internally, the simple solution was to increase the number of retries from the default of 3 to 20 when initialising the clients. The same could apply here.

The default formula for the exponential backoff is 2^current_retry * 0.3 seconds, which means the time between retries doubles with each retry.
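To make that growth concrete, here's a small sketch (in Go, for consistency with Packer's codebase) that prints the delay schedule the formula implies; note that real SDK retry handlers also add jitter and cap the per-retry delay, which this ignores:

```go
package main

import "fmt"

// Print the wait implied by delay = 0.3s * 2^retry for each retry.
func main() {
	cumulative := 0.0
	for retry := 1; retry <= 10; retry++ {
		delay := 0.3 * float64(uint(1)<<uint(retry))
		cumulative += delay
		fmt.Printf("retry %2d: wait %6.1fs (total %7.1fs)\n", retry, delay, cumulative)
	}
}
```

By retry 10 a single wait is already ~5 minutes, so raising the retry count buys a lot of headroom against throttling.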

@mwhooker
Contributor Author

Thanks @dhs-rec for the note. We're definitely going to figure this out.

The way our waiters work is much different from how it was when this ticket was opened. Because this ticket and its information are obsolete, I'm going to close it and track this in #6177 instead.

@ghost

ghost commented Mar 31, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost ghost locked and limited conversation to collaborators Mar 31, 2020