Cloudfront - Update Distribution - max retries exceeded #296

Closed
fnavarrodev opened this issue Nov 16, 2020 · 11 comments · Fixed by #297
Labels
affects_2.10, bug, has_pr, module, plugins, traceback

Comments

@fnavarrodev
Contributor

fnavarrodev commented Nov 16, 2020

I'm using your module to update the path of an origin for a CloudFront distribution. It was working fine, but now I get this error:

An exception occurred during task execution. To see the full traceback, use -vvv. The error was: 
botocore.exceptions.ClientError: An error occurred (Throttling) when calling the UpdateDistribution operation (reached max retries: 4): Rate exceeded
fatal: [10.63.56.237]: FAILED! => {"boto3_version": "1.16.13", "botocore_version": "1.19.13", "changed": false, "error": {"code": "Throttling", "message": "Rate exceeded"

I contacted AWS Support, but the issue cannot be fixed by increasing any limit; apparently we are calling UpdateDistribution 2 or 3 times in less than one minute. On my side I only make one call, like this:

- name: "Update onorigin_path from cloudfront distribution"
  community.aws.cloudfront_distribution:
    state: present
    distribution_id: "{{ cdn_cloudfront }}"
    origins:
      - id: "{{ cdn_cloudfront_origin_id }}"
        domain_name: "{{ cdn_cloudfront_origin_domain }}"
        origin_path: "/{{ deploy_helper.new_release }}"
    default_cache_behavior:
      target_origin_id: "{{ cdn_cloudfront_origin_id }}"

Any idea how this can be fixed?

ISSUE TYPE
  • Bug Report
ANSIBLE VERSION

2.10

COMPONENT NAME

cloudfront_distribution

@tremble
Contributor

tremble commented Nov 16, 2020

Programmatically, the way around this is to add the AWSRetry decorator to the 'client' calls:
An example of this can be found in:
https://github.com/ansible-collections/amazon.aws/pull/103/files#diff-17496fe80d361b6e7fa1af8cbdefaa96e1003031e89f6b4767fb73171f822c83
(this example calls the variable 'connection' rather than 'client' but it's the same sort of boto3 object)

The reason there are multiple update calls is that boto3 (the library used to connect to AWS) has a very simplistic retry model built in; with busy AWS accounts, however, it's often insufficient.
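
Roughly, the decorator pattern looks like this (just a sketch, not the exact code from that PR; the function name and the retries/delay values are only examples):

from ansible_collections.amazon.aws.plugins.module_utils.ec2 import AWSRetry

# Sketch only: retry the API call with jittered backoff when AWS throttles us.
@AWSRetry.jittered_backoff(retries=10, delay=3)
def _update_distribution(client, config, distribution_id, e_tag):
    # 'client' is the boto3 CloudFront client the module already creates.
    return client.update_distribution(DistributionConfig=config,
                                      Id=distribution_id,
                                      IfMatch=e_tag)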

@fnavarrodev
Contributor Author

Thanks Mark, that's very helpful 👍 Can that decorator be added in my YAML file, or do I need to modify the Python library?

@tremble
Contributor

tremble commented Nov 16, 2020

You need to modify the Ansible Python module (plugins/modules/cloudfront_distribution.py). If you can get it working and open a pull request, I should be able to review it and get it merged.

@fnavarrodev
Contributor Author

Hi Mark, I made the changes here: #297. The retry works, because it now takes much longer to throw the error, but I still get the same error from AWS. Any idea what could be happening?

@tremble
Contributor

tremble commented Nov 16, 2020

It's tough to say. One option is to increase the initial delay (delay=3 in your PR currently); it's also worth making sure you retry on more than just the update call.

The trouble is that most of the time when you're hitting this problem it's because you've got a very busy account. To reduce issues like this we (${DAYJOB}) have been spreading our services across multiple accounts (See also AWS Organisations) and using SAML or Cross-Account trusts for authentication. This has the added benefit of helping to segregate some of the access controls.
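
For example, something along these lines (illustrative numbers and hypothetical helper names only, not code from the PR), so the read call is retried as well as the update:

from ansible_collections.amazon.aws.plugins.module_utils.ec2 import AWSRetry

# Illustrative values: a longer initial delay, shared by every CloudFront call.
retry_decorator = AWSRetry.jittered_backoff(retries=10, delay=10)

@retry_decorator
def _get_distribution_config(client, distribution_id):
    return client.get_distribution_config(Id=distribution_id)

@retry_decorator
def _update_distribution(client, config, distribution_id, e_tag):
    return client.update_distribution(DistributionConfig=config,
                                      Id=distribution_id,
                                      IfMatch=e_tag)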

@fnavarrodev
Contributor Author

Hi Mark, I spoke with AWS support, shared my screen, and looked at the logs with them. Even with jittered_backoff or exponential_backoff implemented, we can still see 2 requests at effectively the same time, within less than one second. The subsequent retries do have the expected gaps in between, so the backoff itself works fine. On our side we will execute "update-distribution" using the AWS CLI from the command shell. I don't know whether the opened PR is still useful, or whether we should look into why there are always 2 calls within less than 2 seconds.

Thank you very much

@tremble
Contributor

tremble commented Nov 16, 2020

In my opinion, PRs going through and adding the retries are valuable. If you're willing to add the extra aws_retry=True parameters, I'd be happy to get them merged...
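
Something like this is what I have in mind (a sketch with placeholder values, assuming the AnsibleAWSModule client wrapper with its retry_decorator argument; 'module' and 'distribution_id' are whatever the plugin already has):

from ansible_collections.amazon.aws.plugins.module_utils.ec2 import AWSRetry

# Attach a retry decorator when building the client...
client = module.client('cloudfront',
                       retry_decorator=AWSRetry.jittered_backoff(retries=10, delay=3))

# ...then opt each call in with aws_retry=True; without it the wrapped client
# behaves like a plain boto3 client and won't retry on throttling.
response = client.get_distribution_config(aws_retry=True, Id=distribution_id)
client.update_distribution(aws_retry=True,
                           DistributionConfig=response['DistributionConfig'],
                           Id=distribution_id,
                           IfMatch=response['ETag'])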

@fnavarrodev
Contributor Author

Sure, I will! I just need to put a few things together first ;)

@ansibullbot

@fnavarrodev: Greetings! Thanks for taking the time to open this issue. In order for the community to handle your issue effectively, we need a bit more information.

Here are the items we could not find in your description:

  • ansible version
  • component name

Please set the description of this issue with this template:
https://raw.githubusercontent.com/ansible/ansible/devel/.github/ISSUE_TEMPLATE/bug_report.md

click here for bot help

@ansibullbot added the affects_2.10, bug, has_pr, needs_info, needs_template, needs_triage and traceback labels Nov 16, 2020
@ansibullbot added the module and plugins labels and removed the needs_info, needs_template and needs_triage labels Nov 16, 2020
@fnavarrodev
Contributor Author

COMPONENT NAME

community.aws.cloudfront_distribution

ANSIBLE VERSION
ansible 2.9.15

softwarefactory-project-zuul bot pushed a commit that referenced this issue Feb 23, 2022
Awsretry/cloudfront distribution

SUMMARY
Adding AWSRetry.exponential_backoff when updating a cloudfront distribution.
Fixes #296
ISSUE TYPE

Bugfix Pull Request

COMPONENT NAME
cloudfront_distribution

Reviewed-by: matej <[email protected]>
Reviewed-by: Mark Chappell <None>
Reviewed-by: Francesc Navarro <[email protected]>
Reviewed-by: Alina Buzachis <None>
Reviewed-by: Markus Bergholz <[email protected]>
patchback bot pushed a commit that referenced this issue Feb 23, 2022
Awsretry/cloudfront distribution
(cherry picked from commit a9c5553)

patchback bot pushed a commit that referenced this issue Feb 23, 2022
Awsretry/cloudfront distribution
(cherry picked from commit a9c5553)

softwarefactory-project-zuul bot pushed a commit that referenced this issue Feb 24, 2022
[PR #297/a9c55535 backport][stable-2] Awsretry/cloudfront distribution
This is a backport of PR #297 as merged into main (a9c5553).

softwarefactory-project-zuul bot pushed a commit that referenced this issue Feb 24, 2022
[PR #297/a9c55535 backport][stable-3] Awsretry/cloudfront distribution
This is a backport of PR #297 as merged into main (a9c5553).
alinabuzachis pushed a commit to alinabuzachis/community.aws that referenced this issue May 25, 2022