ECS Service erroring out due to eventually-consistent IAM role #4375
Your assumption is correct; this is caused by the eventually consistent nature of IAM. We've been fighting with this all over the place, and oddly it seems to be most visible in ECS. I'm not sure whether that means AWS treats ECS differently or whether people simply manipulate this IAM role more often, so the issue surfaces here more visibly. It may also be less visible with EC2 Instance Profiles purely because starting an EC2 instance takes more time (I'm really just guessing). Here is a sample from my debug log supporting the theory:
As you can see above, the error is coming from the Update call. This means the ECS API lets you create a new ECS service at one point because it talks to a part of IAM that already has the IAM policy, but the Update call hits a different part that doesn't have it yet, hence the failure. I've described this problem in depth here: KMS is apparently affected by this too.
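For what it's worth, the workaround most people reach for is an explicit `depends_on` from the ECS service to the role's inline policy, so Terraform at least waits for the policy resource to be created before the service. Below is a minimal sketch in the Terraform syntax of that era; all resource names are hypothetical, and the cluster, task definition, and ELB resources are assumed to exist elsewhere in the configuration. This only narrows the window; because IAM propagation is eventually consistent on AWS's side, some setups still need a retry or a short delay.

```hcl
# Hypothetical names throughout; a sketch of the depends_on workaround.

resource "aws_iam_role" "ecs_service" {
  name = "ecs-service-role"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "ecs.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}
EOF
}

resource "aws_iam_role_policy" "ecs_service" {
  name = "ecs-service-policy"
  role = "${aws_iam_role.ecs_service.id}"

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": [
      "elasticloadbalancing:*",
      "ec2:Describe*"
    ],
    "Resource": "*"
  }]
}
EOF
}

resource "aws_ecs_service" "app" {
  name            = "app"
  cluster         = "${aws_ecs_cluster.main.id}"
  task_definition = "${aws_ecs_task_definition.app.arn}"
  desired_count   = 1
  iam_role        = "${aws_iam_role.ecs_service.arn}"

  load_balancer {
    elb_name       = "${aws_elb.app.name}"
    container_name = "app"
    container_port = 8080
  }

  # Make Terraform wait for the inline policy resource before creating the
  # service. This narrows the IAM propagation race described above, but it
  # cannot fully eliminate eventual consistency on the AWS side.
  depends_on = ["aws_iam_role_policy.ecs_service"]
}
```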
I have a similar issue with a different error, although errors like mine seem to be pointed here. 2016/03/03 17:16:02 [DEBUG] terraform-provider-aws: 2016/03/03 17:16:02 [DEBUG] Trying to create ECS service again: "Unable to assume role and validate the listeners configured on your load balancer. Please verify the role being passed has the proper permissions."
Same problem here. Also with ECS.
aws_autoscaling_group.ecs-cluster: Creation complete
Error applying plan:
1 error(s) occurred:
status code: 400, request id: dca57d15-a4ef-11e5-9749-03d1427e2486
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
After running `terraform apply` again, it passes.