
ResourceInUse for ASG/LaunchConfig - ongoing problems #2438

Closed
cdelorme opened this issue Jun 23, 2015 · 3 comments

Comments

@cdelorme

One error we run into frequently is ResourceInUse when we modify a template used by a launch configuration connected to an auto-scaling group.

In our use-case we're building microservices, and we have 8 sets of the same resources:

  • an elastic load balancer
  • a route53 address
  • a template for userdata
  • a launch configuration
  • an autoscaling group

The only difference between these sets of resources is naming, which means the dependency chains should look identical. They also share a common set of security groups and IAM roles.
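For context, one of the eight sets looks roughly like the following sketch (resource names, AMI, and sizing values here are hypothetical placeholders, not our actual configuration):

```hcl
# Sketch of one service's dependency chain:
# template_file -> launch configuration -> auto-scaling group.
resource "template_file" "example_service" {
  filename = "userdata/example-service.tpl"
}

resource "aws_launch_configuration" "example_service" {
  image_id        = "ami-12345678"
  instance_type   = "t2.micro"
  user_data       = "${template_file.example_service.rendered}"
  security_groups = ["${aws_security_group.services.id}"]
}

resource "aws_autoscaling_group" "example_service" {
  launch_configuration = "${aws_launch_configuration.example_service.name}"
  min_size             = 1
  max_size             = 4
  availability_zones   = ["us-east-1a"]
  load_balancers       = ["${aws_elb.example_service.name}"]
}
```

The other seven sets are identical apart from naming.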

When I modify the userdata loaded into the templates and attempt to apply changes to all 8 services, the error occurs on only some of them, while the rest complete successfully. The errors look like this:

* ResourceInUse: Cannot delete launch configuration terraform-lcmcovfsibf3pa5hfvkglapybi because it is attached to AutoScalingGroup dev-hub-publish-api-asg

After searching the issue tracker, this appears to be an ongoing problem since terraform 0.3.x:

The workaround suggested in April, adding a lifecycle block with create_before_destroy to the launch configuration, works once, but subsequent attempts fail.
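As a sketch of that workaround as we understood it (resource names and attribute values here are illustrative, not our exact configuration): omit the launch configuration's name so Terraform generates a unique one, and set create_before_destroy so the replacement is attached to the ASG before the old one is deleted.

```hcl
resource "aws_launch_configuration" "example_service" {
  # No "name" attribute: Terraform generates a unique name, which is
  # what allows a replacement to coexist with the old configuration.
  image_id      = "ami-12345678"
  instance_type = "t2.micro"
  user_data     = "${template_file.example_service.rendered}"

  lifecycle {
    create_before_destroy = true
  }
}
```

This matches the auto-generated name visible in the error above (terraform-lcmcovfsibf3pa5hfvkglapybi), but as noted, it only helped us on the first apply.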

Our first thought was that the chain of dependencies might be creating the issue, so we added the same lifecycle rule to the auto-scaling group, which gave us a cycle error:

* Cycle: aws_iam_role.services (destroy), aws_iam_role.services, aws_autoscaling_group.hub-message-queue-service, template_file.hub-message-queue-service (destroy), template_file.hub-message-queue-service, aws_launch_configuration.hub-message-queue-service, aws_launch_configuration.hub-message-queue-service (destroy), template_file.document-persistence-service (destroy), template_file.document-persistence-service, aws_launch_configuration.document-persistence-service, aws_autoscaling_group.document-persistence-service, aws_launch_configuration.document-persistence-service (destroy), template_file.hub-publish-api (destroy), template_file.hub-publish-api, aws_launch_configuration.hub-publish-api, aws_autoscaling_group.hub-publish-api, aws_launch_configuration.hub-publish-api (destroy), aws_launch_configuration.hub-www-auth (destroy), aws_launch_configuration.hub-proxy-api, aws_autoscaling_group.hub-proxy-api, aws_launch_configuration.hub-proxy-api (destroy), aws_launch_configuration.mock-auth, aws_autoscaling_group.mock-auth, aws_launch_configuration.mock-auth (destroy), template_file.hub-configuration-ui (destroy), template_file.hub-configuration-ui, aws_launch_configuration.hub-configuration-ui, aws_autoscaling_group.hub-configuration-ui, aws_launch_configuration.hub-configuration-ui (destroy), aws_autoscaling_group.hub-configuration-api, aws_launch_configuration.hub-configuration-api, aws_launch_configuration.hub-configuration-api (destroy), aws_iam_instance_profile.services (destroy), aws_iam_instance_profile.services, aws_launch_configuration.hub-www-auth, aws_autoscaling_group.hub-www-auth

Another option was to remove or disable the lifecycle on each apply attempt, but removing it caused Terraform to attempt to destroy and rebuild the entire environment, which failed with an API timeout:

* Post https://ec2.us-east-1.amazonaws.com/: read tcp 205.251.242.7:443: connection reset by peer

While I do not believe terraform is the correct tool to manage deployed software versions long-term, I do think this is a problem that impedes more than just our use-case.

For reference, here is our services Terraform script.

Here are also two Jenkins builds, which show the behavior failing for only some of the 8 services.

@joekhoobyar
Contributor

It is worth mentioning a few additional things here:

  • After adding lifecycle { create_before_destroy = true } to all of the ASG, LC and template_file(s) for the services, we are able to run terraform apply - but we still cannot run terraform destroy without terraform complaining about cycles.
  • An inability to destroy resources built by terraform would be a complete blocker for us, so this is our highest priority as far as terraform issues go.
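The state described in the first bullet looks roughly like this (names abbreviated and hypothetical): create_before_destroy set on the template, the launch configuration, and the ASG alike.

```hcl
resource "template_file" "example_service" {
  filename  = "userdata/example-service.tpl"
  lifecycle { create_before_destroy = true }
}

resource "aws_launch_configuration" "example_service" {
  image_id      = "ami-12345678"
  instance_type = "t2.micro"
  user_data     = "${template_file.example_service.rendered}"
  lifecycle { create_before_destroy = true }
}

resource "aws_autoscaling_group" "example_service" {
  launch_configuration = "${aws_launch_configuration.example_service.name}"
  min_size             = 1
  max_size             = 4
  availability_zones   = ["us-east-1a"]
  lifecycle { create_before_destroy = true }
}
```

With this in place terraform apply succeeds, but terraform destroy still reports cycles.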

@mitchellh
Contributor

I think this is still a dup of #1109 and we'd like to consolidate issues. @joekhoobyar's destroy issue is open as a separate issue #2493.

@ghost

ghost commented May 1, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost ghost locked and limited conversation to collaborators May 1, 2020