
Cannot delete launch configuration because it is attached to AutoScalingGroup #532

Closed
kief opened this issue Oct 31, 2014 · 39 comments

Comments

@kief

kief commented Oct 31, 2014

Using Terraform v0.3.1, when I change the AMI that my launch configuration depends on, the apply fails because of the reference to the autoscaling group.

Here's the relevant section of my configuration:

resource "aws_launch_configuration" "go_agent" {
  name = "go_agent"
  image_id = "${lookup(var.amis, var.region)}"
  instance_type = "t2.small"
  key_name = "${var.key_name}"
}

resource "aws_autoscaling_group" "go_agent_pool" {
  availability_zones = ["eu-west-1a", "eu-west-1b", "eu-west-1c"]
  vpc_zone_identifier = ["${aws_subnet.agentsInZoneA.id}","${aws_subnet.agentsInZoneB.id}","${aws_subnet.agentsInZoneC.id}"]
  name = "go_agent_pool"
  max_size = 3
  min_size = 0
  health_check_grace_period = 300
  health_check_type = "ELB"
  desired_capacity = 0
  force_delete = true
  launch_configuration = "${aws_launch_configuration.go_agent.name}"
}

Here's the result of "terraform apply":

$ terraform apply
aws_launch_configuration.go_agent: Refreshing state... (ID: go_agent)
aws_vpc.gocd: Refreshing state... (ID: vpc-d331f0b6)
aws_subnet.agentsInZoneC: Refreshing state... (ID: subnet-18956441)
aws_subnet.agentsInZoneB: Refreshing state... (ID: subnet-045e8c73)
aws_subnet.agentsInZoneA: Refreshing state... (ID: subnet-9377c3f6)
aws_subnet.go_server: Refreshing state... (ID: subnet-1b956442)
aws_security_group.go_server: Refreshing state... (ID: sg-0274cc67)
aws_internet_gateway.gocd: Refreshing state... (ID: igw-2fe50e4a)
aws_autoscaling_group.go_agent_pool: Refreshing state... (ID: go_agent_pool)
aws_route_table.gocd: Refreshing state... (ID: rtb-f827ec9d)
aws_instance.go_server: Refreshing state... (ID: i-51455513)
aws_route_table_association.go_server: Refreshing state... (ID: rtbassoc-a84497cd)
aws_eip.go_server_public_ip: Refreshing state... (ID: eipalloc-40a54a25)
aws_launch_configuration.go_agent: Destroying...
aws_launch_configuration.go_agent: Error: ResourceInUse: Cannot delete launch configuration go_agent because it is attached to AutoScalingGroup go_agent_pool
Error applying plan:

1 error(s) occurred:

* ResourceInUse: Cannot delete launch configuration go_agent because it is attached to AutoScalingGroup go_agent_pool

Thanks,
Kief

@serialx

serialx commented Nov 2, 2014

I am having the same issue. Even changing a simple "user_data" value triggers this problem.

@motdotla

I am also running into this. For now, I create a new launch configuration, apply, then remove the old launch configuration, and apply.
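
A minimal sketch of that two-step rotation against the config from the original report (the go_agent_v2 name is a hypothetical placeholder):

# Step 1: add the replacement alongside the original, point the ASG at it,
# and apply.
resource "aws_launch_configuration" "go_agent_v2" {
  name          = "go_agent_v2"
  image_id      = "${lookup(var.amis, var.region)}"  # now resolves to the new AMI
  instance_type = "t2.small"
  key_name      = "${var.key_name}"
}

# In aws_autoscaling_group.go_agent_pool:
#   launch_configuration = "${aws_launch_configuration.go_agent_v2.name}"

# Step 2: once the apply succeeds, delete the old
# aws_launch_configuration.go_agent block and apply again.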

@gwilym

gwilym commented Mar 5, 2015

+1

Ran into this within minutes of trying out Terraform for the first time, while trying to change the iam_instance_profile on a launch configuration.

Like @motdotla, I'm able to work around it with small incremental changes, but it would be problematic if Terraform were part of an automated continuous-integration pipeline.

@phrawzty
Contributor

phrawzty commented Mar 5, 2015

👍 We hit this as well, and like @gwilym above, it's a blocker for integrating Terraform into our CI pipeline. 🙍

@catsby catsby added the bug label Mar 27, 2015
@ketzacoatl
Contributor

+1

@willejs

willejs commented Apr 1, 2015

👍 I'm also experiencing this bug. I think #1109 is a duplicate.

@phinze phinze closed this as completed in 09f5935 Apr 2, 2015
@ketzacoatl
Contributor

exciting, thanks!

mikkoc pushed a commit to alphagov/paas-alpha-tsuru-terraform that referenced this issue Apr 29, 2015
Up until now we were not able to update a launch configuration (e.g. AMI, user data, instance type)
due to this bug: hashicorp/terraform#532
By removing the launch configuration name, we let Terraform generate a random one for us.
This will help when updating launch configurations (which involves creating a new resource and deleting the old one).
@nickm4062

This seems to be present in 0.7.4

@philippevk

I can confirm that this is present in 0.7.4

@dang3r

dang3r commented Oct 20, 2016

Present in 0.7.7 as well

@myoung34
Contributor

myoung34 commented Nov 7, 2016

Present in 0.7.9.

Modified the launch config user_data and ran apply:

Error applying plan:

1 error(s) occurred:

* aws_launch_configuration.ecs: ResourceInUse: Cannot delete launch configuration ECS health-staging-old because it is attached to AutoScalingGroup ECS health-staging-old
        status code: 400, request id: a0dcc293-a514-11e6-9498-13c53b8ef17a

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.

@jason-riddle
Contributor

Any update on this? Just bumped into this problem as well.

@elghazal-a

Still present in Terraform v0.7.11 when trying to update launch config user_data.

@michael-henderson

Is there any plan to fix this? This bug has been around for 2+ years :/

@rumenvasilev

Guys, AWS won't allow you to delete an active launch configuration. The fix for this (not really a bug) is simply to add a "create_before_destroy" lifecycle rule to the launch configuration block (https://www.terraform.io/docs/configuration/resources.html#lifecycle).
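
In config terms, a minimal sketch against the resources from the original report (note the fixed "name" must be dropped, since create_before_destroy builds the replacement while the old LC still exists and an identical name would collide):

resource "aws_launch_configuration" "go_agent" {
  # "name" omitted so Terraform generates a unique one for each replacement.
  image_id      = "${lookup(var.amis, var.region)}"
  instance_type = "t2.small"
  key_name      = "${var.key_name}"

  lifecycle {
    create_before_destroy = true
  }
}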

@michael-henderson

Aha! All it takes is for someone to post the answer, rather than just sweep it under the rug :) Thanks a ton @rumenvasilev, worked great!

@myoung34
Contributor

@rumenvasilev that's the kind of information we were really looking for anyway!

@ketzacoatl
Contributor

I would vote we close this issue, any takers?

@james-gonzalez

james-gonzalez commented Dec 7, 2016

lifecycle {
  create_before_destroy = true
}

I've got this in my resource aws_launch_configuration and I still get this error message. I'm using 0.8.0-rc2

Error:

ResourceInUse: Cannot delete launch configuration

@jrslv

jrslv commented Dec 15, 2016

Still present in 0.8.0

@nathanielks
Contributor

Same for 0.8.2. Adding create_before_destroy unfortunately doesn't work when trying to just destroy.

@SantoDE

SantoDE commented Jan 11, 2017

Bump. Still present.

@omar-yassin

Still present in 0.8.3 when I'm only updating my launch config user_data script file.

@michael-henderson

I'm not sure if people have tried this method on this thread, but it is working for me, so I figured I would post.

  1. Do not name your launch config; let Terraform name it automatically so the name is computed. Just delete the entire "name" param inside your launch config block.
  2. As mentioned above, add lifecycle { create_before_destroy = true } inside your launch config AND your ASG block.
  3. Not sure if this part matters, but I have the following inside my ASG block: depends_on = ["aws_launch_configuration.<launchconfigname>"]
  4. Name your ASG, including the generated launch config name, like so:
    name = "app-asg-${aws_launch_configuration.<launchconfigname>.name}"

Also note, if you include wait_for_elb_capacity = "${var.asg_desired}", your ASG will wait for that number of healthy hosts to show up BEFORE rotating out your old AMIs. Hope this helps (the four steps are pulled together in the sketch below).
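
A sketch of those four steps combined (the "app" resource names and the var.* values are hypothetical placeholders):

resource "aws_launch_configuration" "app" {
  # Step 1: no "name" attribute, so Terraform computes a unique one.
  image_id      = "${var.ami}"
  instance_type = "t2.small"

  lifecycle {
    create_before_destroy = true  # Step 2
  }
}

resource "aws_autoscaling_group" "app" {
  # Step 4: embedding the computed LC name forces the ASG itself to be
  # replaced whenever the launch configuration is replaced.
  name                  = "app-asg-${aws_launch_configuration.app.name}"
  launch_configuration  = "${aws_launch_configuration.app.name}"
  vpc_zone_identifier   = ["${var.subnet_id}"]
  load_balancers        = ["${var.elb_name}"]
  min_size              = "${var.asg_min}"
  max_size              = "${var.asg_max}"
  desired_capacity      = "${var.asg_desired}"
  wait_for_elb_capacity = "${var.asg_desired}"  # wait for healthy hosts before
                                                # the old ASG is torn down

  depends_on = ["aws_launch_configuration.app"]  # Step 3

  lifecycle {
    create_before_destroy = true  # Step 2
  }
}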

@gburson

gburson commented Jan 28, 2017

Thanks Michael - great tip, works for me. I guess the only knock-on effect of this is that you get a new load balancer each time you update your launch config, which means you potentially have to update your DNS and make sure nothing breaks while that's propagated.

@gburson

gburson commented Jan 28, 2017

I managed to achieve the right result without step 4, which avoids the ASG being recreated each time. So effectively I just used Michael's steps 1 & 2 and all is good!

@joelittlejohn

joelittlejohn commented Feb 7, 2017

I ran into this problem even though I use lifecycle { create_before_destroy = true } on both the ASG and the launch configuration. When deleting an ASG and launch configuration, I see Terraform deleting the autoscaling group first and waiting for it to be destroyed:

module.foo.aws_autoscaling_group.foo: Still destroying... (10s elapsed)
module.foo.aws_autoscaling_group.foo: Still destroying... (20s elapsed)
module.foo.aws_autoscaling_group.foo: Still destroying... (30s elapsed)
module.foo.aws_autoscaling_group.foo: Still destroying... (40s elapsed)
module.foo.aws_autoscaling_group.foo: Still destroying... (50s elapsed)
module.foo.aws_autoscaling_group.foo: Still destroying... (1m0s elapsed)
module.foo.aws_autoscaling_group.foo: Still destroying... (1m10s elapsed)
module.foo.aws_autoscaling_group.foo: Still destroying... (1m20s elapsed)
module.foo.aws_autoscaling_group.foo: Still destroying... (1m30s elapsed)
module.foo.aws_autoscaling_group.foo: Still destroying... (1m40s elapsed)
module.foo.aws_autoscaling_group.foo: Destruction complete

terraform then immediately goes on to delete the launch configuration, but the deletion fails because AWS still complains that the launch configuration is associated with an ASG. If I then run terraform apply again, the deletion succeeds.

I think this is an eventual consistency problem in AWS. Although the ASG is deleted first, it appears we need to wait a few seconds before attempting to delete the launch configuration.

@setevoy2

setevoy2 commented Feb 17, 2017

Since we still had the issue with 0.8.7 even after updating, we made a "fix" :-|
Terraform is called from a Groovy script in Jenkins, so we just wrapped it in try/catch:

        ...
        sh 'cd terraform && terraform get'

        try {
            sh "cd terraform && terraform apply \
              -var 'environment=${ENVIRONMENT}' \
               ...
              -var 'max_size=8'"
        } catch (Exception e) {
            // Swallow the ResourceInUse failure: the launch config change
            // has already been applied to the ASG by this point.
            return 0
        }
        ...

It's "ok" for us, as Terraform made changes in ASG's launch config setting before fail, and this is all we need to deploy an application.

Hope this will be fixed soon in a correct way.

@mudrii

mudrii commented Apr 18, 2017

The example below works for me:

resource "aws_launch_configuration" "api_dev_front" {
  // name = "api_dev_front"
  image_id                    = "ami-ANY_AMY"
  instance_type               = "t2.micro"
  security_groups             = ["${var.api_dev_sec_gr_front}"]
  user_data                   = "${file("./minin_data.sh")}"
  key_name                    = "${var.api_dev_key_pair_minin}"
  iam_instance_profile        = "${var.api_dev_iam_minin_inst_prof}"
  associate_public_ip_address = false
  enable_monitoring           = true

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "api_dev_front" {
  vpc_zone_identifier  = ["${var.api_dev_ext_subnet_ids}"]
  // name = "api_dev_front"
  max_size             = "4"
  min_size             = "1"
  desired_capacity     = "1"
  health_check_type    = "ELB"
  force_delete         = true
  launch_configuration = "${aws_launch_configuration.api_dev_front.name}"
  target_group_arns    = ["${var.api_dev_alb_target_gr_arn}"]

  lifecycle {
    create_before_destroy = true
  }
}

@zerolaser

@michael-henderson If you do not name your launch configuration, it's hard to identify it in the console. So to find the name of the launch configuration, do you need to navigate via the ASG?

@stefanwork

It seems that if you have an autoscaling group that uses name, and you later transition it to use name_prefix while adding name to ignore_changes, you also run into this whenever something in the launch configuration changes.

I did this in a module, and I didn't want every user of the module to have to re-create their ASGs; I wanted the name_prefix change to take effect only on ASGs created after I made it.

I had to revert and instead add another variable where the user can supply a random suffix that gets appended to the ASG name. Quite annoying, but less annoying than having Terraform fail a lot.
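
If I read that scenario right, the problematic combination looks roughly like this sketch (hypothetical names; the ignored name pins the existing ASG in place, so it is never replaced when the launch configuration changes and the old LC stays attached):

resource "aws_autoscaling_group" "app" {
  # Previously: name = "app-asg". Switched to name_prefix so newly
  # created ASGs get generated names...
  name_prefix          = "app-asg-"
  launch_configuration = "${aws_launch_configuration.app.name}"
  vpc_zone_identifier  = ["${var.subnet_id}"]
  min_size             = 1
  max_size             = 4

  lifecycle {
    create_before_destroy = true
    # ...while ignoring "name" so existing ASGs are not re-created,
    # which is exactly what leaves the old launch configuration attached.
    ignore_changes = ["name"]
  }
}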

@hubertgrzeskowiak

Not sure if this is related, but when I change some of my launch configurations, the ASGs are not recreated, even though they're using the respective LC's name.

@joelittlejohn

@hubertgrzeskowiak when you change a launch configuration, the ASGs that use it will be updated to use it. They will not be recreated. Instances in the ASG will remain in place but any new instance launched will use the new launch configuration.

@joshma

joshma commented Dec 27, 2017

@joelittlejohn Curious, were you able to resolve this? I'm also seeing the same thing: Terraform attempts to delete the launch config, and checking manually shows it should be able to, so I'm guessing it's an eventual consistency problem. When I re-plan and apply the changes, with nothing else changed, it's able to "depose" the old launch configurations.

I wonder if there's some way for terraform to "sleep" a few seconds or just retry?

@hubertgrzeskowiak

@joelittlejohn That's why I am using the LC's name as part of the ASG's name; the implicit dependency should force it to be re-created.

@lisp-ceo

Is there any further guidance for this? I have custom logic to sleep during the Terraform run and then remove the oldest LC matching each host configuration.

@wilbur04

Change your launch_configuration to use name_prefix instead of name so there are no name conflicts, and add:

  lifecycle {
    create_before_destroy = true
  }
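
Combined, a minimal sketch (the "app" names and var.ami are hypothetical):

resource "aws_launch_configuration" "app" {
  # name_prefix gives every replacement a fresh, non-conflicting name.
  name_prefix   = "app-lc-"
  image_id      = "${var.ami}"
  instance_type = "t2.small"

  lifecycle {
    create_before_destroy = true
  }
}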

@line0

line0 commented Jul 30, 2018

This still doesn't work at all as of Terraform v0.11.7 despite using name_prefix for the lc as well as create_before_destroy for both lc and asg.

I'm seeing the exact same behavior as @joshma - guaranteed to happen every single time.

> Terraform attempts to delete the launch config, and checking manually shows it should be able to, so I'm guessing it's an eventual consistency problem. When I re-plan and apply the changes, with nothing else changed, it's able to "depose" the old launch configurations.

@phinze could you please reopen this and have another look?

@mildwonkey
Contributor

Hi all,

Issues with the terraform AWS provider should be opened in the aws provider repository.

Because this closed issue is generating notifications for subscribers, I am going to lock it and encourage anyone experiencing issues with the aws provider to open tickets there.

Please continue to open issues here for any other terraform issues you encounter, and thanks!

@hashicorp hashicorp locked and limited conversation to collaborators Jul 30, 2018