
Destroy cycle error with module outputs/inputs #1835

Closed
lamdor opened this issue May 6, 2015 · 5 comments · Fixed by #1855
lamdor (Contributor) commented May 6, 2015

Hey all, I was testing out some of the newest hotness, the module flattening from #1781 (via #1582), against our stack, and ran into another little snag with destroy.

This is on terraform master

$ terraform version
Terraform v0.5.0-dev (cebcee5c638630c4792eb3ed1bccf50b560ecec6)

I've been able to reduce it to a small example. It happens when a resource's attribute is exposed as a module output and fed into another module as an input that is actually used.

Given:
main.tf

module "a_module" {
  source = "./a_module"
}

module "b_module" {
  source = "./b_module"
  a_id = "${module.a_module.a_output}"
}

a_module/main.tf

resource "null_resource" "a" {
}

output "a_output" {
  value = "${null_resource.a.id}"
}

b_module/main.tf

variable "a_id" {}

resource "null_resource" "b" {
  provisioner "local-exec" {
    command = "echo ${var.a_id}"
  }
}

A terraform plan and apply both work fine.

$ terraform apply
module.a_module.null_resource.a: Creating...
module.a_module.null_resource.a: Creation complete
module.b_module.null_resource.b: Creating...
module.b_module.null_resource.b: Provisioning with 'local-exec'...
module.b_module.null_resource.b (local-exec): Executing: /bin/sh -c "echo 3631129357555948569"
module.b_module.null_resource.b (local-exec): 3631129357555948569
module.b_module.null_resource.b: Creation complete

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.

State path: terraform.tfstate

However, when planning the destroy, it fails with a cycle, and I'm not sure why.

$ terraform plan --destroy
Refreshing Terraform state prior to plan...

module.a_module.null_resource.a: Refreshing state... (ID: 3631129357555948569)
module.b_module.null_resource.b: Refreshing state... (ID: 654973968961742965)

Error running plan: 1 error(s) occurred:

* Cycle: module.b_module.var.a_id, module.b_module.null_resource.b (destroy), module.a_module.null_resource.a (destroy), module.a_module.null_resource.a, module.a_module.output.a_output

Any ideas? Thanks again guys.

lamdor changed the title from "Destroy cycle error with module outputs/intpus" to "Destroy cycle error with module outputs/inputs" on May 6, 2015
phinze (Contributor) commented May 6, 2015

Thanks for the detailed report - we'll get this looked at.

mitchellh (Contributor) commented

Fantastic bug report, I have it reproduced in a unit test case and I have a fix in mind. But first I'm going to eat dinner.

Suspense

mitchellh (Contributor) commented

I thought I had a fix, but this is turning out to be more complicated than I imagined; see #1842 for more details on why. I have to think about this more.

lamdor (Contributor, Author) commented May 20, 2015

So #1855 didn't fix the original cycle in our own code, but it did fix the example I gave.

Digging into it more, it seems our cycle was caused by a provisioner not being defined in the root module. We have some "special" provisioners that use variables from created resources. Once we moved all the provisioners we could to the root, the cycle on destroy ceased to exist.
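Roughly, a sketch of the shape of the change; the names here are hypothetical stand-ins, since our actual config isn't shown:

# Before (hypothetical names): the provisioner lives inside a child module
# and interpolates a variable fed from another module's output, which is
# the kind of cross-module edge that gave us the destroy cycle.

# inner_module/main.tf
variable "other_id" {}

resource "null_resource" "worker" {
  provisioner "local-exec" {
    command = "echo ${var.other_id}"
  }
}

# After (hypothetical names): the child module just creates the resource,
# and the provisioner moves onto a root-level resource instead, reusing the
# a_module/a_output wiring from the example above.

# inner_module/main.tf
resource "null_resource" "worker" {}

# main.tf (root)
resource "null_resource" "worker_provision" {
  provisioner "local-exec" {
    command = "echo ${module.a_module.a_output}"
  }
}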

Thanks for all your hard work on this @mitchellh

ghost commented May 2, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

ghost locked and limited conversation to collaborators on May 2, 2020