
ignoring user_data changes silently recreates aws_instances because of user_data changes #6296

Closed
nkonopinski opened this issue Apr 22, 2016 · 2 comments

Comments

@nkonopinski

Terraform Version

Terraform v0.6.14
Terraform v0.6.15-dev (4345c08)

Affected Resource(s)

  • aws_instance

Terraform Configuration Files

resource "aws_instance" "web" {
lifecycle {
ignore_changes = ["user_data"]
}

Expected Behavior

Terraform should respect ignore_changes for user_data and not re-create instances when the user_data value changes.

Actual Behavior

terraform plan reports a new hash for user_data on every aws_instance, which forces new resources even though no changes have been made to the user_data template. The final line of the plan output says it will add and destroy as many aws_instances as are currently running.

As a workaround for this bug, I added ignore_changes for user_data to the aws_instances in main.tf. Now terraform plan no longer reports any differences for user_data on the aws_instances; in fact it does not even list the aws_instances in the plan output. It does, however, still report the same number of resources to add and destroy on the last line of the plan. As a safeguard against anything destructive, I also added prevent_destroy = true to the aws_instances. Now when I run plan I receive "the plan would destroy this resource" errors for all aws_instances. A sketch of the workaround configuration is shown below.
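For reference, the workaround described above looks roughly like this (a minimal sketch only; the ami, instance_type, and user_data values are placeholders and not taken from this report):

resource "aws_instance" "web" {
  # Placeholder values for illustration; the real configuration is not shown in this issue.
  ami           = "ami-12345678"
  instance_type = "t2.micro"
  user_data     = "${file("user_data.sh")}"

  lifecycle {
    # Workaround: ignore diffs in user_data so plan does not force replacement.
    ignore_changes = ["user_data"]

    # Safeguard: fail the plan instead of silently destroying the instance.
    prevent_destroy = true
  }
}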

Important Factoids

The state files were originally created with an older version of Terraform. I don't remember the exact version, but it was probably around 0.6.8.

@stack72
Contributor

stack72 commented Apr 22, 2016

Hi @nkonopinski

Thanks so much for the bug report. Unfortunately, we are already tracking this issue, as it has bitten a few people. I am going to link this to the main issue we opened to track its progress and then close this one as a duplicate. The details here will certainly help.

Thanks again

Paul

@ghost

ghost commented Apr 26, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost ghost locked and limited conversation to collaborators Apr 26, 2020