Diffs when there shouldn't be... #6832

Closed
ethanfrey opened this issue May 23, 2016 · 4 comments

Comments

@ethanfrey

First off, thank you for this tool. It is pretty awesome to see AWS networks going up and down at the flick of a keystroke :) However, between the smiles, there are some small frustrations. In this case, it detects a diff when there is none, and modifies or destroys and recreates resources.

Oh, and this is version 0.6.15...

Running terraform plan on a relatively simple network I just created gives me the following: it destroys one route to create another with the exact same definition (?)

~ aws_route_table.private-b
    route.1728511137.cidr_block:                "" => "0.0.0.0/0"
    route.1728511137.gateway_id:                "" => "nat-09948108bdc1da199"
    route.1728511137.instance_id:               "" => ""
    route.1728511137.nat_gateway_id:            "" => ""
    route.1728511137.network_interface_id:      "" => ""
    route.1728511137.vpc_peering_connection_id: "" => ""
    route.3968849473.cidr_block:                "0.0.0.0/0" => ""
    route.3968849473.gateway_id:                "" => ""
    route.3968849473.instance_id:               "" => ""
    route.3968849473.nat_gateway_id:            "nat-09948108bdc1da199" => ""
    route.3968849473.network_interface_id:      "" => ""
    route.3968849473.vpc_peering_connection_id: "" => ""

Also, a minor annoyance but a constant warning (boolean/int confusion):

~ module.mongo.mongo-conf.aws_instance.mongo-config.1
    associate_public_ip_address: "false" => "0"
    source_dest_check:           "true" => "1"

So far just a bit annoying, but without the following block, it tears down and recreates my jump server (actually, all instances) on each apply.

    lifecycle {
        # work around a terraform bug:
        ignore_changes = ["security_groups"]
    }

It does so with the following message, even though in the AWS console it is clear that the jump server is already in the correct security group:

-/+ aws_instance.jump
    ami:                         "ami-f9a62c8a" => "ami-f9a62c8a"
    associate_public_ip_address: "true" => "1"
    availability_zone:           "eu-west-1a" => "<computed>"
    ebs_block_device.#:          "0" => "<computed>"
    ephemeral_block_device.#:    "0" => "<computed>"
    instance_state:              "running" => "<computed>"
    instance_type:               "t2.micro" => "t2.micro"
    key_name:                    "ethan-f-ire" => "ethan-f-ire"
    placement_group:             "" => "<computed>"
    private_dns:                 "ip-10-0-1-38.eu-west-1.compute.internal" => "<computed>"
    private_ip:                  "10.0.1.38" => "<computed>"
    public_dns:                  "ec2-54-194-34-77.eu-west-1.compute.amazonaws.com" => "<computed>"
    public_ip:                   "54.194.34.77" => "<computed>"
    root_block_device.#:         "1" => "<computed>"
    security_groups.#:           "0" => "1" (forces new resource)
    security_groups.1628629749:  "" => "sg-d9ce60be" (forces new resource)
    source_dest_check:           "false" => "0"
    subnet_id:                   "subnet-f1438987" => "subnet-f1438987"
    tags.#:                      "1" => "1"
    tags.Name:                   "VPC Jump" => "VPC Jump"
    tenancy:                     "default" => "<computed>"
    vpc_security_group_ids.#:    "1" => "<computed>"

Maybe these are different issues clumped together; please let me know what you need for more info. I still need to extract the network code, but here is the relevant code for the jump server:

resource "aws_security_group" "jump" {
    name = "vpc_jump"
    description = "Allow 22 to host in VPC"


    ingress {
        from_port = 22
        to_port = 22
        protocol = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
    }

    egress {
        from_port = 22
        to_port = 22
        protocol = "tcp"
        cidr_blocks = ["${var.vpc_cidr}"]
    }

    vpc_id = "${aws_vpc.company.id}"

    tags {
        Name = "JUMPSG"
    }
}

module "jump_ami" {
  source        = "github.com/terraform-community-modules/tf_aws_ubuntu_ami/ebs"
  instance_type = "t2.micro"
  region        = "${var.aws_region}"
  distribution  = "trusty"
}

resource "aws_instance" "jump" {
    ami = "${module.jump_ami.ami_id}"
    instance_type = "t2.micro"
    key_name = "${var.aws_key_name}"
    security_groups = ["${aws_security_group.jump.id}"]
    subnet_id = "${aws_subnet.public.id}"
    associate_public_ip_address = true
    source_dest_check = false

    tags {
        Name = "VPC Jump"
    }

    lifecycle {
        # work around a terraform bug:
#        ignore_changes = ["security_groups"]
    }
}
@koenijn
Contributor

koenijn commented May 24, 2016

We have been seeing the same problems since moving to version 0.6.16. A lot of resources get recreated even though no actual changes have taken place.

Once the state has been refreshed with version 0.6.16, we are unable to prevent the unneeded changes from applying.

Switching back to 0.6.14 doesn't help after the state has been refreshed.

@ethanfrey
Author

Btw, I have a workaround that stops all tear-down of resources for now. The network changes still happen, but they are mostly harmless (compared to restarting the servers). Here is the "magic block" I paste into all AWS resources. Maybe there is another workaround for the network resources?

lifecycle {
    ignore_changes = ["security_groups", "associate_public_ip_address", "source_dest_check", "ebs_optimized"]
}

Good to hear I am not alone and that this is a recent regression, as I didn't see it the first time I evaluated Terraform. But I had to upgrade from 0.6.14, as 0.6.15 contained a bugfix for grandchildren modules.

@catsby
Contributor

catsby commented May 24, 2016

Hello friends –

Regarding the AWS instance: this is likely due to a regression around using security_groups when in a VPC, rather than vpc_security_group_ids, introduced in v0.6.16. Another issue reported this as well.

security_groups is meant only for EC2-Classic; any change there forces a destroy-and-recreate cycle, and the regression incorrectly introduced such diffs. You can get around this by correcting the configuration to use vpc_security_group_ids. We've patched the regression for now, but we'll likely go back to enforcing this for instances in a VPC, with a more explicit note in the CHANGELOG.
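
As a sketch of that correction against the jump-server config above (same resource and variable names as in the original config; untested here), the instance would reference the group via vpc_security_group_ids instead:

resource "aws_instance" "jump" {
    ami = "${module.jump_ami.ami_id}"
    instance_type = "t2.micro"
    key_name = "${var.aws_key_name}"

    # In a VPC, reference security groups here instead of in security_groups:
    vpc_security_group_ids = ["${aws_security_group.jump.id}"]

    subnet_id = "${aws_subnet.public.id}"
    associate_public_ip_address = true
    source_dest_check = false

    tags {
        Name = "VPC Jump"
    }
}

With vpc_security_group_ids, a change in group membership should apply as an in-place update rather than forcing a new instance.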

Regarding the bool -> int confusion, a recently merged PR may address that.

Finally, for your aws_route_table, it looks like you're specifying a NAT gateway ID in the gateway_id attribute:

~ aws_route_table.private-b
    route.1728511137.gateway_id:                "" => "nat-09948108bdc1da199"
    route.1728511137.nat_gateway_id:            "" => ""
    route.3968849473.gateway_id:                "" => ""
    route.3968849473.nat_gateway_id:            "nat-09948108bdc1da199" => ""

Unfortunately, it seems that the AWS API will accept a NAT ID as the gateway ID and silently correct it for you, which leaves the state out of sync with your configuration. In your case, I imagine you have the value in gateway_id where you should have it in nat_gateway_id.
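
Since the route table config wasn't posted, here is a hypothetical sketch of the fix; aws_vpc.company is taken from the config above, while the aws_nat_gateway resource name is assumed:

resource "aws_route_table" "private-b" {
    vpc_id = "${aws_vpc.company.id}"

    route {
        cidr_block = "0.0.0.0/0"
        # NAT gateway IDs belong in nat_gateway_id, not gateway_id:
        nat_gateway_id = "${aws_nat_gateway.b.id}"   # hypothetical resource name
    }
}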

Let me know if you're still having issues after these suggestions. Thanks!

@ghost

ghost commented Apr 25, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost locked and limited conversation to collaborators Apr 25, 2020