
aws_security_group always think there is a change #10099

Closed
kiich opened this issue Nov 14, 2016 · 6 comments

Comments


kiich commented Nov 14, 2016

Terraform Version

Terraform v0.7.10

Affected Resource(s)

  • aws_security_group

Terraform Configuration Files

resource "aws_security_group" "test" {
  name        = "TF"
  description = "TF security group to show issue"
  vpc_id      = "vpc-12345678"

  ingress {
    from_port = 0
    to_port   = 0
    protocol  = "-1"
    self      = true
  }

  ingress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["91.232.36.10/32"]
  }

  ingress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["192.168.0.0/16"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

Expected Behavior

terraform apply creates the resource;
then, immediately afterwards, terraform plan should show no changes.

Actual Behavior

terraform apply shows:

  egress.#:                             "" => "1"
  egress.482069346.cidr_blocks.#:       "" => "1"
  egress.482069346.cidr_blocks.0:       "" => "0.0.0.0/0"
  egress.482069346.from_port:           "" => "0"
  egress.482069346.prefix_list_ids.#:   "" => "0"
  egress.482069346.protocol:            "" => "-1"
  egress.482069346.security_groups.#:   "" => "0"
  egress.482069346.self:                "" => "false"
  egress.482069346.to_port:             "" => "0"
  ingress.#:                            "" => "3"
  ingress.3032708741.cidr_blocks.#:     "" => "1"
  ingress.3032708741.cidr_blocks.0:     "" => "91.232.36.10/32"
  ingress.3032708741.from_port:         "" => "0"
  ingress.3032708741.protocol:          "" => "-1"
  ingress.3032708741.security_groups.#: "" => "0"
  ingress.3032708741.self:              "" => "false"
  ingress.3032708741.to_port:           "" => "0"
  ingress.3308487977.cidr_blocks.#:     "" => "1"
  ingress.3308487977.cidr_blocks.0:     "" => "192.168.0.0/16"
  ingress.3308487977.from_port:         "" => "0"
  ingress.3308487977.protocol:          "" => "-1"
  ingress.3308487977.security_groups.#: "" => "0"
  ingress.3308487977.self:              "" => "false"
  ingress.3308487977.to_port:           "" => "0"
  ingress.753360330.cidr_blocks.#:      "" => "0"
  ingress.753360330.from_port:          "" => "0"
  ingress.753360330.protocol:           "" => "-1"
  ingress.753360330.security_groups.#:  "" => "0"
  ingress.753360330.self:               "" => "true"
  ingress.753360330.to_port:            "" => "0"

Then, immediately after the successful apply, terraform plan still shows changes, EVEN with no changes to any of the files:

    ingress.#:                            "2" => "3"
    ingress.1778472056.cidr_blocks.#:     "0" => "1"
    ingress.1778472056.cidr_blocks.0:     "" => "192.168.0.0/16"
    ingress.1778472056.from_port:         "" => "0"
    ingress.1778472056.protocol:          "" => "-1"
    ingress.1778472056.security_groups.#: "0" => "0"
    ingress.1778472056.self:              "" => "false"
    ingress.1778472056.to_port:           "" => "0"
    ingress.3032708741.cidr_blocks.#:     "0" => "1"
    ingress.3032708741.cidr_blocks.0:     "" => "91.232.36.10/32"
    ingress.3032708741.from_port:         "" => "0"
    ingress.3032708741.protocol:          "" => "-1"
    ingress.3032708741.security_groups.#: "0" => "0"
    ingress.3032708741.self:              "" => "false"
    ingress.3032708741.to_port:           "" => "0"
    ingress.3688512951.cidr_blocks.#:     "2" => "0"
    ingress.3688512951.cidr_blocks.0:     "91.232.36.10/32" => ""
    ingress.3688512951.cidr_blocks.1:     "192.168.0.0/16" => ""
    ingress.3688512951.from_port:         "0" => "0"
    ingress.3688512951.protocol:          "-1" => ""
    ingress.3688512951.security_groups.#: "0" => "0"
    ingress.3688512951.self:              "false" => "false"
    ingress.3688512951.to_port:           "0" => "0"
    ingress.753360330.cidr_blocks.#:      "0" => "0"
    ingress.753360330.from_port:          "0" => "0"
    ingress.753360330.protocol:           "-1" => "-1"
    ingress.753360330.security_groups.#:  "0" => "0"
    ingress.753360330.self:               "true" => "true"
    ingress.753360330.to_port:            "0" => "0"


Plan: 0 to add, 1 to change, 0 to destroy.

Steps to Reproduce

  1. terraform apply
  2. terraform plan

References

might be related to:


kiich commented Nov 14, 2016

The IPs I use in the ingress blocks seem irrelevant; the issue can be reproduced with any of the CIDR blocks I tested, both real and made-up ones.


kiich commented Nov 18, 2016

Same issue with Terraform 0.7.11


kiich commented Nov 18, 2016

FYI, if you take out any one of the 3 ingress rules in the code above, the issue does not reproduce.

@timkoopmans

You should group ingress rules that share the same attributes into a single block with multiple cidr_blocks, so that:

ingress {
  from_port   = 0
  to_port     = 0
  protocol    = "-1"
  cidr_blocks = ["91.232.36.10/32"]
}

ingress {
  from_port   = 0
  to_port     = 0
  protocol    = "-1"
  cidr_blocks = ["192.168.0.0/16"]
}

becomes

ingress {
  from_port   = 0
  to_port     = 0
  protocol    = "-1"
  cidr_blocks = ["91.232.36.10/32", "192.168.0.0/16"]
}


kiich commented Dec 6, 2016

@90kts Excellent, thanks for that. I wouldn't have thought that defining ingress blocks with the same attributes individually would be a problem, but sure enough, when I grouped them together, a subsequent plan shows no changes, which resolves my issue.

The ingress blocks were defined separately to be clearer, but I don't think that's worth this apply/plan issue, so I'll make the change. Thanks again!
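For reference, the merged resource would look roughly like this (my reconstruction from the snippets above; the VPC ID is the same placeholder as in the original report):

```hcl
resource "aws_security_group" "test" {
  name        = "TF"
  description = "TF security group to show issue"
  vpc_id      = "vpc-12345678"

  # The self = true rule stays separate; the two CIDR-only rules are
  # merged into one block so Terraform hashes them as a single set element.
  ingress {
    from_port = 0
    to_port   = 0
    protocol  = "-1"
    self      = true
  }

  ingress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["91.232.36.10/32", "192.168.0.0/16"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```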

$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but
will not be persisted to local or remote state storage.

aws_security_group.test: Refreshing state... (ID: sg-xxxxxxxx)

No changes. Infrastructure is up-to-date. This means that Terraform
could not detect any differences between your configuration and
the real physical resources that exist. As a result, Terraform
doesn't need to do anything.
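A further alternative, not discussed in this thread but available in the AWS provider at the time, is to define each rule as a standalone aws_security_group_rule resource instead of inline blocks, which sidesteps the set-hashing of inline rules entirely (a sketch; resource names are illustrative):

```hcl
resource "aws_security_group" "test" {
  name        = "TF"
  description = "TF security group to show issue"
  vpc_id      = "vpc-12345678"
}

# Each rule is its own resource, so no inline-block set hashing applies.
resource "aws_security_group_rule" "allow_self" {
  type              = "ingress"
  from_port         = 0
  to_port           = 0
  protocol          = "-1"
  self              = true
  security_group_id = "${aws_security_group.test.id}"
}

resource "aws_security_group_rule" "allow_cidrs" {
  type              = "ingress"
  from_port         = 0
  to_port           = 0
  protocol          = "-1"
  cidr_blocks       = ["91.232.36.10/32", "192.168.0.0/16"]
  security_group_id = "${aws_security_group.test.id}"
}
```

Note that inline ingress/egress blocks and standalone rule resources should not be mixed on the same security group, as they will fight over the rule set.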

@kiich kiich closed this as completed Dec 6, 2016

ghost commented Apr 19, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost ghost locked and limited conversation to collaborators Apr 19, 2020