Security group with cidr_blocks keep showing up as modified when planning #2843
Update: Terraform does appear to be doing something, but we checked and nothing changed on the AWS side:

aws_security_group.gz-dev-worker-elb: Modifying...
ingress.#: "2" => "4"
ingress.1852403216.cidr_blocks.#: "0" => "1"
ingress.1852403216.cidr_blocks.0: "" => "10.52.0.0/16"
ingress.1852403216.from_port: "" => "50091"
ingress.1852403216.protocol: "" => "TCP"
ingress.1852403216.security_groups.#: "0" => "0"
ingress.1852403216.self: "" => "0"
ingress.1852403216.to_port: "" => "50091"
ingress.2024721709.cidr_blocks.#: "0" => "1"
ingress.2024721709.cidr_blocks.0: "" => "10.52.0.0/16"
ingress.2024721709.from_port: "" => "50081"
ingress.2024721709.protocol: "" => "TCP"
ingress.2024721709.security_groups.#: "0" => "0"
ingress.2024721709.self: "" => "0"
ingress.2024721709.to_port: "" => "50081"
ingress.2024964166.cidr_blocks.#: "2" => "0"
ingress.2024964166.cidr_blocks.0: "10.42.0.0/16" => ""
ingress.2024964166.cidr_blocks.1: "10.52.0.0/16" => ""
ingress.2024964166.from_port: "50081" => "0"
ingress.2024964166.protocol: "tcp" => ""
ingress.2024964166.security_groups.#: "0" => "0"
ingress.2024964166.self: "0" => "0"
ingress.2024964166.to_port: "50081" => "0"
ingress.2207325978.cidr_blocks.#: "0" => "1"
ingress.2207325978.cidr_blocks.0: "" => "10.42.32.231/16"
ingress.2207325978.from_port: "" => "50091"
ingress.2207325978.protocol: "" => "TCP"
ingress.2207325978.security_groups.#: "0" => "0"
ingress.2207325978.self: "" => "0"
ingress.2207325978.to_port: "" => "50091"
ingress.2511184803.cidr_blocks.#: "2" => "0"
ingress.2511184803.cidr_blocks.0: "10.52.0.0/16" => ""
ingress.2511184803.cidr_blocks.1: "10.42.0.0/16" => ""
ingress.2511184803.from_port: "50091" => "0"
ingress.2511184803.protocol: "tcp" => ""
ingress.2511184803.security_groups.#: "0" => "0"
ingress.2511184803.self: "0" => "0"
ingress.2511184803.to_port: "50091" => "0"
ingress.789107190.cidr_blocks.#: "0" => "1"
ingress.789107190.cidr_blocks.0: "" => "10.42.32.231/16"
ingress.789107190.from_port: "" => "50081"
ingress.789107190.protocol: "" => "TCP"
ingress.789107190.security_groups.#: "0" => "0"
ingress.789107190.self: "" => "0"
ingress.789107190.to_port: "" => "50081"
aws_security_group.gz-dev-worker-elb: Modifications complete
Hey @scalp42 – do you have a config that shows just these changing blocks? If you could share that, minus any secrets, that would help. Some things I noticed, though:
that's why you see this:
Hi @catsby, thanks for getting back to me! Indeed, we fixed the CIDR for the bastion host, but we're still hitting the issue. This is the config for the security group:

resource "aws_security_group" "gz-prod-worker-elb" {
  name        = "gz-prod-worker-elb"
  description = "gz-prod-worker-elb"
  vpc_id      = "${aws_vpc.prod.id}"
  ingress {
    from_port   = 50081
    to_port     = 50081
    protocol    = "TCP"
    cidr_blocks = ["${aws_eip.gz-infra-jumphost.private_ip}/32"]
  }
  ingress {
    from_port   = 50091
    to_port     = 50091
    protocol    = "TCP"
    cidr_blocks = ["${aws_eip.gz-infra-jumphost.private_ip}/32"]
  }
  ingress {
    from_port   = 50081
    to_port     = 50081
    protocol    = "TCP"
    cidr_blocks = ["${var.prod_vpc_cidr_block}"]
  }
  ingress {
    from_port   = 50091
    to_port     = 50091
    protocol    = "TCP"
    cidr_blocks = ["${var.prod_vpc_cidr_block}"]
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
  tags {
    Name        = "gz-prod-worker-elb"
    Description = "gz-prod-worker-elb"
    vpc         = "prod"
    terraform   = "true"
  }
}

variable "prod_vpc_cidr_block" {
  description = "CIDR block for the prod VPC."
  default     = "10.92.0.0/16"
}

Here is the plan summary:

~ aws_security_group.gz-prod-worker-elb
ingress.#: "0" => "4"
ingress.1222688924.cidr_blocks.#: "0" => "1"
ingress.1222688924.cidr_blocks.0: "" => "10.42.32.231/32"
ingress.1222688924.from_port: "" => "50081"
ingress.1222688924.protocol: "" => "TCP"
ingress.1222688924.security_groups.#: "0" => "0"
ingress.1222688924.self: "" => "0"
ingress.1222688924.to_port: "" => "50081"
ingress.1653747107.cidr_blocks.#: "0" => "1"
ingress.1653747107.cidr_blocks.0: "" => "10.92.0.0/16"
ingress.1653747107.from_port: "" => "50081"
ingress.1653747107.protocol: "" => "TCP"
ingress.1653747107.security_groups.#: "0" => "0"
ingress.1653747107.self: "" => "0"
ingress.1653747107.to_port: "" => "50081"
ingress.1951764126.cidr_blocks.#: "0" => "1"
ingress.1951764126.cidr_blocks.0: "" => "10.92.0.0/16"
ingress.1951764126.from_port: "" => "50091"
ingress.1951764126.protocol: "" => "TCP"
ingress.1951764126.security_groups.#: "0" => "0"
ingress.1951764126.self: "" => "0"
ingress.1951764126.to_port: "" => "50091"
ingress.3833138800.cidr_blocks.#: "0" => "1"
ingress.3833138800.cidr_blocks.0: "" => "10.42.32.231/32"
ingress.3833138800.from_port: "" => "50091"
ingress.3833138800.protocol: "" => "TCP"
ingress.3833138800.security_groups.#: "0" => "0"
ingress.3833138800.self: "" => "0"
ingress.3833138800.to_port: "" => "50091"

At this point, we can see the correct rules created in AWS, so far so good. If we try to plan again:

aws_security_group.gz-prod-worker-elb: Refreshing state... (ID: sg-b38249d7)
~ aws_security_group.gz-prod-worker-elb
ingress.#: "2" => "4"
ingress.1222688924.cidr_blocks.#: "0" => "1"
ingress.1222688924.cidr_blocks.0: "" => "10.42.32.231/32"
ingress.1222688924.from_port: "" => "50081"
ingress.1222688924.protocol: "" => "TCP"
ingress.1222688924.security_groups.#: "0" => "0"
ingress.1222688924.self: "" => "0"
ingress.1222688924.to_port: "" => "50081"
ingress.1653747107.cidr_blocks.#: "0" => "1"
ingress.1653747107.cidr_blocks.0: "" => "10.92.0.0/16"
ingress.1653747107.from_port: "" => "50081"
ingress.1653747107.protocol: "" => "TCP"
ingress.1653747107.security_groups.#: "0" => "0"
ingress.1653747107.self: "" => "0"
ingress.1653747107.to_port: "" => "50081"
ingress.1951764126.cidr_blocks.#: "0" => "1"
ingress.1951764126.cidr_blocks.0: "" => "10.92.0.0/16"
ingress.1951764126.from_port: "" => "50091"
ingress.1951764126.protocol: "" => "TCP"
ingress.1951764126.security_groups.#: "0" => "0"
ingress.1951764126.self: "" => "0"
ingress.1951764126.to_port: "" => "50091"
ingress.3019390024.cidr_blocks.#: "2" => "0"
ingress.3019390024.cidr_blocks.0: "10.92.0.0/16" => ""
ingress.3019390024.cidr_blocks.1: "10.42.32.231/32" => ""
ingress.3019390024.from_port: "50091" => "0"
ingress.3019390024.protocol: "tcp" => ""
ingress.3019390024.security_groups.#: "0" => "0"
ingress.3019390024.self: "0" => "0"
ingress.3019390024.to_port: "50091" => "0"
ingress.3833138800.cidr_blocks.#: "0" => "1"
ingress.3833138800.cidr_blocks.0: "" => "10.42.32.231/32"
ingress.3833138800.from_port: "" => "50091"
ingress.3833138800.protocol: "" => "TCP"
ingress.3833138800.security_groups.#: "0" => "0"
ingress.3833138800.self: "" => "0"
ingress.3833138800.to_port: "" => "50091"
ingress.795890810.cidr_blocks.#: "2" => "0"
ingress.795890810.cidr_blocks.0: "10.42.32.231/32" => ""
ingress.795890810.cidr_blocks.1: "10.92.0.0/16" => ""
ingress.795890810.from_port: "50081" => "0"
ingress.795890810.protocol: "tcp" => ""
ingress.795890810.security_groups.#: "0" => "0"
ingress.795890810.self: "0" => "0"
ingress.795890810.to_port: "50081" => "0"

As you can see, there's a bug here (we have 60 changes like this one). We noticed that the ordering of the rules on the AWS side changed. Let me know if you need anything else, greatly appreciated.
We experience this same problem. Applying the plan doesn't seem to make any changes to the AWS resources, but it's still nerve-wracking to see so many changes.
I'm seeing the same thing. Here's a test case:
Hey all – thank you everyone for contributing your examples that demonstrate this issue. I sincerely apologize for not getting to this sooner, but I'm on it now. The gist of it, from what I can see, is how AWS handles Security Group rules with regard to ports: AWS groups all rules for a given port range into one "rule", even though the console shows them separately. The console shows all the IP ranges, but they're grouped under one rule by port. I'm working on this and similar issues to try and narrow down a fix.
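The grouping described above can be sketched as follows. This is a Python illustration only, not the actual AWS or Terraform code; the function and dict field names are invented for the example:

```python
# Illustrative sketch: the EC2 API reports every rule sharing the same
# (from_port, to_port, protocol) tuple as a single rule with the CIDR
# ranges merged (and the protocol lowercased), so two separate ingress
# blocks in the config come back from the API as one.
from collections import defaultdict

def merge_rules(rules):
    # Group by port range and protocol, concatenating cidr_blocks.
    grouped = defaultdict(list)
    for r in rules:
        key = (r["from_port"], r["to_port"], r["protocol"].lower())
        grouped[key].extend(r["cidr_blocks"])
    return [
        {"from_port": f, "to_port": t, "protocol": p, "cidr_blocks": cidrs}
        for (f, t, p), cidrs in grouped.items()
    ]

# Two separate ingress blocks for port 50081, as in the config in this thread:
config_rules = [
    {"from_port": 50081, "to_port": 50081, "protocol": "TCP",
     "cidr_blocks": ["10.42.32.231/32"]},
    {"from_port": 50081, "to_port": 50081, "protocol": "TCP",
     "cidr_blocks": ["10.92.0.0/16"]},
]
api_view = merge_rules(config_rules)
print(api_view)  # a single rule carrying both CIDR blocks
```

Terraform then compares its two configured blocks against the one merged rule it reads back, so the plan shows removals and additions even though nothing actually changed in AWS.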
@catsby thanks for looking into it, do you know if it'll make it to the next release by any chance?
Any update on this? It is really causing pain for us.
For anyone having the same issue, our current workaround is to make use of a lifecycle block:

lifecycle {
  create_before_destroy = false
  ignore_changes        = ["ingress"]
}

We added it to every single SG that was affected. Where before we had close to 100 changes, it's now "working" as a workaround.
cc @phinze for updates (granted, it's just a workaround)
I've been having this same problem in 0.6.7-dev. I tried the workaround and got this error instead:
Thanks for all the detailed reports, everyone! In @scalp42's latest example it looks like AWS is normalizing "TCP" to "tcp" and causing a diff. Fixing that should look similar to fixing #3120. I think this is a bug in addition to the issue that @catsby described above. The normalization of the protocol should be easy to fix relative to the issue of the API collapsing multiple separate rules into one. In both cases, the workaround is to write your Terraform config in the normalized form. Specifically:
So for @scalp42's example:

resource "aws_security_group" "gz-prod-worker-elb" {
  name        = "gz-prod-worker-elb"
  description = "gz-prod-worker-elb"
  vpc_id      = "${aws_vpc.prod.id}"
  ingress {
    from_port = 50081
    to_port   = 50081
    protocol  = "tcp"
    cidr_blocks = [
      "${aws_eip.gz-infra-jumphost.private_ip}/32",
      "${var.prod_vpc_cidr_block}",
    ]
  }
  ingress {
    from_port = 50091
    to_port   = 50091
    protocol  = "tcp"
    cidr_blocks = [
      "${aws_eip.gz-infra-jumphost.private_ip}/32",
      "${var.prod_vpc_cidr_block}",
    ]
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
  tags {
    Name        = "gz-prod-worker-elb"
    Description = "gz-prod-worker-elb"
    vpc         = "prod"
    terraform   = "true"
  }
}

The same applies to the test case above:

resource "aws_security_group" "test_sg" {
  name   = "test_sg"
  vpc_id = "vpc-XXXXXXXX"
  ingress {
    from_port   = 9200
    to_port     = 9400
    protocol    = "tcp"
    cidr_blocks = ["10.0.0.0/8"]
    self        = true
  }
}

By writing the configuration in the normalized form, the state retrieved from the API will match the configuration and so the diffs should go away.
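A rough illustration of why a case difference alone produces the remove-and-add pairs seen in the plans: Terraform indexes set elements (the ingress.NNNNNNNNN keys in the plan output) by a hash of the block's attributes. The sketch below uses zlib.crc32 as a stand-in; it is not Terraform's actual hash function, and rule_hash is an invented name:

```python
# Sketch: any textual difference in an attribute, including "TCP" vs
# "tcp", yields a different set-element hash, so Terraform plans a
# removal of the old element plus an addition of the "new" one.
import zlib

def rule_hash(from_port, to_port, protocol, cidr_blocks):
    # Serialize the rule's attributes deterministically, then hash.
    serialized = "|".join(
        [str(from_port), str(to_port), protocol] + sorted(cidr_blocks)
    )
    return zlib.crc32(serialized.encode())

configured = rule_hash(50081, 50081, "TCP", ["10.92.0.0/16"])  # as written
from_api   = rule_hash(50081, 50081, "tcp", ["10.92.0.0/16"])  # as AWS returns it
print(configured != from_api)  # the two rules land under different keys
```

Writing "tcp" in the config makes the configured hash match the one computed from the refreshed state, which is why the lowercase workaround removes the diff.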
The lowercase workaround seems to have worked for me. Thanks @apparentlymart!
+1 for @apparentlymart's lowercase "tcp" workaround. Thanks!
Hey all – I'm going to close this for now. Let me know if there's anything else here!
Normalizing ingress rules worked for me. Thanks @apparentlymart!
@catsby
I cannot put the two additional IPs into
@mseiwald, have you tried this:

ingress {
  from_port = 22
  to_port   = 22
  protocol  = "tcp"
  cidr_blocks = [
    "${split(",", var.external_ips)}",
    "X.X.Y.Y/32",
    "X.X.Z.Z/32"
  ]
}
@mseiwald, the following should also work:

cidr_blocks = ["${concat(split(",", var.external_ips), split(",", "X.X.Y.Y/32,X.X.Z.Z/32"))}"]
Thanks, guys. |
this thread helped TONS, thanks for the help. |
I still get this error; see below. I tried all of the above. I am also adding aws_security_group_rule resources, as shown:

~ aws_security_group.xyz_sg_abc

resource "aws_security_group" "xyz_sg_abc" {
  # HTTP access from anywhere
  ingress {
    ...
  }
  # outbound internet access
  egress {
    ...
  }
  tags {
    ...
  }
}

resource "aws_security_group_rule" "xyz_sg_abc_rule01" {
  ...
}

resource "aws_security_group_rule" "xyz_sg_abc_rule02" {
  ...
}
I was also getting this problem. Even after normalizing it was still occurring. The issue turned out to be a CIDR that was actually invalid but AWS was silently correcting it on their side. Naturally Terraform saw this as a change. If this still occurs for you, please double check ALL of your CIDR blocks and make sure they're valid for the network you're specifying. You might be having this issue.
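Such CIDRs can be caught before AWS silently normalizes them, for example with Python's standard ipaddress module (a quick external sanity check; not part of Terraform). Note that "10.42.32.231/16", seen in the plans earlier in this thread, is exactly this case: it has host bits set, so AWS stores it as "10.42.0.0/16" and Terraform then sees a permanent diff:

```python
# Check whether a CIDR string is a valid network address, i.e. has no
# host bits set below the prefix length.
import ipaddress

def is_valid_cidr(cidr):
    try:
        ipaddress.ip_network(cidr)  # strict=True by default
        return True
    except ValueError:
        return False

print(is_valid_cidr("10.42.32.231/16"))  # False: host bits set
print(is_valid_cidr("10.42.0.0/16"))     # True

# What AWS would actually store for the invalid form:
print(ipaddress.ip_network("10.42.32.231/16", strict=False))  # 10.42.0.0/16
```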
So to recap, what we need to do is: use lowercase protocol names, combine all CIDR blocks for a given port range into a single ingress or egress block, and make sure every CIDR is valid.
Hi there,
We noticed that SGs with cidr_blocks keep trying to be modified, per the plan.
In reality, nothing changes but it keeps showing up in the plan.
Let me know what kind of log I can provide to help.