
Azure - Resizing some subnets is disassociating NSGs associated for other subnets ... #1015

Closed
bvenkataramana opened this issue Mar 21, 2018 · 3 comments

Comments

@bvenkataramana

Hi there,

We have an existing VNet and subnets, with one NSG associated to each subnet.

When we resized a few subnets using Terraform, it disassociated the NSG associations for all of the subnets.

Terraform Version

Terraform v0.11.3

Affected Resource(s)

Network Security Group associations for all subnets

If this issue appears to affect multiple resources, it may be an issue with Terraform's core, so please mention this.

Yes, this impacted all remaining subnets and their NSG associations.

Terraform Configuration Files

# Copy-paste your Terraform configurations here - for large Terraform configs,
# please use a service like Dropbox and share a link to the ZIP file. For
# security, you can also encrypt the files using our GPG public key.

Debug Output

Please provide a link to a GitHub Gist containing the complete debug output (see https://www.terraform.io/docs/internals/debugging.html). Please do NOT paste the debug output in the issue; just paste a link to the Gist.

Panic Output

If Terraform produced a panic, please provide a link to a GitHub Gist containing the output of the crash.log.

Expected Behavior

What should have happened?
The subnet-NSG associations that were not changed should have been kept.

Actual Behavior

What actually happened?
All NSG associations were removed from all subnets.

Steps to Reproduce

Please list the steps required to reproduce the issue, for example:

  1. Create a VNet and subnets.
  2. Create NSGs.
  3. Associate one NSG to each subnet.
  4. Change a few subnet address prefixes (resize them).
  5. Run terraform apply.

Observe that it disassociates the NSGs from all subnets (even the ones that were not changed).
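To make step 4 concrete: "resizing" here means editing a subnet's `address_prefix` in place. A minimal sketch of such a change, using hypothetical resource names and the same Terraform 0.11 inline-association pattern as the configs later in this thread:

```hcl
resource "azurerm_subnet" "example" {
  name                 = "example"
  resource_group_name  = "${azurerm_resource_group.example.name}"
  virtual_network_name = "${azurerm_virtual_network.example.name}"

  # Resize: was "10.0.1.0/24", changed in place to a larger prefix.
  address_prefix = "10.0.0.0/23"

  # Inline NSG association, declared on the subnet itself.
  network_security_group_id = "${azurerm_network_security_group.example.id}"
}
```

The report is that applying a change like this dropped the NSG associations on the *other*, unchanged subnets as well.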

Important Factoids

Is there anything atypical about your accounts that we should know? For example: running in EC2 Classic? A custom version of OpenStack? Tight ACLs?

References

Are there any other GitHub issues (open or closed) or pull requests that should be linked here?

@catsby catsby added the bug label Apr 5, 2018
@achandmsft achandmsft modified the milestones: Temp/To Be Sorted, Soon Apr 19, 2018
@tombuildsstuff
Contributor

hey @bvenkataramana

Thanks for opening this issue - apologies for the delayed response here!

I've spent a little while trying to reproduce this and I'm struggling - would you be able to post a before/after config here so that we can take a look? Here's the config I've attempted:

resource "azurerm_resource_group" "test" {
  name     = "tom-vnetdev"
  location = "West Europe"
}

resource "azurerm_network_security_group" "test" {
  name                = "tharvey-nsg"
  location            = "${azurerm_resource_group.test.location}"
  resource_group_name = "${azurerm_resource_group.test.name}"

  security_rule {
    name                       = "test123"
    priority                   = 100
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "*"
    source_address_prefix      = "*"
    destination_address_prefix = "*"
  }

  tags {
    environment = "Production"
  }
}

resource "azurerm_virtual_network" "test" {
  name                = "tom-devvn"
  address_space       = ["10.0.0.0/16"]
  location            = "${azurerm_resource_group.test.location}"
  resource_group_name = "${azurerm_resource_group.test.name}"
}

resource "azurerm_subnet" "test" {
  name                      = "first"
  resource_group_name       = "${azurerm_resource_group.test.name}"
  virtual_network_name      = "${azurerm_virtual_network.test.name}"
  address_prefix            = "10.0.1.0/24"
  network_security_group_id = "${azurerm_network_security_group.test.id}"
}

resource "azurerm_subnet" "second" {
  name                      = "second"
  resource_group_name       = "${azurerm_resource_group.test.name}"
  virtual_network_name      = "${azurerm_virtual_network.test.name}"
  address_prefix            = "10.0.2.0/24"
  network_security_group_id = "${azurerm_network_security_group.test.id}"
}

resource "azurerm_subnet" "third" {
  name                      = "third"
  resource_group_name       = "${azurerm_resource_group.test.name}"
  virtual_network_name      = "${azurerm_virtual_network.test.name}"
  address_prefix            = "10.0.3.0/24"
  network_security_group_id = "${azurerm_network_security_group.test.id}"
}

which I've then changed to:

resource "azurerm_resource_group" "test" {
  name     = "tom-vnetdev"
  location = "West Europe"
}

resource "azurerm_network_security_group" "test" {
  name                = "tharvey-nsg"
  location            = "${azurerm_resource_group.test.location}"
  resource_group_name = "${azurerm_resource_group.test.name}"

  security_rule {
    name                       = "test123"
    priority                   = 100
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "*"
    source_address_prefix      = "*"
    destination_address_prefix = "*"
  }

  tags {
    environment = "Production"
  }
}

resource "azurerm_virtual_network" "test" {
  name                = "tom-devvn"
  address_space       = ["10.0.0.0/16", "10.104.3.0/32"]
  location            = "${azurerm_resource_group.test.location}"
  resource_group_name = "${azurerm_resource_group.test.name}"
}

resource "azurerm_subnet" "test" {
  name                      = "first"
  resource_group_name       = "${azurerm_resource_group.test.name}"
  virtual_network_name      = "${azurerm_virtual_network.test.name}"
  address_prefix            = "10.0.1.0/24"
  network_security_group_id = "${azurerm_network_security_group.test.id}"
}

resource "azurerm_subnet" "second" {
  name                      = "second"
  resource_group_name       = "${azurerm_resource_group.test.name}"
  virtual_network_name      = "${azurerm_virtual_network.test.name}"
  address_prefix            = "10.0.2.0/24"
  network_security_group_id = "${azurerm_network_security_group.test.id}"
}

resource "azurerm_subnet" "third" {
  name                      = "third"
  resource_group_name       = "${azurerm_resource_group.test.name}"
  virtual_network_name      = "${azurerm_virtual_network.test.name}"
  address_prefix            = "10.0.3.0/24"
  network_security_group_id = "${azurerm_network_security_group.test.id}"
}

Thanks!
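A related pattern worth noting: later versions of the AzureRM provider (an assumption here - roughly 1.21 onward, which postdates this thread and its Terraform 0.11 configs) added a standalone association resource, which decouples the NSG link from the subnet definition. A sketch with hypothetical names:

```hcl
# Requires an azurerm provider version that ships this resource
# (assumed ~1.21+); not available when this issue was filed.
resource "azurerm_subnet_network_security_group_association" "example" {
  subnet_id                 = "${azurerm_subnet.example.id}"
  network_security_group_id = "${azurerm_network_security_group.example.id}"
}
```

With this pattern the association is tracked as its own resource in state, so editing a subnet's `address_prefix` does not also rewrite an inline association on the subnet block.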

@tombuildsstuff
Contributor

Closing this since we've not heard back - please feel free to re-open this / comment on this if it's still an issue and we'll take another look :)

Thanks!

@tombuildsstuff tombuildsstuff modified the milestones: Soon, Being Sorted Oct 25, 2018
@ghost

ghost commented Mar 6, 2019

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!

@ghost ghost locked and limited conversation to collaborators Mar 6, 2019
@ghost ghost removed the waiting-response label Mar 6, 2019

4 participants