Terraform Crash on apply #3106

Closed
ghost opened this issue Mar 22, 2019 · 5 comments · Fixed by #5680

@ghost

ghost commented Mar 22, 2019

This issue was originally opened by @marrik96 as hashicorp/terraform#20778. It was migrated here as a result of the provider split. The original body of the issue is below.


Terraform Version

Terraform v0.11.11

Terraform Configuration Files

# Uncomment to create New Resource Group.
# If you comment this resource out, you must replace references to it with the variable "${var.RG_IaaS}" wherever a resource group name is required.
resource "azurerm_resource_group" "ResourceGroup" {
  name      = "${var.RG_IaaS}"
  location  = "${var.region}"
  tags      = "${var.tags}"
}

# *********************************************
# Subnet Resource Reference
# *********************************************
data "azurerm_subnet" "subnet" {
  name                 = "${var.subnet}"
  virtual_network_name = "${var.vnet}"
  resource_group_name  = "${var.RG_Network}"
}

# *********************************************
# NIC Resource Creation
# *********************************************
#resource "azurerm_network_interface" "nic" {
#  name                      = "${var.vm_name}0${count.index + 1}-nic"
#  location                  = "${var.region}"
#  resource_group_name       = "${azurerm_resource_group.ResourceGroup.name}"
#  internal_dns_name_label   = "${var.vm_name}0${count.index + 1}"
#  tags                      = "${var.tags}"
#  count                     = "${var.vm_count}"
#  ip_configuration {
#    name                          = "${var.vm_name}0${count.index + 1}-ip"
#    subnet_id                     = "${data.azurerm_subnet.subnet.id}"
#    private_ip_address_allocation = "dynamic"
#  }
#}

# *********************************************
# LB Public IP Resource Creation
# *********************************************
resource "azurerm_public_ip" "vmss" {
  name                = "PublicIPForLB"
  location            = "${var.region}"
  resource_group_name = "${azurerm_resource_group.ResourceGroup.name}"
  allocation_method   = "Static"
  tags                = "${var.tags}"
}

# *********************************************
# VM Scale Set & Disks Resource Creation
# *********************************************
resource "azurerm_lb" "vmss" {
  name                 = "${lower(var.vm_name)}-lb"
  location             = "${var.region}"
  resource_group_name  = "${azurerm_resource_group.ResourceGroup.name}"
  sku                  = "Basic"
  tags                 = "${var.tags}"

  frontend_ip_configuration {
    name                 = "PublicIPAddress"
    public_ip_address_id = "${azurerm_public_ip.vmss.id}"
  }
}

resource "azurerm_lb_backend_address_pool" "bpepool" {
  resource_group_name = "${azurerm_resource_group.ResourceGroup.name}"
  loadbalancer_id     = "${azurerm_lb.vmss.id}"
  name                = "${lower(var.vm_name)}-lb-bap"
}

resource "azurerm_lb_nat_pool" "lbnatpool" {
  count                          = 3
  resource_group_name            = "${azurerm_resource_group.ResourceGroup.name}"
  name                           = "http"
  loadbalancer_id                = "${azurerm_lb.vmss.id}"
  protocol                       = "Tcp"
  frontend_port_start            = 8000
  frontend_port_end              = 8100
  backend_port                   = 80
  frontend_ip_configuration_name = "PublicIPAddress"
}

resource "azurerm_lb_rule" "lbnatrule" {
  resource_group_name             = "${azurerm_resource_group.ResourceGroup.name}"
  loadbalancer_id                 = "${azurerm_lb.vmss.id}"
  name                            = "http"
  protocol                        = "Tcp"
  frontend_port                   = "${var.frontend_port}"
  backend_port                    = "${var.backend_port}"
  backend_address_pool_id         = "${azurerm_lb_backend_address_pool.bpepool.id}"
  # Default uses 5 Tuple Hash, SourceIP uses 2 Tuple, SourceIPProtocol uses 3 Tuple for session state
  load_distribution               = "SourceIP"
  frontend_ip_configuration_name  = "PublicIPAddress"
  probe_id                        = "${azurerm_lb_probe.vmss.id}"
}

resource "azurerm_lb_probe" "vmss" {
  resource_group_name = "${azurerm_resource_group.ResourceGroup.name}"
  loadbalancer_id     = "${azurerm_lb.vmss.id}"
  name                = "${lower(var.vm_name)}-https-probe"
  # only set below when using HTTP protocol
  #request_path        = "/health"
  port                = "${var.application_port}"
  protocol            = "TCP"
}

resource "azurerm_virtual_machine_scale_set" "vmss" {
  name                  = "${var.vm_name}-scaleset"
  location              = "${var.region}"
  resource_group_name   = "${azurerm_resource_group.ResourceGroup.name}"
  upgrade_policy_mode   = "Manual"
  tags                  = "${var.tags}"

  sku {
    name     = "${var.vm_size}"
    tier     = "Standard"
    capacity = 2
  }

  storage_profile_image_reference {
    publisher = "${var.publisher}"
    offer     = "${var.offer}"
    sku       = "${var.sku}"
    version   = "${var.version}"
  }

  storage_profile_os_disk {
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Standard_LRS"
  }

  storage_profile_data_disk {
    lun            = 0
    caching        = "ReadWrite"
    create_option  = "Empty"
    disk_size_gb   = 10
  }

  os_profile {
    computer_name_prefix  = "${var.vm_name}0${count.index + 1}"
    admin_username        = "${var.vm_username}"
    admin_password        = "${var.vm_password}"
    #custom_data          = "${file("web.conf")}"
  }

  #os_profile_linux_config {
  #  disable_password_authentication = false
  #}

  network_profile {
    name    = "terraformnetworkprofile"
    primary = true

    ip_configuration {
      name                                   = "IPConfiguration"
      subnet_id                              = "${data.azurerm_subnet.subnet.id}"
      load_balancer_backend_address_pool_ids = ["${azurerm_lb_backend_address_pool.bpepool.id}"]
      primary = true
    }
  }

}
# *********************************************
# Output variables
# *********************************************
 output "bpepool" {
     value = "${azurerm_lb.vmss.id}"
 }

 output "lb_private_ip" {
   value = "${azurerm_lb.vmss.private_ip_address}"
 }

Debug Output

Crash Output

https://gist.githubusercontent.com/marrik96/1c338c16f6754f9604174ca49272641a/raw/2f48d9378b6718826d4c4c24fc94058aa1d2c3f0/terraform_crash.log

Expected Behavior

Destroy the azurerm_lb_nat_pool resource

Actual Behavior

Terraform crashed

Steps to Reproduce

  1. terraform init
  2. terraform apply
  3. delete the azurerm_lb_nat_pool resource from main.tf file
  4. terraform apply

Additional Context

Used Terraform to create an Azure Load Balancer with a VM scale set. Made several modifications to main.tf with no problems and ran subsequent "terraform apply" runs. Finally decided to remove the "azurerm_lb_nat_pool" resource and deleted that section from main.tf. Ran "terraform apply" and the crash occurred roughly 6 minutes into the deployment.

References

@Lucretius
Contributor

I've had similar issues deleting NAT pools when using a VMSS. Do you happen to know whether that NAT pool was still being referenced by your load balancer at the time you tried to delete it? Perhaps look in the Azure portal, as I don't see anything in your Terraform file. What I often see is an existing reference to the NAT pool inside the VMSS's load balancer, so Terraform is unable to delete it because it is in use. This normally errors the first time around, but perhaps it gets into a bad state afterwards. Often I just destroy the whole VNET and re-create it, but my guess is that something is not right with the dependencies and an existing reference to the NAT pool is causing the issue.
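
For what it's worth, a minimal sketch (using the 0.11-style syntax from the configuration above, and assuming the provider's load_balancer_inbound_nat_rules_ids argument on the scale set's ip_configuration block) of how the NAT pool could be referenced directly from the VMSS, so Terraform sees the dependency and detaches the scale set before trying to destroy the pool:

  network_profile {
    name    = "terraformnetworkprofile"
    primary = true

    ip_configuration {
      name                                   = "IPConfiguration"
      subnet_id                              = "${data.azurerm_subnet.subnet.id}"
      load_balancer_backend_address_pool_ids = ["${azurerm_lb_backend_address_pool.bpepool.id}"]
      # Hypothetical addition: referencing the NAT pool here makes the dependency explicit,
      # so Terraform removes the association from the scale set before destroying the pool.
      load_balancer_inbound_nat_rules_ids    = ["${azurerm_lb_nat_pool.lbnatpool.*.id}"]
      primary                                = true
    }
  }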

@marrik96

Hi @Lucretius, yes, I suspect you are correct. The NAT pool was being referenced by the LB. The interesting thing is that the resource was in fact deleted even though Terraform crashed. If you are not seeing anything in the crash logs, feel free to close this issue. I like the idea of deleting the entire deployment, although our model is to have a top-level VNET shared by all deployments. But I could certainly have deleted the VMSS completely and redeployed it.

Thank you!
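
For reference, a hypothetical targeted tear-down along the lines this comment describes, destroying only the scale set and the load balancer (plus the pool, rule, and probe that depend on it) while leaving the shared VNET untouched; the resource addresses are taken from the configuration above:

  terraform destroy -target=azurerm_virtual_machine_scale_set.vmss -target=azurerm_lb.vmss
  terraform apply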

@tombuildsstuff
Contributor

Crash is here: https://github.com/terraform-providers/terraform-provider-azurerm/blob/v1.23.0/azurerm/loadbalancer.go#L35

This should be fixed by the latest go-autorest update we merged recently, but we should also update this code to use the utils functions, which handle the response being nil too.

@ghost
Author

ghost commented Feb 12, 2020

This has been released in version 1.44.0 of the provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading. As an example:

provider "azurerm" {
    version = "~> 1.44.0"
}
# ... other configuration ...

@ghost
Author

ghost commented Mar 28, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!

ghost locked and limited conversation to collaborators Mar 28, 2020