
Please allow data disks to be added to existing machines in inventory in-line without destroy/recreate #582

Closed
jstewart612 opened this issue Nov 22, 2017 · 21 comments

Comments

@jstewart612

The Azure Resource Manager control panel lets you attach a data disk without destroying the machine, so why can't Terraform?

@nbering

nbering commented Nov 22, 2017

I was able to do this smoothly in past releases, though I admit I probably haven't tried since Terraform 0.9.11. What version are you using? Can you give an example of the configuration, the specifics of how you changed it, and the terraform plan output (with secrets removed)? This not only helps with diagnostics, but also helps other users looking at this later to determine if the issue they're discussing is the same one you're facing.

@jstewart612
Author

jstewart612 commented Nov 27, 2017

main.tf

# Subscription-wide values
variable "client_id"                    {}
variable "client_secret"                {}
variable "subscription_id"              {}
variable "tenant_id"                    {}

# Terraform Remote State values
variable "storage_account_name"         {}
variable "container_name"               {}
variable "key"                          {}

# Data Center and Environment Options
variable "location"                     {}
variable "resource_group_name"          {}

# Availability Set Options
variable "platform_update_domain_count" {}
variable "platform_fault_domain_count"  {}
variable "managed"                      {}

# Configure the Microsoft Azure Provider
provider "azurerm" {
  subscription_id = "${var.subscription_id}"
  client_id       = "${var.client_id}"
  client_secret   = "${var.client_secret}"
  tenant_id       = "${var.tenant_id}"
}

module "rentpath-appgw" {
  source = "modules/rentpath-appgw"

  # Subscription-wide values
  subscription_id     = "${var.subscription_id}"
  client_id           = "${var.client_id}"
  client_secret       = "${var.client_secret}"
  tenant_id           = "${var.tenant_id}"

  # Data Center and Environment Options
  location            = "${var.location}"
  resource_group_name = "${var.resource_group_name}"

  # Availability Set options
  platform_update_domain_count = "${var.platform_update_domain_count}"
  platform_fault_domain_count  = "${var.platform_fault_domain_count}"
  managed                      = "${var.managed}"
}

# The following 22 modules are declared identically to the block above,
# differing only in their name and source: nsmaster, nsquery, puppet-master,
# nats-int, postfix, ipa, spacewalk, mgmt, foreman, puppet-db, pgsql-infra,
# puppet-ca, ag-webjs, nagios, splunk-deploy, splunk-forward, splunk-index,
# splunk-master, splunk-search, repo, consul and statsd (plus a commented-out
# haproxy-int). Only manageiq is shown here:

module "manageiq" {
  source = "modules/manageiq"

  # Subscription-wide values
  subscription_id     = "${var.subscription_id}"
  client_id           = "${var.client_id}"
  client_secret       = "${var.client_secret}"
  tenant_id           = "${var.tenant_id}"

  # Data Center and Environment Options
  location            = "${var.location}"
  resource_group_name = "${var.resource_group_name}"

  # Availability Set options
  platform_update_domain_count = "${var.platform_update_domain_count}"
  platform_fault_domain_count  = "${var.platform_fault_domain_count}"
  managed                      = "${var.managed}"
}

Now let's say I want to add a new disk to modules/manageiq/main.tf

It currently reads:

# Subscription wide variables - set in main.tf of parent environment branch
variable "client_id" {}
variable "client_secret" {}
variable "location" {}
variable "resource_group_name" {}
variable "subscription_id" {}
variable "tenant_id" {}

# Availability set variables - set in main.tf of parent environment branch
variable "platform_update_domain_count" {}
variable "platform_fault_domain_count" {}
variable "managed" {}

# Create Availability Set
resource "azurerm_availability_set" "ine2-as-manageiq" {
    name                         = "ine2-as-manageiq"
    location                     = "${var.location}"
    resource_group_name          = "${var.resource_group_name}"
    platform_update_domain_count = "${var.platform_update_domain_count}"
    platform_fault_domain_count  = "${var.platform_fault_domain_count}"
    managed                      = "${var.managed}"
}

# Create Azure Load Balancer
resource "azurerm_lb" "ine2-lb-manageiq" {
    name                = "ine2-lb-manageiq"
    location            = "${var.location}"
    resource_group_name = "${var.resource_group_name}"

    frontend_ip_configuration {
        name                          = "ine2-lb-manageiq-frontend"
        subnet_id                     = "/subscriptions/${var.subscription_id}/resourceGroups/${var.resource_group_name}/providers/Microsoft.Network/virtualNetworks/US_East2_${var.resource_group_name}_172.28.96.0-19/subnets/US_East_2_${var.resource_group_name}_Production_Management_VIP"
        private_ip_address_allocation = "static"
        private_ip_address            = "172.28.106.7"
    }

}

# Terraform does not yet let you make a backend pointing to an availability set
# These are placeholder blocks, commented out until it does
# https://github.com/terraform-providers/terraform-provider-azurerm/issues/63
#resource "azurerm_lb_backend_address_pool" "ine2-be-manageiq" {
#    resource_group_name = "${var.resource_group_name}"
#    loadbalancer_id     = "${azurerm_lb.ine2-lb-manageiq.id}"
#    name                = "ine2-be-manageiq"
#}

resource "azurerm_lb_probe" "ine2-pr-manageiq-443" {
    resource_group_name = "${var.resource_group_name}"
    loadbalancer_id     = "${azurerm_lb.ine2-lb-manageiq.id}"
    name                = "ine2-pr-manageiq-443"
    port                = 443
    protocol            = "Tcp"
    interval_in_seconds = 5
    number_of_probes    = 2
}

resource "azurerm_lb_rule" "ine2-ru-manageiq-443" {
    resource_group_name            = "${var.resource_group_name}"
    loadbalancer_id                = "${azurerm_lb.ine2-lb-manageiq.id}"
    name                           = "ine2-ru-manageiq-443"
    protocol                       = "Tcp"
    frontend_port                  = 443
    backend_port                   = 443
    frontend_ip_configuration_name = "${azurerm_lb.ine2-lb-manageiq.frontend_ip_configuration.0.name}"
    probe_id                       = "${azurerm_lb_probe.ine2-pr-manageiq-443.id}"
}

# Create network interface
resource "azurerm_network_interface" "ine2-ni-manageiq-eth0" {
    count               = 2
    name                = "ine2-ni-manageiq${format("%03d", count.index + 1)}-eth0"
    location            = "${var.location}"
    resource_group_name = "${var.resource_group_name}"

    ip_configuration {
        name                          = "ine2-ni-manageiq${format("%03d", count.index + 1)}-eth0-config"
        subnet_id                     = "/subscriptions/${var.subscription_id}/resourceGroups/${var.resource_group_name}/providers/Microsoft.Network/virtualNetworks/US_East2_${var.resource_group_name}_172.28.96.0-19/subnets/US_East_2_${var.resource_group_name}_Production_Management"
        private_ip_address_allocation = "static"
        private_ip_address            = "172.28.107.${150 + count.index}"
    }
}

# Create virtual machine
resource "azurerm_virtual_machine" "ine2-vm-manageiq" {
    count                 = 2
    name                  = "ine2-vm-manageiq${format("%03d", count.index + 1)}"
    location              = "${var.location}"
    resource_group_name   = "${var.resource_group_name}"
    network_interface_ids = ["/subscriptions/${var.subscription_id}/resourceGroups/${var.resource_group_name}/providers/Microsoft.Network/networkInterfaces/ine2-ni-manageiq${format("%03d", count.index + 1)}-eth0"]
    vm_size               = "Standard_f4s"
    availability_set_id   = "${azurerm_availability_set.ine2-as-manageiq.id}"

    delete_os_disk_on_termination = true
    delete_data_disks_on_termination = true

    storage_os_disk {
        name              = "ine2-di-manageiq${format("%03d", count.index + 1)}-os"
        caching           = "ReadWrite"
        create_option     = "FromImage"
        managed_disk_type = "Premium_LRS"
        os_type           = "linux"
    }

    storage_image_reference {
        id = "/subscriptions/${var.subscription_id}/resourceGroups/${var.resource_group_name}/providers/Microsoft.Compute/images/ine2-im-linux"
    }

    os_profile {
        computer_name  = "manageiq${format("%03d", count.index + 1)}.useast2.rentpath.com"
        admin_username = "rentpath"
        admin_password = "<redacted>"
    }

    os_profile_linux_config {
        disable_password_authentication = false
    }

    boot_diagnostics {
        enabled = "true"
        storage_uri = "https://linuxuseast2.blob.core.windows.net/"
    }

    tags {
        foreman_group_id = "42"
    }

}

Let's now add the following section:

    storage_data_disk {
        name              = "ine2-di-manageiq${format("%03d", count.index + 1)}-data0"
        caching           = "ReadWrite"
        create_option     = "Empty"
        managed_disk_type = "Premium_LRS"
        lun               = 1
        disk_size_gb      = 250
    }

The terraform plan now reads as follows:

...
-/+ module.manageiq.azurerm_virtual_machine.ine2-vm-manageiq[0] (new resource required)
      id:                                                                 "/subscriptions/.../resourceGroups/Linux/providers/Microsoft.Compute/virtualMachines/ine2-vm-manageiq001" => <computed> (forces new resource)
...
      storage_data_disk.#:                                                "0" => "1"
      storage_data_disk.0.caching:                                        "" => "ReadWrite"
      storage_data_disk.0.create_option:                                  "" => "Empty" (forces new resource)
      storage_data_disk.0.disk_size_gb:                                   "" => "250"
      storage_data_disk.0.lun:                                            "" => "1"
      storage_data_disk.0.managed_disk_id:                                "" => <computed>
      storage_data_disk.0.managed_disk_type:                              "" => "Premium_LRS"
      storage_data_disk.0.name:                                           "" => "ine2-di-manageiq001-data0"
...

Here are my versions:

jstewart@mgmt001 ~/terraform [useast2.rentpath] $ terraform -version
Terraform v0.11.0
+ provider.azurerm v0.3.2

jstewart@mgmt001 ~/terraform [useast2.rentpath] $

So, have I missed anything? Terraform wants to destroy and recreate the entire instance when I add a data disk. Why? Azure Resource Manager doesn't force this on me.

@tombuildsstuff
Contributor

hey @jstewart612

Thanks for opening this issue

To provide an update here - digging into this, the behaviour was introduced in #218, which marked this field as ForceNew because Azure returns an error when attempting to change it on an existing disk.

Whilst that solution worked for that use-case, it's clearly not ideal and we need a better approach for this field. We should be able to error only when Azure says the change is invalid (as in the example below) - but that requires some time and thought. Until then, perhaps it's worth removing ForceNew from this field (given Azure returns the error anyway)? Either way, this needs some investigation to determine how best to proceed IMO.

Thanks!

@nbering

nbering commented Dec 4, 2017

I guess the workaround if you remove ForceNew would be to manually taint the resource if you encounter a change that Azure refuses to apply?
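For reference, manually tainting the counted VM resource inside the module would look roughly like the illustrative session below. This is a sketch only: the resource-address syntax for taint changed between Terraform versions (0.11 used a `-module` flag; later versions take a full address), so check `terraform taint -help` for your version.

```
# Illustrative only - mark the first manageiq VM for recreation on the next
# apply (Terraform 0.11-era addressing; adjust for your Terraform version).
terraform taint -module=manageiq azurerm_virtual_machine.ine2-vm-manageiq.0
terraform plan
```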

@jstewart612
Author

Oh, I see... I just read #240. This happens because the API started throwing an error. It's odd for a provider's GUI to behave differently and hide a deficiency of its API... or maybe not ;)

Interested to see how this will turn out. Thanks for the updates @tombuildsstuff and @nbering !

@tombuildsstuff
Contributor

I guess the workaround if you remove ForceNew would be to manually taint the resource if you encounter a change that Azure refuses to apply?

@nbering Probably - however I think we should try and identify and detail those workflows on the VM Resource Page, rather than leaving it open-ended.. what do you think?

@nbering

nbering commented Dec 5, 2017

Yeah... that was my thought when I saw your proposal to remove ForceNew. That unfortunately leaves some people in a state where it's difficult to know how to recover from the failed apply.

@tanner-bruce

Forgive my ignorance, but could this be done similarly to the AWS approach, where there is an aws_volume_attachment resource? That would also mean not having to specify a ton of storage_data_disk blocks (unless there is a way around that I haven't found yet?), and the UI implies to me this is possible.
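For comparison, the AWS pattern referred to above models the attachment as its own resource. A minimal sketch (HCL 1 interpolation style, matching the rest of this thread; `aws_instance.example` and the zone/device names are placeholders, not from this issue):

```hcl
resource "aws_ebs_volume" "data0" {
  availability_zone = "us-east-1a"
  size              = 250
}

# The attachment is a first-class object in the AWS API, so adding or
# removing it does not force the instance itself to be recreated.
resource "aws_volume_attachment" "data0" {
  device_name = "/dev/sdf"
  volume_id   = "${aws_ebs_volume.data0.id}"
  instance_id = "${aws_instance.example.id}"
}
```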

@nbering

nbering commented Dec 18, 2017

@tanner-bruce As far as I know, the volume_attachment resource for AWS actually maps to an entity in the AWS API, whereas in Azure the disks attached to a VM are a property of the VM. One could maybe create such a resource, but it would be a construct of the Terraform provider, not the Azure API, and that can cause weird inconsistencies in behaviour.

Just my take - but I'd guess that it might not work because - for example - if you want to change the Blob Storage URL of an unmanaged disk, that's actually a ForceNew action on the VM. If the property is on a fabricated extra resource, Terraform Core wouldn't know the VM needs to be recreated for that apply.

@ms1111

ms1111 commented Jan 12, 2018

Did anyone find a workaround to add an unmanaged storage volume to an existing VM without blowing away the VM?

      storage_data_disk.2.create_option:                                "" => "Empty" (forces new resource)

@tanner-bruce

Your only option is to create them alongside the VM and manually attach them. The volume_attachment construct (or any alternative, really) is dearly needed; having to recreate a VM to add a disk is ridiculous.

@nonsense

@tanner-bruce if you do that and try to increase the count of a specific VM resource type, Terraform marks the existing VMs for deletion (as now they have disks attached to them). Any workaround to fix that?

@jzampieron
Contributor

This is actually really bad, because you can't even force Terraform to create the disks and then attach them out-of-band using the Azure portal: the portal uses the upper-case name for the resource group, so the IDs will never match.

IMHO that's just the Azure portal being broken, and I'll raise a ticket with MSFT about it, because it doesn't reflect how the API returns the resource group name.

jzampieron added a commit to becoinc/terraform-provider-azurerm that referenced this issue Feb 4, 2018
This provides a way to work around hashicorp#582 by using the Azure Portal to attach disks.
@jzampieron
Contributor

jzampieron commented Feb 4, 2018

I've opened a PR with a small change that at least lets folks work around the issue by creating the disks with an azurerm_managed_disk resource and using the Attach option to attach them to the VM.

Essentially, the workflow (less than ideal, but workable) is to create your plan restricted to the creation of the azurerm_managed_disk resources... and then, once Terraform has created the disks, you use the Azure portal to attach them.

This is not a long term solution, but it does work.

Note that for this to work you must attach the disks in the portal in the same order as you declare them in the terraform code. I recommend ordering them by LUN in ascending order for clarity.
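A sketch of that workaround, reusing the variables from the manageiq module earlier in the thread (the resource name is illustrative):

```hcl
# Create standalone managed disks with Terraform; they are then attached to
# the VMs manually in the Azure portal, in ascending LUN order as noted above.
resource "azurerm_managed_disk" "ine2-di-manageiq-data0" {
  count                = 2
  name                 = "ine2-di-manageiq${format("%03d", count.index + 1)}-data0"
  location             = "${var.location}"
  resource_group_name  = "${var.resource_group_name}"
  storage_account_type = "Premium_LRS"
  create_option        = "Empty"
  disk_size_gb         = "250"
}
```

A targeted run (for example `terraform apply -target=module.manageiq.azurerm_managed_disk.ine2-di-manageiq-data0`) is one way to restrict the apply to disk creation before the manual attach step.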

@jzampieron
Contributor

jzampieron commented Feb 4, 2018

Regardless of the long-term solution here, the PR change to set DiffSuppressFunc: ignoreCaseDiffSuppressFunc is correct anyway, because Azure can return different casing for the resourceGroup segment of the managed_disk_id URI.

Azure (and the Azure RM portal) appears to treat them as case-insensitive and the azurerm provider should as well.

@jzampieron
Contributor

Another interesting tidbit ... and I have no idea of the proper place to document this ... is that changing the cache setting on a data disk is a disruptive operation. It causes the VM to lose access to the disk for some period of time. It's almost like a detach/attach operation, but it's hard to tell.

@VaijanathB VaijanathB self-assigned this Feb 13, 2018
@VaijanathB
Contributor

This is being fixed in this PR #813

@achandmsft achandmsft added this to the 1.4.0 milestone Mar 8, 2018
@achandmsft achandmsft modified the milestones: 1.4.0, 1.3.0 Mar 8, 2018
@achandmsft
Contributor

@VaijanathB As this issue is fixed in #813, could you please verify and close it? @jstewart612, this should be fixed in v1.1.2 of the provider.
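With the fix in place, the original storage_data_disk addition from earlier in the thread should plan as an in-place update instead of a destroy/create. The provider also documents attaching a pre-existing managed disk in-line; a hedged sketch, assuming a standalone azurerm_managed_disk named data0:

```hcl
storage_data_disk {
  name            = "${azurerm_managed_disk.data0.name}"
  managed_disk_id = "${azurerm_managed_disk.data0.id}"
  create_option   = "Attach"
  lun             = 1
  disk_size_gb    = "${azurerm_managed_disk.data0.disk_size_gb}"
}
```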

@achandmsft
Contributor

Verified that this is closed. @jstewart612 please confirm, else reopen.

@jstewart612
Author

Works like a charm.... thank you all for pushing through on this!

@ghost

ghost commented Mar 31, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!

@ghost ghost locked and limited conversation to collaborators Mar 31, 2020
9 participants