
azurerm_virtual_machine does not update state file if disks are renumbered #9437

Closed
cchildress opened this issue Oct 18, 2016 · 2 comments

@cchildress (Contributor)

Terraform Version

0.7.3 (waiting for 0.7.6 to come out due to [GH-9122])

Affected Resource(s)

azurerm_virtual_machine

Terraform Configuration Files

Before:

  storage_data_disk {
    name = "${var.node_name}_data_disk_premium_01"
    vhd_uri = "${var.node_data_storage_account_premium_blob_endpoint}${azurerm_storage_container.node_container_data_premium.name}/${var.node_name}_data_premium_01.vhd"
    create_option = "Empty"
    disk_size_gb = 1023
    lun = 0
  }
  delete_data_disks_on_termination = true

After:

  storage_data_disk {
    name = "${var.node_name}_data_disk_standard_01"
    vhd_uri = "${var.node_data_storage_account_standard_blob_endpoint}${azurerm_storage_container.node_container_data_standard.name}/${var.node_name}_data_standard_01.vhd"
    create_option = "Empty"
    disk_size_gb = 1023
    lun = 0
  }
  storage_data_disk {
    name = "${var.node_name}_data_disk_premium_01"
    vhd_uri = "${var.node_data_storage_account_premium_blob_endpoint}${azurerm_storage_container.node_container_data_premium.name}/${var.node_name}_data_premium_01.vhd"
    create_option = "Empty"
    disk_size_gb = 1023
    lun = 32
  }
  delete_data_disks_on_termination = true

Debug Output

~ module.<some_server>.azurerm_virtual_machine.node
    storage_data_disk.0.lun:     "32" => "0"
    storage_data_disk.0.name:    "<some_server>_data_disk_premium_01" => "<some_server>_data_disk_standard_01"
    storage_data_disk.0.vhd_uri: "<foo>" => "<bar>"
    storage_data_disk.1.lun:     "0" => "32"
    storage_data_disk.1.name:    "<some_server>_data_disk_standard_01" => "<some_server>_data_disk_premium_01"
    storage_data_disk.1.vhd_uri: "<bar>" => "<foo>"

Expected Behavior

This change is successfully applied to Azure, but it does not appear to be recorded properly in the state file.

Actual Behavior

The same change shows up as pending on every subsequent terraform plan.

Steps to Reproduce

  1. Set up a VM using the "before" layout.
  2. Change the Terraform config to match the "after" layout.
  3. Run terraform apply and confirm the changes are visible in the Azure portal.
  4. Run terraform plan; the same changes are still listed as pending (see the command sketch below).
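
A minimal sketch of the CLI sequence for these steps (the before/after configs are the ones shown above):

  # 1. Create the VM with the "before" disk layout
  terraform apply

  # 2. Switch the config to the "after" layout (new standard disk on lun 0,
  #    the existing premium disk moved to lun 32), then apply again
  terraform apply

  # 3. Check the Azure portal: both disks are attached on the expected LUNs

  # 4. Plan again: the disk swap is still reported as a pending change
  terraform plan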

Important Factoids

I tried editing the state file by hand to see whether this could be corrected manually. Maybe I missed a spot, but I could never get Terraform to see the change as completed.
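
For reference, one way to inspect what Terraform has recorded for the VM (a sketch, assuming the terraform state subcommands that shipped with 0.7; the resource address is the placeholder from the plan output below):

  # List the resources tracked in the state file
  terraform state list

  # Show the recorded attributes for the VM, including the
  # storage_data_disk.N.lun / .name / .vhd_uri ordering
  terraform state show module.<some_server>.azurerm_virtual_machine.node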

@cchildress (Contributor, Author)

I went through the state file and edited the order of the disks. Now I've found that if I run terraform plan -refresh=false, Terraform does not think it needs to apply any changes to the disks on the virtual machine (which is what I would expect). If I run terraform plan and allow it to do the refresh, it does think the disk changes need to be re-applied. This suggests to me that the out-of-order information is actually coming from Azure and not from the local state file.
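
The two invocations side by side (a sketch):

  # With refresh disabled, the hand-edited state matches the config: no disk changes
  terraform plan -refresh=false

  # With the default refresh, the disk attributes come back from the Azure API in
  # the old order and the same swap is reported again
  terraform plan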

This is a little odd because the LUN layout and disk ordering in the Azure web UI are set correctly.

@ghost commented Apr 10, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

ghost locked and limited conversation to collaborators on Apr 10, 2020