
azurerm - Terraform wants to recreate VM when attaching existing managed data disk #85

Closed
hashibot opened this issue Jun 13, 2017 · 9 comments

Comments

@hashibot

hashibot commented Jun 13, 2017

This issue was originally opened by @marratj as hashicorp/terraform#14268. It was migrated here as part of the provider split. The original body of the issue is below.


Terraform Version

0.9.4

Affected Resource(s)

azurerm_virtual_machine
azurerm_managed_disk

Terraform Configuration Files

resource "azurerm_managed_disk" "db1appdisk2" {
  name                 = "db1appdisk2"
  location             = "${azurerm_resource_group.testdemo.location}"
  resource_group_name  = "${azurerm_resource_group.testdemo.name}"
  storage_account_type = "Premium_LRS"
  create_option        = "Empty"
  disk_size_gb         = "127"
}

resource "azurerm_virtual_machine" "db1" {
  name                  = "db1"
  location              = "${azurerm_resource_group.testdemo.location}"
  resource_group_name   = "${azurerm_resource_group.testdemo.name}"
  network_interface_ids = ["${azurerm_network_interface.db1nic1.id}"]
  vm_size               = "Standard_DS1_v2"

...

  storage_data_disk {
    name            = "${azurerm_managed_disk.db1appdisk2.name}"
    managed_disk_id = "${azurerm_managed_disk.db1appdisk2.id}"
    create_option   = "Attach"
    lun             = 1
    disk_size_gb    = "${azurerm_managed_disk.db1appdisk2.disk_size_gb}"
  }
}

Debug Output

Panic Output

Expected Behavior

Existing Managed disk gets attached to VM without recreating the VM.

Actual Behavior

The VM gets destroyed and recreated from scratch.

When adding a Managed Disk with the "Empty" create option, however, it works as expected: the VM just gets reconfigured, not recreated.
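
For contrast, a minimal sketch of the inline "Empty" variant described above; the disk name and LUN here are hypothetical and only illustrate the shape of the block inside azurerm_virtual_machine:

  # Hypothetical second data disk declared inline on the VM; adding a block
  # like this updates the VM in place rather than recreating it.
  storage_data_disk {
    name          = "db1appdisk3"
    create_option = "Empty"
    disk_size_gb  = "127"
    lun           = 2
  }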

Steps to Reproduce

1. terraform plan
2. terraform apply

Important Factoids

References

@hashibot hashibot added the bug label Jun 13, 2017
@marratj

marratj commented Aug 6, 2017

Is there any update on that? In my opinion the simplest solution would be to have ForceNew: false instead of true when changing the managed_disk_id of storage_data_disk.

Or would this currently not be working due to Azure restrictions?

@3guboff

3guboff commented Oct 11, 2017

This fix is really important! I hope it will be done ASAP!

@tombuildsstuff
Contributor

👋 hey @marratj

Thanks for opening this issue :)

I've taken a look into this issue and unfortunately it appears to be a limitation at Azure's end: it's not possible to update the ordering of disks through the API, and attempting to change either the managed_disk_id field or the lun field (to re-order the disks) causes an error to be returned from the API. I also looked into stopping the VM, modifying the disks, and then starting it again, but this doesn't appear to be a viable route either.

As such, I've raised an issue about this on the Azure SDK for Go repository to find out how this should be achieved, since I believe it should be possible (the portal allows it), and will update when I've heard back.

Thanks!

@tombuildsstuff tombuildsstuff modified the milestones: M1, Future Oct 17, 2017
@tombuildsstuff tombuildsstuff removed their assignment Nov 14, 2017
@svanharmelen

@tombuildsstuff I noticed that the issue you opened at the Azure SDK repo was closed almost a month ago and that you removed your assignment from this issue 10 days ago. Does that mean this is now fixed? I could not find a related PR, but maybe I'm just overlooking something here...

And if it's not yet fixed, is there something in the works already?

@imcdnzl

imcdnzl commented Dec 11, 2017

@svanharmelen As I read it, this isn't an upstream bug; rather, the operations have to be ordered in a certain manner, which @tombuildsstuff (or whoever picks up the bug) could then use to make this work correctly.

We'd certainly like this fixed too, as it's hitting our site at present as well!

@retheshnair

@tombuildsstuff This is not an upstream bug. I tested with the latest Go SDK to see whether the VM gets reprovisioned when attaching a new disk, and none of the other tools (ARM templates, the Azure CLI, ARM PowerShell, or the SDK) show this behaviour. It comes down to how Terraform handles the new data disk. As @marratj said, using ForceNew: false instead of true when changing the managed_disk_id of storage_data_disk may fix the issue, though it may not be the cleanest approach. This is really important and is affecting us badly, so your help is highly appreciated.

@tombuildsstuff tombuildsstuff modified the milestones: 1.4.0, Temp/To Be Sorted Apr 17, 2018
@tombuildsstuff
Contributor

👋 hey @marratj @3guboff @svanharmelen @imcdnzl

This has since been fixed, as the ForceNew behaviour has been resolved. In addition, I believe another solution to this will be available in the new Data Disk Attachment resource, which was requested in #795 and is being worked on in #1207.

Thanks!
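
For reference, a minimal sketch (in the configuration style used earlier in this issue) of how the data disk attachment resource being worked on in #1207 is expected to be used; the resource and attribute names below reflect that work in progress and may differ in the released provider:

resource "azurerm_virtual_machine_data_disk_attachment" "db1appdisk2" {
  # Attaching the existing managed disk via a separate resource means the
  # azurerm_virtual_machine resource no longer declares this disk in a
  # storage_data_disk block, so attaching it does not force a VM rebuild.
  managed_disk_id    = "${azurerm_managed_disk.db1appdisk2.id}"
  virtual_machine_id = "${azurerm_virtual_machine.db1.id}"
  lun                = 1
  caching            = "ReadWrite"
}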

@svanharmelen

Sounds good! Thanks for the update @tombuildsstuff!

@ghost

ghost commented Mar 31, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!

@ghost ghost locked and limited conversation to collaborators Mar 31, 2020