Please allow data disks to be added to existing machines in inventory in-line without destroy/recreate #582
I was able to do this smoothly in past releases, though I admit I probably haven't tried since Terraform 0.9.11. What version are you using? Can you give an example of the configuration, the specifics of how you changed it, and the terraform plan output (with secrets removed)? This not only helps with diagnostics, but also helps other users looking at this later to determine if the issue they're discussing is the same one you're facing.
main.tf
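The root configuration didn't survive extraction; a minimal sketch of what a root main.tf invoking the module might look like (the module path comes from the next comment line; everything else is assumed):

```hcl
# Hypothetical root module — the real configuration was not captured in this thread.
module "manageiq" {
  source = "./modules/manageiq"
}
```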
Now let's say I want to add a new disk to modules/manageiq/main.tf. It currently reads:
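The original resource block was not preserved; a minimal sketch of an `azurerm_virtual_machine` with only an OS disk, roughly as the module might have read at the time (all names and values are placeholders, and unrelated blocks such as the image reference and OS profile are trimmed):

```hcl
# Hypothetical starting point: a VM with an OS disk and no data disks.
resource "azurerm_virtual_machine" "manageiq" {
  name                  = "manageiq-vm"
  location              = "${var.location}"
  resource_group_name   = "${var.resource_group_name}"
  network_interface_ids = ["${azurerm_network_interface.main.id}"]
  vm_size               = "Standard_DS2_v2"

  storage_os_disk {
    name              = "manageiq-osdisk"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Standard_LRS"
  }
}
```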
Let's now add the following section:
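The added section was also dropped during extraction; it would have been a `storage_data_disk` block inside the same resource, along these lines (placeholder values):

```hcl
  # Hypothetical new data disk added inside the azurerm_virtual_machine block.
  storage_data_disk {
    name              = "manageiq-datadisk-0"
    create_option     = "Empty"
    managed_disk_type = "Standard_LRS"
    disk_size_gb      = 100
    lun               = 0
  }
```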
The terraform plan now reads as follows:
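The reporter's actual plan output is missing from this thread; on the affected provider versions it would have looked roughly like this, with the new disk forcing a replacement (illustrative only):

```
-/+ module.manageiq.azurerm_virtual_machine.manageiq (new resource required)
      storage_data_disk.#: "1" => "2" (forces new resource)
      (remaining attribute diff trimmed)

Plan: 1 to add, 0 to change, 1 to destroy.
```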
Here are my versions:
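The version output was not captured either; it would have come from `terraform version`, e.g. (illustrative placeholders, not the reporter's actual versions):

```
$ terraform version
Terraform v0.11.x
+ provider.azurerm v1.0.x
```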
So, have we checked all the boxes? Terraform wants to destroy and recreate entire instances when I add data disks. Why? Azure Resource Manager doesn't force this on me.
Hey @jstewart612, thanks for opening this issue. To provide an update here: digging into this, the change came from #218, which made this field ForceNew, given that Azure will return an error if you attempt to change it on an existing disk. Whilst that solution worked for that use case, it's clearly not ideal, and we need a better solution for this field. We should be able to error only when Azure says the change is invalid (as in the example below), but that requires some time/thought. Until then, perhaps it's worth removing ForceNew. Thanks!
I guess the question is what the workaround would be if you remove ForceNew.
Oh, I see... I just read #240. This happens because the API started throwing an error at you. It's odd for a provider to have its GUI behave differently and hide a deficiency of its API... or maybe not ;) Interested to see how this will turn out. Thanks for the updates @tombuildsstuff and @nbering!
@nbering Probably. However, I think we should try to identify and detail those workflows on the VM resource page, rather than leaving it open-ended. What do you think?
Ya... that was my thought when I saw your proposal to remove ForceNew. That unfortunately leaves some people in a state where it becomes difficult to know what to do in order to recover from the failed apply.
Forgive my ignorance, but could this somehow be done similar to the AWS way, where there is a separate attachment resource?
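For reference, the AWS pattern being alluded to models the attachment as its own resource, so adding one never touches the instance resource; a minimal sketch with placeholder names:

```hcl
# AWS models the disk-to-instance relationship as a standalone resource
# (aws_volume_attachment), so adding an attachment does not force the
# instance to be recreated.
resource "aws_volume_attachment" "example" {
  device_name = "/dev/sdh"
  volume_id   = "${aws_ebs_volume.example.id}"
  instance_id = "${aws_instance.example.id}"
}
```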
@tanner-bruce As far as I know, the Azure provider doesn't have an equivalent attachment resource. Just my take, but I'd guess it might not work because, for example, if you want to change the Blob Storage URL of an unmanaged disk, that's actually a ForceNew action on the VM. If the property lived on a fabricated extra resource, Terraform Core wouldn't know the VM needs to be recreated for that apply.
Did anyone find a workaround to add an unmanaged storage volume to an existing VM without blowing away the VM?
Your only option is to add them beside the VM and manually attach them.
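Concretely, "adding them beside the VM" means declaring standalone managed disks that are not referenced from the VM block, then attaching them by hand; a sketch with placeholder names and values:

```hcl
# Standalone disk created next to the VM; the attachment itself is done
# manually in the Azure portal, outside of Terraform.
resource "azurerm_managed_disk" "extra" {
  name                 = "manageiq-extra-disk"
  location             = "${var.location}"
  resource_group_name  = "${var.resource_group_name}"
  storage_account_type = "Standard_LRS"
  create_option        = "Empty"
  disk_size_gb         = 100
}
```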
@tanner-bruce if you do that and try to increase the count of a specific VM resource type, Terraform marks the existing VMs for deletion (as now they have disks attached to them). Any workaround to fix that?
This is actually really bad, because you can't even force Terraform to create the disks and then attach them out-of-band using the Azure portal: the Azure portal uses the upper-case name for the resource group, so the IDs will never match. IMHO that's just the Azure portal being broken, and I'll raise a ticket with MSFT about it, because it's not reflective of how the API returns the resource group name.
This provides a way to work around hashicorp#582 by using the Azure Portal to attach disks.
I've opened a PR with a small change to at least allow folks to work around the issue by creating the disks as standalone resources. Essentially, the workflow (less than ideal, but workable) is to create your plan restricted to the creation of the disks, apply it, and then attach them via the Azure Portal. This is not a long-term solution, but it does work. Note that for this to work you must attach the disks in the portal in the same order as you declare them in the Terraform code. I recommend ordering them by LUN in ascending order for clarity.
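The restricted plan can be produced with Terraform's `-target` flag; roughly like this (the resource address is a placeholder):

```
$ terraform plan -target=azurerm_managed_disk.extra -out=disks.tfplan
$ terraform apply disks.tfplan
# ...then attach the disks in the Azure portal, in ascending LUN order.
```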
Regardless of the long-term solution here, the PR includes a code change to compare resource group names case-insensitively. Azure (and the Azure RM portal) appears to treat them as case-insensitive.
Another interesting tidbit ... and I have no idea the proper place to document this ... is that changing the cache setting on a data disk is a disruptive operation. It causes the VM to lose access to the disk for some period of time. It's almost like a detach/attach operation, but it's hard to tell.
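The setting in question is the `caching` attribute on the data disk block; changing its value updates in place rather than forcing replacement, but per the comment above the VM can briefly lose access to the disk while the change is applied (placeholder block, assumed names):

```hcl
  storage_data_disk {
    name              = "manageiq-datadisk-0"
    create_option     = "Empty"
    managed_disk_type = "Standard_LRS"
    disk_size_gb      = 100
    lun               = 0
    # Changing this value (e.g. "None" -> "ReadWrite") is applied in place,
    # but is disruptive: the VM may lose access to the disk during the update.
    caching = "ReadWrite"
  }
```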
This is being fixed in PR #813.
@VaijanathB As this issue is fixed in #813, could you please verify and close it? @jstewart612, this should be fixed in v1.1.2 of the provider.
Verified that this is closed. @jstewart612 please confirm, else reopen.
Works like a charm.... thank you all for pushing through on this!
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!
The Azure Resource Manager control panel lets you attach a data disk without destroying the machine, so why can't Terraform?