Support for disk attachment to VMs at creation time #6117
Comments
Oh, that is really unfortunate... I wish I could try this but I'm not even able to create a managed disk due to #6029 |
If I'm following this thread correctly (as we are still using the legacy disk system and were looking to move over) can you not deploy VMs with disks already attached? Is it truly rebooting VMs for each disk (thread in #6314 above)? This feels like a HUGE step backwards especially if the legacy mode we are using is being deprecated. |
Also how do you deploy and configure a data disk that is in the source reference image if the data disk block is no longer valid? |
@lightdrive, I've worked around it by using ansible at https://github.com/rgl/terraform-ansible-azure-vagrant |
This is something I just ran across as well, I'd like to be able to use cloud-init to configure the disks. Any news on a resolution? |
This item is next on my list, though no ETA yet, sorry. I'll link it to a milestone when I've had a chance to size and scope it. |
It seems that the work done by @jackofallops has been closed with a note that it needs to be implemented in a different way. Does anyone have a possible work-around for this? My use-cases are like the ones others have pointed out:
Writing my own scripts to make this instead of using cloud-init seems like a waste. |
Alas, I was really looking forward to an official fix for this. 🙁 In lieu of that, here's what I came up with about six months ago when I had no option but to make this work, at minimum for newly booted VMs (note: this has not been tested with changes to, or replacements of, the disks - literally just booting new VMs). I'm also not really a Go person, so this is definitely a hack and nothing even approaching a "good" solution, much less sane contents for a PR. Be warned that whatever state it generates is almost certainly destined to be incompatible with whatever shape the official implementation yields, should it ever land. But on the off chance it does prove useful in some capacity, or simply provides the embers to spark someone else's imagination, here's the horrible change I made to allow booting VMs with disks attached.

Usage:

```hcl
resource "azurerm_linux_virtual_machine" "example" {
  [...]
  data_disk {
    name                 = "example-data"
    caching              = "ReadWrite"
    disk_size_gb         = 320
    lun                  = 0
    storage_account_type = "StandardSSD_LRS"
  }
  [...]
}
```
|
@mal FWIW this is being worked on, however the edge-cases make this more complicated than it appears - in particular we're trying to avoid several limitations from the older VM resources, which is why this isn't being lifted over 1:1 and is taking longer here. |
Thanks for the insight @tombuildsstuff, great to know it's still being actively worked on. I put that commit out there in response to the request for possible work-arounds, in case it was useful to someone who finds themself in the position I was in previously, where waiting for something that covers all the cases wasn't an option. Please don't take it as any kind of slight or indictment of the ongoing efforts; I definitely support an official solution covering all the cases. In my case it just wasn't possible to wait for it, but I'll be first in line to move my definitions over when it does land. 😁 |
In case this helps anyone else... the main part to note is the top line of the script, which waits for the expected number of disks to be available:

```yaml
write_files:
  - content: |
      # Wait for x disks to be available
      while [ `ls -l /dev/disk/azure/scsi1 | grep lun | wc -l` -lt 3 ]; do echo waiting on disks...; sleep 5; done
      DISK=$1
      DISK_PARTITION=$DISK"-part1"
      VG=$2
      VOL=$3
      MOUNTPOINT=$4
      # Partition disk (the sed strips the trailing comments so fdisk only sees the keystrokes)
      sed -e 's/\s*\([\+0-9a-zA-Z]*\).*/\1/' << EOF | fdisk $DISK
      n # new partition
      p # primary partition
      1 # partition number 1
        # default - start at beginning of disk
        # default - end of the disk
      w # write the partition table
      q # and we're done
      EOF
      # Create physical volume
      pvcreate $DISK_PARTITION
      # Create volume group (create on first use, extend afterwards)
      if [[ -z `vgs | grep $VG` ]]; then
        vgcreate $VG $DISK_PARTITION
      else
        vgextend $VG $DISK_PARTITION
      fi
      # Create logical volume (uses all free space unless SIZE is set)
      if [[ -z $SIZE ]]; then
        SIZE="100%FREE"
      fi
      lvcreate -l $SIZE -n $VOL $VG
      # Create filesystem
      mkfs.ext3 -m 0 /dev/$VG/$VOL
      # Add to fstab
      echo "/dev/$VG/$VOL $MOUNTPOINT ext3 defaults 0 2" >> /etc/fstab
      # Create mount point
      mkdir -p $MOUNTPOINT
      # Mount
      mount $MOUNTPOINT
    path: /run/create_fs.sh
    permissions: '0700'
runcmd:
  - /run/create_fs.sh /dev/disk/azure/scsi1/lun1 vg00 vol1 /oracle
  - /run/create_fs.sh /dev/disk/azure/scsi1/lun2 vg00 vol2 /oracle/diag
```
|
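For context, cloud-init configuration like the above reaches an Azure VM as base64-encoded custom data. A minimal sketch, assuming the YAML is saved as a hypothetical `cloud-init.yml` alongside the Terraform configuration (all other VM arguments omitted):

```hcl
resource "azurerm_linux_virtual_machine" "example" {
  # ... name, resource group, size, image, NIC and admin credentials omitted ...

  # cloud-init payload; "cloud-init.yml" is a placeholder file name containing
  # the write_files/runcmd document shown above.
  custom_data = base64encode(file("${path.module}/cloud-init.yml"))
}
```

Note that this only helps if the data disks are actually present at first boot, which is exactly what this feature request is asking for.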
@ruandersMSFT that's what this issue is tracking - you can find the latest update here. As per the community note above: |
I think a hacky workaround is that Azure deployment templates are able to deploy a VM and attach a disk at creation. So you can: (1) make an Azure deployment template for the VM you need (it's easy to do this in the Azure console by manually configuring the VM and clicking the "Download template for automation" button), then deploy that template from Terraform. I'd much rather have the terraform resource support this, but I think something like this might be a stopgap. I'm trying to get this integrated into our process now and it's working so far. |
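To make that stopgap concrete, here is a rough sketch of driving an exported ARM template from Terraform with `azurerm_resource_group_template_deployment` (the template file name, parameters and resource names below are illustrative, not taken from this thread):

```hcl
resource "azurerm_resource_group_template_deployment" "vm_with_disks" {
  name                = "example-vm-deployment" # illustrative
  resource_group_name = azurerm_resource_group.example.name
  deployment_mode     = "Incremental"

  # Template exported from the portal; it creates the VM with its data disks
  # attached at creation time, which the native VM resources cannot yet do.
  template_content = file("${path.module}/vm-template.json")

  # Whatever parameters the exported template expects; adjust to match.
  parameters_content = jsonencode({
    vmName        = { value = "example-vm" }
    adminUsername = { value = "azureuser" }
  })
}
```

The usual caveat applies: resources created this way live inside the deployment rather than as first-class Terraform resources, so subsequent changes behave differently from a native `azurerm_linux_virtual_machine`.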
I expect this will be marked off-topic, but nearly 3 years after it was opened, this issue needs more attention. The AzureRM provider has put its users in a bad place here. There are critical features of Azure that are now inaccessible, as mentioned by others in this thread. Because my shared OS image has data disks, I cannot use dedicated hosts, cloud-init, or proper identity support for my virtual machines, and this list will only continue to grow because the cloud never stops moving. How can we as a community help here? There is clearly a lot of development effort going into this provider, judging by the changelog and rate of pull requests; can we raise the priority of this issue? There is certainly an opportunity for more transparency on why this hasn't moved while other items are getting development attention. |
If there is a clean way to migrate from virtual_machine to deployment template, I can live with that, but current terraform will try to do unexpected things due to how they've implemented deployment templates as well. |
@jackofallops it's hard to tell in this thread, but looks like you may have added this to the "blocked" milestone. It's no longer clear in the thread what is blocking this issue. Can you clarify? We are seeing a lot of activity in this thread and it's the third-most 👍 issue. |
What is the state of this issue? Is it blocked? It's currently 3 years old and we still can't build a VM from a template which has data disks? |
The azurerm_linux_virtual_machine docs include "storage_data_disk" as a valid block, but terraform plan errors out claiming it is unsupported. I tried a dynamic block and a standard block - with a precreated disk to "attach", and "empty" with no disk created - and all failed. When I've seen this error before it was either a syntax error or a no-longer-supported block type. Is this a documentation bug? Versions:
Error:
Disk creation (works)
Call to storage_data_disk:
Thanks. I'm sure I'm missing something here. |
So is this not possible, and if not now, will this be possible in the future as azurerm_virtual_machine becomes deprecated? (My configuration is along the lines of `resource "azurerm_linux_virtual_machine" "example_name" { ... os_disk { ... } ... depends_on = [ ... ] }`.) I have tried using the "data_disk" option with `data_disks { ... }` blocks, but this is not supported as stated above. Are there any other suggestions, or will this be included in terraform in the near future? |
I feel I must be missing something here, as my scenario seems so common that this issue would need to have been addressed much sooner. I am trying to use Packer to build CIS/STIG-compliant VMs for golden images. Part of the spec has several folders that need to go onto non-root partitions. To achieve this I added a drive, added the partitions, and moved data around. We also use LVM in order to meet availability requirements if a partition gets full. I used az cli to boot the VM and I was also able to add an additional data drive using the --data-disk-sizes-gb option, so I know the control plane will handle it. When I try to use the VM with Terraform I get the storageAccount error mentioned above. Is there really no viable workaround for building golden images with multiple disks and using TF to create the VMs? |
@shaneholder for now, the generally accepted workaround (which I have used successfully) is to use a secondary `azurerm_managed_disk` plus `azurerm_virtual_machine_data_disk_attachment` resource. It would be great to hear from the developers as to exactly why this is still blocked, since it's unclear to everyone here, especially given the popularity of the request. |
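For reference, a minimal sketch of that attach-after-boot workaround, assuming the VM already exists as a hypothetical `azurerm_linux_virtual_machine.example` (names, sizes and LUNs are illustrative):

```hcl
resource "azurerm_managed_disk" "data" {
  name                 = "example-data"
  location             = azurerm_resource_group.example.location
  resource_group_name  = azurerm_resource_group.example.name
  storage_account_type = "StandardSSD_LRS"
  create_option        = "Empty"
  disk_size_gb         = 320
}

# The disk is attached only after the VM exists, so it is not visible to
# cloud-init or other first-boot tooling - the limitation this issue tracks.
resource "azurerm_virtual_machine_data_disk_attachment" "data" {
  managed_disk_id    = azurerm_managed_disk.data.id
  virtual_machine_id = azurerm_linux_virtual_machine.example.id
  lun                = 10
  caching            = "ReadWrite"
}
```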
@djryanj thanks for the reply. I'm trying to understand it in the context of my problem though. The image in the gallery already has 2 disks, 1 OS and 1 data, and right now I'm not trying to add another disk, but that would be the next logical step. The issue I'm having is that I can't even get to the point where the VM has been created. I ran TF with a trace and found the PUT command that creates the VM, and what I believe is happening is that TF seems to be incorrectly adding an empty `dataDisks` array to the request. |
@shaneholder ah, I understand. If the gallery image has 2 disks and is not deployable via Terraform using the new resource at all, then that workaround doesn't cover your case. @tombuildsstuff - I'm sure you can see the activity here. Any input? |
A little more information. I just ran the same TF but used a VM image that does not have a data disk built in. That PUT request also has the empty `dataDisks` element in it. |
Another piece to the puzzle. I set the logging option for az cli and noticed that it adds the following dataDisks element when I specify additional disks (the lun 0 object is the disk that is built into the image). If I run similar code in TF, the dataDisks element is just an empty array.
|
Alright, so I cloned the repo and fiddled around a bit. I hacked the linux_virtual_machine_resource.go file around line 512. I changed:
to:
And I was able to build my VM with the two drives that are declared in the image in our gallery. Additionally I was also able to add a third disk using the azurerm_managed_disk/azurerm_virtual_machine_data_disk_attachment. I was trying to determine how to find the dataDiskImages from the image in the gallery but I've not been able to suss that out yet. It seems that what needs to be done is the code should pull the dataDiskImages property and do a similar conversion as it does with the osDisk. Hoping that @tombuildsstuff can help me out then maybe I can PR a change? |
Ok, so on a hunch I completely commented out the DataDisks property and ran it again, and it worked: I created a VM with both the included image data drive AND an attached drive. |
- Remove the DataDisks property instead of making it an empty array allows for the usage of golden images that have multiple disks
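For anyone trying to reproduce the scenario under discussion, this is roughly the shape of configuration that trips over the empty dataDisks behaviour: a VM built from a Shared Image Gallery image that already contains a data disk. All names and values below are placeholders, not taken from the thread:

```hcl
# A gallery image version that was captured with an OS disk plus a data disk.
data "azurerm_shared_image_version" "golden" {
  name                = "latest"
  image_name          = "example-golden-image"
  gallery_name        = "example_gallery"
  resource_group_name = "example-images-rg"
}

resource "azurerm_linux_virtual_machine" "golden" {
  name                  = "example-vm"
  resource_group_name   = azurerm_resource_group.example.name
  location              = azurerm_resource_group.example.location
  size                  = "Standard_D2s_v3"
  admin_username        = "azureuser"
  network_interface_ids = [azurerm_network_interface.example.id]

  admin_ssh_key {
    username   = "azureuser"
    public_key = file("~/.ssh/id_rsa.pub")
  }

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "StandardSSD_LRS"
  }

  # The image's built-in data disk is what collides with the provider sending
  # an empty dataDisks array in the storageProfile, per the comments above.
  source_image_id = data.azurerm_shared_image_version.golden.id
}
```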
👋 hey folks

To give an update on this one: unfortunately this issue is still blocked due to a combination of the behaviour of the Azure API (specifically the […]). We've spent a considerable amount of time trying to solve this; however, given the number of use-cases for disks, every technical solution possible using the Terraform Plugin SDK has hit a wall for some subset of users, which means that the Terraform Plugin Framework is required to solve this.

Unfortunately this requires bumping the version of the Terraform Protocol being used, which in turn bumps the minimum required version of Terraform. Although bumping the minimum version of Terraform is something we've had scheduled for 4.0 for a long time, that migration in a codebase this size is non-trivial, due to the design of the Terraform Plugin Framework being substantially different to the Terraform Plugin SDK, which (amongst other things) requires breaking configuration changes. Whilst porting over the existing […]

Moving forward we plan to open a Meta Issue tracking Terraform Plugin Framework in the not-too-distant future, however there are a number of items that we need to resolve before doing so. We understand that's disheartening to hear; we're trying to unblock this (and several other) larger issues, but equally we don't want to give folks false hope that this is a quick win when doing so would cause larger issues.

Given the amount of activity on this thread, I'm going to temporarily lock this issue for the moment to avoid setting incorrect expectations, but we'll post an update as soon as we can.

To reiterate/TL;DR: adding support for the Terraform Plugin Framework is a high priority for us and will unblock work on this feature request. We plan to open a Meta Issue for that in the not-too-distant future, which we'll post an update about here when it becomes available.

Thank you all for your input, please bear with us, and we'll post an update as soon as we can. |
Community Note
Description
Azure allows VMs to be booted with managed data disks pre-attached/attached-on-boot. This enables use cases where `cloud-init` and/or other "on-launch" configuration management tooling is able to prepare them for use as part of the initialisation process.

This provider currently only supports this case for individual VMs with the older, deprecated `azurerm_virtual_machine` resource. The new `azurerm_linux_virtual_machine` and `azurerm_windows_virtual_machine` resources instead opt to push users towards the separate `azurerm_virtual_machine_data_disk_attachment` resource, which only attaches data disks to an existing VM post-boot and therefore fails to service the use case laid out above. This is in contrast to the respective `*_scale_set` resources, which (albeit out of necessity) support this behaviour.

Please could a repeatable `data_disk` block be added to the new VM resources (analogous to the same block in their scale_set counterparts) in order to allow VMs to be started with managed data disks pre-attached. Thanks! 😁
New or Affected Resource(s)
- `azurerm_linux_virtual_machine`
- `azurerm_windows_virtual_machine`
Potential Terraform Configuration
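The original sample configuration wasn't captured here. As an illustration only, a repeatable data_disk block on the new VM resources could mirror the one already supported by the scale set resources; the syntax below is hypothetical and not currently accepted by the provider:

```hcl
resource "azurerm_linux_virtual_machine" "example" {
  # ... other VM arguments omitted ...

  # Hypothetical repeatable block, modelled on the data_disk block of
  # azurerm_linux_virtual_machine_scale_set; not currently supported here.
  data_disk {
    name                 = "example-data"
    caching              = "ReadWrite"
    create_option        = "Empty"
    disk_size_gb         = 100
    lun                  = 0
    storage_account_type = "StandardSSD_LRS"
  }
}
```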
References