
azure.azcollection 1.14.0 azure_rm_manageddisk failed to create disk : Error getting virtual machine debian12 - No value for given attribute #1394

Closed
fabriceverkor opened this issue Jan 5, 2024 · 11 comments · Fixed by #1407
Labels: has_pr (PR fixes have been made), high_priority (High priority), question (Further information is requested)

Comments

@fabriceverkor

SUMMARY

On a new Debian Bookworm host, azure.azcollection 1.14.0 azure_rm_manageddisk fails to create the disk: Error getting virtual machine debian12 - No value for given attribute
On an old Debian Bullseye host, the same code with azure.azcollection 1.13.0 azure_rm_manageddisk works fine.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

azure.azcollection.azure_rm_manageddisk

ANSIBLE VERSION
ansible [core 2.14.3]
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3/dist-packages/ansible
  ansible collection location = /etc/ansible/collections
  executable location = /usr/bin/ansible
  python version = 3.11.2 (main, Mar 13 2023, 12:18:29) [GCC 12.2.0] (/usr/bin/python3)
  jinja version = 3.1.2
  libyaml = True
COLLECTION VERSION
ansible-galaxy collection list azure.azcollection

# /etc/ansible/collections/ansible_collections
Collection         Version
------------------ -------
azure.azcollection 2.1.1

# /usr/lib/python3/dist-packages/ansible_collections
Collection         Version
------------------ -------
azure.azcollection 1.14.0

CONFIGURATION
CALLBACKS_ENABLED(/etc/ansible/ansible.cfg) = ['log_plays']
COLLECTIONS_PATHS(/etc/ansible/ansible.cfg) = ['/etc/ansible/collections']
CONFIG_FILE() = /etc/ansible/ansible.cfg
DEFAULT_BECOME(/etc/ansible/ansible.cfg) = True
DEFAULT_HOST_LIST(/etc/ansible/ansible.cfg) = ['/etc/ansible/inventory']
DEFAULT_PRIVATE_KEY_FILE(/etc/ansible/ansible.cfg) = /home/vking/.ssh/id_rsa
DEFAULT_REMOTE_USER(/etc/ansible/ansible.cfg) = vking
DEFAULT_ROLES_PATH(/etc/ansible/ansible.cfg) = ['/etc/ansible/roles']
DEFAULT_VAULT_PASSWORD_FILE(/etc/ansible/ansible.cfg) = /etc/ansible/.vault_pass
HOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = False
OS / ENVIRONMENT

Debian Bookworm
azure-cli 2.55.0-1~bullseye

STEPS TO REPRODUCE

ansible-playbook create_disk.yml

create_disk.yml :
      - name: Create managed disks
        azure.azcollection.azure_rm_manageddisk:
          profile: "myprofile"
          name: "debian12-data1"
          resource_group: "debian12-rg"
          lun: "1"
          disk_size_gb: "30"
          storage_account_type: "Standard_LRS"
          managed_by: "debian12"
          state: present
EXPECTED RESULTS

A new disk debian12-data1 should be created and attached to VM debian12.
It works from a Debian Bullseye host with azure.azcollection 1.13.0 and azure-cli 2.55.0-1~bookworm.

ACTUAL RESULTS

The disk is created but not attached to VM debian12, with the error below:
Error getting virtual machine debian12 - No value for given attribute

TASK [Create managed disks] **************************************************************************************************************
task path: /etc/ansible/playbooks/Azure/create_vm.yml:84
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c 'echo ~root && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir "` echo /root/.ansible/tmp/ansible-tmp-1704454723.8986018-86892-161874392700225 `" && echo ansible-tmp-1704454723.8986018-86892-161874392700225="` echo /root/.ansible/tmp/ansible-tmp-1704454723.8986018-86892-161874392700225 `" ) && sleep 0'
Using module file /etc/ansible/collections/ansible_collections/azure/azcollection/plugins/modules/azure_rm_manageddisk.py
<127.0.0.1> PUT /root/.ansible/tmp/ansible-local-86795u2pa7sep/tmp9ln0ubw0 TO /root/.ansible/tmp/ansible-tmp-1704454723.8986018-86892-161874392700225/AnsiballZ_azure_rm_manageddisk.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1704454723.8986018-86892-161874392700225/ /root/.ansible/tmp/ansible-tmp-1704454723.8986018-86892-161874392700225/AnsiballZ_azure_rm_manageddisk.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python3 /root/.ansible/tmp/ansible-tmp-1704454723.8986018-86892-161874392700225/AnsiballZ_azure_rm_manageddisk.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1704454723.8986018-86892-161874392700225/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
File "/tmp/ansible_azure.azcollection.azure_rm_manageddisk_payload_vcr1msn1/ansible_azure.azcollection.azure_rm_manageddisk_payload.zip/ansible_collections/azure/azcollection/plugins/modules/azure_rm_manageddisk.py", line 563, in _get_vm
File "/usr/local/lib/python3.11/dist-packages/azure/core/tracing/decorator.py", line 76, in wrapper_use_tracer
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/azure/mgmt/compute/v2021_04_01/operations/_virtual_machines_operations.py", line 1492, in get
request = build_get_request(
^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/azure/mgmt/compute/v2021_04_01/operations/_virtual_machines_operations.py", line 243, in build_get_request
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/msrest/serialization.py", line 652, in url
output = self.serialize_data(data, data_type, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/msrest/serialization.py", line 760, in serialize_data
raise ValueError("No value for given attribute")
failed: [localhost] (item={'key': 'data1', 'value': {'disk': '/dev/sdc', 'lun': 1, 'type': 'Standard_LRS', 'size': 20}}) => {
    "ansible_loop_var": "item",
    "changed": false,
    "invocation": {
        "module_args": {
            "ad_user": null,
            "adfs_authority_url": null,
            "api_profile": "latest",
            "append_tags": true,
            "attach_caching": null,
            "auth_source": "auto",
            "cert_validation_mode": null,
            "client_id": null,
            "cloud_environment": "AzureCloud",
            "create_option": null,
            "disk_size_gb": 20,
            "location": null,
            "log_mode": null,
            "log_path": null,
            "lun": 1,
            "managed_by": "debian12",
            "managed_by_extended": null,
            "max_shares": null,
            "name": "debian12-data1",
            "os_type": null,
            "password": null,
            "profile": "default",
            "resource_group": "debian12-rg",
            "secret": null,
            "source_uri": null,
            "state": "present",
            "storage_account_id": null,
            "storage_account_type": "Standard_LRS",
            "subscription_id": null,
            "tags": null,
            "tenant": null,
            "thumbprint": null,
            "x509_certificate_path": null,
            "zone": null
        }
    },
    "item": {
        "key": "data1",
        "value": {
            "disk": "/dev/sdc",
            "lun": 1,
            "size": 20,
            "type": "Standard_LRS"
        }
    },
    "msg": "Error getting virtual machine debian12 - No value for given attribute"
}
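For reference, the ValueError at the bottom of the traceback comes from msrest's URL serializer rejecting a None path parameter, which suggests the resource group name is missing when the module looks up the VM. A minimal sketch reproducing just the serializer behavior (assuming only that the msrest package is installed; this is not the module's actual call path):

# Reproduces "No value for given attribute": msrest's Serializer.url()
# raises ValueError when the value for a URL path parameter is None.
from msrest.serialization import Serializer

serializer = Serializer()
serializer.url("resource_group_name", None, "str")
# -> ValueError: No value for given attribute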

@Fred-sun
Collaborator

Fred-sun commented Jan 8, 2024

@fabriceverkor You are welcome to submit any problems you encounter, but from your error log and test logs, you know that an error occurred while attaching the disk to the VM, and an exception was displayed while getting the VM's status. Can you check that your VM exists and that the VM status is' success'? Thank you!
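For example, one way to check is with the Azure CLI (a hypothetical invocation; substitute your own resource group and VM name):

# Shows the VM's provisioning state; a healthy VM reports "Succeeded".
az vm show --resource-group debian12-rg --name debian12 --query provisioningState --output tsv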

@Fred-sun Fred-sun added the question (Further information is requested), medium_priority (Medium priority), and work in (In trying to solve, or in working with contributors) labels Jan 8, 2024
@fabriceverkor
Author

Yes, the VM is created before the disk; see the complete playbook below. The same playbook works fine on an Ansible server with azure.azcollection 1.13.0 and ansible-core 2.12.10.
---
- name: Create VM in Azure
  hosts: localhost
  gather_facts: false
  vars_files:
    - "/etc/ansible/vars/vault.yml"
    - "/etc/ansible/vars/azure.yml"
  collections:
    - azure.azcollection

  tasks:
    - block:
        - name: Create Azure Credential file
          ansible.builtin.include_tasks: "/etc/ansible/tasks/create_azure_credentials.yml"

    - name: Create resource group
      azure.azcollection.azure_rm_resourcegroup:
        profile: "default"
        name: "debian12-rg"
        location: "francecentral"

    - name: Create server interface
      azure.azcollection.azure_rm_networkinterface:
        profile: "default"
        name: "debian12-eth0"
        resource_group: "debian12-rg"
        virtual_network_name: "{{ my_virtual_network }}"
        subnet_name: "VMs_Infra"
        create_with_security_group: false
        ip_configurations:
          - name: ipconfig1
            private_ip_allocation_method: Static
            private_ip_address: "{{ myip }}"

    - name: Create VM with Public Image
      azure.azcollection.azure_rm_virtualmachine:
        os_type: Linux
        profile: "default"
        name: "debian12"
        resource_group: "debian12-rg"
        admin_username: vking
        admin_password: "{{ VAULT_ANSIBLE_PASSWORD }}"
        ssh_password_enabled: false
        ssh_public_keys:
          - path: /home/vking/.ssh/authorized_keys
            key_data: "{{ publicKey }}"
        vm_size: "Standard_B1ms"
        boot_diagnostics:
          enabled: yes
        image:
          offer: "debian-12"
          publisher: "debian"
          sku: "12-gen2"
          version: latest
        network_interface_names:
          - "debian12-eth0"
        managed_disk_type: Standard_LRS

    - name: Create managed disks
      azure.azcollection.azure_rm_manageddisk:
        profile: "default"
        name: "debian12-data1"
        resource_group: "debian12-rg"
        lun: "1"
        disk_size_gb: "30"
        storage_account_type: "Standard_LRS"
        managed_by: "debian12"
        state: present

The VM is properly created without error, as the output below shows; I could connect to it via SSH.

PLAY [Create VM in Azure] ***************************************************************************************************

TASK [Create Azure Credential file] *****************************************************************************************
included: /etc/ansible/tasks/create_azure_credentials.yml for localhost

TASK [Create /root/.azure directory] ****************************************************************************************
ok: [localhost]

TASK [Create Azure Credential file] *****************************************************************************************
ok: [localhost]

TASK [Create resource group] ************************************************************************************************
ok: [localhost]

TASK [Create server interface] **********************************************************************************************
ok: [localhost]

TASK [Create VM with Public Image] ******************************************************************************************
changed: [localhost]

TASK [Create managed disks] *************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Error getting virtual machine debian12 - No value for given attribute"}

@Fred-sun
Collaborator

Fred-sun commented Jan 9, 2024

@fabriceverkor I used the script you provided and did not encounter your error; the VM was created successfully and the disk was mounted to the virtual machine. What version of azure.azcollection are you using? The current version has been upgraded to v2.1.0; can you update to the latest version? Thank you!
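For example, a standard way to install a specific release with ansible-galaxy (version shown for illustration):

# Force-install the given collection version over any existing copy.
ansible-galaxy collection install azure.azcollection:2.1.1 --force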


    "state": {
        "create_option": "empty",
        "disk_size_gb": 30,
        "id": "/subscriptions/xxxxxx/resourceGroups/debian12-rg/providers/Microsoft.Compute/disks/debian12-data1",
        "location": "francecentral",
        "managed_by": "/subscriptions/xxxxxx/resourceGroups/debian12-rg/providers/Microsoft.Compute/virtualMachines/debian12",
        "managed_by_extended": null,
        "max_shares": null,
        "name": "debian12-data1",
        "os_type": null,
        "source_uri": null,
        "storage_account_type": "Standard_LRS",
        "tags": null,
        "zone": ""
    }

@fabriceverkor
Author

As you can see below, I have the latest versions.
I just deleted the VM and ran the playbook again. Same problem. Crazy.

ansible-galaxy collection list azure.azcollection

# /etc/ansible/collections/ansible_collections
Collection         Version
------------------ -------
azure.azcollection 2.1.1

# /usr/lib/python3/dist-packages/ansible_collections
Collection         Version
------------------ -------
azure.azcollection 1.14.0

@Fred-sun
Collaborator

@fabriceverkor However, it is strange that I did not encounter the problem you mention after repeated tests, whether with 1.13.0, 1.14.0, or 2.1.0. Are you able to rebuild an environment and install v2.1.0 to try? I'd like to rule out any environmental contamination of the test. Thank you very much!

@vijayreddiar

vijayreddiar commented Jan 11, 2024

@Fred-sun I am using the latest version 2.1.1, installed through project sync in AWX from collections/requirements.yml.

Downloading https://galaxy.ansible.com/api/v3/plugin/ansible/content/published/collections/artifacts/azure-azcollection-2.1.1.tar.gz to /var/lib/awx/projects/.__awx_cache/_33__*************/stage/tmp/ansible-local-91889cp0upcbz/tmp60tnb60i/azure-azcollection-2.1.1-0b28pe3e
Installing &apos;azure.azcollection:2.1.1&apos; to &apos;/var/lib/awx/projects/.__awx_cache/_33__***********/stage/requirements_collections/ansible_collections/azure/azcollection&apos;
azure.azcollection:2.1.1 was installed successfully

I am able to reproduce the same issue.

 "msg": "Error getting virtual machine ************** - No value for given attribute",
  "exception": "  File \"/tmp/ansible_azure.azcollection.azure_rm_manageddisk_payload_iy4wjljn/ansible_azure.azcollection.azure_rm_manageddisk_payload.zip/ansible_collections/azure/azcollection/plugins/modules/azure_rm_manageddisk.py\", line 563, in _get_vm\n  File \"/usr/local/lib/python3.9/site-packages/azure/core/tracing/decorator.py\", line 76, in wrapper_use_tracer\n    return func(*args, **kwargs)\n  File \"/usr/local/lib/python3.9/site-packages/azure/mgmt/compute/v2021_04_01/operations/_virtual_machines_operations.py\", line 1492, in get\n    request = build_get_request(\n  File \"/usr/local/lib/python3.9/site-packages/azure/mgmt/compute/v2021_04_01/operations/_virtual_machines_operations.py\", line 243, in build_get_request\n    \"resourceGroupName\": _SERIALIZER.url(\"resource_group_name\", resource_group_name, 'str'),\n  File \"/usr/local/lib/python3.9/site-packages/msrest/serialization.py\", line 652, in url\n    output = self.serialize_data(data, data_type, **kwargs)\n  File \"/usr/local/lib/python3.9/site-packages/msrest/serialization.py\", line 760, in serialize_data\n    raise ValueError(\"No value for given attribute\")\n",

The VM is created and running. The same code was working until 30 Nov, but it was not used for a while and I only noticed this issue today, so I am unsure in which version it stopped working.

@Fred-sun
Collaborator

@vijayreddiar Sorry for the late reply! This is very strange. I have run multiple tests but cannot reproduce the problem you're experiencing. I'm so sorry!

@vijayreddiar

vijayreddiar commented Jan 12, 2024

@Fred-sun It doesn't look like an environmental issue, because I was able to mount the disk manually from the Azure Portal. The playbook creates the disk but fails only when attempting to mount it.

PFB the playbook tasks.

- name: Create Managed Data disk
  azure.azcollection.azure_rm_manageddisk:
    auth_source: "{{ azure_auth_source }}"
    profile: "{{ azure_profile }}"
    resource_group: "{{ azure_resource_group }}"
    name: "{{ cloud_disk_name }}"
    disk_size_gb: "{{ cloud_disk_size }}"
    storage_account_type: "{{ cloud_disk_storage_type }}"
    tags: "{{ resource_tags }}"

- name: "Mount Managed Data disk to {{ cloud_vm_name }}"
  azure.azcollection.azure_rm_manageddisk:
    auth_source: "{{ azure_auth_source }}"
    profile: "{{ azure_profile }}"
    resource_group: "{{ azure_resource_group }}"
    name: "{{ cloud_disk_name }}"
    disk_size_gb: "{{ cloud_disk_size }}"
    managed_by: "{{ cloud_vm_name }}"
    storage_account_type: "{{ cloud_disk_storage_type }}"
    tags: "{{ resource_tags }}"

PFB the output of these specific tasks.

TASK [provision_ops_manage_clouddisk : Create Managed Data disk] ***************
task path: /runner/project/roles/provision_ops_manage_clouddisk/tasks/azure_managed_datadisk.yml:8
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: 1000
<localhost> EXEC /bin/sh -c 'echo ~1000 && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /runner/.ansible/tmp `"&& mkdir "` echo /runner/.ansible/tmp/ansible-tmp-1705041407.6009939-93-7281207224241 `" && echo ansible-tmp-1705041407.6009939-93-7281207224241="` echo /runner/.ansible/tmp/ansible-tmp-1705041407.6009939-93-7281207224241 `" ) && sleep 0'
Using module file /runner/requirements_collections/ansible_collections/azure/azcollection/plugins/modules/azure_rm_manageddisk.py
<localhost> PUT /runner/.ansible/tmp/ansible-local-17xwpcqpww/tmp8qsx1_av TO /runner/.ansible/tmp/ansible-tmp-1705041407.6009939-93-7281207224241/AnsiballZ_azure_rm_manageddisk.py
<localhost> EXEC /bin/sh -c 'chmod u+x /runner/.ansible/tmp/ansible-tmp-1705041407.6009939-93-7281207224241/ /runner/.ansible/tmp/ansible-tmp-1705041407.6009939-93-7281207224241/AnsiballZ_azure_rm_manageddisk.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/bin/python3 /runner/.ansible/tmp/ansible-tmp-1705041407.6009939-93-7281207224241/AnsiballZ_azure_rm_manageddisk.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /runner/.ansible/tmp/ansible-tmp-1705041407.6009939-93-7281207224241/ > /dev/null 2>&1 && sleep 0'
changed: [localhost] => {
    "changed": true,
    "invocation": {
        "module_args": {
            "ad_user": null,
            "adfs_authority_url": null,
            "api_profile": "latest",
            "append_tags": true,
            "attach_caching": null,
            "auth_source": "env",
            "cert_validation_mode": null,
            "client_id": null,
            "cloud_environment": "AzureCloud",
            "create_option": null,
            "disk_size_gb": 256,
            "location": null,
            "log_mode": null,
            "log_path": null,
            "lun": null,
            "managed_by": null,
            "managed_by_extended": null,
            "max_shares": null,
            "name": "pocrxsandweb01-datadisk3",
            "os_type": null,
            "password": null,
            "profile": "default",
            "resource_group": "maps_sandbox",
            "secret": null,
            "source_uri": null,
            "state": "present",
            "storage_account_id": null,
            "storage_account_type": "StandardSSD_LRS",
            "subscription_id": null,
            "tags": {
                "Environment": "sandbox"
            },
            "tenant": null,
            "thumbprint": null,
            "x509_certificate_path": null,
            "zone": null
        }
    },
    "state": {
        "create_option": "empty",
        "disk_size_gb": 256,
        "id": "/subscriptions/****************************************************************/resourceGroups/maps_sandbox/providers/Microsoft.Compute/disks/pocrxsandweb01-datadisk3",
        "location": "eastus",
        "managed_by": null,
        "managed_by_extended": null,
        "max_shares": null,
        "name": "pocrxsandweb01-datadisk3",
        "os_type": null,
        "source_uri": null,
        "storage_account_type": "StandardSSD_LRS",
        "tags": {
            "Environment": "sandbox"
        },
        "zone": ""
    }
}

TASK [provision_ops_manage_clouddisk : Mount Managed Data disk to pocrxsandweb01] ***
task path: /runner/project/roles/provision_ops_manage_clouddisk/tasks/azure_managed_datadisk.yml:24
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: 1000
<localhost> EXEC /bin/sh -c 'echo ~1000 && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /runner/.ansible/tmp `"&& mkdir "` echo /runner/.ansible/tmp/ansible-tmp-1705041421.7415237-111-107126472830537 `" && echo ansible-tmp-1705041421.7415237-111-107126472830537="` echo /runner/.ansible/tmp/ansible-tmp-1705041421.7415237-111-107126472830537 `" ) && sleep 0'
Using module file /runner/requirements_collections/ansible_collections/azure/azcollection/plugins/modules/azure_rm_manageddisk.py
<localhost> PUT /runner/.ansible/tmp/ansible-local-17xwpcqpww/tmp5aav5ubq TO /runner/.ansible/tmp/ansible-tmp-1705041421.7415237-111-107126472830537/AnsiballZ_azure_rm_manageddisk.py
<localhost> EXEC /bin/sh -c 'chmod u+x /runner/.ansible/tmp/ansible-tmp-1705041421.7415237-111-107126472830537/ /runner/.ansible/tmp/ansible-tmp-1705041421.7415237-111-107126472830537/AnsiballZ_azure_rm_manageddisk.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/bin/python3 /runner/.ansible/tmp/ansible-tmp-1705041421.7415237-111-107126472830537/AnsiballZ_azure_rm_manageddisk.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /runner/.ansible/tmp/ansible-tmp-1705041421.7415237-111-107126472830537/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
  File "/tmp/ansible_azure.azcollection.azure_rm_manageddisk_payload_wlig4i0d/ansible_azure.azcollection.azure_rm_manageddisk_payload.zip/ansible_collections/azure/azcollection/plugins/modules/azure_rm_manageddisk.py", line 563, in _get_vm
  File "/usr/local/lib/python3.9/site-packages/azure/core/tracing/decorator.py", line 76, in wrapper_use_tracer
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/azure/mgmt/compute/v2021_04_01/operations/_virtual_machines_operations.py", line 1492, in get
    request = build_get_request(
  File "/usr/local/lib/python3.9/site-packages/azure/mgmt/compute/v2021_04_01/operations/_virtual_machines_operations.py", line 243, in build_get_request
    "resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
  File "/usr/local/lib/python3.9/site-packages/msrest/serialization.py", line 652, in url
    output = self.serialize_data(data, data_type, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/msrest/serialization.py", line 760, in serialize_data
    raise ValueError("No value for given attribute")
fatal: [localhost]: FAILED! => {
    "changed": false,
    "invocation": {
        "module_args": {
            "ad_user": null,
            "adfs_authority_url": null,
            "api_profile": "latest",
            "append_tags": true,
            "attach_caching": null,
            "auth_source": "env",
            "cert_validation_mode": null,
            "client_id": null,
            "cloud_environment": "AzureCloud",
            "create_option": null,
            "disk_size_gb": 256,
            "location": null,
            "log_mode": null,
            "log_path": null,
            "lun": null,
            "managed_by": "pocrxsandweb01",
            "managed_by_extended": null,
            "max_shares": null,
            "name": "pocrxsandweb01-datadisk3",
            "os_type": null,
            "password": null,
            "profile": "default",
            "resource_group": "maps_sandbox",
            "secret": null,
            "source_uri": null,
            "state": "present",
            "storage_account_id": null,
            "storage_account_type": "StandardSSD_LRS",
            "subscription_id": null,
            "tags": {
                "Environment": "sandbox"
            },
            "tenant": null,
            "thumbprint": null,
            "x509_certificate_path": null,
            "zone": null
        }
    },
    "msg": "Error getting virtual machine pocrxsandweb01 - No value for given attribute"
}

I am going to test the same scenario using an older version of azure.azcollection.

@vijayreddiar

@Fred-sun I am able to attach the disk using azure.azcollection v2.0.0, and it fails with v2.1.0 and v2.1.1. So it is surely a bug introduced in v2.1.0, though I'm unsure why it is not reproducible on your end.

Can this be related to #1201?

PFB the output of successful attachment of disk using azure.azcollection v2.0.0

TASK [provision_ops_manage_clouddisk : Mount Managed Data disk to pocrxsandweb01] ***
task path: /runner/project/roles/provision_ops_manage_clouddisk/tasks/azure_managed_datadisk.yml:24
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: 1000
<localhost> EXEC /bin/sh -c 'echo ~1000 && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /runner/.ansible/tmp `"&& mkdir "` echo /runner/.ansible/tmp/ansible-tmp-1705044497.5873828-110-52092295044329 `" && echo ansible-tmp-1705044497.5873828-110-52092295044329="` echo /runner/.ansible/tmp/ansible-tmp-1705044497.5873828-110-52092295044329 `" ) && sleep 0'
Using module file /runner/requirements_collections/ansible_collections/azure/azcollection/plugins/modules/azure_rm_manageddisk.py
<localhost> PUT /runner/.ansible/tmp/ansible-local-17dwookzj_/tmpqe3mfypt TO /runner/.ansible/tmp/ansible-tmp-1705044497.5873828-110-52092295044329/AnsiballZ_azure_rm_manageddisk.py
<localhost> EXEC /bin/sh -c 'chmod u+x /runner/.ansible/tmp/ansible-tmp-1705044497.5873828-110-52092295044329/ /runner/.ansible/tmp/ansible-tmp-1705044497.5873828-110-52092295044329/AnsiballZ_azure_rm_manageddisk.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/bin/python3 /runner/.ansible/tmp/ansible-tmp-1705044497.5873828-110-52092295044329/AnsiballZ_azure_rm_manageddisk.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /runner/.ansible/tmp/ansible-tmp-1705044497.5873828-110-52092295044329/ > /dev/null 2>&1 && sleep 0'
changed: [localhost] => {
    "changed": true,
    "invocation": {
        "module_args": {
            "ad_user": null,
            "adfs_authority_url": null,
            "api_profile": "latest",
            "append_tags": true,
            "attach_caching": null,
            "auth_source": "env",
            "cert_validation_mode": null,
            "client_id": null,
            "cloud_environment": "AzureCloud",
            "create_option": null,
            "disk_size_gb": 256,
            "location": null,
            "log_mode": null,
            "log_path": null,
            "lun": null,
            "managed_by": "pocrxsandweb01",
            "managed_by_extended": null,
            "max_shares": null,
            "name": "pocrxsandweb01-datadisk3",
            "os_type": null,
            "password": null,
            "profile": "default",
            "resource_group": "maps_sandbox",
            "secret": null,
            "source_uri": null,
            "state": "present",
            "storage_account_id": null,
            "storage_account_type": "StandardSSD_LRS",
            "subscription_id": null,
            "tags": {
                "Environment": "sandbox"
            },
            "tenant": null,
            "thumbprint": null,
            "x509_certificate_path": null,
            "zone": null
        }
    },
    "state": {
        "create_option": "empty",
        "disk_size_gb": 256,
        "id": "/subscriptions/***************************************/resourceGroups/maps_sandbox/providers/Microsoft.Compute/disks/pocrxsandweb01-datadisk3",
        "location": "eastus",
        "managed_by": "/subscriptions/***************************************/resourceGroups/maps_sandbox/providers/Microsoft.Compute/virtualMachines/pocrxsandweb01",
        "managed_by_extended": null,
        "max_shares": null,
        "name": "pocrxsandweb01-datadisk3",
        "os_type": null,
        "source_uri": null,
        "storage_account_type": "StandardSSD_LRS",
        "tags": {
            "Environment": "sandbox"
        },
        "zone": ""
    }
}
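In the meantime, a possible workaround is to pin the collection to v2.0.0 in collections/requirements.yml until a fixed release is available (a sketch, assuming the AWX project sync setup mentioned above):

collections:
  - name: azure.azcollection
    version: 2.0.0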

@Fred-sun
Collaborator

@vijayreddiar Thanks for your feedback! I have found the cause of the issue and am trying to resolve it!

@Fred-sun
Collaborator

@vijayreddiar Fixed by #1407

@Fred-sun Fred-sun added the has_pr (PR fixes have been made) and high_priority (High priority) labels and removed the work in (In trying to solve, or in working with contributors) and medium_priority (Medium priority) labels Jan 16, 2024