Volatile disks fail to backup, SAVE property is not included #6505

Closed · 3 tasks
nachowork90 opened this issue Feb 13, 2024 · 9 comments

nachowork90 commented Feb 13, 2024

Description
During the backup of a VM that includes volatile disks, backup_qcow2.rb fails with the following error:

# --------------------------------------
# Create backup live
# --------------------------------------
/var/tmp/one/tm/lib/backup_qcow2.rb -l -d "0:2:" -x /var/lib/one/datastores/100/87/backup/vm.xml -p /var/lib/one//datastores/0/87

Error: undefined method `text' for nil:NilClass
Error preparing disk files: undefined method `text' for nil:NilClass

To Reproduce
Create a new VM, attach a new volatile disk, select the option to include volatile disks in the backup, and run a backup.

Expected behavior
The backup is created successfully and no error is thrown.

Details

  • Affected Component: [Storage]
  • Hypervisor: [KVM]
  • Version: [6.8.1]

Additional context
In backup_qcow2.rb there is a check on the following lines:

730  ----> per = d.elements['SAVE'].text.casecmp('YES') == 0
802  ----> per = d.elements['SAVE'].text.casecmp('YES') == 0
945  ----> per = d.elements['SAVE'].text.casecmp('YES') == 0
1031 ----> per = d.elements['SAVE'].text.casecmp('YES') == 0

If we check the template of a volatile disk, there is no such attribute (the "SAVE" attribute is missing); see the sketch after the template below:

DISK = [
  ALLOW_ORPHANS = "FORMAT",
  CLUSTER_ID = "0,100",
  DATASTORE = "system",
  DATASTORE_ID = "0",
  DEV_PREFIX = "vd",
  DISK_ID = "2",
  DISK_TYPE = "FILE",
  DRIVER = "qcow2",
  FORMAT = "qcow2",
  FS = "ext4",
  SIZE = "1024",
  TARGET = "vdb",
  TM_MAD = "shared",
  TM_MAD_SYSTEM = "shared",
  TYPE = "fs" ]

Progress Status

  • Code committed
  • Testing - QA
  • Documentation (Release notes - resolved issues, compatibility, known issues)
@rsmontero (Member)

@nachowork90 (Author)

Sorry, I didn't include the full template, but yes, BACKUP_VOLATILE is set to YES.

root@orchestrator1:~# onevm show 86 -y
---
VM:
  ID: '86'
  UID: '0'
  GID: '0'
  UNAME: oneadmin
  GNAME: oneadmin
  NAME: test-dr
  PERMISSIONS:
    OWNER_U: '1'
    OWNER_M: '1'
    OWNER_A: '0'
    GROUP_U: '0'
    GROUP_M: '0'
    GROUP_A: '0'
    OTHER_U: '0'
    OTHER_M: '0'
    OTHER_A: '0'
  LAST_POLL: '1708023803'
  STATE: '3'
  LCM_STATE: '3'
  PREV_STATE: '3'
  PREV_LCM_STATE: '3'
  RESCHED: '0'
  STIME: '1707851420'
  ETIME: '0'
  DEPLOY_ID: 6bea7b2d-760e-4a9b-b3ab-0630234d1ece
  MONITORING:
    CPU: '0.0'
    DISKRDBYTES: '46257014'
    DISKRDIOPS: '1562'
    DISKWRBYTES: '20205568'
    DISKWRIOPS: '15019'
    DISK_SIZE:
    - ID: '0'
      SIZE: '7'
    - ID: '1'
      SIZE: '1'
    - ID: '2'
      SIZE: '2'
    ID: '86'
    MEMORY: '176636'
    NETRX: '0'
    NETTX: '0'
    TIMESTAMP: '1708023803'
  SCHED_ACTIONS: {}
  TEMPLATE:
    AUTOMATIC_DS_REQUIREMENTS: ("CLUSTERS/ID" @> 0 | "CLUSTERS/ID" @> 100)
    AUTOMATIC_NIC_REQUIREMENTS: ("CLUSTERS/ID" @> 0 | "CLUSTERS/ID" @> 100)
    AUTOMATIC_REQUIREMENTS: "(CLUSTER_ID = 0 | CLUSTER_ID = 100) & !(PUBLIC_CLOUD
      = YES) & !(PIN_POLICY = PINNED)"
    CONTEXT:
      DISK_ID: '1'
      NETWORK: 'YES'
      SSH_PUBLIC_KEY: ''
      TARGET: hda
    CPU: '1'
    DISK:
    - ALLOW_ORPHANS: FORMAT
      CLONE: 'YES'
      CLONE_TARGET: SYSTEM
      CLUSTER_ID: '0,100'
      DATASTORE: images
      DATASTORE_ID: '1'
      DEV_PREFIX: vd
      DISK_ID: '0'
      DISK_SNAPSHOT_TOTAL_SIZE: '0'
      DISK_TYPE: FILE
      DRIVER: qcow2
      FORMAT: qcow2
      IMAGE: alpine_3.18_KVM ECASA
      IMAGE_ID: '160'
      IMAGE_STATE: '2'
      LN_TARGET: NONE
      ORIGINAL_SIZE: '5120'
      READONLY: 'NO'
      SAVE: 'NO'
      SIZE: '5120'
      SOURCE: "/var/lib/one//datastores/1/68c315adc3356bf3f959454607f6f97e"
      TARGET: vda
      TM_MAD: shared
      TYPE: FILE
    - ALLOW_ORPHANS: FORMAT
      CLUSTER_ID: '0,100'
      DATASTORE: system
      DATASTORE_ID: '0'
      DEV_PREFIX: vd
      DISK_ID: '2'
      DISK_TYPE: FILE
      DRIVER: qcow2
      FORMAT: qcow2
      FS: ext4
      SIZE: '1024'
      TARGET: vdb
      TM_MAD: shared
      TM_MAD_SYSTEM: shared
      TYPE: fs
    GRAPHICS:
      LISTEN: 0.0.0.0
      PORT: '5986'
      TYPE: VNC
    MEMORY: '128'
    NIC_DEFAULT:
      MODEL: virtio
    OS:
      ARCH: x86_64
      UUID: 6bea7b2d-760e-4a9b-b3ab-0630234d1ece
    TEMPLATE_ID: '51'
    TM_MAD_SYSTEM: shared
    VCPU: '1'
    VMID: '86'
  USER_TEMPLATE:
    DESCRIPTION: Alpine Linux 3.18 image for KVM, LXD and vCenter
    LOGO: images/logos/linux.png
    LXD_SECURITY_PRIVILEGED: 'true'
  HISTORY_RECORDS:
    HISTORY:
    - OID: '86'
      SEQ: '0'
      HOSTNAME: hv-dev-n2-kvm
      HID: '1'
      CID: '100'
      STIME: '1707851421'
      ETIME: '1707851449'
      VM_MAD: kvm
      TM_MAD: shared
      DS_ID: '0'
      PSTIME: '1707851421'
      PETIME: '1707851423'
      RSTIME: '1707851423'
      RETIME: '1707851449'
      ESTIME: '0'
      EETIME: '0'
      ACTION: '21'
      UID: '0'
      GID: '0'
      REQUEST_ID: '5680'
    - OID: '86'
      SEQ: '1'
      HOSTNAME: hv-dev-n2-kvm
      HID: '1'
      CID: '100'
      STIME: '1707851449'
      ETIME: '1707851486'
      VM_MAD: kvm
      TM_MAD: shared
      DS_ID: '0'
      PSTIME: '0'
      PETIME: '0'
      RSTIME: '1707851449'
      RETIME: '1707851486'
      ESTIME: '0'
      EETIME: '0'
      ACTION: '50'
      UID: '0'
      GID: '0'
      REQUEST_ID: '1248'
    - OID: '86'
      SEQ: '2'
      HOSTNAME: hv-dev-n2-kvm
      HID: '1'
      CID: '100'
      STIME: '1707851486'
      ETIME: '1707858321'
      VM_MAD: kvm
      TM_MAD: shared
      DS_ID: '0'
      PSTIME: '0'
      PETIME: '0'
      RSTIME: '1707851486'
      RETIME: '1707858321'
      ESTIME: '0'
      EETIME: '0'
      ACTION: '50'
      UID: '0'
      GID: '0'
      REQUEST_ID: '8000'
    - OID: '86'
      SEQ: '3'
      HOSTNAME: hv-dev-n2-kvm
      HID: '1'
      CID: '100'
      STIME: '1707858321'
      ETIME: '0'
      VM_MAD: kvm
      TM_MAD: shared
      DS_ID: '0'
      PSTIME: '0'
      PETIME: '0'
      RSTIME: '1707858321'
      RETIME: '0'
      ESTIME: '0'
      EETIME: '0'
      ACTION: '0'
      UID: "-1"
      GID: "-1"
      REQUEST_ID: "-1"
  BACKUPS:
    BACKUP_CONFIG:
      BACKUP_VOLATILE: 'YES'
      FS_FREEZE: AGENT
      INCREMENTAL_BACKUP_ID: '172'
      INCREMENT_MODE: CBT
      KEEP_LAST: '4'
      LAST_INCREMENT_ID: '0'
      MODE: INCREMENT
    BACKUP_IDS:
      ID: '172'

nachowork90 (Author) commented Feb 15, 2024

I have run the test manually with a completely new VM, with the same results.

oneadmin@hv-dev-n1:~$ /var/tmp/one/tm/lib/backup_qcow2.rb -l -d 0:2: -x /var/lib/one/datastores/100/97/backup/vm.xml -p /var/lib/one//datastores/0/97
undefined method `text' for nil:NilClass
oneadmin@hv-dev-n1:~$ tail /var/log/one/backup_qcow2.log

11:37:21.831 [CMD]: virsh --connect qemu:///system checkpoint-list --name 051d250c-1567-4841-bdcb-55e318aa2999
11:37:21.861 [CMD]: DONE
11:37:21.862 [CMD]: qemu-img info --output json --force-share /var/lib/one//datastores/0/97/disk.0
11:37:21.938 [CMD]: DONE
oneadmin@hv-dev-n1:~$ ruby --version
ruby 3.0.2p107 (2021-07-07 revision 0db68f0233) [x86_64-linux-gnu]
oneadmin@hv-dev-n1:~$

The first disk image is processed, but the exception does not let the backup continue.
I'm using Ubuntu 22.04 Server.

Franco-Sparrow commented Feb 17, 2024

I would also like to add that the saveas action can't be performed on a volatile disk. Is this related to the missing "SAVE" property that @nachowork90 mentioned above?

rsmontero (Member)

> I would also like to add that the saveas action can't be performed on a volatile disk. Is this related to the missing "SAVE" property that @nachowork90 mentioned above?

No, this is by design to simplify operations (mainly disk provisioning). If you need to persist or copy the data, you need to use a regular disk.

rsmontero (Member)

About SAVE: this attribute marks the disk as persistent; it is not used to determine which disks need to be backed up. In the backup logic it is checked because persistent and non-persistent disks are handled differently.

nachowork90 (Author)

Team, can we loop back on this?

rsmontero (Member)

We'll look into it during the 6.8.3 release cycle; no more updates so far.

nachowork90 (Author)

Hi @rsmontero, the solution in our code was to replace the line:

per = d.elements['SAVE'].text.casecmp('YES') == 0

with:

per = d.elements['SAVE'].nil? ? false : d.elements['SAVE'].text.casecmp('YES') == 0

in all its occurrences inside the backup_qcow2.rb lib!
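
For reference, an equivalent guard (a sketch, not the upstream patch) can use Ruby's safe navigation operator: when the SAVE element is absent, &. short-circuits to nil and nil == 0 evaluates to false, so the volatile disk is treated as non-persistent.

# Equivalent nil-safe check; `d` is the REXML disk element as in backup_qcow2.rb.
per = d.elements['SAVE']&.text&.casecmp('YES') == 0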

@tinova tinova modified the milestones: Release 6.8.3, Release 6.8.4 Apr 22, 2024
@rsmontero rsmontero self-assigned this Jun 4, 2024
rsmontero pushed a commit that referenced this issue Sep 4, 2024
* F #6505: Fix volatile disk backup and restore

* F #6578: Skip backup of CDROM
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment