Volatile disks fail to backup, SAVE property is not included #6505
Comments
Just to double check, is BACKUP_VOLATILE set to YES?
Sorry, I didn't include the whole template, but yes, BACKUP_VOLATILE is set to YES.
root@orchestrator1:~# onevm show 86 -y
---
VM:
ID: '86'
UID: '0'
GID: '0'
UNAME: oneadmin
GNAME: oneadmin
NAME: test-dr
PERMISSIONS:
OWNER_U: '1'
OWNER_M: '1'
OWNER_A: '0'
GROUP_U: '0'
GROUP_M: '0'
GROUP_A: '0'
OTHER_U: '0'
OTHER_M: '0'
OTHER_A: '0'
LAST_POLL: '1708023803'
STATE: '3'
LCM_STATE: '3'
PREV_STATE: '3'
PREV_LCM_STATE: '3'
RESCHED: '0'
STIME: '1707851420'
ETIME: '0'
DEPLOY_ID: 6bea7b2d-760e-4a9b-b3ab-0630234d1ece
MONITORING:
CPU: '0.0'
DISKRDBYTES: '46257014'
DISKRDIOPS: '1562'
DISKWRBYTES: '20205568'
DISKWRIOPS: '15019'
DISK_SIZE:
- ID: '0'
SIZE: '7'
- ID: '1'
SIZE: '1'
- ID: '2'
SIZE: '2'
ID: '86'
MEMORY: '176636'
NETRX: '0'
NETTX: '0'
TIMESTAMP: '1708023803'
SCHED_ACTIONS: {}
TEMPLATE:
AUTOMATIC_DS_REQUIREMENTS: ("CLUSTERS/ID" @> 0 | "CLUSTERS/ID" @> 100)
AUTOMATIC_NIC_REQUIREMENTS: ("CLUSTERS/ID" @> 0 | "CLUSTERS/ID" @> 100)
AUTOMATIC_REQUIREMENTS: "(CLUSTER_ID = 0 | CLUSTER_ID = 100) & !(PUBLIC_CLOUD
= YES) & !(PIN_POLICY = PINNED)"
CONTEXT:
DISK_ID: '1'
NETWORK: 'YES'
SSH_PUBLIC_KEY: ''
TARGET: hda
CPU: '1'
DISK:
- ALLOW_ORPHANS: FORMAT
CLONE: 'YES'
CLONE_TARGET: SYSTEM
CLUSTER_ID: '0,100'
DATASTORE: images
DATASTORE_ID: '1'
DEV_PREFIX: vd
DISK_ID: '0'
DISK_SNAPSHOT_TOTAL_SIZE: '0'
DISK_TYPE: FILE
DRIVER: qcow2
FORMAT: qcow2
IMAGE: alpine_3.18_KVM ECASA
IMAGE_ID: '160'
IMAGE_STATE: '2'
LN_TARGET: NONE
ORIGINAL_SIZE: '5120'
READONLY: 'NO'
SAVE: 'NO'
SIZE: '5120'
SOURCE: "/var/lib/one//datastores/1/68c315adc3356bf3f959454607f6f97e"
TARGET: vda
TM_MAD: shared
TYPE: FILE
- ALLOW_ORPHANS: FORMAT
CLUSTER_ID: '0,100'
DATASTORE: system
DATASTORE_ID: '0'
DEV_PREFIX: vd
DISK_ID: '2'
DISK_TYPE: FILE
DRIVER: qcow2
FORMAT: qcow2
FS: ext4
SIZE: '1024'
TARGET: vdb
TM_MAD: shared
TM_MAD_SYSTEM: shared
TYPE: fs
GRAPHICS:
LISTEN: 0.0.0.0
PORT: '5986'
TYPE: VNC
MEMORY: '128'
NIC_DEFAULT:
MODEL: virtio
OS:
ARCH: x86_64
UUID: 6bea7b2d-760e-4a9b-b3ab-0630234d1ece
TEMPLATE_ID: '51'
TM_MAD_SYSTEM: shared
VCPU: '1'
VMID: '86'
USER_TEMPLATE:
DESCRIPTION: Alpine Linux 3.18 image for KVM, LXD and vCenter
LOGO: images/logos/linux.png
LXD_SECURITY_PRIVILEGED: 'true'
HISTORY_RECORDS:
HISTORY:
- OID: '86'
SEQ: '0'
HOSTNAME: hv-dev-n2-kvm
HID: '1'
CID: '100'
STIME: '1707851421'
ETIME: '1707851449'
VM_MAD: kvm
TM_MAD: shared
DS_ID: '0'
PSTIME: '1707851421'
PETIME: '1707851423'
RSTIME: '1707851423'
RETIME: '1707851449'
ESTIME: '0'
EETIME: '0'
ACTION: '21'
UID: '0'
GID: '0'
REQUEST_ID: '5680'
- OID: '86'
SEQ: '1'
HOSTNAME: hv-dev-n2-kvm
HID: '1'
CID: '100'
STIME: '1707851449'
ETIME: '1707851486'
VM_MAD: kvm
TM_MAD: shared
DS_ID: '0'
PSTIME: '0'
PETIME: '0'
RSTIME: '1707851449'
RETIME: '1707851486'
ESTIME: '0'
EETIME: '0'
ACTION: '50'
UID: '0'
GID: '0'
REQUEST_ID: '1248'
- OID: '86'
SEQ: '2'
HOSTNAME: hv-dev-n2-kvm
HID: '1'
CID: '100'
STIME: '1707851486'
ETIME: '1707858321'
VM_MAD: kvm
TM_MAD: shared
DS_ID: '0'
PSTIME: '0'
PETIME: '0'
RSTIME: '1707851486'
RETIME: '1707858321'
ESTIME: '0'
EETIME: '0'
ACTION: '50'
UID: '0'
GID: '0'
REQUEST_ID: '8000'
- OID: '86'
SEQ: '3'
HOSTNAME: hv-dev-n2-kvm
HID: '1'
CID: '100'
STIME: '1707858321'
ETIME: '0'
VM_MAD: kvm
TM_MAD: shared
DS_ID: '0'
PSTIME: '0'
PETIME: '0'
RSTIME: '1707858321'
RETIME: '0'
ESTIME: '0'
EETIME: '0'
ACTION: '0'
UID: "-1"
GID: "-1"
REQUEST_ID: "-1"
BACKUPS:
BACKUP_CONFIG:
BACKUP_VOLATILE: 'YES'
FS_FREEZE: AGENT
INCREMENTAL_BACKUP_ID: '172'
INCREMENT_MODE: CBT
KEEP_LAST: '4'
LAST_INCREMENT_ID: '0'
MODE: INCREMENT
BACKUP_IDS:
ID: '172'
I have done the test manually with a completely new VM and got the same results.
oneadmin@hv-dev-n1:~$ /var/tmp/one/tm/lib/backup_qcow2.rb -l -d 0:2: -x /var/lib/one/datastores/100/97/backup/vm.xml -p /var/lib/one//datastores/0/97
undefined method `text' for nil:NilClass
oneadmin@hv-dev-n1:~$ tail /var/log/one/backup_qcow2.log
11:37:21.831 [CMD]: virsh --connect qemu:///system checkpoint-list --name 051d250c-1567-4841-bdcb-55e318aa2999
11:37:21.861 [CMD]: DONE
11:37:21.862 [CMD]: qemu-img info --output json --force-share /var/lib/one//datastores/0/97/disk.0
11:37:21.938 [CMD]: DONE
oneadmin@hv-dev-n1:~$ ruby --version
ruby 3.0.2p107 (2021-07-07 revision 0db68f0233) [x86_64-linux-gnu]
The first disk image is backed up, but the exception does not let the process continue.
I would also like to add, that
No, this is by design, to simplify operations (mainly disk provisioning). If you need to persist or copy data, you need to use a regular disk.
About
Team, can we loop back on this?
We'll look into it during the 6.8.3 release cycle; no more updates so far.
Hi @rsmontero, the solution in our code was replacing the line:

```ruby
per = d.elements['SAVE'].text.casecmp('YES') == 0
```

with:

```ruby
per = d.elements['SAVE'].nil? ? false : d.elements['SAVE'].text.casecmp('YES') == 0
```

in all occurrences inside the backup_qcow2.rb lib.
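The guarded check can be exercised in isolation with REXML. This is a minimal sketch, not the actual backup_qcow2.rb code: the helper name `save_disk?` and the DISK fragments are made up, but the element names follow the onevm XML shown above.

```ruby
require 'rexml/document'

# Guarded version of the check: a missing SAVE element is treated
# as SAVE=NO instead of raising NoMethodError on nil.
def save_disk?(d)
  d.elements['SAVE'].nil? ? false : d.elements['SAVE'].text.casecmp('YES') == 0
end

regular  = REXML::Document.new('<DISK><SAVE>NO</SAVE></DISK>').root
volatile = REXML::Document.new('<DISK><TYPE>fs</TYPE></DISK>').root

puts save_disk?(regular)   # false
puts save_disk?(volatile)  # false, instead of raising NoMethodError
```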
Description
During the backup of a VM that includes volatile disks, backup_qcow2.rb fails with the following error: undefined method `text' for nil:NilClass
To Reproduce
Create a new VM and attach a new volatile disk, select the option to include volatile disks in the backup, and run a backup.
Expected behavior
The backup is created successfully and no error is thrown.
Details
Additional context
In the backup_qcow2.rb library there is a check on the lines of `per = d.elements['SAVE'].text.casecmp('YES') == 0`.
If we check the template of a volatile disk, there is no such attribute (the SAVE attribute is missing).
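The failure mode can be reproduced in a few lines with REXML. The DISK fragment below is a hypothetical, trimmed-down version of the volatile disk (TYPE: fs, no SAVE element) from the onevm output above:

```ruby
require 'rexml/document'

# A volatile disk fragment: unlike a regular image disk, it carries
# no SAVE element, so elements['SAVE'] returns nil.
volatile = REXML::Document.new('<DISK><DISK_ID>2</DISK_ID><TYPE>fs</TYPE></DISK>').root

begin
  # Unguarded pattern: calling .text on nil raises NoMethodError,
  # which is the "undefined method `text' for nil:NilClass" error.
  volatile.elements['SAVE'].text.casecmp('YES') == 0
rescue NoMethodError => e
  puts e.class # NoMethodError
end
```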