Issue description

I reinstalled the OS and Incus on a stand-alone physical server after disk corruption (a failed RAID member) and used incus admin recover to get everything back from the other arrays still in the same server.
I think I've noticed a potential situation where this tool will not be able to restore if two or more instances cross-reference storage volumes on each other's pools. During recovery, the import process cannot handle such dependencies (understandably, since the referenced pool has not been recovered yet), so recovery might be difficult, e.g.:
Error: Failed import request: Failed creating instance "instance01" record in project "default": Failed creating instance record: Failed initialising instance: Failed add validation for device "instance01_disk00": Failed to get storage pool "storagepool02": Storage pool not found
In my case this was not exactly that scenario, so I just needed to restore the pools in a specific order.
Information to attach
Any relevant kernel output (dmesg)
Container log (incus info NAME --show-log)
Container configuration (incus config show NAME --expanded)
Main daemon log (at /var/log/incus/incusd.log)
[Y] Output of the client with --debug
Output of the daemon with --debug (alternatively output of incus monitor --pretty while reproducing the issue)
root@v1:~# incus admin recover
This server currently has the following storage pools:
Would you like to recover another storage pool? (yes/no) [default=no]: yes
Name of the storage pool: vol1
Name of the storage backend (dir, zfs): zfs
Source of the storage pool (block device, volume group, dataset, path, ... as applicable): vol1
Additional storage pool configuration property (KEY=VALUE, empty when done):
Would you like to recover another storage pool? (yes/no) [default=no]: yes
Name of the storage pool: vol2
Name of the storage backend (dir, zfs): zfs
Source of the storage pool (block device, volume group, dataset, path, ... as applicable): vol2
Additional storage pool configuration property (KEY=VALUE, empty when done):
Would you like to recover another storage pool? (yes/no) [default=no]:
The recovery process will be scanning the following storage pools:
- NEW: "vol1" (backend="zfs", source="vol1")
- NEW: "vol2" (backend="zfs", source="vol2")
Would you like to continue with scanning for lost volumes? (yes/no) [default=yes]:
Scanning for unknown volumes...
The following unknown storage pools have been found:
- Storage pool "vol1" of type "zfs"
- Storage pool "vol2" of type "zfs"
The following unknown volumes have been found:
- Container "a1" on pool "vol1" in project "default" (includes 0 snapshots)
- Volume "bar" on pool "vol2" in project "default" (includes 0 snapshots)
- Volume "foo" on pool "vol2" in project "default" (includes 0 snapshots)
Would you like those to be recovered? (yes/no) [default=no]: yes
Starting recovery...
Error: Failed import request: Failed creating instance "a1" record in project "default": Failed creating instance record: Failed initializing instance: Failed add validation for device "bar": Failed to get storage pool "vol2": Storage pool not found
root@v1:~#
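The workaround of recovering pools in a specific order amounts to a dependency ordering: a pool that only provides volumes (vol2 here) must be recovered before a pool whose instances attach those volumes (vol1, hosting container "a1" with devices "bar" and "foo"). A minimal Python sketch of that ordering, using hypothetical pool names taken from the transcript above:

```python
# Sketch: order storage pools so that pools providing volumes are
# recovered before pools whose instances reference them.
# Names mirror the report: instance "a1" on vol1 attaches custom
# volumes ("bar", "foo") that live on vol2.
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# pool -> pools it depends on (i.e. pools holding volumes its instances attach)
deps = {
    "vol1": {"vol2"},  # a1 on vol1 needs volumes from vol2
    "vol2": set(),     # vol2 has no cross-pool references
}

# static_order() yields dependencies first, so vol2 is recovered before vol1
recovery_order = list(TopologicalSorter(deps).static_order())
print(recovery_order)  # ['vol2', 'vol1']
```

Running incus admin recover for vol2 first, then vol1, avoids the "Storage pool not found" error above.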
stgraber added a commit to stgraber/incus that referenced this issue on Mar 27, 2024.