
Migration fails to run due to target being empty, despite Incus being freshly installed #479

Closed
2 tasks done
C0rn3j opened this issue Feb 10, 2024 · 10 comments · Fixed by #480
Assignees
Labels
Bug (Confirmed to be a bug), Easy (Good for new contributors)
Milestone

Comments

C0rn3j commented Feb 10, 2024

# Arch Linux
$ pacman -Q lxc lxcfs incus lxd btrfs-progs; uname -r
lxc 1:5.0.3-1
lxcfs 5.0.4-1
incus 0.5.1-1
lxd 5.20-1
btrfs-progs 6.7-1
6.7.1-arch1-1
[0] % lxc list --fast
+-------------+---------+--------------+----------------------+----------+-----------+
|    NAME     |  STATE  | ARCHITECTURE |      CREATED AT      | PROFILES |   TYPE    |
+-------------+---------+--------------+----------------------+----------+-----------+
| ansible     | RUNNING | x86_64       | 2021/08/09 10:48 UTC | default  | CONTAINER |
+-------------+---------+--------------+----------------------+----------+-----------+
| asterisk    | STOPPED | x86_64       | 2021/08/19 11:48 UTC | default  | CONTAINER |
+-------------+---------+--------------+----------------------+----------+-----------+
| auth        | RUNNING | x86_64       | 2021/08/10 17:28 UTC | default  | CONTAINER |
+-------------+---------+--------------+----------------------+----------+-----------+
| books       | RUNNING | x86_64       | 2021/08/10 15:32 UTC | default  | CONTAINER |
+-------------+---------+--------------+----------------------+----------+-----------+
| cloud       | RUNNING | x86_64       | 2021/08/10 21:47 UTC | default  | CONTAINER |
+-------------+---------+--------------+----------------------+----------+-----------+
| gitea       | RUNNING | x86_64       | 2021/12/25 20:06 UTC | default  | CONTAINER |
+-------------+---------+--------------+----------------------+----------+-----------+
| graphs      | RUNNING | x86_64       | 2022/01/16 13:09 UTC | default  | CONTAINER |
+-------------+---------+--------------+----------------------+----------+-----------+
| jellyfin    | RUNNING | x86_64       | 2023/03/10 15:54 UTC | default  | CONTAINER |
+-------------+---------+--------------+----------------------+----------+-----------+
| minecraft   | STOPPED | x86_64       | 2023/05/01 18:50 UTC | default  | CONTAINER |
+-------------+---------+--------------+----------------------+----------+-----------+
| monitoring  | RUNNING | x86_64       | 2021/08/11 09:39 UTC | default  | CONTAINER |
+-------------+---------+--------------+----------------------+----------+-----------+
| postgresql  | RUNNING | x86_64       | 2022/08/21 20:40 UTC | default  | CONTAINER |
+-------------+---------+--------------+----------------------+----------+-----------+
| proxy       | RUNNING | x86_64       | 2021/08/08 14:46 UTC | default  | CONTAINER |
+-------------+---------+--------------+----------------------+----------+-----------+
| rss         | RUNNING | x86_64       | 2021/08/10 12:55 UTC | default  | CONTAINER |
+-------------+---------+--------------+----------------------+----------+-----------+
| semaphore   | RUNNING | x86_64       | 2022/08/21 13:09 UTC | default  | CONTAINER |
+-------------+---------+--------------+----------------------+----------+-----------+
| vlmcsd      | RUNNING | x86_64       | 2021/08/07 16:38 UTC | default  | CONTAINER |
+-------------+---------+--------------+----------------------+----------+-----------+
| waylandtest | RUNNING | x86_64       | 2023/03/28 21:40 UTC | default  | CONTAINER |
+-------------+---------+--------------+----------------------+----------+-----------+
| wekan       | RUNNING | x86_64       | 2021/08/10 11:22 UTC | default  | CONTAINER |
+-------------+---------+--------------+----------------------+----------+-----------+

[0] % incus list --fast 
+------+-------+--------------+------------+----------+------+
| NAME | STATE | ARCHITECTURE | CREATED AT | PROFILES | TYPE |
+------+-------+--------------+------------+----------+------+

[0] % sudo lxd-to-incus                                                                                                                          
=> Looking for source server
==> Detected: manual installation
=> Looking for target server
==> Detected: systemd
=> Connecting to source server
=> Connecting to the target server
=> Checking server versions
==> Source version: 5.20
==> Target version: 0.5.1
=> Validating version compatibility
=> Checking that the source server isn't empty
=> Checking that the target server is empty
Error: Target server isn't empty, can't proceed with migration.
[0] % incus info                               
config: {}
api_extensions:
- storage_zfs_remove_snapshots
- container_host_shutdown_timeout
- container_stop_priority
- container_syscall_filtering
- auth_pki
- container_last_used_at
- etag
- patch
- usb_devices
- https_allowed_credentials
- image_compression_algorithm
- directory_manipulation
- container_cpu_time
- storage_zfs_use_refquota
- storage_lvm_mount_options
- network
- profile_usedby
- container_push
- container_exec_recording
- certificate_update
- container_exec_signal_handling
- gpu_devices
- container_image_properties
- migration_progress
- id_map
- network_firewall_filtering
- network_routes
- storage
- file_delete
- file_append
- network_dhcp_expiry
- storage_lvm_vg_rename
- storage_lvm_thinpool_rename
- network_vlan
- image_create_aliases
- container_stateless_copy
- container_only_migration
- storage_zfs_clone_copy
- unix_device_rename
- storage_lvm_use_thinpool
- storage_rsync_bwlimit
- network_vxlan_interface
- storage_btrfs_mount_options
- entity_description
- image_force_refresh
- storage_lvm_lv_resizing
- id_map_base
- file_symlinks
- container_push_target
- network_vlan_physical
- storage_images_delete
- container_edit_metadata
- container_snapshot_stateful_migration
- storage_driver_ceph
- storage_ceph_user_name
- resource_limits
- storage_volatile_initial_source
- storage_ceph_force_osd_reuse
- storage_block_filesystem_btrfs
- resources
- kernel_limits
- storage_api_volume_rename
- network_sriov
- console
- restrict_dev_incus
- migration_pre_copy
- infiniband
- dev_incus_events
- proxy
- network_dhcp_gateway
- file_get_symlink
- network_leases
- unix_device_hotplug
- storage_api_local_volume_handling
- operation_description
- clustering
- event_lifecycle
- storage_api_remote_volume_handling
- nvidia_runtime
- container_mount_propagation
- container_backup
- dev_incus_images
- container_local_cross_pool_handling
- proxy_unix
- proxy_udp
- clustering_join
- proxy_tcp_udp_multi_port_handling
- network_state
- proxy_unix_dac_properties
- container_protection_delete
- unix_priv_drop
- pprof_http
- proxy_haproxy_protocol
- network_hwaddr
- proxy_nat
- network_nat_order
- container_full
- backup_compression
- nvidia_runtime_config
- storage_api_volume_snapshots
- storage_unmapped
- projects
- network_vxlan_ttl
- container_incremental_copy
- usb_optional_vendorid
- snapshot_scheduling
- snapshot_schedule_aliases
- container_copy_project
- clustering_server_address
- clustering_image_replication
- container_protection_shift
- snapshot_expiry
- container_backup_override_pool
- snapshot_expiry_creation
- network_leases_location
- resources_cpu_socket
- resources_gpu
- resources_numa
- kernel_features
- id_map_current
- event_location
- storage_api_remote_volume_snapshots
- network_nat_address
- container_nic_routes
- cluster_internal_copy
- seccomp_notify
- lxc_features
- container_nic_ipvlan
- network_vlan_sriov
- storage_cephfs
- container_nic_ipfilter
- resources_v2
- container_exec_user_group_cwd
- container_syscall_intercept
- container_disk_shift
- storage_shifted
- resources_infiniband
- daemon_storage
- instances
- image_types
- resources_disk_sata
- clustering_roles
- images_expiry
- resources_network_firmware
- backup_compression_algorithm
- ceph_data_pool_name
- container_syscall_intercept_mount
- compression_squashfs
- container_raw_mount
- container_nic_routed
- container_syscall_intercept_mount_fuse
- container_disk_ceph
- virtual-machines
- image_profiles
- clustering_architecture
- resources_disk_id
- storage_lvm_stripes
- vm_boot_priority
- unix_hotplug_devices
- api_filtering
- instance_nic_network
- clustering_sizing
- firewall_driver
- projects_limits
- container_syscall_intercept_hugetlbfs
- limits_hugepages
- container_nic_routed_gateway
- projects_restrictions
- custom_volume_snapshot_expiry
- volume_snapshot_scheduling
- trust_ca_certificates
- snapshot_disk_usage
- clustering_edit_roles
- container_nic_routed_host_address
- container_nic_ipvlan_gateway
- resources_usb_pci
- resources_cpu_threads_numa
- resources_cpu_core_die
- api_os
- container_nic_routed_host_table
- container_nic_ipvlan_host_table
- container_nic_ipvlan_mode
- resources_system
- images_push_relay
- network_dns_search
- container_nic_routed_limits
- instance_nic_bridged_vlan
- network_state_bond_bridge
- usedby_consistency
- custom_block_volumes
- clustering_failure_domains
- resources_gpu_mdev
- console_vga_type
- projects_limits_disk
- network_type_macvlan
- network_type_sriov
- container_syscall_intercept_bpf_devices
- network_type_ovn
- projects_networks
- projects_networks_restricted_uplinks
- custom_volume_backup
- backup_override_name
- storage_rsync_compression
- network_type_physical
- network_ovn_external_subnets
- network_ovn_nat
- network_ovn_external_routes_remove
- tpm_device_type
- storage_zfs_clone_copy_rebase
- gpu_mdev
- resources_pci_iommu
- resources_network_usb
- resources_disk_address
- network_physical_ovn_ingress_mode
- network_ovn_dhcp
- network_physical_routes_anycast
- projects_limits_instances
- network_state_vlan
- instance_nic_bridged_port_isolation
- instance_bulk_state_change
- network_gvrp
- instance_pool_move
- gpu_sriov
- pci_device_type
- storage_volume_state
- network_acl
- migration_stateful
- disk_state_quota
- storage_ceph_features
- projects_compression
- projects_images_remote_cache_expiry
- certificate_project
- network_ovn_acl
- projects_images_auto_update
- projects_restricted_cluster_target
- images_default_architecture
- network_ovn_acl_defaults
- gpu_mig
- project_usage
- network_bridge_acl
- warnings
- projects_restricted_backups_and_snapshots
- clustering_join_token
- clustering_description
- server_trusted_proxy
- clustering_update_cert
- storage_api_project
- server_instance_driver_operational
- server_supported_storage_drivers
- event_lifecycle_requestor_address
- resources_gpu_usb
- clustering_evacuation
- network_ovn_nat_address
- network_bgp
- network_forward
- custom_volume_refresh
- network_counters_errors_dropped
- metrics
- image_source_project
- clustering_config
- network_peer
- linux_sysctl
- network_dns
- ovn_nic_acceleration
- certificate_self_renewal
- instance_project_move
- storage_volume_project_move
- cloud_init
- network_dns_nat
- database_leader
- instance_all_projects
- clustering_groups
- ceph_rbd_du
- instance_get_full
- qemu_metrics
- gpu_mig_uuid
- event_project
- clustering_evacuation_live
- instance_allow_inconsistent_copy
- network_state_ovn
- storage_volume_api_filtering
- image_restrictions
- storage_zfs_export
- network_dns_records
- storage_zfs_reserve_space
- network_acl_log
- storage_zfs_blocksize
- metrics_cpu_seconds
- instance_snapshot_never
- certificate_token
- instance_nic_routed_neighbor_probe
- event_hub
- agent_nic_config
- projects_restricted_intercept
- metrics_authentication
- images_target_project
- cluster_migration_inconsistent_copy
- cluster_ovn_chassis
- container_syscall_intercept_sched_setscheduler
- storage_lvm_thinpool_metadata_size
- storage_volume_state_total
- instance_file_head
- instances_nic_host_name
- image_copy_profile
- container_syscall_intercept_sysinfo
- clustering_evacuation_mode
- resources_pci_vpd
- qemu_raw_conf
- storage_cephfs_fscache
- network_load_balancer
- vsock_api
- instance_ready_state
- network_bgp_holdtime
- storage_volumes_all_projects
- metrics_memory_oom_total
- storage_buckets
- storage_buckets_create_credentials
- metrics_cpu_effective_total
- projects_networks_restricted_access
- storage_buckets_local
- loki
- acme
- internal_metrics
- cluster_join_token_expiry
- remote_token_expiry
- init_preseed
- storage_volumes_created_at
- cpu_hotplug
- projects_networks_zones
- network_txqueuelen
- cluster_member_state
- instances_placement_scriptlet
- storage_pool_source_wipe
- zfs_block_mode
- instance_generation_id
- disk_io_cache
- amd_sev
- storage_pool_loop_resize
- migration_vm_live
- ovn_nic_nesting
- oidc
- network_ovn_l3only
- ovn_nic_acceleration_vdpa
- cluster_healing
- instances_state_total
- auth_user
- security_csm
- instances_rebuild
- numa_cpu_placement
- custom_volume_iso
- network_allocations
- zfs_delegate
- storage_api_remote_volume_snapshot_copy
- operations_get_query_all_projects
- metadata_configuration
- syslog_socket
- event_lifecycle_name_and_project
- instances_nic_limits_priority
- disk_initial_volume_configuration
- operation_wait
- image_restriction_privileged
- cluster_internal_custom_volume_copy
- disk_io_bus
- storage_cephfs_create_missing
- instance_move_config
- ovn_ssl_config
- certificate_description
- disk_io_bus_virtio_blk
- loki_config_instance
- instance_create_start
- clustering_evacuation_stop_options
- boot_host_shutdown_action
- agent_config_drive
api_status: stable
api_version: "1.0"
auth: trusted
public: false
auth_methods:
- tls
auth_user_name: c0rn3j
auth_user_method: unix
environment:
  addresses: []
  architectures:
  - x86_64
  - i686
  certificate: |
    -----BEGIN CERTIFICATE-----
    MIICATCCAYagAwIBAgIQP5L3IFfQurKt3B4HbJ8o3zAKBggqhkjOPQQDAzAyMRkw
    FwYDVQQKExBMaW51eCBDb250YWluZXJzMRUwEwYDVQQDDAxyb290QEx1eHVyaWEw
    HhcNMjQwMjEwMjI0NDUyWhcNMzQwMjA3MjI0NDUyWjAyMRkwFwYDVQQKExBMaW51
    eCBDb250YWluZXJzMRUwEwYDVQQDDAxyb290QEx1eHVyaWEwdjAQBgcqhkjOPQIB
    BgUrgQQAIgNiAAQhujILYZ2y2NiPml5NhNYqzeAeeO19KowpayH7Z/ObL5F9PibB
    4S46cyuKESWUU+o9AXLSKgCaqJ2muCVqVVLg+Q5M9+8R346v1jIaNSWFHYHZXzmn
    YpaMU837wPYh7z2jYTBfMA4GA1UdDwEB/wQEAwIFoDATBgNVHSUEDDAKBggrBgEF
    BQcDATAMBgNVHRMBAf8EAjAAMCoGA1UdEQQjMCGCB0x1eHVyaWGHBH8AAAGHEAAA
    AAAAAAAAAAAAAAAAAAEwCgYIKoZIzj0EAwMDaQAwZgIxAMh+cwT3hjnm3P4bwUKv
    3Xs/3caUUA+iljOu6bsjITFIToB8ep725b4iO2PsyDguwAIxAJXbd28NQ6FitRIQ
    3xXzDukPViBMlQo09olelNmyGMVtbHNWajrGQ0ZtIMr0Q7ZSsw==
    -----END CERTIFICATE-----
  certificate_fingerprint: 11943adbf583c030907f9ca752d1991bf5de6a8064aad50e6e01158d30988484
  driver: lxc | qemu
  driver_version: 5.0.3 | 8.2.1
  firewall: nftables
  kernel: Linux
  kernel_architecture: x86_64
  kernel_features:
    idmapped_mounts: "true"
    netnsid_getifaddrs: "true"
    seccomp_listener: "true"
    seccomp_listener_continue: "true"
    uevent_injection: "true"
    unpriv_fscaps: "true"
  kernel_version: 6.7.1-arch1-1
  lxc_features:
    cgroup2: "true"
    core_scheduling: "true"
    devpts_fd: "true"
    idmapped_mounts_v2: "true"
    mount_injection_file: "true"
    network_gateway_device_route: "true"
    network_ipvlan: "true"
    network_l2proxy: "true"
    network_phys_macvlan_mtu: "true"
    network_veth_router: "true"
    pidfd: "true"
    seccomp_allow_deny_syntax: "true"
    seccomp_notify: "true"
    seccomp_proxy_send_notify_fd: "true"
  os_name: Arch Linux
  os_version: ""
  project: default
  server: incus
  server_clustered: false
  server_event_mode: full-mesh
  server_name: Luxuria
  server_pid: 26066
  server_version: 0.5.1
  storage: dir
  storage_version: "1"
  storage_supported_drivers:
  - name: dir
    version: "1"
    remote: false
  - name: lvm
    version: 2.03.23(2) (2023-11-21) / 1.02.197 (2023-11-21) / 4.48.0
    remote: false
  - name: btrfs
    version: "6.7"
    remote: false

Attempting the migration fails, even though the target is freshly installed.

It would be nice if the tool said exactly why it thinks the target isn't empty.

Repro:

  • Enabled incus.socket, incus.service, incus-user.socket
  • usermod -a -G incus username
  • usermod -a -G incus-admin username
  • Forced group refresh
  • Ran incus admin init but immediately exited out of it with ^C
  • Ran lxd-to-incus
  • Main daemon log (at /var/log/incus/incusd.log) -> AppArmor support has been disabled because of lack of kernel support
  • Output of the daemon with --debug (alternatively output of incus monitor --pretty while reproducing the issue)
[130] # sudo incus monitor --pretty
To start your first container, try: incus launch images:ubuntu/22.04
Or for a virtual machine: incus launch images:ubuntu/22.04 --vm

DEBUG  [2024-02-11T00:13:56+01:00] Event listener server handler started         id=66813547-06f6-415e-93d4-3f8f7af8c466 local=/var/lib/incus/unix.socket remote=@
DEBUG  [2024-02-11T00:14:05+01:00] Handling API request                          ip=@ method=GET protocol=unix url=/1.0 username=root
DEBUG  [2024-02-11T00:14:05+01:00] Handling API request                          ip=@ method=GET protocol=unix url=/1.0/projects username=root
DEBUG  [2024-02-11T00:14:05+01:00] Handling API request                          ip=@ method=GET protocol=unix url=/1.0/profiles username=root
DEBUG  [2024-02-11T00:14:05+01:00] Handling API request                          ip=@ method=GET protocol=unix url="/1.0/instances?" username=root
DEBUG  [2024-02-11T00:14:05+01:00] Handling API request                          ip=@ method=GET protocol=unix url=/1.0/storage-pools username=root
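The debug output above shows the endpoints the migration tool walks on the target: /1.0, /1.0/projects, /1.0/profiles, /1.0/instances and /1.0/storage-pools. A minimal Python sketch of that style of emptiness check (hypothetical code, not the actual lxd-to-incus implementation; it assumes a fresh server ships exactly one default project and one default profile, and it also considers managed networks, which the cleanup steps later in the thread have to delete):

```python
# Hypothetical sketch of an "is the target empty?" check, mirroring the
# endpoints seen in the incus monitor output above. Not the real
# lxd-to-incus code; resource lists are passed in as plain Python values.

def target_is_empty(projects, profiles, instances, storage_pools, networks):
    """Return (empty, reasons). A fresh server still has a 'default'
    project and a 'default' profile, so those are not counted."""
    reasons = []
    if [p for p in projects if p != "default"]:
        reasons.append("extra projects exist")
    if [p for p in profiles if p != "default"]:
        reasons.append("extra profiles exist")
    if instances:
        reasons.append("instances exist")
    if storage_pools:
        reasons.append("storage pools exist")
    if [n for n in networks if n.get("managed")]:
        reasons.append("managed networks exist")
    return (not reasons, reasons)

# State matching this issue: a 'default' dir pool and a managed incusbr0
# bridge exist, so the check fails even with zero instances.
empty, why = target_is_empty(
    projects=["default"],
    profiles=["default"],
    instances=[],
    storage_pools=["default"],
    networks=[{"name": "incusbr0", "managed": True}],
)
print(empty, why)  # → False ['storage pools exist', 'managed networks exist']
```

With this shape of check, the "default" pool and the managed incusbr0 bridge shown further down in the thread are enough to fail the migration, which matches the error observed.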
stgraber (Member) commented:

Let's check the basics:

  • incus config show
  • incus list
  • incus project list
  • incus network list
  • incus storage list

stgraber (Member) commented:

Actually based on the debug output above, it looks like the issue is with incus storage list.


C0rn3j commented Feb 10, 2024

[0] # incus storage list
+---------+--------+--------------------------------------+-------------+---------+---------+
|  NAME   | DRIVER |                SOURCE                | DESCRIPTION | USED BY |  STATE  |
+---------+--------+--------------------------------------+-------------+---------+---------+
| default | dir    | /var/lib/incus/storage-pools/default |             | 1       | CREATED |
+---------+--------+--------------------------------------+-------------+---------+---------+

[0] # incus project list
+-------------------+--------+----------+-----------------+-----------------+----------+---------------+-----------------------+---------+
|       NAME        | IMAGES | PROFILES | STORAGE VOLUMES | STORAGE BUCKETS | NETWORKS | NETWORK ZONES |      DESCRIPTION      | USED BY |
+-------------------+--------+----------+-----------------+-----------------+----------+---------------+-----------------------+---------+
| default (current) | YES    | YES      | YES             | YES             | YES      | YES           | Default Incus project | 2       |
+-------------------+--------+----------+-----------------+-----------------+----------+---------------+-----------------------+---------+

[0] # incus network list
+----------+----------+---------+----------------+--------------------------+-------------+---------+---------+
|   NAME   |   TYPE   | MANAGED |      IPV4      |           IPV6           | DESCRIPTION | USED BY |  STATE  |
+----------+----------+---------+----------------+--------------------------+-------------+---------+---------+
| enp14s0  | physical | NO      |                |                          |             | 0       |         |
+----------+----------+---------+----------------+--------------------------+-------------+---------+---------+
| incusbr0 | bridge   | YES     | 10.123.32.1/24 | fd42:97a:129a:7801::1/64 |             | 1       | CREATED |
+----------+----------+---------+----------------+--------------------------+-------------+---------+---------+
| sysdbr0  | bridge   | NO      |                |                          |             | 0       |         |
+----------+----------+---------+----------------+--------------------------+-------------+---------+---------+
| wlan0    | physical | NO      |                |                          |             | 0       |         |
+----------+----------+---------+----------------+--------------------------+-------------+---------+---------+

incus config show and incus list are both empty.

Looks like it automatically created a dir storage pool somehow?

stgraber (Member) commented:

Right, so your incus admin init still managed to create storage and network.

You should be able to delete all that stuff with:

  • incus profile device remove default root
  • incus profile device remove default eth0
  • incus storage delete default
  • incus network delete incusbr0

At which point Incus should be back to being empty and allow the migration.

stgraber (Member) commented:

Normally ctrl+c during the interactive incus admin init would not lead to any changes to the system, though the configuration you have would actually be consistent with the non-interactive incus admin init --auto having been run at some point.


C0rn3j commented Feb 10, 2024

[0] # lxd-to-incus                                               
=> Looking for source server
==> Detected: manual installation
=> Looking for target server
==> Detected: systemd
=> Connecting to source server
=> Connecting to the target server
=> Checking server versions
==> Source version: 5.20
==> Target version: 0.5.1
=> Validating version compatibility
=> Checking that the source server isn't empty
=> Checking that the target server is empty
=> Validating source server configuration

The migration is now ready to proceed.
At this point, the source server and all its instances will be stopped.
Instances will come back online once the migration is complete.
Proceed with the migration? [default=no]: no

Yep, that seems to have done it (will do the actual migration soonish™, just need to check what the snapd workaround is, it's past midnight here)


C0rn3j commented Feb 10, 2024

Restarting incus-user.service recreates them; that unit runs /usr/bin/incus-user --group incus.

stgraber (Member) commented:

Ah yeah, that'd explain it. Though incus-user is supposed to be socket-activated and so never started unless something connects to its socket.

stgraber added a commit to stgraber/incus that referenced this issue Feb 10, 2024
stgraber (Member) commented:

Sent #480 to give a slightly clearer error
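The exact wording of #480 isn't shown in this thread; as a sketch of the idea, an error that names the offending resources is much easier to act on than a bare "isn't empty" (hypothetical code and message format, not the actual change):

```python
# Sketch: build a migration error that names which resources make the
# target non-empty, instead of a bare "Target server isn't empty".
# Hypothetical code and wording, not the actual change from PR #480.

def empty_check_error(instances, storage_pools, networks):
    """Return an error string listing non-empty resources, or None."""
    found = []
    if instances:
        found.append(f"{len(instances)} instance(s)")
    if storage_pools:
        found.append(f"{len(storage_pools)} storage pool(s)")
    if networks:
        found.append(f"{len(networks)} managed network(s)")
    if not found:
        return None  # target really is empty; migration may proceed
    return ("Target server isn't empty (found " + ", ".join(found)
            + "), can't proceed with migration.")

# The state from this issue: no instances, but a pool and a bridge.
print(empty_check_error([], ["default"], ["incusbr0"]))
# → Target server isn't empty (found 1 storage pool(s), 1 managed
#   network(s)), can't proceed with migration.
```

With a message like this, the reporter could have gone straight to deleting the stray pool and bridge instead of guessing.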


C0rn3j commented Feb 10, 2024

I had apparently run systemctl restart incus-user, as the Arch Wiki incorrectly called it a systemd user service unit, while it is neither a user unit nor a service unit.

To delegate container creation to users, enable/start the user service incus-user.socket

I have fixed the wiki so it is no longer ambiguous.

Thanks for the PR, that'll hopefully help a bunch!

stgraber self-assigned this Feb 10, 2024
stgraber added this to the incus-0.6 milestone Feb 10, 2024
stgraber added the Bug (Confirmed to be a bug) and Easy (Good for new contributors) labels Feb 10, 2024