
Syncoid initial replication fails when a zfs dataset has long, multi-word property values, e.g. org.zfsbootmenu:commandline (no idea whether successive sends with the property set are affected, however) #945

Open
abclution opened this issue Aug 8, 2024 · 7 comments


abclution commented Aug 8, 2024

Had to move some root pools around that I have set up using ZFSBootMenu.
ZBM of course expects kernel options to be passed via a dataset property.

With this combination of options and my long kernel command line, syncoid bombs.

I had to blank the property before sending, and so far the transfer is proceeding correctly.

/usr/sbin/syncoid \
--recursive \
--identifier=CLONEMIGRATE \
--keep-sync-snap \
--mbuffer-size=768M \
--compress none \
--sendoptions='Lc' \
--recvoptions='u' \
--preserve-recordsize \
--preserve-properties \
rpool_nvme/ROOT \
rpool_nvme_REAL/ROOT


zfs get org.zfsbootmenu:commandline rpool_nvme/ROOT/pve-1_FATTOP

NAME                          PROPERTY                     VALUE 
rpool_nvme/ROOT/pve-1_FATTOP  org.zfsbootmenu:commandline  ro pm_debug_messages zswap.enabled=1 zswap.compressor=zstd zswap.max_pool_percent=25 zswap.zpool=z3fold psi=1 iommu=on iommu=pt amdgpu.ppfeaturemask=0xffffffff lsm=landlock,lockdown,yama,integrity,apparmor crashkernel=384M-:128M loglevel=4 

mbuffer: error: outputThread: error writing to <stdout> at offset 0x20000: Broken pipe
mbuffer: warning: error during output to <stdout>: Broken pipe
CRITICAL ERROR:  zfs send -L -c  'rpool_nvme/ROOT/pve-1_FATTOP'@'syncoid_fatbeast-pve_2024-08-04:12:42:24-GMT03:00' | mbuffer  -q -s 128k -m 768M | pv -p -t -e -r -b -s 30440129168 |  zfs receive -u  -o org.zfsbootmenu:commandline=ro pm_debug_messages zswap.enabled=1 zswap.compressor=zstd zswap.max_pool_percent=25 zswap.zpool=z3fold psi=1 iommu=on iommu=pt amdgpu.ppfeaturemask=0xffffffff lsm=landlock,lockdown,yama,integrity,apparmor crashkernel=384M-:128M loglevel=4 -o mountpoint=/ -o canmount=noauto -s -F 'rpool_nvme_REAL/ROOT/pve-1_FATTOP' failed: 512 at /usr/sbin/syncoid line 549.
INFO: Sending oldest full snapshot rpool_nvme/ROOT/pve-1_FATTOP/5bb9fd5b0b47c577993d71921e55ca1bac50c1310f96884ac1c2a118d86391ce@syncoid_fatbeast-pve_2024-08-04:12:42:30-GMT03:00 (~ 30.7 MB) to new target filesystem:
cannot open 'rpool_nvme_REAL/ROOT/pve-1_FATTOP': dataset does not exist
cannot receive new filesystem stream: unable to restore to destination
64.0KiB 0:00:00 [ 284KiB/s] [>
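
For reference, blanking the property on the source before the send is just the zfs set below; zfs inherit, shown as an alternative, drops the local setting entirely instead of setting it to an empty string (you'd then have to re-set it rather than restore it):

zfs set org.zfsbootmenu:commandline="" rpool_nvme/ROOT/pve-1_FATTOP

# alternative: remove the local property rather than blanking it
zfs inherit org.zfsbootmenu:commandline rpool_nvme/ROOT/pve-1_FATTOP
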
@jimsalterjrs
Owner

Sorry; as far as I can tell you're going to need to open this upstream with OpenZFS and/or ZFSBootMenu. The error you're seeing is a direct zfs send error, not an error within syncoid itself. You didn't actually show me the transfer succeeding after you wiped whatever property you're referring to, but (correct me if I'm wrong) what I'm understanding is that it's the presence of this particular custom dataset property set by ZBM that's causing the zfs send error to occur when preserving properties.

I would encourage you to file this as an upstream bug, though, as it certainly doesn't seem like expected behavior.

@abclution
Author

abclution commented Aug 8, 2024

Sorry, I didn't realize I wasn't clear.

The property 'org.zfsbootmenu:commandline' is the embedded kernel command line switches for use with ZfsBootMenu.

When I set it to blank, the transfer succeeded.

Correct me if I am wrong (it could just be an issue with how the error is reported), but it appears that the zfs receive command built by syncoid's --preserve-properties option is not quoting the value of org.zfsbootmenu:commandline, i.e. it runs:

zfs receive -u -o org.zfsbootmenu:commandline=ro pm_debug_messages zswap.enabled=1 zswap.compressor=zstd zswap.max_pool_percent=25 zswap.zpool=z3fold psi=1 iommu=on iommu=pt amdgpu.ppfeaturemask=0xffffffff lsm=landlock,lockdown,yama,integrity,apparmor crashkernel=384M-:128M loglevel=4 -o mountpoint=/ -o canmount=noauto -s -F 'rpool_nvme_REAL/ROOT/pve-1_FATTOP'

when it should look like this instead (basically, quotes around the kernel option string coming after -o org.zfsbootmenu:commandline=):

zfs receive -u -o org.zfsbootmenu:commandline="ro pm_debug_messages zswap.enabled=1 zswap.compressor=zstd zswap.max_pool_percent=25 zswap.zpool=z3fold psi=1 iommu=on iommu=pt amdgpu.ppfeaturemask=0xffffffff lsm=landlock,lockdown,yama,integrity,apparmor crashkernel=384M-:128M loglevel=4" -o mountpoint=/ -o canmount=noauto -s -F 'rpool_nvme_REAL/ROOT/pve-1_FATTOP'

I'm rusty on using zfs receive directly (thanks to @jimsalterjrs / syncoid :) ), but when setting the property initially I had to use quotes, as in zfs set org.zfsbootmenu:commandline="blah blah blah blah" pool/dataset.

I know most zfs set values are single words and don't need to be quoted.
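
To see why the quoting matters, here is a minimal shell demonstration of the word splitting (hypothetical values, not the real command line):

countargs() { echo "received $# arguments"; }
val='ro quiet splash loglevel=4'
countargs -o org.zfsbootmenu:commandline=$val     # unquoted: 5 arguments, the value is split at every space
countargs -o "org.zfsbootmenu:commandline=$val"   # quoted: 2 arguments, the value survives intact

Unquoted, everything after the first space ends up as stray extra arguments to zfs receive, which it then rejects.
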

jimsalterjrs reopened this Aug 8, 2024
@jimsalterjrs
Owner

You're right, that was not clear! Can you please confirm by first running the syncoid command that fails (using --debug if necessary to capture the final command line), then, once it fails, reissuing the zfs send | zfs receive command that --debug showed you, but with the quotes you believe are missing?

That kind of direct confirmation accelerates bug fixes A LOT. :) In the meantime, I'm reopening this. Thanks for the clarification!

@abclution
Author

Sure thing.
I'm going to have to try to replicate it on a VM, hopefully a bit later today.
It was the booting rootfs, and the migration is complete with the system now booted into the replicated root, but I believe I will be able to replicate it easily enough.

@jimsalterjrs
Owner

Hmmm, I wonder if #946 addresses your issue as well?

@abclution
Author

abclution commented Aug 10, 2024

Sorry for the delay.

The property on rpool/ROOT/pve-1 is set as follows:
zfs set org.zfsbootmenu:commandline="ro pm_debug_messages zswap.enabled=1 zswap.compressor=zstd zswap.max_pool_percent=25 zswap.zpool=z3fold psi=1 iommu=on iommu=pt amdgpu.ppfeaturemask=0xffffffff lsm=landlock,lockdown,yama,integrity,apparmor crashkernel=384M-:128M loglevel=4" rpool/ROOT/pve-1

Then I use this command:


/usr/sbin/syncoid \
--debug \
--recursive \
--identifier=CLONEMIGRATE \
--keep-sync-snap \
--mbuffer-size=768M \
--compress none \
--sendoptions='Lc' \
--recvoptions='u' \
--preserve-recordsize \
--preserve-properties \
rpool/ROOT \
destination_pool/ROOT


Full debug output follows, and the command fails:


/usr/sbin/syncoid \
--debug \
--recursive \
--identifier=CLONEMIGRATE \
--keep-sync-snap \
--mbuffer-size=768M \
--compress none \
--sendoptions='Lc' \
--recvoptions='u' \
--preserve-recordsize \
--preserve-properties \
rpool/ROOT \
destination_pool/ROOT
DEBUG: SSHCMD: ssh
DEBUG: compression forced off from command line arguments.
DEBUG: checking availability of mbuffer on source...
DEBUG: checking availability of mbuffer on target...
DEBUG: checking availability of pv on local machine...
DEBUG: checking availability of zfs resume feature on source...
DEBUG: checking availability of zfs resume feature on target...
DEBUG: recursive sync of rpool/ROOT.
DEBUG: getting list of child datasets on rpool/ROOT using   zfs list -o name,origin -t filesystem,volume -Hr 'rpool/ROOT' |...
DEBUG: syncing source rpool/ROOT to target destination_pool/ROOT.
DEBUG: getting current value of syncoid:sync on rpool/ROOT...
zfs get -H syncoid:sync 'rpool/ROOT'
DEBUG: checking to see if destination_pool/ROOT on  is already in zfs receive using  ps -Ao args= ...
DEBUG: checking to see if target filesystem exists using "  zfs get -H name 'destination_pool/ROOT' 2>&1 |"...
DEBUG: getting list of snapshots on rpool/ROOT using   zfs get -Hpd 1 -t snapshot guid,creation 'rpool/ROOT' |...
DEBUG: creating sync snapshot using "  zfs snapshot 'rpool/ROOT'@syncoid_CLONEMIGRATE_pve_2024-08-10:18:20:17-GMT03:00
"...
DEBUG: target destination_pool/ROOT does not exist.  Finding oldest available snapshot on source rpool/ROOT ...
DEBUG: getoldestsnapshot() returned false, so using syncoid_CLONEMIGRATE_pve_2024-08-10:18:20:17-GMT03:00.
DEBUG: getting locally set values of properties on rpool/ROOT...
zfs get all -s local -H 'rpool/ROOT'
DEBUG: getting estimated transfer size from source  using "  zfs send -L -c  -nvP 'rpool/ROOT@syncoid_CLONEMIGRATE_pve_2024-08-10:18:20:17-GMT03:00' 2>&1 |"...
DEBUG: sendsize = 43632
INFO: Sending oldest full snapshot rpool/ROOT@syncoid_CLONEMIGRATE_pve_2024-08-10:18:20:17-GMT03:00 (~ 42 KB) to new target filesystem:
DEBUG:  zfs send -L -c  'rpool/ROOT'@'syncoid_CLONEMIGRATE_pve_2024-08-10:18:20:17-GMT03:00' | mbuffer  -q -s 128k -m 768M | pv -p -t -e -r -b -s 43632 |  zfs receive -u  -s -F 'destination_pool/ROOT'
DEBUG: checking to see if destination_pool/ROOT on  is already in zfs receive using  ps -Ao args= ...
45.8KiB 0:00:00 [2.21MiB/s] [=================================================================================================================================================================================================================] 107%
DEBUG: syncing source rpool/ROOT/pve-1 to target destination_pool/ROOT/pve-1.
DEBUG: getting current value of syncoid:sync on rpool/ROOT/pve-1...
zfs get -H syncoid:sync 'rpool/ROOT/pve-1'
DEBUG: checking to see if destination_pool/ROOT/pve-1 on  is already in zfs receive using  ps -Ao args= ...
DEBUG: checking to see if target filesystem exists using "  zfs get -H name 'destination_pool/ROOT/pve-1' 2>&1 |"...
DEBUG: getting list of snapshots on rpool/ROOT/pve-1 using   zfs get -Hpd 1 -t snapshot guid,creation 'rpool/ROOT/pve-1' |...
DEBUG: creating sync snapshot using "  zfs snapshot 'rpool/ROOT/pve-1'@syncoid_CLONEMIGRATE_pve_2024-08-10:18:20:17-GMT03:00
"...
DEBUG: target destination_pool/ROOT/pve-1 does not exist.  Finding oldest available snapshot on source rpool/ROOT/pve-1 ...
DEBUG: getoldestsnapshot() returned false, so using syncoid_CLONEMIGRATE_pve_2024-08-10:18:20:17-GMT03:00.
DEBUG: getting locally set values of properties on rpool/ROOT/pve-1...
zfs get all -s local -H 'rpool/ROOT/pve-1'
DEBUG: will set org.zfsbootmenu:commandline to ro pm_debug_messages zswap.enabled=1 zswap.compressor=zstd zswap.max_pool_percent=25 zswap.zpool=z3fold psi=1 iommu=on iommu=pt amdgpu.ppfeaturemask=0xffffffff lsm=landlock,lockdown,yama,integrity,apparmor crashkernel=384M-:128M loglevel=4 ...
DEBUG: will set acltype to posix ...
DEBUG: will set mountpoint to / ...
DEBUG: getting estimated transfer size from source  using "  zfs send -L -c  -nvP 'rpool/ROOT/pve-1@syncoid_CLONEMIGRATE_pve_2024-08-10:18:20:17-GMT03:00' 2>&1 |"...
DEBUG: sendsize = 2214117968
INFO: Sending oldest full snapshot rpool/ROOT/pve-1@syncoid_CLONEMIGRATE_pve_2024-08-10:18:20:17-GMT03:00 (~ 2.1 GB) to new target filesystem:
DEBUG:  zfs send -L -c  'rpool/ROOT/pve-1'@'syncoid_CLONEMIGRATE_pve_2024-08-10:18:20:17-GMT03:00' | mbuffer  -q -s 128k -m 768M | pv -p -t -e -r -b -s 2214117968 |  zfs receive -u  -o org.zfsbootmenu:commandline=ro pm_debug_messages zswap.enabled=1 zswap.compressor=zstd zswap.max_pool_percent=25 zswap.zpool=z3fold psi=1 iommu=on iommu=pt amdgpu.ppfeaturemask=0xffffffff lsm=landlock,lockdown,yama,integrity,apparmor crashkernel=384M-:128M loglevel=4 -o acltype=posix -o mountpoint=/ -s -F 'destination_pool/ROOT/pve-1'
DEBUG: checking to see if destination_pool/ROOT/pve-1 on  is already in zfs receive using  ps -Ao args= ...
too many arguments
usage:
        receive [-vMnsFhu] [-o <property>=<value>] ... [-x <property>] ...
                <filesystem|volume|snapshot>
        receive [-vMnsFhu] [-o <property>=<value>] ... [-x <property>] ...
                [-d | -e] <filesystem>
        receive -A <filesystem|volume>

For the property list, run: zfs set|get

For the delegated permission list, run: zfs allow|unallow

For further help on a command or topic, run: zfs help [<topic>]
0.00 B 0:00:00 [0.00 B/s] [>                                                                                                                                                                                                                   ]  0%
mbuffer: error: outputThread: error writing to <stdout> at offset 0x20000: Broken pipe
mbuffer: warning: error during output to <stdout>: Broken pipe
CRITICAL ERROR:  zfs send -L -c  'rpool/ROOT/pve-1'@'syncoid_CLONEMIGRATE_pve_2024-08-10:18:20:17-GMT03:00' | mbuffer  -q -s 128k -m 768M | pv -p -t -e -r -b -s 2214117968 |  zfs receive -u  -o org.zfsbootmenu:commandline=ro pm_debug_messages zswap.enabled=1 zswap.compressor=zstd zswap.max_pool_percent=25 zswap.zpool=z3fold psi=1 iommu=on iommu=pt amdgpu.ppfeaturemask=0xffffffff lsm=landlock,lockdown,yama,integrity,apparmor crashkernel=384M-:128M loglevel=4 -o acltype=posix -o mountpoint=/ -s -F 'destination_pool/ROOT/pve-1' failed: 512 at /usr/sbin/syncoid line 549.

Unsetting via
zfs set org.zfsbootmenu:commandline="" rpool/ROOT/pve-1

and rerunning the same command yields a perfect sync:


DEBUG: SSHCMD: ssh
DEBUG: compression forced off from command line arguments.
DEBUG: checking availability of mbuffer on source...
DEBUG: checking availability of mbuffer on target...
DEBUG: checking availability of pv on local machine...
DEBUG: checking availability of zfs resume feature on source...
DEBUG: checking availability of zfs resume feature on target...
DEBUG: recursive sync of rpool/ROOT.
DEBUG: getting list of child datasets on rpool/ROOT using   zfs list -o name,origin -t filesystem,volume -Hr 'rpool/ROOT' |...
DEBUG: syncing source rpool/ROOT to target destination_pool/ROOT.
DEBUG: getting current value of syncoid:sync on rpool/ROOT...
zfs get -H syncoid:sync 'rpool/ROOT'
DEBUG: checking to see if destination_pool/ROOT on  is already in zfs receive using  ps -Ao args= ...
DEBUG: checking to see if target filesystem exists using "  zfs get -H name 'destination_pool/ROOT' 2>&1 |"...
DEBUG: getting list of snapshots on rpool/ROOT using   zfs get -Hpd 1 -t snapshot guid,creation 'rpool/ROOT' |...
DEBUG: creating sync snapshot using "  zfs snapshot 'rpool/ROOT'@syncoid_CLONEMIGRATE_pve_2024-08-10:18:27:10-GMT03:00
"...
DEBUG: target destination_pool/ROOT does not exist.  Finding oldest available snapshot on source rpool/ROOT ...
DEBUG: getting locally set values of properties on rpool/ROOT...
zfs get all -s local -H 'rpool/ROOT'
DEBUG: getting estimated transfer size from source  using "  zfs send -L -c  -nvP 'rpool/ROOT@syncoid_CLONEMIGRATE_pve_2024-08-10:18:20:17-GMT03:00' 2>&1 |"...
DEBUG: sendsize = 43632
INFO: Sending oldest full snapshot rpool/ROOT@syncoid_CLONEMIGRATE_pve_2024-08-10:18:20:17-GMT03:00 (~ 42 KB) to new target filesystem:
DEBUG:  zfs send -L -c  'rpool/ROOT'@'syncoid_CLONEMIGRATE_pve_2024-08-10:18:20:17-GMT03:00' | mbuffer  -q -s 128k -m 768M | pv -p -t -e -r -b -s 43632 |  zfs receive -u  -s -F 'destination_pool/ROOT'
DEBUG: checking to see if destination_pool/ROOT on  is already in zfs receive using  ps -Ao args= ...
45.8KiB 0:00:00 [1.60MiB/s] [====================================================================================================================================================================================================================================================================================] 107%
DEBUG: getting estimated transfer size from source  using "  zfs send -L -c  -nvP -I 'rpool/ROOT@syncoid_CLONEMIGRATE_pve_2024-08-10:18:20:17-GMT03:00' 'rpool/ROOT@syncoid_CLONEMIGRATE_pve_2024-08-10:18:27:10-GMT03:00' 2>&1 |"...
DEBUG: sendsize = 624
DEBUG: checking to see if destination_pool/ROOT on  is already in zfs receive using  ps -Ao args= ...
INFO: Updating new target filesystem with incremental rpool/ROOT@syncoid_CLONEMIGRATE_pve_2024-08-10:18:20:17-GMT03:00 ... syncoid_CLONEMIGRATE_pve_2024-08-10:18:27:10-GMT03:00 (~ 4 KB):
DEBUG:  zfs send -L -c  -I 'rpool/ROOT'@'syncoid_CLONEMIGRATE_pve_2024-08-10:18:20:17-GMT03:00' 'rpool/ROOT'@'syncoid_CLONEMIGRATE_pve_2024-08-10:18:27:10-GMT03:00' | mbuffer  -q -s 128k -m 768M | pv -p -t -e -r -b -s 4096 |  zfs receive -u  -s -F 'destination_pool/ROOT'
1.52KiB 0:00:00 [70.2KiB/s] [========================================================================================================>                                                                                                                                                                            ] 38%
DEBUG: syncing source rpool/ROOT/pve-1 to target destination_pool/ROOT/pve-1.
DEBUG: getting current value of syncoid:sync on rpool/ROOT/pve-1...
zfs get -H syncoid:sync 'rpool/ROOT/pve-1'
DEBUG: checking to see if destination_pool/ROOT/pve-1 on  is already in zfs receive using  ps -Ao args= ...
DEBUG: checking to see if target filesystem exists using "  zfs get -H name 'destination_pool/ROOT/pve-1' 2>&1 |"...
DEBUG: getting list of snapshots on rpool/ROOT/pve-1 using   zfs get -Hpd 1 -t snapshot guid,creation 'rpool/ROOT/pve-1' |...
DEBUG: creating sync snapshot using "  zfs snapshot 'rpool/ROOT/pve-1'@syncoid_CLONEMIGRATE_pve_2024-08-10:18:27:10-GMT03:00
"...
DEBUG: target destination_pool/ROOT/pve-1 does not exist.  Finding oldest available snapshot on source rpool/ROOT/pve-1 ...
DEBUG: getting locally set values of properties on rpool/ROOT/pve-1...
zfs get all -s local -H 'rpool/ROOT/pve-1'
DEBUG: will set org.zfsbootmenu:commandline to  ...
DEBUG: will set acltype to posix ...
DEBUG: will set mountpoint to / ...
DEBUG: getting estimated transfer size from source  using "  zfs send -L -c  -nvP 'rpool/ROOT/pve-1@syncoid_CLONEMIGRATE_pve_2024-08-10:18:20:17-GMT03:00' 2>&1 |"...
DEBUG: sendsize = 2214117968
INFO: Sending oldest full snapshot rpool/ROOT/pve-1@syncoid_CLONEMIGRATE_pve_2024-08-10:18:20:17-GMT03:00 (~ 2.1 GB) to new target filesystem:
DEBUG:  zfs send -L -c  'rpool/ROOT/pve-1'@'syncoid_CLONEMIGRATE_pve_2024-08-10:18:20:17-GMT03:00' | mbuffer  -q -s 128k -m 768M | pv -p -t -e -r -b -s 2214117968 |  zfs receive -u  -o org.zfsbootmenu:commandline= -o acltype=posix -o mountpoint=/ -s -F 'destination_pool/ROOT/pve-1'
DEBUG: checking to see if destination_pool/ROOT/pve-1 on  is already in zfs receive using  ps -Ao args= ...
2.23GiB 0:00:09 [ 246MiB/s] [====================================================================================================================================================================================================================================================================================] 107%
DEBUG: getting estimated transfer size from source  using "  zfs send -L -c  -nvP -I 'rpool/ROOT/pve-1@syncoid_CLONEMIGRATE_pve_2024-08-10:18:20:17-GMT03:00' 'rpool/ROOT/pve-1@syncoid_CLONEMIGRATE_pve_2024-08-10:18:27:10-GMT03:00' 2>&1 |"...
DEBUG: sendsize = 451832
DEBUG: checking to see if destination_pool/ROOT/pve-1 on  is already in zfs receive using  ps -Ao args= ...
INFO: Updating new target filesystem with incremental rpool/ROOT/pve-1@syncoid_CLONEMIGRATE_pve_2024-08-10:18:20:17-GMT03:00 ... syncoid_CLONEMIGRATE_pve_2024-08-10:18:27:10-GMT03:00 (~ 441 KB):
DEBUG:  zfs send -L -c  -I 'rpool/ROOT/pve-1'@'syncoid_CLONEMIGRATE_pve_2024-08-10:18:20:17-GMT03:00' 'rpool/ROOT/pve-1'@'syncoid_CLONEMIGRATE_pve_2024-08-10:18:27:10-GMT03:00' | mbuffer  -q -s 128k -m 768M | pv -p -t -e -r -b -s 451832 |  zfs receive -u  -o org.zfsbootmenu:commandline= -o acltype=posix -o mountpoint=/ -s-F 'destination_pool/ROOT/pve-1'
514KiB 0:00:00 [6.59MiB/s] [====================================================================================================================================================================================================================================================================================] 116%

Well, it is definitely due to how the zfs recv command is constructed when preserving properties. It's not a single-quote vs. double-quote issue, though: there are no quotes at all around the value of org.zfsbootmenu:commandline=.
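
The fix presumably belongs where syncoid assembles the receive command: each locally gathered property value needs to be shell-escaped before being spliced into the pipeline string. A minimal bash sketch of the idea (illustrative only, not syncoid's actual Perl implementation):

recvopts=""
while IFS=$'\t' read -r prop value; do
    # printf %q shell-escapes the value so embedded spaces survive
    recvopts="$recvopts -o $prop=$(printf '%q' "$value")"
done < <(zfs get -H -s local -o property,value all rpool/ROOT/pve-1)
echo "zfs receive -u$recvopts -s -F destination_pool/ROOT/pve-1"
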

@abclution
Author

abclution commented Aug 10, 2024

then once it fails reissuing the zfs send | zfs receive command that --debug showed you, but with the quotes you believe are missing?

Oops, trying that now.

Confirmed. Changing the debug output command:

zfs send -L -c 'rpool/ROOT/pve-1'@'syncoid_CLONEMIGRATE_pve_2024-08-10:18:20:17-GMT03:00' | mbuffer -q -s 128k -m 768M | pv -p -t -e -r -b -s 2214117968 | zfs receive -u -o acltype=posix -o mountpoint=/ -o org.zfsbootmenu:commandline=ro pm_debug_messages zswap.enabled=1 zswap.compressor=zstd zswap.max_pool_percent=25 zswap.zpool=z3fold psi=1 iommu=on iommu=pt amdgpu.ppfeaturemask=0xffffffff lsm=landlock,lockdown,yama,integrity,apparmor crashkernel=384M-:128M loglevel=4 -s -F 'destination_pool/ROOT/pve-1'

so that the org.zfsbootmenu:commandline value is quoted (double quotes), the command completes normally:


zfs send -L -c  'rpool/ROOT/pve-1'@'syncoid_CLONEMIGRATE_pve_2024-08-10:18:20:17-GMT03:00' | mbuffer  -q -s 128k -m 768M | pv -p -t -e -r -b -s 2214117968 |  zfs receive -u  -o acltype=posix -o mountpoint=/ -o org.zfsbootmenu:commandline="ro pm_debug_messages zswap.enabled=1 zswap.compressor=zstd zswap.max_pool_percent=25 zswap.zpool=z3fold psi=1 iommu=on iommu=pt amdgpu.ppfeaturemask=0xffffffff lsm=landlock,lockdown,yama,integrity,apparmor crashkernel=384M-:128M loglevel=4" -s -F 'destination_pool/ROOT/pve-1'
2.23GiB 0:00:04 [ 458MiB/s] [====================================================================================================================================================================================================================================================================================] 107%

Additionally, org.zfsbootmenu:commandline is correct on the destination dataset:


zfs get org.zfsbootmenu:commandline destination_pool/ROOT/pve-1
NAME                         PROPERTY                     VALUE                                                                                                                                                                                                                                            SOURCE
destination_pool/ROOT/pve-1  org.zfsbootmenu:commandline  ro pm_debug_messages zswap.enabled=1 zswap.compressor=zstd zswap.max_pool_percent=25 zswap.zpool=z3fold psi=1 iommu=on iommu=pt amdgpu.ppfeaturemask=0xffffffff lsm=landlock,lockdown,yama,integrity,apparmor crashkernel=384M-:128M loglevel=4  local
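
For completeness, a quick way to compare the locally set properties on source and destination after the transfer (a sketch using stock zfs get flags):

diff <(zfs get -H -s local -o property,value all rpool/ROOT/pve-1) \
     <(zfs get -H -s local -o property,value all destination_pool/ROOT/pve-1)
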
