
ZTS checkpoint_indirect failure #12623

Open
behlendorf opened this issue Oct 8, 2021 · 3 comments
Labels
- Component: Test Suite (indicates an issue with the test framework or a test case)
- Status: Stale (no recent activity for issue)
- Type: Defect (incorrect behavior, e.g. crash, hang)

Comments

@behlendorf
Contributor

System information

| Type | Version/Name |
| --- | --- |
| Distribution Name | FreeBSD |
| Distribution Version | 12 |
| Kernel Version | |
| Architecture | x86_64 |
| OpenZFS Version | zfs-2.1.99-473-ga5b464263 |

Describe the problem you're observing

Tests with results other than PASS that are unexpected:
    FAIL pool_checkpoint/checkpoint_indirect (expected PASS)

Describe how to reproduce the problem

This failure reproduces frequently in the CI when running the ZTS on FreeBSD 12.
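To reproduce outside the CI, the single failing test case can be run through the ZTS wrapper script. This is a sketch only: the script and test paths below follow the FreeBSD install prefix seen in the log, and may differ on other systems or install layouts.

```shell
#!/bin/sh
# Hedged sketch: run only the failing ZTS test case, assuming the
# FreeBSD package install prefix from the log. Must be run with
# sufficient privileges to create md(4) devices and pools.
/usr/local/share/zfs/zfs-tests.sh \
    -t /usr/local/share/zfs/zfs-tests/tests/functional/pool_checkpoint/checkpoint_indirect
```

Repeating the run a handful of times should be enough to hit the failure, given how frequently it reproduces in the CI.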

Include any warning/errors/backtraces from the system logs

http://build.zfsonlinux.org/builders/FreeBSD%20stable%2F12%20amd64%20%28TEST%29/builds/6565/steps/shell_4/logs/summary

Test: /usr/local/share/zfs/zfs-tests/tests/functional/pool_checkpoint/checkpoint_indirect (run as root) [06:01] [FAIL]
19:47:31.74 SUCCESS: zpool create -O sync=disabled testpool md0
19:47:31.78 SUCCESS: zfs create testpool/disks
19:48:17.91 SUCCESS: cp /mnt/ckpoint_saved_pool/dsk1 /mnt/ckpoint_saved_pool/dsk2 /testpool/disks
19:48:18.17 SUCCESS: zpool import -d /testpool/disks nestedpool
19:48:18.24 SUCCESS: zpool remove nestedpool /testpool/disks/dsk1
19:48:26.89 SUCCESS: zpool wait -t remove nestedpool
19:48:26.90 SUCCESS: zpool sync nestedpool
19:48:26.92 SUCCESS: is_pool_removed nestedpool
19:48:26.92 SUCCESS: wait_for_removal nestedpool
19:48:26.93 SUCCESS: vdevs_in_pool nestedpool /testpool/disks/dsk1 exited 1
19:48:27.01 SUCCESS: zpool add nestedpool /testpool/disks/dsk1
19:48:29.67 SUCCESS: zpool remove nestedpool /testpool/disks/dsk2
19:48:46.02 SUCCESS: zpool wait -t remove nestedpool
19:48:46.03 SUCCESS: zpool sync nestedpool
19:48:46.04 SUCCESS: is_pool_removed nestedpool
19:48:46.04 SUCCESS: wait_for_removal nestedpool
19:48:46.06 SUCCESS: vdevs_in_pool nestedpool /testpool/disks/dsk2 exited 1
19:48:46.12 SUCCESS: zpool add nestedpool /testpool/disks/dsk2
19:48:46.13 NAME                     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
19:48:46.13 nestedpool              1.88G   361M  1.52G        -         -    32%    18%  1.00x    ONLINE  -
19:48:46.13   /testpool/disks/dsk1   960M   361M   599M        -         -    65%  37.6%      -    ONLINE
19:48:46.13   /testpool/disks/dsk2   960M      0   960M        -         -     0%  0.00%      -    ONLINE
19:48:46.13 testpool                3.75G  2.12G  1.63G        -         -    17%    56%  1.00x    ONLINE  -
19:48:46.13   md0                   3.75G  2.12G  1.63G        -         -    17%  56.6%      -    ONLINE
19:48:46.13 SUCCESS: zpool list -v
19:48:46.16 SUCCESS: zpool checkpoint nestedpool
19:48:46.78 SUCCESS: zfs destroy nestedpool/testfs1
19:48:46.83 SUCCESS: zfs create -o compression=lz4 -o recordsize=8k nestedpool/testfs2
19:48:46.83 SUCCESS: mkfile -n 512M /nestedpool/testfs2/testfile2
19:50:43.80 SUCCESS: randwritecomp /nestedpool/testfs/testfile0 400000
19:53:07.41 SUCCESS: randwritecomp /nestedpool/testfs2/testfile2 400000
19:53:07.41 NAME                     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
19:53:07.41 nestedpool              1.88G   766M  1.13G     359M         -    60%    39%  1.00x    ONLINE  -
19:53:07.41   /testpool/disks/dsk1   960M   494M   466M     359M         -    70%  51.4%      -    ONLINE
19:53:07.41   /testpool/disks/dsk2   960M   272M   688M      17K         -    50%  28.4%      -    ONLINE
19:53:07.41 testpool                3.75G  2.12G  1.63G        -         -    27%    56%  1.00x    ONLINE  -
19:53:07.41   md0                   3.75G  2.12G  1.63G        -         -    27%  56.6%      -    ONLINE
19:53:07.41 SUCCESS: zpool list -v
19:53:08.72 SUCCESS: zpool export nestedpool
...
19:53:33.20 ERROR: zdb -e -p /testpool/disks nestedpool exited 3
19:53:33.21 NOTE: Performing test-fail callback (/usr/local/share/zfs/zfs-tests/callbacks/zfs_dmesg.ksh)
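Since the failure is `zdb` exiting with status 3 against the exported nested pool, it can be re-run by hand to capture its full output. This is a sketch under the assumption that the pool files from the test run are still present under `/testpool/disks`; the `-A` flags relax assertion handling per zdb(8) so more output is reported before aborting.

```shell
#!/bin/sh
# Hedged sketch: repeat the failing zdb invocation manually.
#   -e  examine an exported (not currently imported) pool
#   -p  add a search path for the vdev files
#   -AAA  relax assertions/panics so zdb reports more before exiting
zdb -e -p /testpool/disks -AAA nestedpool
echo "zdb exit status: $?"
```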
@behlendorf behlendorf added the "Component: Test Suite" and "Type: Defect" labels Oct 8, 2021
behlendorf added a commit to behlendorf/zfs that referenced this issue Oct 8, 2021
Add the following test failures to the exception list for FreeBSD
to ensure we notice new unexpected failures.

   pool_checkpoint/checkpoint_big_rewind
   pool_checkpoint/checkpoint_indirect

And the following for Linux.

   zvol/zvol_misc/zvol_misc_snapdev

Signed-off-by: Brian Behlendorf <[email protected]>
Issue openzfs#12621
Issue openzfs#12622
Issue openzfs#12623
@behlendorf behlendorf mentioned this issue Oct 8, 2021
behlendorf added a commit that referenced this issue Oct 11, 2021
Add the following test failures to the exception list for FreeBSD
to ensure we notice new unexpected failures.

   pool_checkpoint/checkpoint_big_rewind
   pool_checkpoint/checkpoint_indirect

And the following for Linux.

   zvol/zvol_misc/zvol_misc_snapdev

Reviewed-by: George Melikov <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Issue #12621
Issue #12622
Issue #12623
Closes #12624
tonyhutter pushed a commit to tonyhutter/zfs that referenced this issue Feb 10, 2022
Add the following test failures to the exception list for FreeBSD
to ensure we notice new unexpected failures.

   pool_checkpoint/checkpoint_big_rewind
   pool_checkpoint/checkpoint_indirect

And the following for Linux.

   zvol/zvol_misc/zvol_misc_snapdev

Reviewed-by: George Melikov <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Issue openzfs#12621
Issue openzfs#12622
Issue openzfs#12623
Closes openzfs#12624
tonyhutter pushed further commits with the same message to tonyhutter/zfs that referenced this issue on Feb 14, Feb 16, and Feb 17, 2022.
@stale

stale bot commented Oct 12, 2022

This issue has been automatically marked as "stale" because it has not had any activity for a while. It will be closed in 90 days if no further activity occurs. Thank you for your contributions.

@stale stale bot added the "Status: Stale" label Oct 12, 2022
@behlendorf
Contributor Author

Not stale. This failure is still included in the exception file for FreeBSD.

@stale stale bot removed the "Status: Stale" label Oct 12, 2022
@stale

stale bot commented Oct 15, 2023

This issue has been automatically marked as "stale" because it has not had any activity for a while. It will be closed in 90 days if no further activity occurs. Thank you for your contributions.

@stale stale bot added the "Status: Stale" label Oct 15, 2023