
tests/block_cloning: try harder to stay on same txg in fallback test #15303

Merged

Conversation

@robn (Member) commented Sep 21, 2023

Description

We've observed this test failing intermittently. When it does, the "same block" check shows that both files have the same content, that is, the file was cloned.

The only way this could have happened is if the open txg moved between the dd and clonefile calls. That's possible because although we set zfs_txg_timeout to be large, that only affects the wait time in the sync thread at the start of a new txg; it doesn't change anything if it's currently waiting or working.

So here we just force the txgs to move immediately beforehand, which should get both operations onto the same txg as intended.

Sponsored-By: OpenDrives Inc.
Sponsored-By: Klara Inc.
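
For illustration, a minimal ksh sketch of the kind of change described above, in the style of the ZFS Test Suite (the file names and exact arguments are assumptions, not the actual test source): force a txg sync immediately before the write-and-clone pair so that both operations land in the same open txg.

    # Hypothetical excerpt modelled on the fallback-same-txg test; paths
    # and arguments are illustrative only.

    # Force the currently open txg to quiesce and sync now. With
    # zfs_txg_timeout already set large, the next txg should then stay
    # open long enough for the dd and clonefile calls below to share it.
    log_must zpool sync $TESTPOOL

    # Write a file and immediately clone it; because both happen in the
    # same txg, the clone should fall back to an ordinary copy rather
    # than block cloning, which is what the test checks for.
    log_must dd if=/dev/urandom of=/$TESTDIR/file bs=128K count=4
    log_must clonefile -f /$TESTDIR/file /$TESTDIR/clone 0 0 524288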

How Has This Been Tested?

Ran the whole block_cloning suite on kernels 6.4.2 and 6.4.15, and on Fedora 37 specifically.

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Performance enhancement (non-breaking change which improves efficiency)
  • Code cleanup (non-breaking change which makes code smaller or more readable)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Library ABI change (libzfs, libzfs_core, libnvpair, libuutil and libzfsbootenv)
  • Documentation (a change to man pages or other documentation)

Checklist:

We've observed this test failing intermittently. When it does, the
"same block" check shows that both files have the same content, that is,
the file was cloned.

The only way this could have happened is if the open txg moved between
the dd and clonefile calls. That's possible because although we set
zfs_txg_timeout to be large, that only affects the wait time in the sync
thread at the start of a new txg; it doesn't change anything if it's
currently waiting or working.

So here we just force the txgs to move immediately beforehand, which
should get both operations onto the same txg as intended.

Signed-off-by: Rob Norris <[email protected]>
Sponsored-By: OpenDrives Inc.
Sponsored-By: Klara Inc.
@robn (Member, Author) commented Sep 21, 2023

@behlendorf this is a little bit of guesswork since I couldn't reproduce it myself, and it's probably not technically enough because it doesn't actually lock out the sync in any meaningful way, it just tries to get the timing right. But it's probably not worse than before! If you've got better ideas let me know!

@behlendorf (Contributor) left a comment

This works for me (and the CI). It's not the first time we've needed to add a zpool sync to force a txg to be written specifically for a test case. Thanks for running this down.

@behlendorf merged commit 2dc89b9 into openzfs:master Sep 22, 2023
18 of 19 checks passed
behlendorf pushed a commit to behlendorf/zfs that referenced this pull request Sep 22, 2023
behlendorf pushed a commit that referenced this pull request Sep 22, 2023
lundman pushed a commit to openzfsonwindows/openzfs that referenced this pull request Dec 12, 2023