Merge summaries #1
base: qemu
Conversation
Thank you a lot, I will use it.
The FreeBSD images are in the repository now: https://github.com/mcmilk/openzfs-freebsd-images/releases
Force-pushed from 80640e1 to aa4953b
Force-pushed from 88ef792 to a401588
FreeBSD 13 has problems with the virtio NIC. Just use the e1000 NIC, like I have done here: https://github.com/mcmilk/zfs/tree/qemu-machines2
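For illustration, a minimal QEMU sketch of the NIC swap (the disk image name and the rest of the command line are hypothetical, not taken from the linked branch):

```sh
# virtio-net device (problematic on FreeBSD 13 per the comment above):
#   -device virtio-net-pci,netdev=net0
# e1000 workaround used instead:
qemu-system-x86_64 -m 2G \
  -drive file=freebsd13.qcow2,format=qcow2 \
  -netdev user,id=net0 \
  -device e1000,netdev=net0
```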
Ah, some other problem :(
Force-pushed from 136cf60 to 73f5c57
Force-pushed from 3c65b4d to 2d028b0
The timezone "US/Mountain" isn't supported on newer Linux versions. Using the correct timezone "America/Denver", as is done in FreeBSD, will fix this. Older Linux distros should also behave okay with this.

Signed-off-by: Tino Reichardt <[email protected]>
Reviewed-by: Tony Hutter <[email protected]>
Reviewed-by: George Melikov <[email protected]>
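A minimal shell sketch of the idea, assuming the usual tzdata layout (the paths are standard locations, not taken from the commit):

```sh
# "US/Mountain" is only a legacy backward-compatibility link in tzdata
# and may be absent on newer distros; "America/Denver" is the canonical name.
if [ -e /usr/share/zoneinfo/America/Denver ]; then
	export TZ="America/Denver"
fi
date   # should now report Mountain time
```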
This test was failing before:

- FAIL cli_root/zfs_copies/zfs_copies_006_pos (expected PASS)

Signed-off-by: Tino Reichardt <[email protected]>
Reviewed-by: Tony Hutter <[email protected]>
Reviewed-by: George Melikov <[email protected]>
This includes the last 12.x release (now EOL) and 13.0 development versions (<1300139).

Sponsored-by: https://despairlabs.com/sponsor/
Signed-off-by: Rob Norris <[email protected]>
Reviewed-by: Alexander Motin <[email protected]>
Reviewed-by: Tino Reichardt <[email protected]>
Reviewed-by: Tony Hutter <[email protected]>
@mcmilk the diff:

```diff
diff --git a/tests/zfs-tests/tests/functional/cli_root/zpool_status/zpool_status_008_pos.ksh b/tests/zfs-tests/tests/functional/cli_root/zpool_status/zpool_status_008_pos.ksh
index 6be2ad5a7..70f480cbb 100755
--- a/tests/zfs-tests/tests/functional/cli_root/zpool_status/zpool_status_008_pos.ksh
+++ b/tests/zfs-tests/tests/functional/cli_root/zpool_status/zpool_status_008_pos.ksh
@@ -69,12 +69,12 @@ for raid_type in "draid2:3d:6c:1s" "raidz2"; do
log_mustnot eval "zpool status -e $TESTPOOL2 | grep ONLINE"
# Check no ONLINE slow vdevs are show. Then mark IOs greater than
- # 10ms slow, delay IOs 20ms to vdev6, check slow IOs.
+ # 40ms slow, delay IOs 80ms to vdev6, check slow IOs.
log_must check_vdev_state $TESTPOOL2 $TESTDIR/vdev6 "ONLINE"
log_mustnot eval "zpool status -es $TESTPOOL2 | grep ONLINE"
- log_must set_tunable64 ZIO_SLOW_IO_MS 10
- log_must zinject -d $TESTDIR/vdev6 -D20:100 $TESTPOOL2
+ log_must set_tunable64 ZIO_SLOW_IO_MS 40
+ log_must zinject -d $TESTDIR/vdev6 -D80:100 $TESTPOOL2
log_must mkfile 1048576 /$TESTPOOL2/testfile
sync_pool $TESTPOOL2
log_must set_tunable64 ZIO_SLOW_IO_MS $OLD_SLOW_IO
```

I'm still trying to figure out why …
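For reference, a hedged sketch of what that tunable does on Linux (the sysfs path is the standard module-parameter location; the default value is from upstream docs, not this thread):

```sh
# I/Os slower than zio_slow_io_ms are counted as "slow" by zpool status -s.
cat /sys/module/zfs/parameters/zio_slow_io_ms    # 30000 (30s) by default
# The test temporarily lowers it, here to the new 40ms threshold (as root):
echo 40 > /sys/module/zfs/parameters/zio_slow_io_ms
```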
Force-pushed from e9cbf8b to ba24656
This commit adds functional tests for these systems:

- AlmaLinux 8, AlmaLinux 9
- ArchLinux
- CentOS Stream 9
- Fedora 39, Fedora 40
- Debian 11, Debian 12
- FreeBSD 13, FreeBSD 14, FreeBSD 15
- Ubuntu 20.04, Ubuntu 22.04, Ubuntu 24.04

Workflow for each operating system (a sketch of one iteration follows below):

- install QEMU on the GitHub runner
- download the current cloud image
- start and init that image via cloud-init
- install deps and power off the system
- start the system, build OpenZFS, and then power off again
- clone the system and start QEMU workers for parallel testing
- do the functional testing, hopefully in < 3h

Signed-off-by: Tino Reichardt <[email protected]>
Signed-off-by: Tony Hutter <[email protected]>
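A hedged sketch of one iteration of that workflow (the image URL, sizes, and user-data are illustrative assumptions, not the workflow's actual values):

```sh
# Fetch a current cloud image (URL is an example).
wget https://cloud-images.ubuntu.com/releases/24.04/release/ubuntu-24.04-server-cloudimg-amd64.img

# Give the build some room.
qemu-img resize ubuntu-24.04-server-cloudimg-amd64.img 20G

# Build a cloud-init seed (cloud-localds comes with cloud-image-utils).
cat > user-data <<'EOF'
#cloud-config
users:
  - name: zfs
    sudo: ALL=(ALL) NOPASSWD:ALL
    shell: /bin/bash
EOF
cloud-localds seed.img user-data

# First boot: cloud-init provisions the VM; e1000 avoids the FreeBSD 13
# virtio NIC problem mentioned earlier in the thread.
qemu-system-x86_64 -enable-kvm -m 4G -smp 4 \
  -drive file=ubuntu-24.04-server-cloudimg-amd64.img,format=qcow2,if=virtio \
  -drive file=seed.img,format=raw,if=virtio \
  -nic user,model=e1000,hostfwd=tcp::2222-:22 \
  -nographic

# Cloning for parallel workers can use qcow2 backing files:
qemu-img create -f qcow2 -b ubuntu-24.04-server-cloudimg-amd64.img -F qcow2 worker1.qcow2
```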
The error comes from the command, I think. Option 1: check the status first, and if a scrub has already started, then just wait for it. What would you prefer?
@mcmilk I'm currently testing with this:

```diff
diff --git a/tests/zfs-tests/tests/functional/raidz/raidz_expand_001_pos.ksh b/tests/zfs-tests/tests/functional/raidz/raidz_expand_001_pos.ksh
index 063d7fa73..167f39cfc 100755
--- a/tests/zfs-tests/tests/functional/raidz/raidz_expand_001_pos.ksh
+++ b/tests/zfs-tests/tests/functional/raidz/raidz_expand_001_pos.ksh
@@ -153,8 +153,12 @@ function test_scrub # <pool> <parity> <dir>
done
log_must zpool import -o cachefile=none -d $dir $pool
+ if is_pool_scrubbing $pool ; then
+ wait_scrubbed $pool
+ fi
log_must zpool scrub -w $pool
+
log_must zpool clear $pool
log_must zpool export $pool
@@ -165,7 +169,9 @@ function test_scrub # <pool> <parity> <dir>
done
log_must zpool import -o cachefile=none -d $dir $pool
-
+ if is_pool_scrubbing $pool ; then
+ wait_scrubbed $pool
+ fi
log_must zpool scrub -w $pool
log_must check_pool_status $pool "errors" "No known data errors"
diff --git a/tests/zfs-tests/tests/functional/raidz/raidz_expand_002_pos.ksh b/tests/zfs-tests/tests/functional/raidz/raidz_expand_002_pos.ksh
index 004f3d1f9..e416926d1 100755
--- a/tests/zfs-tests/tests/functional/raidz/raidz_expand_002_pos.ksh
+++ b/tests/zfs-tests/tests/functional/raidz/raidz_expand_002_pos.ksh
@@ -105,6 +105,10 @@ for disk in ${disks[$(($nparity+2))..$devs]}; do
log_fail "pool $pool not expanded"
fi
+ # It's possible the pool could be auto scrubbing here. If so, wait.
+ if is_pool_scrubbing $pool ; then
+ wait_scrubbed $pool
+ fi
verify_pool $pool
pool_size=$expand_size
```

I think that might help some of the failures. Also, I tweaked my commit a little to add more time in …
I will run this in a for loop; I think 50 times should be a good start. It's running with the -I 55 option: …
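A sketch of such a loop (the runner path and test name are assumptions, not from the comment):

```sh
# Repeat the flaky test until it fails, up to 50 iterations.
for i in $(seq 1 50); do
	echo "=== iteration $i ==="
	./scripts/zfs-tests.sh -t tests/zfs-tests/tests/functional/raidz/raidz_expand_001_pos.ksh || break
done
```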
Hm, the … I used this: …
AlmaLinux 8
Force-pushed from a0063d0 to 836a672
Some points to the …

Special test run with only … So the raid code is maybe okay... but some SPL thing?
Just testing, please ignore