Fix the ZFS checksum error histograms with larger record sizes
My analysis in PR openzfs#14716 was incorrect.  Each histogram bucket contains
the number of incorrect bits, by position in a 64-bit word, over the
entire record.  8-bit buckets can overflow for record sizes above 2k.
To forestall that, saturate each bucket at 255.  That should still get
the point across: either all bits are equally wrong, or just a couple
are.

Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Alan Somers <[email protected]>
Sponsored-by: Axcient
Closes openzfs#15049
asomers authored and behlendorf committed Jul 21, 2023
1 parent 56ed389 commit 4d73940
Showing 1 changed file with 1 addition and 1 deletion.
module/zfs/zfs_fm.c (1 addition, 1 deletion)
@@ -790,7 +790,7 @@ update_histogram(uint64_t value_arg, uint8_t *hist, uint32_t *count)
 	/* We store the bits in big-endian (largest-first) order */
 	for (i = 0; i < 64; i++) {
 		if (value & (1ull << i)) {
-			hist[63 - i]++;
+			hist[63 - i] = MAX(hist[63 - i], hist[63 - i] + 1);
 			++bits;
 		}
 	}
