testing: pageworkload HeavySkewWriteRead support deleting data (#6178)
#6286
This is an automated cherry-pick of #6178
What problem does this PR solve?
Issue Number: ref #6163
Problem Summary:
The previous pageworkload HeavySkewWriteRead never deleted or updated existing pages; it only appended new ones. As a result, it could not simulate the real workload of delta-tree and wasted a lot of disk space. We want to use this workload to verify the bugs reported in #6163. A sketch of the mixed append/update/delete pattern follows below.
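For illustration, here is a minimal, self-contained sketch of what such a mixed workload looks like once updates and deletes are in play. The `PageStore` type, its `put`/`del` methods, and the operation ratios are all hypothetical stand-ins, not TiFlash's actual page-storage API:

```cpp
#include <cstdint>
#include <map>
#include <random>
#include <vector>

// Hypothetical in-memory stand-in for a page store.
struct PageStore
{
    std::map<uint64_t, std::vector<char>> pages;
    void put(uint64_t id, std::vector<char> data) { pages[id] = std::move(data); }
    void del(uint64_t id) { pages.erase(id); }
};

int main()
{
    PageStore store;
    std::mt19937_64 rng{42};
    std::uniform_int_distribution<int> op_dist(0, 99);
    uint64_t next_id = 0;

    for (int i = 0; i < 100000; ++i)
    {
        const int op = op_dist(rng);
        if (op < 60 || store.pages.empty())
        {
            // Mostly appends, as in the original workload.
            store.put(next_id++, std::vector<char>(128));
        }
        else
        {
            // Otherwise touch an existing page id, so the page count
            // stays bounded instead of growing without limit.
            std::uniform_int_distribution<uint64_t> id_dist(0, next_id - 1);
            const uint64_t id = id_dist(rng);
            if (op < 85)
                store.put(id, std::vector<char>(128)); // update in place
            else
                store.del(id); // delete
        }
    }
    return 0;
}
```

With deletes and updates mixed in, the number of live pages settles around a steady state instead of growing for the lifetime of the test, which is closer to what delta-tree actually produces.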
What is changed and how it works?
PSWriter::updatedRandomData generated random data for updating and reading with a byte-length range of [0, buffer_size_max]. It is now replaced by PSWriter::getRandomData, which limits the byte length to [buffer_size_min, buffer_size_max]. The narrower size range yields more stable read/write speed.
The read/write performance:
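As a sketch of the narrowed length range (a hypothetical standalone version, not the actual PSWriter code), the new generator draws page sizes from [buffer_size_min, buffer_size_max] rather than [0, buffer_size_max]:

```cpp
#include <cstddef>
#include <random>
#include <vector>

// Generate a random buffer whose length is drawn uniformly from
// [buffer_size_min, buffer_size_max]. The old behavior corresponds to
// passing buffer_size_min = 0, which produced a much wider spread of
// page sizes and therefore less stable throughput.
std::vector<char> getRandomData(std::mt19937 & rng, size_t buffer_size_min, size_t buffer_size_max)
{
    std::uniform_int_distribution<size_t> len_dist(buffer_size_min, buffer_size_max);
    std::vector<char> buf(len_dist(rng));

    std::uniform_int_distribution<int> byte_dist(0, 255);
    for (auto & b : buf)
        b = static_cast<char>(byte_dist(rng));
    return buf;
}
```

Raising the lower bound keeps every generated page within a predictable size band, so per-operation cost varies less and throughput measurements are easier to compare across runs.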
Check List
Tests
I ran the workload for 20 minutes. After it finished, 757 pages remained rather than an ever-growing number, and the disk space used by the test was much more reasonable.
Side effects
Documentation
Release note