[Tokenomics] Use big.Rat to calculate new difficulty hash #831
base: main
Conversation
```go
if isDecreasingDifficulty && bytes.Equal(prevTargetHash, BaseRelayDifficultyHashBz) {
	difficultyScalingRatio := big.NewRat(int64(targetNumRelays), int64(newRelaysEma))
	scaledDifficultyHashBz := ScaleRelayDifficultyHash(prevTargetHash, difficultyScalingRatio)
	// If scaledDifficultyHash is longer than BaseRelayDifficultyHashBz, then use
```
Are you sure about this?
Let's assume BaseRelayDifficulty is fff (allow all relays).
Let's assume ScaledDifficulty is 0001 (allow almost no relays).
We're going to get a completely wrong output.
I think my previous business logic was a bit different for this reason.
- If I'm wrong -> cool. Can you explain?
- If I'm right -> can you add a test case to mitigate the regression?
`ScaleRelayDifficultyHash` gets its hash bytes from a `big.Int`, which never appends or left-pads zero bytes. To make it extra safe, I'll trim any extra zeros on the left side.
I just had one suggestion but otherwise, this LGTM! 🚀
```go
scaledHashRat := new(big.Rat).Mul(difficultyHashRat, difficultyScalingRatio)
scaledHashInt := new(big.Int).Div(scaledHashRat.Num(), scaledHashRat.Denom())
```
👌 Very cool! 😎
```go
require.Equal(t, len(expectedNewHashBz), len(newDifficultyHash), "scaled down difficulty should have been padded")
} else if targetNumRelays > newRelaysEma {
	require.Greater(t, len(expectedScaledHashBz), len(newDifficultyHash))
	require.Equal(t, len(expectedNewHashBz), len(newDifficultyHash), "scaled down difficulty should have been padded")
```
Suggested change:
```diff
- require.Equal(t, len(expectedNewHashBz), len(newDifficultyHash), "scaled down difficulty should have been padded")
+ require.Equal(t, len(expectedNewHashBz), len(newDifficultyHash), "scaled up difficulty should have been truncated")
```
Summary
Refactor the difficulty hash calculation to use `big.Rat` instead of floats to avoid precision loss. It delays integer conversion as much as possible. Fix relay difficulty tests to use `big.Rat` and avoid architecture-dependent floating point calculations.

Issue
Type of change
Select one or more from the following:
Add the `consensus-breaking` label if so. See [Infra] Automatically add the `consensus-breaking` label #791 for details.

Testing
make go_develop_and_test
make test_e2e
Add the `devnet-test-e2e` label to the PR.

Sanity Checklist