
[Tokenomics] Use big.Rat to calculate new difficulty hash #831

Open · wants to merge 6 commits into base: main
Conversation

@red-0ne (Contributor) commented Sep 23, 2024

Summary

Refactor the difficulty hash calculation to use big.Rat instead of floats to avoid precision loss, delaying the conversion to an integer for as long as possible.

Fix the relay difficulty tests to use big.Rat as well, avoiding architecture-dependent floating-point calculations.

Issue

(image attachment from the linked issue)


Testing

  • Unit Tests: make go_develop_and_test
  • LocalNet E2E Tests: make test_e2e
  • DevNet E2E Tests: Add the devnet-test-e2e label to the PR.

Sanity Checklist

  • I have tested my changes using the available tooling
  • I have commented my code
  • I have performed a self-review of my own code; both comments & source code
  • I have created and referenced any new tickets, if applicable
  • I have left TODOs throughout the codebase, if applicable

@red-0ne added the tokenomics and utility labels Sep 23, 2024
@red-0ne red-0ne added this to the Shannon Beta TestNet Launch milestone Sep 23, 2024
@red-0ne red-0ne self-assigned this Sep 23, 2024
pkg/crypto/protocol/relay_difficulty.go (resolved review thread)
```go
if isDecreasingDifficulty && bytes.Equal(prevTargetHash, BaseRelayDifficultyHashBz) {
	difficultyScalingRatio := big.NewRat(int64(targetNumRelays), int64(newRelaysEma))
	scaledDifficultyHashBz := ScaleRelayDifficultyHash(prevTargetHash, difficultyScalingRatio)
	// If scaledDifficultyHash is longer than BaseRelayDifficultyHashBz, then use
```
A reviewer (Member) commented:
Are you sure about this?

Let's assume BaseRelayDifficulty is fff (allow all relays)
Let's assume ScaledDifficulty is 0001 (allow almost no relays)

We're going to get a completely wrong output.

I think my previous business logic was a bit different for this reason.

  • If I'm wrong -> cool. Can you explain?
  • If I'm right -> can you add a test case to mitigate the regression?

@red-0ne (Contributor, Author) replied:
ScaleRelayDifficultyHash is getting its hash bytes from a big.Int which should not append or pad left zero bytes.

To make it extra safe, I'll trim any extra zeros on the left side.

pkg/crypto/protocol/relay_difficulty.go (resolved review thread)
@red-0ne changed the title from "[Tokenomics] Use ratio for relay mining difficulty" to "[Tokenomics] Use big.Rat to calculate new difficulty hash" on Sep 27, 2024
@bryanchriswhite (Contributor) left a comment:
I just had one suggestion but otherwise, this LGTM! 🚀

Comment on lines +80 to +81
```go
scaledHashRat := new(big.Rat).Mul(difficultyHashRat, difficultyScalingRatio)
scaledHashInt := new(big.Int).Div(scaledHashRat.Num(), scaledHashRat.Denom())
```
A Contributor replied:
👌 Very cool! 😎

```go
require.Equal(t, len(expectedNewHashBz), len(newDifficultyHash), "scaled down difficulty should have been padded")
} else if targetNumRelays > newRelaysEma {
	require.Greater(t, len(expectedScaledHashBz), len(newDifficultyHash))
	require.Equal(t, len(expectedNewHashBz), len(newDifficultyHash), "scaled down difficulty should have been padded")
```
A Contributor suggested:
Suggested change (correcting the copy-pasted assertion message):

```diff
- require.Equal(t, len(expectedNewHashBz), len(newDifficultyHash), "scaled down difficulty should have been padded")
+ require.Equal(t, len(expectedNewHashBz), len(newDifficultyHash), "scaled up difficulty should have been truncated")
```

Status: 👀 In review · 3 participants