
NRD rules and "recent" kernel pos index #3302

Merged · 53 commits · Jun 10, 2020

Conversation

@antiochp (Member) commented Apr 18, 2020

The new NRD kernel variant was introduced in #3303.

This PR adds the kernel_pos index to track "recent" NRD kernel history for efficient validation of the NRD relative lock height rule.


LMDB index impl

  • add linked list implementation (lmdb based)
  • add kernel_pos index based on linked list implementation (see sketch after this list)
    • push latest kernel pos to head of list
    • peek latest kernel pos
    • pop latest kernel pos to support rewind
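
A minimal sketch of the list API described above, using an in-memory map in place of the LMDB backend (KernelPosIndex, CommitPos and the method names here are illustrative; the PR's actual types may differ):

use std::collections::HashMap;

/// Stand-in for the real commitment type identifying a kernel.
type Commitment = [u8; 33];

/// Position of a kernel in the kernel MMR plus the height it was included at.
#[derive(Clone, Copy, Debug)]
struct CommitPos {
    pos: u64,
    height: u64,
}

/// Per-commitment list of recent positions, newest at the head.
/// The PR backs this with LMDB; a HashMap of Vecs stands in here.
#[derive(Default)]
struct KernelPosIndex {
    lists: HashMap<Commitment, Vec<CommitPos>>,
}

impl KernelPosIndex {
    /// Push the latest kernel pos to the head of the list.
    fn push_pos(&mut self, commit: Commitment, pos: CommitPos) {
        self.lists.entry(commit).or_default().push(pos);
    }

    /// Peek the latest kernel pos, if any.
    fn peek_pos(&self, commit: &Commitment) -> Option<CommitPos> {
        self.lists.get(commit).and_then(|l| l.last().copied())
    }

    /// Pop the latest kernel pos to support rewind.
    fn pop_pos(&mut self, commit: &Commitment) -> Option<CommitPos> {
        self.lists.get_mut(commit).and_then(|l| l.pop())
    }
}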

NRD rule validation

  • apply_block (during block processing pipeline):
    • push NRD kernels to index
    • validate NRD relative lock height rule (see sketch after this list)
    • test coverage for NRD rule validation
    • ensure rewind of kernel_pos index when rewinding blocks
  • rewind kernel_pos index when rewinding "single block" during block processing
    • test coverage of rewind scenario
  • validate NRD relative lock heights when processing transactions in stempool/txpool
    • test coverage for NRD rule validation
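
A hedged sketch of the lock height check itself, building on the KernelPosIndex sketch above (validate_nrd_rule is a hypothetical helper, not the PR's actual function):

// An NRD kernel instance is only valid once at least `relative_height`
// blocks have passed since the most recent instance of the same kernel.
fn validate_nrd_rule(
    index: &KernelPosIndex,
    commit: &Commitment,
    relative_height: u64,
    height: u64,
) -> Result<(), String> {
    if let Some(prev) = index.peek_pos(commit) {
        if height < prev.height + relative_height {
            return Err(format!(
                "NRD lock height not met: prev at {}, valid from {}, current {}",
                prev.height,
                prev.height + relative_height,
                height
            ));
        }
    }
    Ok(())
}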

Txpool

  • pool tests currently "mock" out the chain impl, which complicates testing of NRD rules (see #3342: Rework pool tests to use real chain (was mock chain))
  • pool currently identifies txs by kernel (and kernel hash), which does not play nicely with NRD kernel reuse (OK for now but should be revisited)

Rebuild Index

  • rebuild kernel_pos index on fast sync (see sketch after this list)
  • rebuild kernel_pos index on node init
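
One plausible shape for the rebuild, assuming an iterator over historic kernels in MMR order (the iterator and is_nrd flag are stand-ins for whatever the chain actually exposes):

// Replay NRD kernels in order to reconstruct the index from scratch.
fn rebuild_kernel_pos_index<I>(kernels: I, index: &mut KernelPosIndex)
where
    I: Iterator<Item = (Commitment, CommitPos, bool)>, // (commit, pos, is_nrd)
{
    for (commit, pos, is_nrd) in kernels {
        // Only NRD kernels participate in the relative lock height rule.
        if is_nrd {
            index.push_pos(commit, pos);
        }
    }
}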

Fast Sync

  • validate NRD kernels across full historic kernel set
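
A sketch of what the full-history check could look like during fast sync, tracking the last seen height per NRD kernel in a single chronological pass (the data layout is assumed, not taken from the PR):

use std::collections::HashMap;

// `kernels` yields (commitment, block height, NRD relative height if NRD)
// for the full kernel set in MMR (i.e. chronological) order.
fn validate_nrd_history(
    kernels: impl Iterator<Item = (Commitment, u64, Option<u64>)>,
) -> Result<(), String> {
    let mut last_seen: HashMap<Commitment, u64> = HashMap::new();
    for (commit, height, nrd) in kernels {
        if let Some(relative_height) = nrd {
            if let Some(&prev) = last_seen.get(&commit) {
                if height < prev + relative_height {
                    return Err(format!(
                        "NRD rule violated at height {} (prev instance at {})",
                        height, prev
                    ));
                }
            }
            last_seen.insert(commit, height);
        }
    }
    Ok(())
}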

antiochp self-assigned this May 31, 2020
antiochp added this to the 4.0.0 milestone May 31, 2020
antiochp marked this pull request as ready for review May 31, 2020 12:40
// Process the header first.
// If invalid then fail early.
// If valid then continue with block processing with header_head committed to db etc.
self.process_block_header(&b.header, opts)?;
Contributor

In what cases would we call process_block_single without having validated/processed the header already?
(I would naively expect never.)
Does header validation not commit it to db?

@antiochp (Member, Author) Jun 10, 2020

In most cases we will have processed "header first".
The first broadcast hop from a miner after mining a new block is done as a "full block" and skips the "header first" propagation as we know nobody has seen this block yet.
In this case the receiving node is here in the code, processing the new block without having seen the header yet.

We were previously validating the header in the later call to pipe::process_block().
We want to validate/process the header here, within its own db transaction and write to the MMR backend files, before we begin processing/validating the full block.

The issue was that if the full block failed validation for any reason, the entire db transaction rolled back, including the header processing, while leaving the header MMR backend files updated with the latest header.

@antiochp (Member, Author)

Merging this to master.
We plan to continue testing this on master across mainnet, floonet and usertesting, to keep things as simple as possible.

antiochp merged commit 20e5c19 into mimblewimble:master Jun 10, 2020
// Start at 2 here to differentiate from ListWrapperVariant above.
Head = 2,
Tail = 3,
Middle = 4,
@tromp (Contributor) Jun 11, 2020

Do we really need a linked list with 5 different variants?
Doesn't the database readily support maps from 64-bit ints to 64-bit ints?
We could map the MMR leaf position of an NRD-kernel to the next lower leaf position of an NRD-kernel with the same top k bits, if any. Besides that, we have a 2^k entry map giving the largest MMR leaf position of any NRD-kernel with those top k bits. I guess we could start with k=8 and it would take a long time before NRD kernel usage increases to the point where we'd want to migrate to k=16 in order to keep the lists relatively short.
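
For illustration, a rough sketch of the two-map scheme described above, assuming the top k bits are taken from the kernel commitment (HashMaps stand in for the two db tables; none of this is code from the PR):

use std::collections::HashMap;

const K: u32 = 8; // 2^8 buckets to start, per the suggestion above

// Bucket an NRD kernel by the top K bits of its commitment (assumed;
// byte 0 of a compressed commitment is the parity byte, so skip it).
fn bucket(commit: &[u8; 33]) -> u64 {
    (commit[1] >> (8 - K)) as u64
}

#[derive(Default)]
struct BucketedIndex {
    // bucket -> largest MMR leaf position of any NRD kernel in that bucket
    head: HashMap<u64, u64>,
    // leaf position -> next lower leaf position in the same bucket, if any
    prev: HashMap<u64, u64>,
}

impl BucketedIndex {
    fn push(&mut self, commit: &[u8; 33], pos: u64) {
        let b = bucket(commit);
        if let Some(&head_pos) = self.head.get(&b) {
            self.prev.insert(pos, head_pos);
        }
        self.head.insert(b, pos);
    }

    // Walk a bucket newest-first. Since buckets can collide, callers would
    // look each pos up in the kernel MMR to compare full commitments.
    fn iter_bucket(&self, commit: &[u8; 33]) -> impl Iterator<Item = u64> + '_ {
        let mut cur = self.head.get(&bucket(commit)).copied();
        std::iter::from_fn(move || {
            let pos = cur?;
            cur = self.prev.get(&pos).copied();
            Some(pos)
        })
    }
}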
