(async backing) parachain-system: track limitations for unincluded blocks #2438
Conversation
Also #1866 mentions
Was it supposed to be dmp?
And do we want it in master?
Only if it's fully backwards-compatible.
Yes, just an oversight. I've edited the original issue to fix this as well. Thanks.
Do we want to account for
This comment doesn't point to any specific code, not sure what you mean by this. |
It's a general question, we have max unincluded length which is mapped to But async backing parameters also have |
We could account for it in the runtime, but
pallets/parachain-system/src/lib.rs (Outdated)
@@ -428,6 +547,7 @@ pub mod pallet {
  <RelayStateProof<T>>::put(relay_chain_state);
  <RelevantMessagingState<T>>::put(relevant_messaging_state.clone());
  <HostConfiguration<T>>::put(host_config);
+ <PendingDownwardMessages<T>>::put(downward_messages.clone());
I believe there's no other way to get this value for bandwidth limits besides putting it into storage.
A second option is to introduce a migration and save its length along with ProcessedDownwardMessages, renaming it to something like DownwardQueueSize.
If it's cleared in on_finalize, then it never hits storage in practice, only the in-memory overlay.
It's inserted in set_validation_data, fetched in on_finalize, and cleared in on_initialize.
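The lifecycle described above can be sketched with a minimal mock (this is not the pallet's actual code; the storage key and method names merely mirror the discussion): the value is written by the `set_validation_data` inherent, read in `on_finalize` of the same block, and cleared at the start of the next block.

```rust
use std::collections::HashMap;

/// Hypothetical stand-in for the pallet's storage, keyed by item name.
#[derive(Default)]
struct MockStorage {
    items: HashMap<&'static str, Vec<Vec<u8>>>,
}

impl MockStorage {
    /// Clears the value left over from the previous block.
    fn on_initialize(&mut self) {
        self.items.remove("PendingDownwardMessages");
    }

    /// The inherent inserts the pending downward messages for this block.
    fn set_validation_data(&mut self, downward_messages: Vec<Vec<u8>>) {
        self.items.insert("PendingDownwardMessages", downward_messages);
    }

    /// Only the queue length is needed for bandwidth accounting.
    fn on_finalize(&self) -> Option<usize> {
        self.items.get("PendingDownwardMessages").map(|m| m.len())
    }
}
```

Within one block the value is visible to `on_finalize`; after the next block's `on_initialize` it is gone, which matches the "only the in-memory overlay" observation when clearing happens before the block ends.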
Two things stand out to me:
- only the length of the list is used, not the messages themselves
- it could be cleared in on_finalize too, right?
It also doesn't seem well-defined to me what pending_downward_messages is in this context.
i.e. is it all pending downward messages that were present in the relay-parent state? If so, then we need to update process_inbound_downward_messages to avoid processing them in duplicate.
Or is it only the pending downward messages which haven't been processed? In that case, we shouldn't have to make any changes to the processing logic.
For efficiency's sake, I'd favor the latter - we shouldn't include the same downward messages into parachain blocks multiple times.
horizontal logic probably needs the same, no?
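The second interpretation above (the inherent only carries messages the parachain hasn't processed yet) amounts to taking a suffix of the relay-parent queue. A hedged sketch, where `processed` stands for a counter like `ProcessedDownwardMessages` and the helper name is hypothetical:

```rust
/// Returns the portion of the downward queue (as seen at the relay-parent)
/// that has not yet been processed by the parachain.
/// `processed` is the number of messages already consumed.
fn unprocessed_suffix<T: Clone>(queue_at_relay_parent: &[T], processed: usize) -> Vec<T> {
    queue_at_relay_parent
        .iter()
        .skip(processed) // everything before `processed` was already handled
        .cloned()
        .collect()
}
```

Under this scheme the processing logic needn't change, since no message is ever presented twice.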
I followed the pattern of cleaning everything in on_initialize.
As for the messages, yes, I suggested in the first comments to either put only the length, or:
> Second option is to introduce migration and save its len along with ProcessedDownwardMessages and rename it into something like DownwardQueueSize.
> i.e. is it all pending downward messages that were present in the relay-parent state? If so, then we need to update process_inbound_downward_messages to avoid processing them in duplicate.

These are all messages from the downward queue. They're not used for processing in the current code, so it's not an issue. We should probably just use the length.
But the thing is, according to the code each candidate exhausts the downward message queue:

let dm_count = downward_messages.len() as u32;
// ...
ProcessedDownwardMessages::<T>::put(dm_count);

Do we even need to track this limitation at all?
If we assume that ParachainInherentData only contains messages which have not been posted to any prior parachain block (which is the way it should be, to avoid redundancy), I don't think the current bandwidth limits approach works.
The bandwidth limits for DMP/HRMP-in should be based on the total size of the incoming queues at the relay-parent, whereas some (or most) of those messages may have already been processed.
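The arithmetic implied here is small but worth pinning down. A sketch under the assumptions of this thread (function and parameter names are illustrative, not the pallet's API): the capacity available to a new candidate is the queue size observed at the relay-parent minus whatever the unincluded ancestors have already consumed.

```rust
/// Remaining DMP capacity for a new candidate: the total incoming-queue size
/// at the relay-parent, less the messages already consumed by unincluded
/// ancestor blocks. Saturates at zero rather than underflowing.
fn remaining_dmp_capacity(
    queue_len_at_relay_parent: u32,
    consumed_by_unincluded_ancestors: u32,
) -> u32 {
    queue_len_at_relay_parent.saturating_sub(consumed_by_unincluded_ancestors)
}
```

The saturation covers the case where ancestors built on a newer view have already drained more than this relay-parent's snapshot shows.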
> according to the code each candidate exhausts the dm queue

Yes, this currently looks guaranteed. We could avoid tracking this property as a result, and reintroduce the concept if we get to the point where we implement things more piecemeal.
pallets/parachain-system/src/lib.rs (Outdated)
// sanity-check there's place for the block at finalization phase.
//
// TODO: this potentially restricts parachains from decreasing `MaxUnincludedLen` value.
Yes, it does - parachains manually adjusting MaxUnincludedLen would potentially brick their chain. This is fine as long as there is a warning in the storage item's doc comment.
We will need good ways for parachains to adjust this.
I basically referenced #2438 (comment); we need to link to a GitHub issue too before the merge.
I just realized something - I don't think we can actually access the relay-chain state proof within the on_initialize area of code. It only gets handled later, in the set_validation_data inherent. The logic we currently have in on_initialize actually needs to go there.
Data put into storage by
Force-pushed from ae42744 to 0e4b14a
pallets/parachain-system/src/lib.rs (Outdated)
// sanity-check there's place for the block at finalization phase.
//
// TODO: this potentially restricts parachains from decreasing `MaxUnincludedLen` value.
I think the answer here, now that we've moved this to run in the inherent, is "it doesn't". on_initialize always runs before inherents, so code setting MaxUnincludedLen in the on_initialize of any pallet would run before this, so it is safe.
What about the scenario where there are 4 blocks in the unincluded segment, and the length is reduced to e.g. 1?
That is fine. For example, assume that there is some unconditional code that sets the maximum length to 1 in on_initialize at a certain block height. For more dynamically controlled systems, the reasoning is the same but slightly more complicated.
Let's say we are trying to build on a parent parachain block B, which has unincluded segment length 4. As long as the unincluded segment has length > 1, we can't author a new block. But new relay-parents will "drain" the unincluded segment from the back, so eventually* we will have a (relay-parent, B) pair where the unincluded segment will be short enough to push a new block B' onto. This is the desired behavior of shrinking the maximum segment length.
*eventually, assuming B and its ancestors actually get backed and included in the relay chain
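The draining argument above can be sketched with hypothetical types (these do not mirror the pallet's real data structures): later relay-parents show more ancestors as included, shrinking the segment until a new block fits under the reduced maximum.

```rust
/// Hypothetical model of the unincluded segment: ancestors of the parent
/// block that are not yet included at the current relay-parent.
struct UnincludedSegment {
    /// Block numbers of unincluded ancestors, oldest first.
    blocks: Vec<u32>,
}

impl UnincludedSegment {
    /// Called when a newer relay-parent shows blocks up to (and including)
    /// `included_up_to` as included on the relay chain.
    fn drain_included(&mut self, included_up_to: u32) {
        self.blocks.retain(|&n| n > included_up_to);
    }

    /// A new block may only be authored if appending it keeps the segment
    /// within the (possibly reduced) maximum length.
    fn can_author(&self, max_len: usize) -> bool {
        self.blocks.len() < max_len
    }
}
```

So with 4 blocks in the segment and the maximum reduced to 1, authoring stalls until inclusion catches up, rather than bricking the chain.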
I see - failing in create_inherent is OK, while failing in on_initialize isn't. Thanks for the explanation.
It's rather that we are creating a block validity rule based on the relay-parent. It'll be impossible to create valid blocks on top of some (para parent, relay parent) pairs, and that failure will manifest in the inherent that handles the storage proof. Anything that runs in on_initialize could alter the block validity condition (e.g. altering the maximum unincluded segment length). We will probably also want hooks for people to write dynamic block validity rules based on the relay-parent in question. These dynamic rules would need the relay parent and storage proof as parameters, so they could only run in the inherent, once those things have been supplied.
In principle, it's OK to fail in on_initialize as well, but there are footguns like changing the unincluded segment length that could lead the chain to brick.
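One possible shape for the "dynamic block validity rule" hooks floated above - entirely an assumption, not an existing API: a trait whose check receives the relay parent and storage proof, so it can only be invoked from the inherent once both are available.

```rust
/// Hypothetical relay-parent info as supplied to the inherent.
struct RelayParentInfo {
    number: u32,
}

/// Opaque stand-in for the relay-chain storage proof.
struct StorageProof;

/// Assumed hook shape: a validity rule evaluated in the inherent, after the
/// relay parent and storage proof have been supplied.
trait BlockValidityRule {
    fn check(&self, relay_parent: &RelayParentInfo, proof: &StorageProof) -> Result<(), &'static str>;
}

/// Example rule: reject building on relay-parents that are too old relative
/// to the best known relay block. Purely illustrative.
struct MaxAgeRule {
    max_relay_parent_age: u32,
    best_relay_number: u32,
}

impl BlockValidityRule for MaxAgeRule {
    fn check(&self, relay_parent: &RelayParentInfo, _proof: &StorageProof) -> Result<(), &'static str> {
        if self.best_relay_number.saturating_sub(relay_parent.number) > self.max_relay_parent_age {
            Err("relay-parent too old")
        } else {
            Ok(())
        }
    }
}
```

Failing such a check in the inherent simply makes that (para parent, relay parent) pair unbuildable, which is the recoverable failure mode described above.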
pallets/parachain-system/src/lib.rs (Outdated)
// the inherent shouldn't contain messages that were already processed by any of the
// ancestors.
//
// This invariant should be upheld by the node-side.
A comment for the benefit of other reviewers: the "node-side" here could also include the ProvideInherent implementation. That is probably the best way to ensure this in practice - just trimming some MQCs.
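A hedged sketch of that trimming, done on the node/inherent-building side. The `sent_at` field mirrors the relay-block number attached to each downward message; the function name and the idea of keying on the highest relay-block number already processed by an ancestor (as in the "drop messages based on relay parent number" change in this PR series) are illustrative.

```rust
/// Minimal stand-in for an inbound downward message: each message records the
/// relay-chain block number at which it was sent.
struct InboundMessage {
    sent_at: u32,
    payload: Vec<u8>,
}

/// Drops messages already processed by an ancestor in the unincluded segment,
/// keeping only those sent after the last processed relay block.
fn trim_processed(messages: Vec<InboundMessage>, last_processed_relay_block: u32) -> Vec<InboundMessage> {
    messages
        .into_iter()
        .filter(|m| m.sent_at > last_processed_relay_block)
        .collect()
}
```

Performing this in the inherent provider keeps the runtime invariant ("the inherent shouldn't contain messages already processed by any ancestor") enforceable without extra runtime checks.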
Looks good!
* Update substrate & polkadot
* min changes to make async backing compile
* (async backing) parachain-system: track limitations for unincluded blocks (#2438)
* unincluded segment draft
* read para head from storage proof
* read_para_head -> read_included_para_head
* Provide pub interface
* add errors
* fix unincluded segment update
* BlockTracker -> Ancestor
* add a dmp limit
* Read para head depending on the storage switch
* doc comments
* storage items docs
* add a sanity check on block initialize
* Check watermark
* append to the segment on block finalize
* Move segment update into set_validation_data
* Resolve para head todo
* option watermark
* fix comment
* Drop dmq check
* fix weight
* doc-comments on inherent invariant
* Remove TODO
* add todo
* primitives tests
* pallet tests
* doc comments
* refactor unincluded segment length into a ConsensusHook (#2501)
* refactor unincluded segment length into a ConsensusHook
* add docs
* refactor bandwidth_out calculation

Co-authored-by: Chris Sosnin <[email protected]>

* test for limits from impl
* fmt
* make tests compile
* update comment
* uncomment test
* fix collator test by adding parent to state proof
* patch HRMP watermark rules for unincluded segment
* get consensus-common tests to pass, using unincluded segment
* fix unincluded segment tests
* get all tests passing
* fmt
* rustdoc CI
* aura-ext: limit the number of authored blocks per slot (#2551)
* aura_ext consensus hook
* reverse dependency
* include weight into hook
* fix tests
* remove stray println

Co-authored-by: Chris Sosnin <[email protected]>

* fix test warning
* fix doc link

Co-authored-by: Chris Sosnin <[email protected]>
Co-authored-by: Chris Sosnin <[email protected]>

* parachain-system: ignore go ahead signal once upgrade is processed (#2594)
* handle goahead signal for unincluded segment
* doc comment
* add test
* parachain-system: drop processed messages from inherent data (#2590)
* implement `drop_processed_messages`
* drop messages based on relay parent number
* adjust tests
* drop changes to mqc
* fix comment
* drop test
* drop more dead code
* clippy
* aura-ext: check slot in consensus hook and remove all `CheckInherents` logic (#2658)
* aura-ext: check slot in consensus hook
* convert relay chain slot
* Make relay chain slot duration generic
* use fixed velocity hook for pallets with aura
* purge timestamp inherent
* fix warning
* adjust runtime tests
* fix slots in tests
* Make `xcm-emulator` test pass for new consensus hook (#2722)
* add pallets on_initialize
* tests pass
* add AuraExt on_init
* ".git/.scripts/commands/fmt/fmt.sh"

Co-authored-by: command-bot <>
Co-authored-by: Ignacio Palacios <[email protected]>

* update polkadot git refs
* CollationGenerationConfig closure is now optional (#2772)
* CollationGenerationConfig closure is now optional
* fix test
* propagate network-protocol-staging feature (#2899)
* Feature Flagging Consensus Hook Type Parameter (#2911)
* First pass
* fmt
* Added as default feature in tomls
* Changed to direct dependency feature
* Dealing with clippy error
* Update pallets/parachain-system/src/lib.rs

Co-authored-by: asynchronous rob <[email protected]>
Co-authored-by: asynchronous rob <[email protected]>

* fmt
* bump deps and remove warning
* parachain-system: update RelevantMessagingState according to the unincluded segment (#2948)
* mostly address 2471 with a bug introduced
* adjust relevant messaging state after computing total
* fmt
* max -> min
* fix test implementation of xcmp source
* add test
* fix test message sending logic
* fix + test
* add more to unincluded segment test
* fmt

Co-authored-by: Chris Sosnin <[email protected]>

* Integrate new Aura / Parachain Consensus Logic in Parachain-Template / Polkadot-Parachain (#2864)
* add a comment
* refactor client/service utilities
* deprecate start_collator
* update parachain-template
* update test-service in the same way
* update polkadot-parachain crate
* fmt
* wire up new SubmitCollation message
* some runtime utilities for implementing unincluded segment runtime APIs
* allow parachains to configure their level of sybil-resistance when starting the network
* make aura-ext compile
* update to specify sybil resistance levels
* fmt
* specify relay chain slot duration in milliseconds
* update Aura to explicitly produce Send futures; also, make relay_chain_slot_duration a Duration
* add authoring duration to basic collator and document params
* integrate new basic collator into parachain-template
* remove assert_send used for testing
* basic-aura: only author when parent included
* update polkadot-parachain-bin
* fmt
* some fixes
* fixes
* add a RelayNumberMonotonicallyIncreases
* add a utility function for initializing subsystems
* some logging for timestamp adjustment
* fmt
* some fixes for lookahead collator
* add a log
* update `find_potential_parents` to account for sessions
* bound the loop
* restore & deprecate old start_collator and start_full_node functions
* remove unnecessary await calls
* fix warning
* clippy
* more clippy
* remove unneeded logic
* ci
* update comment

Co-authored-by: Marcin S. <[email protected]>

* (async backing) restore `CheckInherents` for backwards-compatibility (#2977)
* bring back timestamp
* Restore CheckInherents
* revert to empty CheckInherents
* make CheckInherents optional
* attempt
* properly end system blocks
* add some more comments
* ignore failing system parachain tests
* update refs after main feature branch merge
* comment out the offending tests because CI runs ignored tests
* fix warnings
* fmt
* revert to polkadot master
* cargo update -p polkadot-primitives -p sp-io

Co-authored-by: asynchronous rob <[email protected]>
Co-authored-by: Ignacio Palacios <[email protected]>
Co-authored-by: Bradley Olson <[email protected]>
Co-authored-by: Marcin S. <[email protected]>
Co-authored-by: eskimor <[email protected]>
Co-authored-by: Andronik <[email protected]>
Resolves #1866