Filter votes from disabled validators in BackedCandidates in process_inherent_data #1863

Merged
43 commits merged on Nov 30, 2023

Changes from 38 commits

Commits
259cb4b
Filter backing votes in `paras_inherent` - initial work
tdimitrov Oct 10, 2023
d451886
Error handling
tdimitrov Oct 12, 2023
a3bafba
Merge branch 'master' into tsv-disabling-runtime
tdimitrov Oct 16, 2023
75d707a
Merge branch 'tsv-disabling' into tsv-disabling-runtime
tdimitrov Oct 16, 2023
cb0d8fc
Add `fn disabled_validators` to `trait DisabledValidators`. Use the t…
tdimitrov Oct 16, 2023
8a8893e
Extract the implementation of `disabled_validators` from rutnime api …
tdimitrov Oct 17, 2023
7b868b0
Error handling
tdimitrov Oct 17, 2023
a41d21c
Remove `DisabledValidators` from paras_inherent
tdimitrov Oct 17, 2023
34a3bf0
Fix a warning
tdimitrov Oct 17, 2023
88eb018
Merge branch 'tsv-disabling' into tsv-disabling-runtime
tdimitrov Oct 23, 2023
305d5b8
Merge branch 'tsv-disabling' into tsv-disabling-runtime
tdimitrov Oct 24, 2023
e241312
Add setters used for unit tests
tdimitrov Oct 26, 2023
2aa1323
Mock for `DisabledValidators` in Test runtime
tdimitrov Oct 26, 2023
a8efd15
Fix test state and add an optimisation for `filter_backed_statements`
tdimitrov Oct 26, 2023
3567a93
Make disabled validators mock mutable
tdimitrov Oct 26, 2023
04dc32a
Extract common code in `get_test_data`
tdimitrov Oct 26, 2023
cce71ee
Additional cases
tdimitrov Oct 26, 2023
7a19f55
Fixes in `filter_backed_statements`
tdimitrov Oct 26, 2023
3260187
Refactor `filter_backed_statements`
tdimitrov Oct 26, 2023
ff577a5
Merge branch 'tsv-disabling' into tsv-disabling-runtime
ordian Oct 26, 2023
876655d
Apply suggestions from code review
tdimitrov Oct 28, 2023
ed2d430
wip: Filtering backed statements from disabled validators is performe…
tdimitrov Oct 31, 2023
9a8985a
para_id_to_core_idx -> scheduled
tdimitrov Oct 31, 2023
f7544d7
Fix code duplication in tests
tdimitrov Oct 31, 2023
59fc176
Fix a compilation error
tdimitrov Oct 31, 2023
cb2fe60
Fix backing votes filtering - if more than `effective_minimum_backing…
tdimitrov Nov 2, 2023
be85b5d
Ensure inherent data contains no backing votes from disabled validato…
tdimitrov Nov 2, 2023
f472bb1
Comments and small fixes
tdimitrov Nov 2, 2023
53aaffa
Fix disabled statements filtering
tdimitrov Nov 3, 2023
16823f4
Updated comments
tdimitrov Nov 3, 2023
452b202
Merge branch 'tsv-disabling' into tsv-disabling-runtime
tdimitrov Nov 6, 2023
cb60cf0
Apply suggestions from code review
tdimitrov Nov 15, 2023
750f2db
Code review feedback
tdimitrov Nov 15, 2023
7022003
Merge branch 'tsv-disabling' into tsv-disabling-runtime
tdimitrov Nov 16, 2023
8b5e5a8
Apply suggestions from code review: remove duplicated assert
tdimitrov Nov 28, 2023
a50800f
Code review feedback - filter_backed_statements_from_disabled -> filt…
tdimitrov Nov 28, 2023
7a308cd
Update parainherent.md with some details about data sanitization
tdimitrov Nov 28, 2023
7ccb942
Add newline to parainherent.md
tdimitrov Nov 28, 2023
1834e20
Apply suggestions from code review
tdimitrov Nov 28, 2023
610b45d
Apply suggestions from code review
tdimitrov Nov 29, 2023
83adc93
Fixes in parainherent.md
tdimitrov Nov 29, 2023
158557d
Merge branch 'tsv-disabling' into tsv-disabling-runtime
tdimitrov Nov 29, 2023
79f4c40
Merge branch 'tsv-disabling' into tsv-disabling-runtime
tdimitrov Nov 30, 2023
29 changes: 29 additions & 0 deletions polkadot/roadmap/implementers-guide/src/runtime/parainherent.md
@@ -60,3 +60,32 @@ processing it, so the processed inherent data is simply dropped.
This also means that the `enter` function keeps data around for no good reason. This seems acceptable though as the size
of a block is rather limited. Nevertheless if we ever wanted to optimize this we can easily implement an inherent
collector that has two implementations, where one clones and stores the data and the other just passes it on.
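A minimal sketch of that two-implementation collector could look like this (hypothetical trait and type names; nothing
like this exists in the codebase yet):

```rust
// Hypothetical sketch of the "inherent collector" idea: one trait, two implementations,
// one of which keeps a copy of the processed inherent data while the other just forwards it.
trait InherentCollector {
    fn collect(&mut self, data: Vec<u8>) -> Vec<u8>;
}

struct PassThrough;
impl InherentCollector for PassThrough {
    fn collect(&mut self, data: Vec<u8>) -> Vec<u8> {
        data // forward the data without storing anything
    }
}

#[derive(Default)]
struct Storing {
    stored: Vec<Vec<u8>>,
}
impl InherentCollector for Storing {
    fn collect(&mut self, data: Vec<u8>) -> Vec<u8> {
        self.stored.push(data.clone()); // clone and keep the data around
        data
    }
}
```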

## Data sanitization
`ParasInherent` sanitizes the input data it is provided with. When the module is invoked via `create_inherent` the
input data is sanitized. The `enter` entry point requires already sanitized input; if unsanitized data is provided
the module generates an error.
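As a rough, self-contained illustration of this contract (simplified stand-in types, not the pallet's), processing in
the `create_inherent` context may drop data, while the `enter` context treats any required dropping as an error:

```rust
// Simplified sketch of the sanitization contract. In the real pallet this is driven by
// `ProcessInherentDataContext` (see the `paras_inherent` diff below); the filter here is a stand-in.
#[derive(PartialEq)]
enum Context {
    ProvideInherent, // block production: sanitize and continue
    Enter,           // block execution: unsanitized input is an error
}

fn process(data: Vec<u32>, context: Context) -> Result<Vec<u32>, &'static str> {
    // Stand-in "sanitization": keep only even entries.
    let sanitized: Vec<u32> = data.iter().copied().filter(|x| x % 2 == 0).collect();
    if context == Context::Enter && sanitized.len() != data.len() {
        return Err("unsanitized data provided to `enter`");
    }
    Ok(sanitized)
}

fn main() {
    assert_eq!(process(vec![2, 3, 4], Context::ProvideInherent), Ok(vec![2, 4]));
    assert!(process(vec![2, 3, 4], Context::Enter).is_err());
}
```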

Disputes are included in the block with priority for security reasons. It's important to push as many dispute votes
onchain as possible so that disputes conclude faster and the offenders are punished. However, if there are too many
disputes to include in a block, the dispute set is trimmed so that it respects the maximum block weight.

Dispute data is first deduplicated and sorted by block number (older first) and dispute location (local then remote).
Concluded and ancient disputes (ones initiated before the post-conclusion acceptance period) are filtered out. Votes
with invalid signatures or from unknown validators (not found in the active set for the current session) are also
filtered out.
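A minimal sketch of that ordering, with an illustrative key type (the real logic lives in
`deduplicate_and_sort_dispute_data`):

```rust
// Illustrative only: sort dispute entries by block number (older first), breaking ties so
// that local disputes come before remote ones, and drop exact duplicates.
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
struct DisputeKey {
    block_number: u32,
    remote: bool, // false (local) sorts before true (remote)
}

fn dedup_and_sort(mut disputes: Vec<DisputeKey>) -> Vec<DisputeKey> {
    disputes.sort(); // (block_number ascending, local before remote) via the derived ordering
    disputes.dedup(); // duplicates are adjacent after sorting
    disputes
}
```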

All dispute statements are included, in the order described in the previous paragraph, until the available block
weight is exhausted. After the dispute data is included, all remaining weight is filled with candidates and
availability bitfields. Bitfields are included with priority, then candidates containing code upgrades and finally any
other backed candidates. If there is not enough weight for all backed candidates, they are trimmed by random
selection. Disputes are processed in three separate functions: `deduplicate_and_sort_dispute_data`,
`filter_dispute_data` and `limit_and_sanitize_disputes`.
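The priority order for filling the remaining weight can be pictured with the following sketch (hypothetical item type;
the random trimming of leftover backed candidates is not shown):

```rust
// Illustrative priority ordering when filling the remaining block weight:
// bitfields first, then candidates carrying code upgrades, then other backed candidates.
struct Item {
    weight: u64,
    is_bitfield: bool,
    has_code_upgrade: bool,
}

fn fill_remaining_weight(mut items: Vec<Item>, max_weight: u64) -> Vec<Item> {
    // Lower key = higher priority.
    items.sort_by_key(|i| match (i.is_bitfield, i.has_code_upgrade) {
        (true, _) => 0u8,
        (false, true) => 1,
        (false, false) => 2,
    });

    let mut used = 0u64;
    let mut included = Vec::new();
    for item in items {
        if used + item.weight > max_weight {
            break; // out of weight; in the real code leftover candidates are trimmed randomly
        }
        used += item.weight;
        included.push(item);
    }
    included
}
```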

Availability bitfields are also sanitized by dropping malformed ones, ones referring to disputed cores and ones with
bad signatures. Refer to the `sanitize_bitfields` function for implementation details.

Backed candidate sanitization removes malformed candidates, candidates which have concluded invalid disputes against
them and candidates produced by unassigned cores. Furthermore, any backing votes from disabled validators for a
candidate are dropped. This is part of the validator disabling strategy. These checks are implemented in
`sanitize_backed_candidates` and `filter_backed_statements_from_disabled_validators`.
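The vote-dropping step can be illustrated with plain `std` types; the pallet itself uses a `BitVec` mask over the
backing group, as shown in the `paras_inherent` diff below:

```rust
// Toy illustration: `validator_indices` has one flag per validator in the backing group and
// `validity_votes` has one vote per set flag, in order. Clearing a disabled validator's flag
// must also remove the matching vote so the two stay aligned.
fn drop_disabled_votes(
    validator_indices: &mut Vec<bool>,
    validity_votes: &mut Vec<String>,
    disabled_in_group: &[bool],
) {
    let mut vote_idx = 0;
    for i in 0..validator_indices.len() {
        if !validator_indices[i] {
            continue; // this group member did not back the candidate
        }
        if disabled_in_group[i] {
            validator_indices[i] = false; // clear the flag...
            validity_votes.remove(vote_idx); // ...and drop the corresponding vote
        } else {
            vote_idx += 1;
        }
    }
}
```

If too few votes remain after this filtering (fewer than `effective_minimum_backing_votes`), the whole candidate is
dropped rather than risking a failure later in `process_candidates`.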
4 changes: 3 additions & 1 deletion polkadot/runtime/common/src/assigned_slots/mod.rs
@@ -743,7 +743,9 @@ mod tests {
type OnNewHead = ();
}

impl parachains_shared::Config for Test {}
impl parachains_shared::Config for Test {
type DisabledValidators = ();
}

parameter_types! {
pub const LeasePeriod: BlockNumber = 3;
4 changes: 3 additions & 1 deletion polkadot/runtime/common/src/integration_tests.rs
@@ -196,7 +196,9 @@ impl configuration::Config for Test {
type WeightInfo = configuration::TestWeightInfo;
}

impl shared::Config for Test {}
impl shared::Config for Test {
type DisabledValidators = ();
}

impl origin::Config for Test {}

4 changes: 3 additions & 1 deletion polkadot/runtime/common/src/paras_registrar/mod.rs
@@ -798,7 +798,9 @@ mod tests {
type MaxFreezes = ConstU32<1>;
}

impl shared::Config for Test {}
impl shared::Config for Test {
type DisabledValidators = ();
}

impl origin::Config for Test {}

27 changes: 26 additions & 1 deletion polkadot/runtime/parachains/src/mock.rs
@@ -183,7 +183,22 @@ impl crate::configuration::Config for Test {
type WeightInfo = crate::configuration::TestWeightInfo;
}

impl crate::shared::Config for Test {}
pub struct MockDisabledValidators {}
impl frame_support::traits::DisabledValidators for MockDisabledValidators {
/// Returns true if the given validator is disabled.
fn is_disabled(index: u32) -> bool {
disabled_validators().iter().any(|v| *v == index)
}

/// Returns the list of disabled validators stored in the `DISABLED_VALIDATORS` thread-local
fn disabled_validators() -> Vec<u32> {
disabled_validators()
}
}

impl crate::shared::Config for Test {
type DisabledValidators = MockDisabledValidators;
}

impl origin::Config for Test {}

@@ -431,6 +446,8 @@ thread_local! {

pub static AVAILABILITY_REWARDS: RefCell<HashMap<ValidatorIndex, usize>>
= RefCell::new(HashMap::new());

pub static DISABLED_VALIDATORS: RefCell<Vec<u32>> = RefCell::new(vec![]);
}

pub fn backing_rewards() -> HashMap<ValidatorIndex, usize> {
@@ -441,6 +458,10 @@ pub fn availability_rewards() -> HashMap<ValidatorIndex, usize> {
AVAILABILITY_REWARDS.with(|r| r.borrow().clone())
}

pub fn disabled_validators() -> Vec<u32> {
DISABLED_VALIDATORS.with(|r| r.borrow().clone())
}

parameter_types! {
pub static Processed: Vec<(ParaId, UpwardMessage)> = vec![];
}
@@ -580,3 +601,7 @@ pub(crate) fn deregister_parachain(id: ParaId) {
pub(crate) fn try_deregister_parachain(id: ParaId) -> crate::DispatchResult {
frame_support::storage::transactional::with_storage_layer(|| Paras::schedule_para_cleanup(id))
}

pub(crate) fn set_disabled_validators(disabled: Vec<u32>) {
DISABLED_VALIDATORS.with(|d| *d.borrow_mut() = disabled)
}
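A hypothetical unit test (not part of this diff) showing how the mock above is driven through
`set_disabled_validators`:

```rust
#[test]
fn mock_disabled_validators_is_configurable() {
    use frame_support::traits::DisabledValidators as _;

    // Mark validators 0 and 2 as disabled for this test thread.
    set_disabled_validators(vec![0, 2]);

    assert!(MockDisabledValidators::is_disabled(0));
    assert!(!MockDisabledValidators::is_disabled(1));
    assert_eq!(MockDisabledValidators::disabled_validators(), vec![0, 2]);
}
```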
181 changes: 160 additions & 21 deletions polkadot/runtime/parachains/src/paras_inherent/mod.rs
@@ -30,7 +30,8 @@ use crate::{
metrics::METRICS,
paras,
scheduler::{self, FreedReason},
shared, ParaId,
shared::{self, AllowedRelayParentsTracker},
ParaId,
};
use bitvec::prelude::BitVec;
use frame_support::{
@@ -42,8 +43,8 @@ use frame_support::{
use frame_system::pallet_prelude::*;
use pallet_babe::{self, ParentBlockRandomness};
use primitives::{
BackedCandidate, CandidateHash, CandidateReceipt, CheckedDisputeStatementSet,
CheckedMultiDisputeStatementSet, CoreIndex, DisputeStatementSet,
effective_minimum_backing_votes, BackedCandidate, CandidateHash, CandidateReceipt,
CheckedDisputeStatementSet, CheckedMultiDisputeStatementSet, CoreIndex, DisputeStatementSet,
InherentData as ParachainsInherentData, MultiDisputeStatementSet, ScrapedOnChainVotes,
SessionIndex, SignedAvailabilityBitfields, SigningContext, UncheckedSignedAvailabilityBitfield,
UncheckedSignedAvailabilityBitfields, ValidatorId, ValidatorIndex, ValidityAttestation,
@@ -142,6 +143,8 @@
DisputeStatementsUnsortedOrDuplicates,
/// A dispute statement was invalid.
DisputeInvalid,
/// A candidate was backed by a disabled validator
BackedByDisabled,
}

/// Whether the paras inherent was included within this block.
@@ -378,6 +381,7 @@ impl<T: Config> Pallet<T> {
let bitfields_weight = signed_bitfields_weight::<T>(&bitfields);
let disputes_weight = multi_dispute_statement_sets_weight::<T>(&disputes);

// Weight before filtering/sanitization
let all_weight_before = candidates_weight + bitfields_weight + disputes_weight;

METRICS.on_before_filter(all_weight_before.ref_time());
@@ -587,17 +591,19 @@ impl<T: Config> Pallet<T> {

METRICS.on_candidates_processed_total(backed_candidates.len() as u64);

let backed_candidates = sanitize_backed_candidates::<T, _>(
backed_candidates,
|candidate_idx: usize,
backed_candidate: &BackedCandidate<<T as frame_system::Config>::Hash>|
-> bool {
let para_id = backed_candidate.descriptor().para_id;
let prev_context = <paras::Pallet<T>>::para_most_recent_context(para_id);
let check_ctx = CandidateCheckContext::<T>::new(prev_context);

// never include a concluded-invalid candidate
current_concluded_invalid_disputes.contains(&backed_candidate.hash()) ||
let SanitizedBackedCandidates { backed_candidates, votes_from_disabled_were_dropped } =
sanitize_backed_candidates::<T, _>(
backed_candidates,
&allowed_relay_parents,
|candidate_idx: usize,
backed_candidate: &BackedCandidate<<T as frame_system::Config>::Hash>|
-> bool {
let para_id = backed_candidate.descriptor().para_id;
let prev_context = <paras::Pallet<T>>::para_most_recent_context(para_id);
let check_ctx = CandidateCheckContext::<T>::new(prev_context);

// never include a concluded-invalid candidate
current_concluded_invalid_disputes.contains(&backed_candidate.hash()) ||
// Instead of checking the candidates with code upgrades twice
// move the checking up here and skip it in the training wheels fallback.
// That way we avoid possible duplicate checks while assuring all
Expand All @@ -607,12 +613,19 @@ impl<T: Config> Pallet<T> {
check_ctx
.verify_backed_candidate(&allowed_relay_parents, candidate_idx, backed_candidate)
.is_err()
},
&scheduled,
);
},
&scheduled,
);

METRICS.on_candidates_sanitized(backed_candidates.len() as u64);

// In `Enter` context (invoked during execution) there should be no backing votes from
// disabled validators because they should have been filtered out during inherent data
// preparation (`ProvideInherent` context). Abort in such cases.
if context == ProcessInherentDataContext::Enter {
ensure!(!votes_from_disabled_were_dropped, Error::<T>::BackedByDisabled);
}

// Process backed candidates according to scheduled cores.
let inclusion::ProcessedCandidates::<<HeaderFor<T> as HeaderT>::Hash> {
core_indices: occupied,
@@ -900,7 +913,19 @@ pub(crate) fn sanitize_bitfields<T: crate::inclusion::Config>(
bitfields
}

/// Filter out any candidates that have a concluded invalid dispute.
// Result from `sanitize_backed_candidates`
#[derive(Debug, PartialEq)]
struct SanitizedBackedCandidates<Hash> {
// Sanitized backed candidates. The `Vec` is sorted according to the occupied core index.
backed_candidates: Vec<BackedCandidate<Hash>>,
// Set to true if any votes from disabled validators were dropped from the input.
votes_from_disabled_were_dropped: bool,
}

/// Filter out:
/// 1. any candidates that have a concluded invalid dispute
/// 2. all backing votes from disabled validators
/// 3. any candidates that end up with less than `effective_minimum_backing_votes` backing votes
///
/// `scheduled` follows the same naming scheme as provided in the
/// guide: Currently `free` but might become `occupied`.
@@ -910,15 +935,17 @@ pub(crate) fn sanitize_bitfields<T: crate::inclusion::Config>(
/// `candidate_has_concluded_invalid_dispute` must return `true` if the candidate
/// is disputed, false otherwise. The passed `usize` is the candidate index.
///
/// The returned `Vec` is sorted according to the occupied core index.
/// Returns struct `SanitizedBackedCandidates` where `backed_candidates` are sorted according to the
/// occupied core index.
fn sanitize_backed_candidates<
T: crate::inclusion::Config,
F: FnMut(usize, &BackedCandidate<T::Hash>) -> bool,
>(
mut backed_candidates: Vec<BackedCandidate<T::Hash>>,
allowed_relay_parents: &AllowedRelayParentsTracker<T::Hash, BlockNumberFor<T>>,
mut candidate_has_concluded_invalid_dispute_or_is_invalid: F,
scheduled: &BTreeMap<ParaId, CoreIndex>,
) -> Vec<BackedCandidate<T::Hash>> {
) -> SanitizedBackedCandidates<T::Hash> {
// Remove any candidates that were concluded invalid.
// This does not assume sorting.
backed_candidates.indexed_retain(move |candidate_idx, backed_candidate| {
@@ -936,6 +963,13 @@ fn sanitize_backed_candidates<
scheduled.get(&desc.para_id).is_some()
});

// Filter out backing statements from disabled validators
let dropped_disabled = filter_backed_statements_from_disabled_validators::<T>(
&mut backed_candidates,
&allowed_relay_parents,
scheduled,
);

// Sort the `Vec` last, once there is a guarantee that these
// `BackedCandidates` references the expected relay chain parent,
// but more importantly are scheduled for a free core.
@@ -946,7 +980,10 @@ fn sanitize_backed_candidates<
scheduled[&x.descriptor().para_id].cmp(&scheduled[&y.descriptor().para_id])
});

backed_candidates
SanitizedBackedCandidates {
backed_candidates,
votes_from_disabled_were_dropped: dropped_disabled,
}
}

/// Derive entropy from babe provided per block randomness.
@@ -1029,3 +1066,105 @@ fn limit_and_sanitize_disputes<
(checked, checked_disputes_weight)
}
}

// Filters out statements from disabled validators in `BackedCandidate`, drops non-scheduled
// candidates and performs a few more sanity checks. Returns `true` if at least one statement
// is removed and `false` otherwise.
fn filter_backed_statements_from_disabled_validators<T: shared::Config + scheduler::Config>(
backed_candidates: &mut Vec<BackedCandidate<<T as frame_system::Config>::Hash>>,
allowed_relay_parents: &AllowedRelayParentsTracker<T::Hash, BlockNumberFor<T>>,
scheduled: &BTreeMap<ParaId, CoreIndex>,
) -> bool {
let disabled_validators =
BTreeSet::<_>::from_iter(shared::Pallet::<T>::disabled_validators().into_iter());

if disabled_validators.is_empty() {
// No disabled validators - nothing to do
return false
}

let backed_len_before = backed_candidates.len();

// Flag which will be returned. Set to `true` if at least one vote is filtered.
let mut filtered = false;

let minimum_backing_votes = configuration::Pallet::<T>::config().minimum_backing_votes;

// Process all backed candidates. `validator_indices` in `BackedCandidates` are indices within
// the validator group assigned to the parachain. To obtain this group we need:
// 1. Core index assigned to the parachain which has produced the candidate
// 2. The relay chain block number of the candidate
backed_candidates.retain_mut(|bc| {
// Get `core_idx` assigned to the `para_id` of the candidate
let core_idx = match scheduled.get(&bc.descriptor().para_id) {
Some(core_idx) => *core_idx,
None => {
log::debug!(target: LOG_TARGET, "Can't get core idx of a backed candidate for para id {:?}. Dropping the candidate.", bc.descriptor().para_id);
return false
}
};

// Get relay parent block number of the candidate. We need this to get the group index assigned to this core at this block number
let relay_parent_block_number = match allowed_relay_parents
.acquire_info(bc.descriptor().relay_parent, None) {
Some((_, block_num)) => block_num,
None => {
log::debug!(target: LOG_TARGET, "Relay parent {:?} for candidate is not in the allowed relay parents. Dropping the candidate.", bc.descriptor().relay_parent);
return false
}
};

// Get the group index for the core
let group_idx = match <scheduler::Pallet<T>>::group_assigned_to_core(
core_idx,
relay_parent_block_number + One::one(),
) {
Some(group_idx) => group_idx,
None => {
log::debug!(target: LOG_TARGET, "Can't get the group index for core idx {:?}. Dropping the candidate.", core_idx);
return false
},
};

// And finally get the validator group for this group index
let validator_group = match <scheduler::Pallet<T>>::group_validators(group_idx) {
Some(validator_group) => validator_group,
None => {
log::debug!(target: LOG_TARGET, "Can't get the validators from group {:?}. Dropping the candidate.", group_idx);
return false
}
};

// Bitmask with the disabled indices within the validator group
let disabled_indices = BitVec::<u8, bitvec::order::Lsb0>::from_iter(validator_group.iter().map(|idx| disabled_validators.contains(idx)));
// The indices of statements from disabled validators in `BackedCandidate`. We have to drop these.
let indices_to_drop = disabled_indices.clone() & &bc.validator_indices;
// Apply the bitmask to drop the disabled validator from `validator_indices`
bc.validator_indices &= !disabled_indices;
// Remove the corresponding votes from `validity_votes`
for idx in indices_to_drop.iter_ones().rev() {
bc.validity_votes.remove(idx);
}

// If at least one statement was dropped we need to return `true`
if indices_to_drop.count_ones() > 0 {
filtered = true;
}

// By filtering votes we might render the candidate invalid and cause a failure in
// [`process_candidates`]. To avoid this we have to perform a sanity check here. If there
// are not enough backing votes after filtering we will remove the whole candidate.
if bc.validity_votes.len() < effective_minimum_backing_votes(
validator_group.len(),
minimum_backing_votes
) {
return false
}

true
});

// Also return `true` if a whole candidate was dropped from the set
filtered || backed_len_before != backed_candidates.len()
}