Fix phase0 block reward in rewards API #5101
Conversation
Ready for review - Happy to receive feedback, especially regarding the points listed in "Additional Info".
@michaelsproul any update on the review for this item? Also, should we close out #4882?
I've got the tests passing on this again. Will review properly soon.
```rust
let sqrt_total_active_balance =
    SqrtTotalActiveBalance::new(processing_epoch_end.get_total_active_balance()?);
for attester in get_attesting_indices_from_state(state, attestation)? {
```
This algorithm is not quite right because it pays out for attestations regardless of whether this proposer was the first one to include them.
In the spec, the epoch processing function iterates the `previous_epoch_attestations` and pays the proposer reward to the proposer who includes each attester's attestation with minimal `inclusion_delay`:

```python
attestation = min([
    a for a in matching_source_attestations
    if index in get_attesting_indices(state, a)
], key=lambda a: a.inclusion_delay)
rewards[attestation.proposer_index] += get_proposer_reward(state, index)
```
It's quite common that a validator's attestation will be included multiple times in different aggregates, e.g. first at slot 10 and then again at slot 11. Only the proposer of the block of slot 10 gets the reward in this case (the slot 11 proposer either gets nothing for the attestation if it covered no new validators, or just the rewards for the newly covered validators for which this is their first inclusion on chain).
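The slot-10/slot-11 scenario can be modelled directly against the spec's `min`-by-`inclusion_delay` rule. This is a simplified Python sketch, not the actual spec types: `Att` and its field names are illustrative stand-ins for `PendingAttestation`.

```python
from dataclasses import dataclass

# Illustrative stand-in for PendingAttestation (not the real spec type).
@dataclass
class Att:
    attesters: set        # validator indices covered by this aggregate
    inclusion_delay: int  # inclusion slot minus attestation slot
    proposer_index: int   # proposer of the block that included it

def proposer_for(index, matching_source_attestations):
    """Spec rule: the proposer reward for validator `index` goes to the
    proposer of the inclusion with minimal inclusion_delay."""
    best = min(
        (a for a in matching_source_attestations if index in a.attesters),
        key=lambda a: a.inclusion_delay,
    )
    return best.proposer_index

# Validator 7 attests at slot 9; the attestation is aggregated into the block
# at slot 10 (delay 1, proposer 100) and again at slot 11 (delay 2, proposer 200).
atts = [
    Att(attesters={7}, inclusion_delay=1, proposer_index=100),
    Att(attesters={7}, inclusion_delay=2, proposer_index=200),
]
assert proposer_for(7, atts) == 100  # only the slot-10 proposer is paid
```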
I think to remedy this the simplest fix would be to:

- For each attester in each attestation included: check whether this is the inclusion with minimal `inclusion_delay` (usually the first inclusion, but not always). This would involve checking first against `state.previous_epoch_attestations`/`state.current_epoch_attestations` as appropriate, and then checking against attestations already processed in this block (we don't want to double-pay for multiple inclusions).
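As a rough sketch of that per-block check (the function and type names here are hypothetical, not Lighthouse APIs): for each attester we compare against the state's stored attestations, then against a set of attesters already paid within this block.

```python
from dataclasses import dataclass

@dataclass
class PriorAtt:           # illustrative stand-in for a stored PendingAttestation
    attesters: set
    inclusion_delay: int

def is_rewardable_inclusion(attester, inclusion_delay,
                            epoch_attestations, processed_in_block):
    """Return True iff this inclusion has minimal inclusion_delay for this
    attester: check the state's previous/current epoch attestations first,
    then attestations already processed earlier in this block."""
    prior = [
        a.inclusion_delay
        for a in epoch_attestations
        if attester in a.attesters
    ]
    if prior and min(prior) <= inclusion_delay:
        return False  # an earlier on-chain inclusion already wins
    if attester in processed_in_block:
        return False  # already paid for this attester in this block
    processed_in_block.add(attester)
    return True

state_atts = [PriorAtt(attesters={1}, inclusion_delay=1)]
seen = set()
assert not is_rewardable_inclusion(1, 2, state_atts, seen)  # slower duplicate
assert is_rewardable_inclusion(2, 3, state_atts, seen)      # first inclusion
assert not is_rewardable_inclusion(2, 3, state_atts, seen)  # repeat in block
```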
One complication is the `min` by `inclusion_delay`. It would be helpful if a property like this held:

> If an attestation is included for validator `V` at slot `N` in epoch `N // 32`, then it has a lower `inclusion_delay` than any other attestation from `V` included at a slot `M > N`.
Unfortunately this property does not hold in the case of slashable attestations made by validator `V` when the chain is not finalising promptly, in which case it could be that:

- There are 2+ chains with different shufflings that descend from the same `source` checkpoint (this requires finality to lapse for >2 epochs, as shufflings only diverge if their common ancestor block is >2 epochs ago).
- Validator `V` signs two slashable attestations in epoch `N // 32`: one on each chain. Let `M = N + 1` for simplicity. On the first chain they attest at e.g. slot `N`, and on the second chain they attest at slot `N - 5`. The attestation from slot `N - 5` is included at `N` for an `inclusion_delay` of 5, then the attestation from slot `N` is included at slot `M = N + 1` for an `inclusion_delay` of 1. So it's possible for an attestation included at a later block to have a lower `inclusion_delay`, i.e. the property is false.
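Plugging concrete numbers into the counterexample (say `N = 100`) makes it easy to check, since `inclusion_delay` is simply the inclusion slot minus the attestation slot:

```python
N = 100
M = N + 1

# Second chain: V attests at slot N - 5, included at slot N.
delay_early_inclusion = N - (N - 5)
# First chain: V attests at slot N, included at slot M = N + 1.
delay_late_inclusion = M - N

assert delay_early_inclusion == 5
assert delay_late_inclusion == 1
# The later inclusion (slot M > slot N) has the *lower* delay,
# so the ordering property is false.
assert M > N and delay_late_inclusion < delay_early_inclusion
```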
Therefore, in the general case it's not safe to just use the state at the block's slot (`state`) to infer the rewards. I had hoped that if this property held we could use it to avoid loading the `current_epoch_end`/`next_epoch_end` states.

Instead, I think we should just proceed with loading all 3 states most of the time and rely on the fact that all the phase0 states are quite small and should load quickly with the soon-to-be-merged hierarchical state diffs.
Thanks for the review!
I've implemented a potential fix and a (rather messy) test for this.
I have a question about the second part: while such a pair of attestations would obviously be slashable, would they really be includable into the "other" chain, respectively? As the shuffling is different, the attesting validator would likely be in a different committee for a different slot on each chain, and therefore validation would fail (for one of the attestations), as the signature would not match the expected validator, as per my understanding.
The beacon node is able to verify attestations from multiple chains: it will load the relevant state & committees and use those to verify the signature. Only in the case where it views the blocks from the other chain as completely invalid will it fail to process the blocks & attestations from that chain.
Oh no, actually you're right. We can't include the attestations from the other fork on chain once the shufflings diverge, because `process_attestation` in the spec uses the committees from the current chain.

I think this means the property I stated does hold. Which means we could use the passed-in `state` for all of the calculations 🤔
While I currently also think that your statement holds, I don't see how we can avoid the state loads.

In phase0, attestation rewards are processed at the epoch boundary. Therefore we need the epoch-boundary state, as a validator might be slashed at the epoch boundary but not at slot `N` where the attestation is included. Furthermore, we need the effective balance and total effective balance from the epoch boundary to properly calculate rewards.
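For reference, the phase0 proposer reward depends on the attester's effective balance and the total active balance, both read from the epoch-boundary state, which is why the pre-state at the block's slot isn't enough. This sketch uses the phase0 spec constants and formulas (`BASE_REWARD_FACTOR = 64`, `BASE_REWARDS_PER_EPOCH = 4`, `PROPOSER_REWARD_QUOTIENT = 8`), with `integer_squareroot` stubbed via `math.isqrt`:

```python
import math

BASE_REWARD_FACTOR = 64
BASE_REWARDS_PER_EPOCH = 4
PROPOSER_REWARD_QUOTIENT = 8

def integer_squareroot(n: int) -> int:
    return math.isqrt(n)

def get_base_reward(effective_balance: int, total_active_balance: int) -> int:
    # Phase0 spec: effective_balance * BASE_REWARD_FACTOR
    #   // integer_squareroot(total_active_balance) // BASE_REWARDS_PER_EPOCH
    return (effective_balance * BASE_REWARD_FACTOR
            // integer_squareroot(total_active_balance)
            // BASE_REWARDS_PER_EPOCH)

def get_proposer_reward(effective_balance: int, total_active_balance: int) -> int:
    # Both inputs must come from the epoch-boundary state: a validator slashed
    # before the boundary earns nothing, and effective balances and the total
    # active balance can change at each boundary.
    return get_base_reward(effective_balance, total_active_balance) // PROPOSER_REWARD_QUOTIENT

GWEI = 10**9
reward = get_proposer_reward(32 * GWEI, 10_000_000 * GWEI)  # in Gwei
```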
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Oooh yes, the slashing thing definitely prevents us from using the pre-state. Good catch.
The effective balances are immutable within an epoch, but you're right they'd also be a problem for the current epoch attestations we include which don't get rewarded until the end of the next epoch. Let's leave it as-is 😅
- check for attestations with lower inclusion delay
- check for double attestations in block
- add test
@michaelsproul the test failure seems to be spurious, can you please retry?
@mergify queue
✅ The pull request has been merged automatically at 8b085dd
Issues Addressed
Proposed Changes
Instead of relying on `compute_block_reward` for phase0, a new custom implementation for `compute_beacon_block_attestation_reward_base` is used, which correctly handles edge cases like attestations from validators that were slashed after attesting, and proposals in the epoch before Altair.

Additionally, this PR incorporates and improves the changes proposed in PR #4882.
Additional Info
Some points I'm unsure about:

- `BeaconChain::state_at_slot` is used to obtain the needed states (at most twice per calculation). Another approach is using the `BlockReplayer` to advance the passed `state`, which would avoid (maybe) expensive lookups from the cold DB. However, I am unsure if the added complexity is worth it, and maybe this is even a non-issue due to smart caching or something else I am unaware of.
- `BeaconState::build_total_active_balance_cache` is not `pub`; is that intentional?