
Won't honest validators update ETH1 data every two voting periods? #2012

Closed
rkapka opened this issue Aug 6, 2020 · 3 comments · Fixed by #2022

Comments

@rkapka (Contributor) commented Aug 6, 2020

This issue was raised because the overlap of valid block ranges made it possible to vote for the same block in two successive voting periods. The solution was to add the following condition: get_eth1_data(block).deposit_count >= state.eth1_data.deposit_count (#1836)
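To make the condition concrete, here is a hedged sketch of a candidate-block filter with the #1836 check folded in. The types, field names, and the `follow_distance_secs` parameter are simplified stand-ins for the spec's `Eth1Block`, `Eth1Data`, and follow-distance constants, not the actual spec code:

```python
from dataclasses import dataclass

@dataclass
class Eth1Data:
    deposit_count: int

@dataclass
class Eth1Block:
    timestamp: int
    deposit_count: int

def is_candidate_block(block, period_start, follow_distance_secs, state_eth1_data):
    """A block is votable only if its timestamp falls inside the valid
    window AND it does not roll deposit_count backwards (the #1836 fix)."""
    in_range = (period_start - 2 * follow_distance_secs
                <= block.timestamp
                <= period_start - follow_distance_secs)
    return in_range and block.deposit_count >= state_eth1_data.deposit_count
```

Without the second conjunct, a block sitting in the overlap of two periods' windows could be voted on twice.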

I think that the current implementation of the Honest Validator's get_eth1_vote, where the validator follows the majority vote, will often result in eth1_data being updated every two voting periods instead of every period. This assumes that most validators have a reasonably up-to-date view of the Ethereum 1 chain.

Perhaps a visual representation will make it easier to understand.

(table: slot-by-slot eth1 votes across voting periods P1 and P2, not reproduced)

Additional explanation:

  • We assume that each voting period is 4 slots.
  • Valid ranges overlap as explained in Disallow single Eth1 block in two voting periods #1757.
  • Block 0x1 is valid during P2 because its timestamp is within the valid range (2 090 000 < 2 099 000 < 2 190 000).
  • P2/S2's proposer sees only the vote from P2/S1. This is because in order to obtain the correct state for proposals (which is the state at P2/S1), the state transition function had to transition from P1/S4 to P2/S1 which cleared all votes from P1.
  • The same thing will happen with P3 and P4, P5 and P6, etc.

A possible solution would be to check whether the vote is being cast in the first slot of a voting period. If so, ignore the majority vote and propose a new block.
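The proposed check is just slot arithmetic. A minimal sketch, assuming mainnet-style constants (the constant values below are illustrative, not taken from this thread):

```python
SLOTS_PER_EPOCH = 32
EPOCHS_PER_ETH1_VOTING_PERIOD = 64  # illustrative mainnet-style values
SLOTS_PER_VOTING_PERIOD = EPOCHS_PER_ETH1_VOTING_PERIOD * SLOTS_PER_EPOCH

def should_ignore_majority(slot):
    # Proposed fix: in the first slot of a voting period, any majority the
    # proposer observes is left over from the previous period, so it should
    # be ignored and fresh eth1 data proposed instead.
    return slot % SLOTS_PER_VOTING_PERIOD == 0
```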

@rkapka rkapka changed the title Voting on ETH 1 data during the first slot of a voting period Won't honest validators update ETH1 data every two voting periods? Aug 12, 2020
@djrtwo (Contributor) commented Aug 19, 2020

So the issue here is the definition of state to use when running get_eth1_vote. It is assumed throughout the block production instructions that state is the state at the slot of the block being created. This requires a process_slots state transition on the state from the previous slot. In doing so, eth1_data_votes would be cleared out in your example, so the double initial vote you described would not occur.
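A toy model of this point (an assumed simplification, not the spec code): advancing the state to the block's slot with process_slots is what empties the vote list at a period boundary, so a proposer working from the correctly transitioned state never sees the previous period's votes.

```python
SLOTS_PER_VOTING_PERIOD = 2048  # EPOCHS_PER_ETH1_VOTING_PERIOD * SLOTS_PER_EPOCH, mainnet-style

class State:
    def __init__(self):
        self.slot = 0
        self.eth1_data_votes = []

def process_slots(state, target_slot):
    """Toy per-slot transition: votes reset whenever a new period begins."""
    while state.slot < target_slot:
        state.slot += 1
        if state.slot % SLOTS_PER_VOTING_PERIOD == 0:
            state.eth1_data_votes = []
```

Running this from the last slot of a period into the first slot of the next clears the votes before any block is built on the new state.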

Although this transition to the next slot is described explicitly in attestation production, it is only assumed in block production.

I'll make the update to ensure it is explicit, although clients must already be using this logic otherwise they would be producing malformed blocks!

Thank you!

@rkapka (Contributor, Author) commented Aug 20, 2020

Thank you for responding.

I think there is an unwanted edge case/side effect of the proposed solution. eth1_data_votes are used to advance eth1_data when the >50% threshold is reached. If the threshold was reached in the last slot of the previous voting period, then clearing the votes before the proposal will not advance eth1_data.
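For reference, a sketch of the >50% threshold being discussed, simplified from the spec's process_eth1_data (the 4-slot period below matches the issue's toy example; the real constant is EPOCHS_PER_ETH1_VOTING_PERIOD * SLOTS_PER_EPOCH):

```python
SLOTS_PER_VOTING_PERIOD = 4  # toy value matching the issue's 4-slot example

def process_eth1_data(state_votes, new_vote, current_eth1_data):
    """Append the block's vote; advance eth1_data once a strict majority
    of the period's slots have voted for the same data."""
    state_votes.append(new_vote)
    if state_votes.count(new_vote) * 2 > SLOTS_PER_VOTING_PERIOD:
        return new_vote  # majority reached: eth1_data advances
    return current_eth1_data
```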

@djrtwo (Contributor) commented Aug 24, 2020

So the clearing of votes happens on the epoch transition that ends a voting period.

That is, between the slot where slot % (EPOCHS_PER_ETH1_VOTING_PERIOD * SLOTS_PER_EPOCH) equals EPOCHS_PER_ETH1_VOTING_PERIOD * SLOTS_PER_EPOCH - 1 and the slot where it equals 0. So at the start of slot 0 of the period, the votes are cleared out.

Because we use the state at the slot we are trying to prepare a block for, for the 0th slot in the period we'll see no votes in the state.
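A toy end-to-end run (an assumed simplification, not the spec) of why the edge case above is safe: a vote that crosses the threshold in the last slot of a period advances eth1_data during that block's processing, and the boundary clearing only empties the vote list, never eth1_data itself.

```python
PERIOD_SLOTS = 4  # matches the issue's 4-slot example

class State:
    def __init__(self):
        self.slot = 0
        self.eth1_data = "old"
        self.eth1_data_votes = []

def process_block_vote(state, vote):
    # Threshold is applied inside block processing, before any clearing.
    state.eth1_data_votes.append(vote)
    if state.eth1_data_votes.count(vote) * 2 > PERIOD_SLOTS:
        state.eth1_data = vote

def advance_slot(state):
    state.slot += 1
    if state.slot % PERIOD_SLOTS == 0:
        state.eth1_data_votes = []  # only the votes reset at the boundary
```

Voting "new" in slots 1 through 3 reaches 3 of 4 votes at the last slot of the period, so eth1_data flips to "new" immediately; the reset at slot 4 then clears the list but leaves eth1_data advanced.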
