Research on optimal LMD-GHOST implementation: need help with network topology/expectations #570
Pre-PR discussion from the sharding gitter:
In the worst case up to 4m.
In the worst case ~10k.
We can expect every validator to attest every epoch.
Realistically 1-5. But we need to be able to handle larger reorgs due to attackers as well, so probably have a distribution, and include some blocks going much further back. Also include attesters that just go offline some of the time.
1-1.5?
I think pruning past ~1-3 months is a great idea.
See above.
In general, I'd say keep worst-case performance in mind; we don't want something that works amazingly well in the average case but breaks quickly in the worst case. We also don't want something that gets its efficiency from complex heuristics that are attackable. Also, there are some efficiency considerations here that the above did not touch upon. It's not that difficult to make LMD GHOST work in optimistic cases for any number of validators, because you can run a pre-processing step that groups together the impact of all validators whose latest attestations point to the same block B. So if everyone is participating every epoch, you can compute the fork choice rule in time O(64 * log(t)). The challenge is when there is a long tail of most-recent blocks, in the worst case one validator per slot going back a month or more. It's important to test the fork choice rule in both cases.
The above answers help a lot. But the huge number of validators, together with writing a more elaborate simulation, sparked a lot of new questions.

Update on my LMD-GHOST implementations: I changed every algorithm to work with arbitrary attestation weighting and with batching (aggregation, without the signature handling, but that can be added later). The current state of the master branch of my repo is not working, because of some uncertainties in how to deal with state. I can either simplify and continue testing, if people need their LMD-GHOST implementation tested quickly, or wait and implement basic storage for my simulation, based on insights from my new issue (#582). That would enable me to simulate proper epoch transitions with shuffling.
@protolambda Any outstanding questions?
No, I think I'm done for now with my LMD-GHOST implementation work. Implementer teams can figure out their own preferred algorithm, using my simulator plus the writeup of the implementations. Parameters are now easily configurable, and combined with the answers above, teams can tune them to their speed and usage requirements. I'll close this issue now.
Since the last Eth 2.0 implementers call (nr 11), I started work on a comparison of the different LMD-GHOST implementations. This issue is NOT to discuss the implementations themselves (please submit an issue/PR to my research repo), but rather what kind of simulation(s) we need to pick the best option.
Network topology is also something other teams are struggling with, e.g. see prysmaticlabs/prysm#1479
The LMD-GHOST implementations are written in Go, with comments to guide you through.
A simulated chain with some parameters is also included, but it may need changes to better fit the outcome of the discussion below.
Repo here: https://github.com/protolambda/lmd-ghost
I'll do a write-up in the Readme of the different implementation features later.
Now, to get a good simulation going, I'd like to have some help with answering the following questions: