Signature caching in the VC #3216
Comments
Oh wait, I just realised that those
We already pre-compute all the sync selection proof signatures. It could be that this burst of signing is what shows up on web3signer's end. Alternatively, if we do adopt a cache we can probably drop the signature pre-compute, as I think a cache would make the pre-compute redundant.
Closing since #3223 implemented this feature and saw little to no benefit.
Description
@jmcruz1983 (Juan) has pointed out that Lighthouse is sending orders of magnitude more signing requests to Web3Signer than Teku. At scale (e.g., thousands of validators), this can overload infrastructure and cause real problems.
Based on some data from Juan, I suspect this is caused by duplicate signing of selection proofs (I'm not sure if it's for attestations or sync messages).
I have a proposed solution that should be simple to implement:
Proposed Solution
In the `validator_store`, create a `signature_cache: SignatureCache<T>(HashMap<(T, SigningContext), Signature>)` struct.
In the `ValidatorStore` there is one `signature_cache` (probably wrapped in an `RwLock`) for `produce_selection_proof` and one for `produce_sync_selection_proof`.
Whenever the signature cache reaches a certain size (4?) it will prune the entry
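A minimal sketch of what this could look like, using stand-in definitions rather than Lighthouse's real types: `Slot`, `Signature`, the fields of `SigningContext`, and the `ValidatorStoreCaches` wrapper are all hypothetical, and the eviction rule is a placeholder since the issue doesn't specify one:

```rust
// A sketch of the proposed signature cache. All types here are simplified
// stand-ins for the real Lighthouse definitions.
use std::collections::HashMap;
use std::sync::RwLock;

/// Placeholder for a real BLS signature.
type Signature = Vec<u8>;
/// Placeholder message type for selection proofs (signed over a slot).
type Slot = u64;

/// Simplified stand-in for Lighthouse's `SigningContext`.
#[derive(Clone, PartialEq, Eq, Hash)]
struct SigningContext {
    epoch: u64,
    domain: [u8; 32],
}

/// Caches signatures keyed by (message, signing context) so that repeated
/// requests for the same selection proof hit the cache instead of the
/// remote signer.
struct SignatureCache<T: Eq + std::hash::Hash> {
    map: HashMap<(T, SigningContext), Signature>,
    max_len: usize,
}

impl<T: Eq + std::hash::Hash + Clone> SignatureCache<T> {
    fn new(max_len: usize) -> Self {
        Self { map: HashMap::new(), max_len }
    }

    fn get(&self, message: &T, ctx: &SigningContext) -> Option<Signature> {
        self.map.get(&(message.clone(), ctx.clone())).cloned()
    }

    fn insert(&mut self, message: T, ctx: SigningContext, sig: Signature) {
        // Placeholder eviction: once the cache is full, drop an arbitrary
        // entry. The issue leaves the exact pruning rule open.
        if self.map.len() >= self.max_len {
            let evict = self.map.keys().next().cloned();
            if let Some(key) = evict {
                self.map.remove(&key);
            }
        }
        self.map.insert((message, ctx), sig);
    }
}

/// One cache per signing purpose, each behind an `RwLock` as suggested above.
/// (`ValidatorStoreCaches` is an illustrative name, not a real Lighthouse type.)
struct ValidatorStoreCaches {
    selection_proofs: RwLock<SignatureCache<Slot>>,
    sync_selection_proofs: RwLock<SignatureCache<Slot>>,
}

fn main() {
    let caches = ValidatorStoreCaches {
        selection_proofs: RwLock::new(SignatureCache::new(4)),
        sync_selection_proofs: RwLock::new(SignatureCache::new(4)),
    };

    let ctx = SigningContext { epoch: 0, domain: [0u8; 32] };
    let slot: Slot = 100;

    // On a miss we would call out to the remote signer, then store the result;
    // subsequent requests for the same (slot, context) are served from the cache.
    if caches.selection_proofs.read().unwrap().get(&slot, &ctx).is_none() {
        let sig = vec![0xab; 96]; // fake 96-byte signature
        caches.selection_proofs.write().unwrap().insert(slot, ctx.clone(), sig);
    }
    assert!(caches.selection_proofs.read().unwrap().get(&slot, &ctx).is_some());

    // The sync committee cache would be used the same way by
    // `produce_sync_selection_proof`.
    let _ = &caches.sync_selection_proofs;
}
```

The intent is that each signing path checks its cache before contacting Web3Signer and only falls through to a real signing request on a miss, which is what would eliminate the duplicate selection-proof signatures described above.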