
Chore/update wsts version 7 #4267

Merged
merged 1 commit into from
Jan 19, 2024
Conversation

jferrant
Collaborator

Description

Update the wsts version to 7 to enable DKG robustness and voting "no" on a block.

NOTE: this also fixes a slowdown caused by a double wait for command events using the poll_timeout.

Collaborator

@xoloki xoloki left a comment


LGTM thanks!

Signed-off-by: Jacinta Ferrant <[email protected]>
Member

@kantai kantai left a comment


LGTM!

@@ -36,7 +36,7 @@ use wsts::state_machine::PublicKeys;
 /// List of key_ids for each signer_id
 pub type SignerKeyIds = HashMap<u32, Vec<u32>>;

-const EVENT_TIMEOUT_MS: u64 = 5000;
+const EVENT_TIMEOUT_MS: u64 = 50;
Member


That's a little small, don't you think?

Collaborator


The issue here is that the libsigner runloop has to multiplex two event receivers: one with stackerdb events, the other with main loop commands. So every time we would get an event without a command, we'd wait 5 seconds for a command that never came.

I'm still not exactly sure why this suddenly became a problem. wsts v7 added a DkgEndBegin message, which is amplified by all of the inactive coordinators, so presumably that was it. But I don't know why we weren't hitting this issue more often, since we send few commands but received lots of messages even before this change.

Regardless, this was always broken before, so the fix seems appropriate.
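The double wait described above can be sketched as two back-to-back blocking polls. This is a hypothetical illustration, not the actual libsigner run loop; the receiver names and `poll_once` helper are assumptions.

```rust
use std::sync::mpsc::{channel, Receiver};
use std::time::Duration;

// Hypothetical sketch of the double wait: the run loop polls two
// receivers back to back, each blocking for the full poll timeout.
const EVENT_TIMEOUT_MS: u64 = 50;

fn poll_once<E, C>(events: &Receiver<E>, commands: &Receiver<C>) -> (Option<E>, Option<C>) {
    let timeout = Duration::from_millis(EVENT_TIMEOUT_MS);
    // Even when an event is already queued, the second recv_timeout
    // still blocks for the full timeout if no command ever arrives.
    // At the old 5000 ms value, every event-without-command stalled
    // the loop for 5 seconds.
    let event = events.recv_timeout(timeout).ok();
    let command = commands.recv_timeout(timeout).ok();
    (event, command)
}

fn main() {
    let (event_tx, event_rx) = channel::<u32>();
    let (_cmd_tx, cmd_rx) = channel::<u32>();
    event_tx.send(7).unwrap();
    let (event, command) = poll_once(&event_rx, &cmd_rx);
    assert_eq!((event, command), (Some(7), None));
}
```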

Collaborator Author

@jferrant jferrant Jan 19, 2024


I think once we remove the command receiver, we can set it back to 5000 ms. For now, though, this fix is okay. If we find we need to KEEP the command receiver, we should give it a separate timeout :)

The command receiver was just a temporary change to enable us to test things while the node did not have the state necessary to trigger DKG / sign rounds.
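A separate timeout per receiver, as suggested above, might look like the following hypothetical sketch; the constant names and values are illustrative assumptions, not the actual signer code.

```rust
use std::sync::mpsc::{channel, Receiver};
use std::time::Duration;

// Hypothetical split timeouts: a long wait on the busy event channel,
// and a near-zero wait on the temporary command channel so it never
// stalls the loop when no command is pending.
const EVENT_TIMEOUT: Duration = Duration::from_millis(5000);
const COMMAND_TIMEOUT: Duration = Duration::from_millis(1);

fn poll_once<E, C>(events: &Receiver<E>, commands: &Receiver<C>) -> (Option<E>, Option<C>) {
    let event = events.recv_timeout(EVENT_TIMEOUT).ok();
    // Commands are rare, so check for them without blocking the event path.
    let command = commands.recv_timeout(COMMAND_TIMEOUT).ok();
    (event, command)
}

fn main() {
    let (event_tx, event_rx) = channel::<&str>();
    let (_cmd_tx, cmd_rx) = channel::<&str>();
    event_tx.send("stackerdb-event").unwrap();
    let (event, command) = poll_once(&event_rx, &cmd_rx);
    assert_eq!((event, command), (Some("stackerdb-event"), None));
}
```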

@jcnelson
Member

Just had one question about the timeout -- 50ms is a little small for network latency across the broad internet.


codecov bot commented Jan 19, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Comparing base (5084a33, 83.34% coverage) to head (437d8fa, 58.99% coverage).

Additional details and impacted files
@@             Coverage Diff             @@
##             next    #4267       +/-   ##
===========================================
- Coverage   83.34%   58.99%   -24.35%     
===========================================
  Files         432      432               
  Lines      306549   306557        +8     
===========================================
- Hits       255484   180867    -74617     
- Misses      51065   125690    +74625     


@jferrant jferrant merged commit 3094df2 into next Jan 19, 2024
1 of 2 checks passed
@blockstack-devops
Contributor

This pull request has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

@stacks-network stacks-network locked as resolved and limited conversation to collaborators Nov 7, 2024