feat: curate peers shared over px protocol #1671
Conversation
Force-pushed from 8574e79 to 456f486
@@ -113,6 +113,10 @@ proc addPeer*(pm: PeerManager, remotePeerInfo: RemotePeerInfo) =

  pm.peerStore[AddressBook][remotePeerInfo.peerId] = remotePeerInfo.addrs
  pm.peerStore[KeyBook][remotePeerInfo.peerId] = publicKey
  pm.peerStore[SourceBook][remotePeerInfo.peerId] = origin

  if remotePeerInfo.enr.isSome:
Shall we use () as it is a verb?
- if remotePeerInfo.enr.isSome:
+ if remotePeerInfo.enr.isSome():
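For context on the suggestion: both spellings compile in Nim, since parameterless calls can be written with or without parentheses, so the parentheses are purely a readability preference. A tiny standalone illustration using std/options (unrelated to the nwaku code above):

```nim
import std/options

let maybeEnr = some(42)
# Both forms are accepted by the compiler; the review above prefers the
# explicit call syntax to signal that isSome is a proc, not a field.
doAssert maybeEnr.isSome
doAssert maybeEnr.isSome()
```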
LGTM! 🥇
Approach looks good, though I have a question about race conditions.
wpx.cleanCache()
wpx.populateEnrCache()
Wouldn't this cause a race condition between someone querying the cache and the heartbeat cleaning it in order to repopulate it? I would suggest keeping the cache being populated separate from the one being queried, with an atomic swap between the two once the former is populated.
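A minimal sketch of that suggestion, assuming a chronos single-threaded setup and using illustrative names (only MaxPeersCacheSize appears in this PR; the rest is not the actual nwaku code): build the replacement cache off to the side, then publish it in one assignment.

```nim
type PeerExchange = object
  enrCache: seq[string]          # stand-in for the real ENR record type

const MaxPeersCacheSize = 60     # illustrative value, not the PR's default

proc repopulateEnrCache(wpx: var PeerExchange, discovered: seq[string]) =
  # Build the new cache into a local seq, capped at MaxPeersCacheSize.
  var fresh: seq[string] = @[]
  for record in discovered:
    if fresh.len >= MaxPeersCacheSize:
      break
    fresh.add(record)
  # Single assignment swaps the whole cache in. Because chronos schedules
  # async tasks cooperatively on one thread, a concurrent query sees either
  # the old cache or the new one, never an emptied-but-not-yet-refilled one.
  wpx.enrCache = fresh
```

The key difference from clean-then-populate is that the live cache is never mutated in place, so no intermediate state is ever observable.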
Indeed, totally missed that.
Should be fixed here: 910afc5
require (await node1.peerManager.connectRelay(node2.switch.peerInfo.toRemotePeerInfo()))

# Give disv5 some time to discover each other
await sleepAsync(5000.millis)
Is there a way to limit these "arbitrary" sleeps in our tests and instead check for the state (e.g. in a loop with a smaller sleep duration), or would that be overkill? IMO that could speed up some test runs and at the same time make them more resilient when the node is slower.
Indeed, totally agree that in some tests we should follow this pattern: waitFor a condition instead of an arbitrary sleep. We would still need a timeout, but of course tests would usually finish before it, so we can cut some time.
Perhaps not for this specific test, but I will take it into account for further tests/refactors, thanks!
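As a rough sketch of that pattern (not code from this PR; the helper name and defaults are made up), a chronos-based test could poll a condition with a short interval bounded by a timeout instead of a fixed 5-second sleep:

```nim
import chronos

# Hypothetical test helper: poll `cond` every `interval` until it holds or
# the deadline passes, instead of always sleeping for the full duration.
proc eventually(cond: proc(): bool {.gcsafe.},
                timeout = 5.seconds,
                interval = 100.millis): Future[bool] {.async.} =
  let deadline = Moment.now() + timeout
  while not cond():
    if Moment.now() > deadline:
      return false
    await sleepAsync(interval)
  return true
```

A test would then `check await eventually(...)` with a predicate over the node's state and typically return well before the timeout.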
Force-pushed from 456f486 to 910afc5
Force-pushed from 910afc5 to 6e2af35
Thanks! LGTM now.
Force-pushed from 6e2af35 to 7399710
closes #1521

Peers shared over the peer-exchange protocol are now curated and capped at MaxPeersCacheSize peers (see this). Beyond unit tests, can be tested with: