fix(tests): fix flaky test #1972

Merged · 2 commits · Aug 31, 2023
27 changes: 19 additions & 8 deletions tests/waku_rln_relay/test_wakunode_rln_relay.nim
@@ -150,17 +150,28 @@ procSuite "WakuNode - RLN relay":
nodes[2].subscribe(pubsubTopics[1], relayHandler)
await sleepAsync(1000.millis)

# publish 3 messages from node[0] (last 2 are spam, window is 10 secs)
# generate some messages with rln proofs first. generating
# the proof takes some time, so this is done before publishing
# to avoid blocking the test
var messages1: seq[WakuMessage] = @[]
var messages2: seq[WakuMessage] = @[]
Collaborator commented on lines +156 to +157:

I think this is equivalent:

Suggested change
var messages1: seq[WakuMessage] = @[]
var messages2: seq[WakuMessage] = @[]
var messages1 = @[WakuMessage]
var messages2 = @[WakuMessage]


let epochTime = epochTime()

for i in 0..<3:
var message1 = WakuMessage(payload: ("Payload_" & $i).toBytes(), contentTopic: contentTopics[0])
doAssert(nodes[0].wakuRlnRelay.appendRLNProof(message1, epochTime()))
await nodes[0].publish(pubsubTopics[0], message1)
var message = WakuMessage(payload: ("Payload_" & $i).toBytes(), contentTopic: contentTopics[0])
doAssert(nodes[0].wakuRlnRelay.appendRLNProof(message, epochTime))
messages1.add(message)
Member:
I am not sure what the difference is - does appendRLNProof somehow "cache" the proof when called in the same epoch? It seems like the only difference is when the "await" happens - here vs. a bit later - but the proof generation does not really change. Or what am I missing?

Copy link
Contributor Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Hmm, as it was before:

  • await generate proof
  • await publish message
  • await generate proof
  • await publish message

The problem is that generating the proof takes a bit of time. If this time grows beyond the epoch length, then the second published message won't be considered spam, since enough time has passed that you are no longer rate-limited. This was the reason the tests were failing: I was expecting only 1 message to arrive, but 2 arrived, since the second was sent too late and hence skipped the rate limit.
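For reference, this is the shape of the pre-fix loop the bullets above describe, reconstructed from the lines removed in the diff; the comment marks where the epoch can drift when proving is slow:

```nim
# Pre-fix pattern (illustrative reconstruction of the removed lines)
for i in 0..<3:
  var message1 = WakuMessage(payload: ("Payload_" & $i).toBytes(),
                             contentTopic: contentTopics[0])
  # epochTime() is re-read on every iteration, after the previous (slow) proof
  # has already been generated; if that delay crosses an epoch boundary, this
  # proof belongs to a fresh epoch and the message is no longer rate-limited.
  doAssert(nodes[0].wakuRlnRelay.appendRLNProof(message1, epochTime()))
  await nodes[0].publish(pubsubTopics[0], message1)
```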

Member:
So would it make sense to wrap this in an async proc, execute the proof generation in parallel, and then just await all the futures? I feel like it's highly probable that if the proof generation takes a lot of time, we could still hit the "too slow" issue this way (even though with lower probability).
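A rough sketch of what this suggestion could look like (hypothetical, not what was merged; buildMsg and proofFuts are invented names, and chronos allFutures is assumed). Since appendRLNProof itself is synchronous, an async wrapper like this mostly moves the awaits around rather than truly running the proving in parallel, which is the point the author makes below:

```nim
# Hypothetical sketch of the "await all futures" alternative; not the merged fix.
proc buildMsg(node: WakuNode, topic: ContentTopic, i: int): Future[WakuMessage] {.async.} =
  var msg = WakuMessage(payload: ("Payload_" & $i).toBytes(), contentTopic: topic)
  # appendRLNProof is synchronous, so each call still blocks this future's body
  doAssert(node.wakuRlnRelay.appendRLNProof(msg, epochTime()))
  return msg

var proofFuts: seq[Future[WakuMessage]] = @[]
for i in 0..<3:
  proofFuts.add(buildMsg(nodes[0], contentTopics[0], i))
await allFutures(proofFuts)            # wait for all proofs before publishing
for fut in proofFuts:
  await nodes[0].publish(pubsubTopics[0], fut.read())
```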

Contributor (author):
You are very right, but I found an easier solution. "Parallelizing" the proof generation would imply more changes, since it's not async, but I just realized we can use the same epoch time for all of them. With this it shouldn't matter if proof generation takes 1 hour, since all messages use the same epoch.

b8b58e2
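Restated outside the diff, the change referenced above (b8b58e2) boils down to sampling the epoch once and reusing it for every proof (lines lifted from the diff on this page):

```nim
let epochTime = epochTime()   # sampled once, before any proof is generated

for i in 0..<3:
  var message = WakuMessage(payload: ("Payload_" & $i).toBytes(),
                            contentTopic: contentTopics[0])
  # every proof reuses the same epoch, so the 2nd and 3rd messages always
  # exceed the per-epoch rate limit, however long each proof takes
  doAssert(nodes[0].wakuRlnRelay.appendRLNProof(message, epochTime))
  messages1.add(message)
```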

Member:
Ha! Perfect!


# publish 3 messages from node[1] (last 2 are spam, window is 10 secs)
for i in 0..<3:
var message2 = WakuMessage(payload: ("Payload_" & $i).toBytes(), contentTopic: contentTopics[1])
doAssert(nodes[1].wakuRlnRelay.appendRLNProof(message2, epochTime()))
await nodes[1].publish(pubsubTopics[1], message2)
var message = WakuMessage(payload: ("Payload_" & $i).toBytes(), contentTopic: contentTopics[1])
doAssert(nodes[1].wakuRlnRelay.appendRLNProof(message, epochTime))
messages2.add(message)

# publish 3 messages from node[0] (last 2 are spam, window is 10 secs)
# publish 3 messages from node[1] (last 2 are spam, window is 10 secs)
for msg in messages1: await nodes[0].publish(pubsubTopics[0], msg)
for msg in messages2: await nodes[1].publish(pubsubTopics[1], msg)

# wait for gossip to propagate
await sleepAsync(5000.millis)