
fix(tests): fix flaky test #1972

Merged: alrevuelta merged 2 commits into master from flaky-2 on Aug 31, 2023

Conversation

alrevuelta (Contributor) commented:

  • Fixes the flaky test that checks rln-relay is applied on all RLN pubsub/content topics.
  • The flakiness is most likely caused by RLN proof generation taking a while in GitHub Actions, where runner resources appear to be quite limited.

Ivansete-status (Collaborator) left a comment:

Very interesting, thanks for it!
Maybe this will also fix #1826.

Comment on lines +156 to +157:

```nim
var messages1: seq[WakuMessage] = @[]
var messages2: seq[WakuMessage] = @[]
```
A Collaborator commented:

I think this is equivalent:

Suggested change:

```diff
-var messages1: seq[WakuMessage] = @[]
-var messages2: seq[WakuMessage] = @[]
+var messages1 = @[WakuMessage]
+var messages2 = @[WakuMessage]
```
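For reference, a minimal sketch of initializations that actually are equivalent in Nim. As far as I can tell, `@[WakuMessage]` would treat the type WakuMessage as an array element and does not compile; an explicit type annotation, or newSeq, is what makes an empty seq work:

```nim
# Both declare an empty seq of WakuMessage. The element type must be
# stated explicitly, because a bare @[] gives the compiler nothing to
# infer it from.
var messages1: seq[WakuMessage] = @[]
var messages2 = newSeq[WakuMessage]()
```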

github-actions (bot) commented:

You can find the image built from this PR at

quay.io/wakuorg/nwaku-pr:1972

```nim
await nodes[0].publish(pubsubTopics[0], message1)
var message = WakuMessage(payload: ("Payload_" & $i).toBytes(), contentTopic: contentTopics[0])
doAssert(nodes[0].wakuRlnRelay.appendRLNProof(message, epochTime()))
messages1.add(message)
```
A Member commented:
I am not sure what the difference is. Does appendRLNProof somehow "cache" the proof when called in the same epoch? The only difference seems to be when the "await" happens (here vs. a bit later), but the proof generation itself does not change. Or what am I missing?

alrevuelta (Contributor, Author) commented:

Hmm, as it was before:

  • await generate proof
  • await publish message
  • await generate proof
  • await publish message

The problem is that generating a proof takes a bit of time. If that time grows beyond the epoch length, the second published message won't be considered spam, since enough time has passed that you are no longer rate limited. That is why the test was failing: I was expecting only 1 message to arrive, but 2 arrived, because the second one was sent too late and therefore skipped the rate limit. See the sketch below.
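A minimal sketch of the failing pattern described above, assuming the loop shape from the quoted test code:

```nim
# Before the fix: each iteration re-reads epochTime(), so each proof is
# bound to whatever the current epoch happens to be. On a slow CI runner,
# proof generation can push the second call into the next epoch, at which
# point the second message is no longer rate-limited and both arrive.
for i in 0 ..< 2:
  var message = WakuMessage(payload: ("Payload_" & $i).toBytes(),
                            contentTopic: contentTopics[0])
  doAssert(nodes[0].wakuRlnRelay.appendRLNProof(message, epochTime()))
  await nodes[0].publish(pubsubTopics[0], message)
```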

A Member commented:

So would it make sense to wrap this in an async proc, execute the proof generation in parallel, and then just await all the futures? It seems highly probable that if proof generation takes a long time we could still hit the "too slow" issue this way, even though with lower probability.
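A hypothetical sketch of this suggestion (not what was merged; see the reply below), assuming chronos-style async and the names from the quoted test code:

```nim
# Wrap proof generation plus publish in an async proc, start all of them,
# then await the futures together. Note that appendRLNProof itself is
# synchronous, so this would mainly overlap the publish steps.
proc proveAndPublish(i: int): Future[void] {.async.} =
  var message = WakuMessage(payload: ("Payload_" & $i).toBytes(),
                            contentTopic: contentTopics[0])
  doAssert(nodes[0].wakuRlnRelay.appendRLNProof(message, epochTime()))
  messages1.add(message)
  await nodes[0].publish(pubsubTopics[0], message)

var futs: seq[Future[void]] = @[]
for i in 0 ..< 2:
  futs.add(proveAndPublish(i))
await allFutures(futs)
```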

alrevuelta (Contributor, Author) commented:

You are very right, but I found an easier solution. "Parallelizing" the proof generation would imply more changes, since it's not async. But I just realized we can use the same epoch time for all of the messages. With this it shouldn't matter if proof generation takes 1 hour, since all messages use the same epoch.

b8b58e2
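A minimal sketch of the fix in the commit above (read epochTime() once and reuse it for every proof), again assuming the loop shape from the quoted test code:

```nim
# After the fix: all proofs share one epoch timestamp, so however long
# proof generation takes, the messages stay in the same epoch and the
# rate limit applies to them as intended.
let epoch = epochTime()
for i in 0 ..< 2:
  var message = WakuMessage(payload: ("Payload_" & $i).toBytes(),
                            contentTopic: contentTopics[0])
  doAssert(nodes[0].wakuRlnRelay.appendRLNProof(message, epoch))
  messages1.add(message)
```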

A Member commented:

Ha! Perfect!

rymnc (Contributor) left a comment:

LGTM


alrevuelta merged commit f262397 into master on Aug 31, 2023. 13 checks passed.
alrevuelta deleted the flaky-2 branch on August 31, 2023 at 10:24.