Benchmark js-libp2p transfer performance #65

Closed
1 task
Tracked by #63
p-shahi opened this issue Oct 20, 2022 · 5 comments

p-shahi (Member) commented Oct 20, 2022

mxinden (Member) commented Jun 28, 2023

To make sure we are on the same page:

> as a test-plan

We no longer plan to write a Testground test plan, correct?

> Add js-libp2p-transfer-performance as a test-plan and CI job to benchmark transfer times across releases to catch issues like libp2p/js-libp2p#1342

Is this accurate @p-shahi and @maschad?

maschad (Member) commented Jun 28, 2023

> One would add the js-libp2p perf implementation implemented in libp2p/js-libp2p#1604 to master/perf/impl. See guide in master/perf#adding-a-new-implementation.

This seems accurate, although I think the format of the Output mentioned in the guide would need to be modified to accommodate the scenario that libp2p/js-libp2p#1604 seeks to address specifically, i.e. gauging single-connection throughput by dividing the number of bytes transferred by the total time from stream open to stream close. My understanding is that this would exclude the time taken to establish the connection.
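As a sketch of that measurement (the function and variable names here are hypothetical placeholders, not the actual API of the perf implementation in libp2p/js-libp2p#1604), single-connection throughput excluding connection setup could be computed like:

```typescript
// Sketch: single-connection throughput, excluding connection establishment.
// Timing starts at stream open and ends at stream close, so dial/handshake
// time is not counted. All names here are illustrative placeholders.

function throughputBytesPerSecond (
  bytesTransferred: number,
  streamOpenMs: number,
  streamCloseMs: number
): number {
  const elapsedSeconds = (streamCloseMs - streamOpenMs) / 1000
  return bytesTransferred / elapsedSeconds
}

// Example: 100 MiB transferred over a stream that was open for 8 seconds.
const bytes = 100 * 1024 * 1024
const rate = throughputBytesPerSecond(bytes, 0, 8000)
console.log(`${(rate / (1024 * 1024)).toFixed(1)} MiB/s`) // 12.5 MiB/s
```

The key design point in the discussion is what the denominator measures: stream lifetime only, not dial plus handshake plus stream lifetime.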

> One would either trigger the perf GitHub workflow from a pull request on libp2p/test-plans before every release, or trigger the workflow on every pull request on libp2p/js-libp2p.

👍🏾

> Based on the results reported by the workflow, more specifically the updated benchmark-results.json, one can e.g. fail the CI run due to a 20% performance regression.

Agreed, although looking at the current benchmark-results.json there doesn't seem to be a format for reporting runs in different environments (such as browser vs. web workers vs. Node.js). I don't think the Output format stipulated in https://github.com/libp2p/test-plans/tree/master/perf#adding-a-new-implementation prevents that, based on my interpretation, so benchmark-results.json would need some changes to its format.
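For illustration only (these field names are assumptions, not the real benchmark-results.json schema), a result entry extended with an environment field, plus the 20% regression gate mentioned above, might look like:

```typescript
// Sketch of a hypothetical extended benchmark result entry that records the
// runtime environment alongside throughput, and a gate that fails on a >20%
// regression against a baseline. Field names are illustrative only.

interface BenchmarkResult {
  implementation: string // e.g. 'js-libp2p'
  environment: 'node' | 'browser' | 'webworker'
  throughputBytesPerSecond: number
}

function isRegression (
  baseline: BenchmarkResult,
  current: BenchmarkResult,
  threshold = 0.2
): boolean {
  return current.throughputBytesPerSecond <
    baseline.throughputBytesPerSecond * (1 - threshold)
}

const baseline: BenchmarkResult = {
  implementation: 'js-libp2p',
  environment: 'node',
  throughputBytesPerSecond: 100_000_000
}
const current: BenchmarkResult = { ...baseline, throughputBytesPerSecond: 75_000_000 }

console.log(isRegression(baseline, current)) // true: a 25% drop exceeds the 20% threshold
```

Keying results by environment would let the gate compare like with like, so a browser regression doesn't hide behind a healthy Node.js number.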

justin0mcateer commented Jul 28, 2023

> We no longer plan to write a Testground test plan, correct?

We have been planning to use Testground for a similar purpose. I see that there was some work committed recently to enable browser environments there. Is there some reason using Testground is undesirable or wouldn't achieve the desired outcome?

maschad (Member) commented Jul 28, 2023

> > We no longer plan to write a Testground test plan, correct?
>
> We have been planning to use Testground for a similar purpose. I see that there was some work committed recently to enable browser environments there. Is there some reason using Testground is undesirable or wouldn't achieve the desired outcome?

For the first use case, which was to start up two nodes and have them ping each other, we found Testground too complicated and slow for what we wanted to do; you can read more about that here.

@maschad maschad removed their assignment Aug 14, 2023
achingbrain (Member) commented
This was (re)enabled by #325 so this can probably be closed now?

@p-shahi p-shahi closed this as completed Nov 21, 2023