Transfer times doubled between [email protected] and [email protected] #1342
Comments
@Alexis-ROYER I haven't looked at your case specifically, so I can't say 100% that the problem is #1303, but it's very likely most of it is. We're actively working on resolving the performance issues, and most of the work has landed in our nightly version. This is our current focus (along with stabilizing the types) before pushing out a v0.38.0.
@Alexis-ROYER Could you try with the latest 0.38 version?
Closing as the issue should be resolved - please re-open if you continue to observe it.
I've updated my measurements with [email protected] and [email protected]: transfer times are still about the same as those of [email protected], higher than [email protected]. The following test script works for both [email protected] and [email protected]:
@achingbrain Could you please re-open the issue? I can't on my side.
I've put a test bed together here: https://github.com/ipfs-shipyard/js-libp2p-transfer-performance

It takes a slightly different approach to the two files in the OP - notably it doesn't configure a DHT since it's not needed, and it uses the plaintext connection encrypter instead of noise as it's simpler. It transfers 100MiB of data between the two nodes using increasing chunk sizes, recording how long it takes. Since the thing doing all the work here is the stream multiplexer, that is effectively what it tests.

What I've found largely tallies with @Alexis-ROYER's investigation. In the graph above I've ignored message sizes below 256b as I don't think many protocols would send messages so small, but the values follow the general pattern.

We should look at the performance overhead for messages smaller than this to try to bring the gap down between the blue and green lines in the graph above. Other takeaways are that if you're using
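For illustration, a minimal sketch of the measurement loop the test bed is described as running - send a fixed payload in chunks of increasing size and record how long draining it takes. This is not the actual test-bed code (which drives two libp2p nodes); all names here are illustrative, in plain Node.js:

```js
// Illustrative benchmark shape only: a fixed 100MiB payload is generated in
// chunks of increasing size and the time to consume it all is recorded.
import { performance } from 'node:perf_hooks'

const TOTAL = 100 * 1024 * 1024 // 100 MiB

async function * chunks (chunkSize) {
  const chunk = new Uint8Array(chunkSize)
  for (let sent = 0; sent < TOTAL; sent += chunkSize) {
    yield chunk
  }
}

for (const chunkSize of [256, 1024, 4096, 16384, 65536]) {
  const start = performance.now()
  let received = 0
  for await (const chunk of chunks(chunkSize)) {
    received += chunk.byteLength
  }
  console.log(`${received} B in ${chunkSize} B chunks in ${Math.round(performance.now() - start)}ms`)
}
```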
I will run some profiling tools today to see what the hot paths are.
2022-09-13 conversation: @mpetrunic has done some profiling but libp2p dependencies aren't showing up with clinic. @Alexis-ROYER: thanks for reporting this, as clearly there is an issue. We're curious how you noticed this and if this is having an impact on any applications/uses you have with js-libp2p. This will help us gauge where to prioritize this work. And certainly any help digging into the issue is welcome.
@BigLep My pleasure. I was actually investigating performance issues in an app I'm working on. With some big data to transfer, things would get stuck for a while with [email protected].
Related to libp2p/js-libp2p#1342. Shaves ~100ms when sending small chunks, but we are still not close to 0.36 speed.
Oops, seems like we needed more information for this issue. Please comment with more details or this issue will be closed in 7 days.
I removed the relevant labels so this doesn't get auto-closed.
What I have found is that between

In mplex we yield every encoded Uint8Array - in the current release that's one for the header and one (or more) for the data, the theory being if you pump data into the socket opened by

I have refactored this code locally to only ever yield one Uint8Array (sketched below) and the time taken to send 105MB of data in 256b chunks decreases from about 5.5s to about 3s, which is good but

Digging further,
The messages are added to this pushableV but only ever one at a time. I've yet to ascertain why the older version yields 2x messages in one go - in theory the

Once I figure out why the messages don't stack in recent releases the same way they do in

I need to stop now but I'll keep digging on Monday.
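A toy sketch of the single-buffer refactor described above, assuming a hypothetical `encodeMessage` helper (this is not the actual mplex source): the header and payload are copied into one Uint8Array so each message results in a single yield/write instead of two.

```js
// Toy illustration of "only ever yield one Uint8Array": concatenate the
// header and payload so the socket sees one buffer per message, not two.
function encodeMessage (header, payload) {
  const out = new Uint8Array(header.byteLength + payload.byteLength)
  out.set(header, 0)
  out.set(payload, header.byteLength)
  return out
}

// e.g. a 2-byte header followed by 4 bytes of data
const msg = encodeMessage(Uint8Array.of(0x00, 0x04), Uint8Array.of(1, 2, 3, 4))
console.log(msg) // Uint8Array(6) [ 0, 4, 1, 2, 3, 4 ]
```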
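And a minimal sketch of the `pushableV` batching behaviour referred to above: values pushed before the consumer reads come out together as one array, which is what lets a sender coalesce several messages into a single write.

```js
import { pushableV } from 'it-pushable'

const source = pushableV()

// two messages are pushed before the consumer starts reading
source.push(Uint8Array.of(1))
source.push(Uint8Array.of(2))
source.end()

for await (const batch of source) {
  // both queued messages arrive in a single batch
  console.log(batch.length) // 2
}
```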
Instead of using `it-pipe` to tie the inputs and outputs of the muxer and underlying connection together, pipe them in parallel.

When sending 105MB in 32b chunks:

```
testing 0.36.x
sender 3276810 messages 1638409 invocations <-- how many mplex messages are sent in how many batches
sender 1638412 bufs 68 b <-- how many buffers are passed to the tcp socket and their average size
105 MB in 32 B chunks in 9238ms
```

```
testing 0.40.x-mplex
sender 3276811 messages 3276808 invocations
sender 3276811 bufs 34 b
105 MB in 32 B chunks in 15963ms
```

```
testing 0.40.x-mplex
sender 3276811 messages 1638408 invocations
1638411 bufs 68 b
105 MB in 32 B chunks in 8611ms
```

Fixes #1342
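A hedged sketch of the piping change this PR describes, with toy duplex objects standing in for the real connection and muxer (both hypothetical here): the two directions run as independent pipes instead of one chained `pipe()` call.

```js
import { pipe } from 'it-pipe'

// toy duplex stand-ins for the real connection and muxer
function toyDuplex () {
  return {
    source: (async function * () { yield Uint8Array.of(1, 2, 3) })(),
    sink: async (source) => { for await (const _ of source) { /* consume */ } }
  }
}

const connection = toyDuplex()
const muxer = toyDuplex()

// before (simplified): one chained pipe ties both directions together
// await pipe(connection, muxer, connection)

// after: pipe each direction independently, in parallel
await Promise.all([
  pipe(connection.source, muxer.sink),
  pipe(muxer.source, connection.sink)
])
```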
Good news: with #1491 and libp2p/js-libp2p-mplex#233 both applied, streaming performance is fixed - not only that, but it's now faster than [email protected].
Instead of using `it-pipe` to tie the inputs and outputs of the muxer and underlying connection together, pipe them in parallel.

When sending 105MB in 32b chunks:

## `[email protected]`

```
testing 0.36.x
sender 3276810 messages 1638409 invocations <-- how many mplex messages are sent in how many batches
sender 1638412 bufs 68 b <-- how many buffers are passed to the tcp socket and their average size
105 MB in 32 B chunks in 9238ms
```

## `[email protected]`

```
testing 0.40.x-mplex
sender 3276811 messages 32 invocations
sender 6553636 bufs 17 b
27476 ms
105 MB in 32 B chunks in 27450ms
```

## With this patch

```
testing 0.40.x-mplex
sender 3276811 messages 17 invocations
sender 6553636 bufs 17 b
23781 ms
105 MB in 32 B chunks in 23753ms
```

## With this patch and libp2p/js-libp2p-mplex#233

```
testing 0.40.x
sender 3276811 messages 1638408 invocations
1638411 bufs 68 b
105 MB in 32 B chunks in 8611ms
```

Refs #1342
Related to libp2p/js-libp2p#1342, but not showing a lot of ms there 😅. Benchmarking against the current master shows a big improvement.

Co-authored-by: Alex Potsides <[email protected]>
Severity: Medium (performance issue)
Description:
According to my experiments, transfer times roughly doubled between [email protected] and [email protected].
If I am not mistaken:
Steps to reproduce the error:
See the following test scripts (one for each version of js-libp2p):
Those scripts make it possible to launch 2 nodes:

- a sending node, launched with:
  - `--send`: path to a file whose data to send,
  - `--size`: size of the chunks for sending the data;
- a receiving node, launched with the `--connect` option set with the address printed out by the sender.

Timestamps are printed out when the sender starts sending the data, and when the receiver has finished receiving all of it.
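A hypothetical invocation of such a script (the script name, file path and address below are illustrative, not from the actual scripts referenced above):

```
# sender: prints its listen address, then streams the file in 1024-byte chunks
node test-libp2p.js --send ./payload.bin --size 1024

# receiver: dials the address printed out by the sender
node test-libp2p.js --connect /ip4/127.0.0.1/tcp/4001/p2p/<peer-id>
```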
With a bit of scripting around it, I put together the following graphs:
Note: chunks of 100 bytes are not representative, inasmuch as they cause a lot of overhead around the useful data actually transferred (see #1343).
As the graphs show, [email protected] transfer times are about twice those of [email protected] from chunk sizes of 1Kb upward.