Please setNoDelay(true) on socket #229
Did a quick test on a small cloud VM and found that performance is indeed a bit better on Linux. On other platforms (OS X) it is actually worse.
…Linux, slightly worse for OS X.

```
PUB
version           average            max   min   samples  rate
----------------  -----------------  ----  ----  -------  ----------------
1.0.0_tcpnodelay  1050.125           1079  1041  1000000  952,267 msgs/sec
1.0.0             1073.25            1103  1052  1000000  931,749 msgs/sec

PUBSUB
version           average            max   min   samples  rate
----------------  -----------------  ----  ----  -------  ----------------
1.0.0_tcpnodelay  1794.142857142857  1828  1762  1000000  557,369 msgs/sec
1.0.0             1843.375           1859  1818  1000000  542,483 msgs/sec
```
Actually did some testing.
I am getting about 1 ms roundtrip without doing anything. Setting the …
These are runs on the same tiny machine at AWS using Node 10 and 6.
I am going to discuss this with the team.
Can you test it with the latest Node.js LTS, v8.11.3? Or expose an option …
Still not seeing any significant change. As you know, we do our own buffering of messages, which reduces the number of syscalls we make and yields much better performance.
…Linux, slightly worse for OS X. (#230)
I don't think this issue is about benchmark performance. Please check our issue here: three people tested it on different platforms, and all found Linux very slow. I think the socket buffering is right, but the socket waits a short time (30~40 ms) if the data being sent is too small. In our scenario we async/await the last publish and wait for the response to finish, and that is why it is slow. But if you test it in a benchmark scenario, the socket buffer fills up quickly, so performance is slightly better. We also found why Redis doesn't have this problem: its …
The main issue is the number of system calls. The internal buffering takes care of that. The test should be simple:
The difference between the receive time and the start time is the latency. The way it is implemented, the first message triggers the write to the socket on the next event loop if the buffer is not full; otherwise, the socket is written immediately. When publishing many events in a loop, the fewest system calls are used, yielding greater throughput.

```
"use strict";
var NATS = require('nats');
var nc = NATS.connect();
var start;
nc.subscribe('foo', () => {
    var end = process.hrtime();
    var s = end[0] - start[0];
    var ns = end[1] - start[1];
    console.log(s * 1000000 + ns / 1000 + " µs");
    nc.close();
});
nc.on('connect', () => {
    start = process.hrtime();
    nc.publish("foo");
});
```

On the little node where I ran the tests, this yields:

```
~$ node lat_perf.js
1253.901 µs
```
After digging into a weird bug, we found that the Node.js doc is wrong: on Linux, `setNoDelay()` is not `true` by default.

At `node_modules/nats/lib/nats.js`, line 620, please add `this.stream.setNoDelay(true)`.

Also check Node.js issue 906.