Your benchmarks are way off #41
Comments
Hi @Richie765, thanks for your input. I agree, if the numbers are not accurate, they should be updated. I will re-run the tests on my current hardware. Could you please also share the details of the machine you are using? There may be many reasons why your numbers are lower. Regards
Some variation can be expected, but I can't imagine this being just hardware architecture. Not to mention that 46x seems too good to be true. My tests were run on an AMD 1920X. I just ran the tests on an i5-5300U; here are the results:

fast-proxy-undici/0http: Requests/sec 8769.27 (HTTP pipelining = 10)
Hi @Richie765, thanks again for your input and for challenging these numbers. I have created a separate project with cleaner tests and no framework dependencies. Could you please try those on your hardware: https://github.com/jkyberneees/nodejs-proxy-benchmarks

The benchmark results currently in the README date from version v1.0.0 and an older Node.js version, so it clearly seems we had untracked performance regressions, either in this module or in Node.js itself.

Regarding the bug you mention:

```diff
-router.on(['GET', 'POST', 'PUT', 'PATCH', 'OPTIONS', 'DELETE'], '/service/*', (req, res) => {
+router.get('/service/*', (req, res) => {
```

The first line actually listens for all HTTP methods instead of only GET; here I was using the
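To make the behavioral difference concrete, here is a minimal sketch of method-based routing. This is a hypothetical toy router for illustration only, not the actual 0http/find-my-way API: registering a handler under an array of methods answers every listed HTTP verb, while a `get` shorthand answers GET only.

```javascript
// Toy route table keyed by "METHOD path" (illustration only, not 0http).
const routes = new Map();

// Register one handler for several HTTP methods, like router.on([...], path, handler).
function on(methods, path, handler) {
  for (const m of methods) routes.set(`${m} ${path}`, handler);
}

// GET-only shorthand, like router.get(path, handler).
function get(path, handler) {
  on(['GET'], path, handler);
}

// Find the handler registered for a method/path pair, or null.
function lookup(method, path) {
  return routes.get(`${method} ${path}`) || null;
}

// Register the same wildcard path both ways.
on(['GET', 'POST', 'PUT', 'PATCH', 'OPTIONS', 'DELETE'], '/all/*', () => 'all');
get('/get-only/*', () => 'get');

console.log(lookup('POST', '/all/*') !== null);      // true: POST is routed
console.log(lookup('POST', '/get-only/*') !== null); // false: only GET matches
```

The benchmark itself only issues GET requests, so both registrations serve the same traffic; the difference is the extra routes the first form keeps registered.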
Hi @jkyberneees,

Machine 1

```
wrk -t8 -c50 -d20s http://127.0.0.1:8080/service/hi
```

Machine 2

```
wrk -t8 -c50 -d20s http://127.0.0.1:8080/service/hi
```
So it seems there is something funny going on with http-proxy, which performs much better on Linux than on Mac. It puzzles me what it could be. At least on my systems we could say fast-proxy performs roughly 2x http-proxy, and 3x with undici. BTW, in my earlier tests I noticed that the performance of undici declines quite a bit with larger message bodies. Though still faster, in real-world use the advantage will be smaller than in the benchmarks we are running here. Is undici also piping the data through, or does it do some kind of store-and-forward?
Hi @Richie765, thanks for providing your benchmark results. I have added them as a reference to https://github.com/jkyberneees/nodejs-proxy-benchmarks. I will add other Linux-based benchmarks as soon as possible. I am closing this issue for now. Many thanks!
🐛 Bug Report
I reran the benchmarks; my results are way lower than yours.
To Reproduce
Rerun the benchmarks, with the following bugfix:
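The bugfix referred to here is the one quoted elsewhere in this thread, restricting the benchmark route to GET instead of all HTTP methods:

```diff
-router.on(['GET', 'POST', 'PUT', 'PATCH', 'OPTIONS', 'DELETE'], '/service/*', (req, res) => {
+router.get('/service/*', (req, res) => {
```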
Expected behavior
Somewhat similar benchmarks.
My benchmarks:
fast-proxy-undici/0http: Requests/sec 10259.59 (HTTP pipelining = 10)
fast-proxy/0http: Requests/sec 6773.80
fast-proxy/restana: Requests/sec 6460.21
fast-proxy-undici/0http: Requests/sec 9448.67 (HTTP pipelining = 1)
fastify-reply-from: Requests/sec 5635.55
http-proxy: Requests/sec 3105.40
As you can see, I'm getting at most a 3.3x performance gain instead of your 46.6x.
Without the above-mentioned bugfix, the first test clocks in at 39305.96 requests/sec (12x faster than http-proxy). Even then it is WAY slower compared to your benchmarks.
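For clarity, the speedup ratios quoted above follow directly from the reported requests/sec figures (higher is better); this snippet just recomputes them:

```javascript
// Reported throughput numbers from the benchmark runs above (requests/sec).
const httpProxy = 3105.40;
const fastProxyUndici = 10259.59;      // with the routing bugfix applied
const fastProxyUndiciBuggy = 39305.96; // without the bugfix

// Speedup relative to http-proxy in each configuration.
const fixedGain = fastProxyUndici / httpProxy;
const buggyGain = fastProxyUndiciBuggy / httpProxy;

console.log(fixedGain.toFixed(1)); // "3.3"
console.log(buggyGain.toFixed(1)); // "12.7"
```

So even the unfixed run gives roughly 12x, nowhere near the 46.6x claimed in the README.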
I don't know exactly what is going on, but I think it's fair to say that your benchmarks are wrong and misleading.
Your Environment