
Your benchmarks are way off #41

Closed
Richie765 opened this issue Dec 1, 2020 · 5 comments

Comments


Richie765 commented Dec 1, 2020

🐛 Bug Report

I reran the benchmarks, and my results are far lower than yours.

To Reproduce

Rerun the benchmarks with the following bugfix applied:

-router.on(['GET', 'POST', 'PUT', 'PATCH', 'OPTIONS', 'DELETE'], '/service/*', (req, res) => {
+router.get('/service/*', (req, res) => {

Then run:

wrk -t8 -c50 -d20s http://127.0.0.1:8080/service/hi
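
For context, the gateway under test looks roughly like this with the fix applied (a minimal sketch; the upstream base URL and ports are assumptions, not the exact demo configuration):

// Sketch of the benchmarked gateway with the fix applied. The upstream
// base URL and ports are placeholders, not the exact demo configuration.
const zero = require('0http')
const { proxy } = require('fast-proxy')({ base: 'http://127.0.0.1:3000' })

const { router, server } = zero()

// The fix: handle GET only instead of listening on every HTTP method.
router.get('/service/*', (req, res) => {
  proxy(req, res, req.url, {})
})

server.listen(8080)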

Expected behavior

Somewhat similar benchmarks.

My benchmarks:

fast-proxy-undici/0http: Requests/sec 10259.59 (HTTP pipelining = 10)
fast-proxy/0http: Requests/sec 6773.80
fast-proxy/restana: Requests/sec 6460.21
fast-proxy-undici/0http: Requests/sec 9448.67 (HTTP pipelining = 1)
fastify-reply-from: Requests/sec 5635.55
http-proxy: Requests/sec 3105.40

As you can see, I'm getting at most a 3.3x performance gain (10259.59 / 3105.40 ≈ 3.3) instead of your 46.6x.
Without the above-mentioned bugfix, the first test clocks in at 39305.96 requests/sec (about 12.7x faster than http-proxy). Even then it is WAY slower than your published benchmarks.

I don't know exactly what is going on, but I think it's fair to say that your benchmarks are wrong and misleading.

Your Environment

  • node version: 12
  • os: Linux
@jkyberneees
Collaborator

Hi @Richie765, thanks for your input. I agree: if the numbers are not accurate, they should be updated.
However, keep in mind that a benchmark is always an approximation, because it depends heavily on the runtime environment.

I will re-run the tests on my current hardware. Could you please also share the details of the machine you are using? There may be many reasons why your numbers are lower...

Regards

@Richie765
Author

Some variation can be expected, but I can't imagine this being just hardware architecture. Not to mention that 46x seems too good to be true.

My tests were run on an AMD Threadripper 1920X.

I just ran the tests on an i5-5300U; here are the results:

fast-proxy-undici/0http: Requests/sec 8769.27 (HTTP pipelining = 10)
fast-proxy/0http: Requests/sec 5944.27
fast-proxy/restana: Requests/sec 5593.57
fast-proxy-undici/0http: Requests/sec 7347.51 (HTTP pipelining = 1)
fastify-reply-from: Requests/sec 5243.24
http-proxy: Requests/sec 2378.97

@jkyberneees
Collaborator

Hi @Richie765, thanks again for your input and for challenging these numbers.

I have created a separate project with cleaner tests and no framework dependencies. Could you please try those on your hardware: https://github.com/jkyberneees/nodejs-proxy-benchmarks
(If you can submit the details of your run, that would be great, especially if you can run it under Linux.)

The current benchmark results in the README date from version v1.0.0 and an older Node.js version; it clearly seems we have had untracked performance regressions, either in this module or in Node.js itself.

Regarding the bug you mention:

-router.on(['GET', 'POST', 'PUT', 'PATCH', 'OPTIONS', 'DELETE'], '/service/*', (req, res) => {
+router.get('/service/*', (req, res) => {

The first line actually listens for all HTTP methods instead of only GET; here I was using the find-my-way router. However, this demo also points to an older version of 0http. I will clean up the demos and the benchmark results ASAP.
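
For clarity, with find-my-way the two registrations behave as follows (a minimal sketch; the handler bodies are placeholders):

const fmw = require('find-my-way')

// Variant A: registers the handler for every listed HTTP method.
const routerA = fmw()
routerA.on(
  ['GET', 'POST', 'PUT', 'PATCH', 'OPTIONS', 'DELETE'],
  '/service/*',
  (req, res) => { res.end('any method') }
)

// Variant B: shorthand for routerB.on('GET', ...), so GET only.
const routerB = fmw()
routerB.get('/service/*', (req, res) => { res.end('GET only') })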

@Richie765
Author

Hi @jkyberneees,
Thanks for looking into it. I ran your new benchmarks on two of my machines; I think the results are interesting:

Machine 1

  • ThinkPad T450 (i5-5300U, 2.3 GHz dual core)
  • Node 14.15.0
  • Ubuntu Focal Fossa

wrk -t8 -c50 -d20s http://127.0.0.1:8080/service/hi

  • fast-proxy (^1.7.0) - 5494.47
  • fast-proxy + undici (^1.7.0) - 8062.07
  • http-proxy (^1.18.1) - 2583.61

Machine 2

  • Virtual machine: QEMU, 16 CPUs
  • Host machine: Threadripper 1920X, 12 cores @ 3.5 GHz
  • Node 14.15.1
  • Ubuntu Focal Fossa

wrk -t8 -c50 -d20s http://127.0.0.1:8080/service/hi

  • fast-proxy (^1.7.0) - 6456.90
  • fast-proxy + undici (^1.7.0) - 9118.71
  • http-proxy (^1.18.1) - 3344.76

So it seems there is something funny going on with http-proxy, which performs much better on Linux than on Mac. I'm puzzled as to what it could be.

At least on my systems, we could say fast-proxy performs roughly 2x as fast as http-proxy, and roughly 3x with undici.

BTW, in my earlier tests I noticed that the performance of undici declines quite a bit with larger message bodies. Though still faster, in real-world use the advantage will be smaller than in the benchmarks we are running here. Is undici also piping the data through, or does it do some kind of store-and-forward?
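
To illustrate the distinction I mean (a generic Node.js sketch, not a claim about undici's internals):

// Piping: forward each chunk to the client as it arrives, so memory
// use stays flat regardless of body size.
function pipeThrough (upstreamRes, clientRes) {
  upstreamRes.pipe(clientRes)
}

// Store-and-forward: buffer the entire body before responding, so
// memory use and time-to-first-byte grow with body size.
function storeAndForward (upstreamRes, clientRes) {
  const chunks = []
  upstreamRes.on('data', (chunk) => chunks.push(chunk))
  upstreamRes.on('end', () => clientRes.end(Buffer.concat(chunks)))
}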

@jkyberneees
Collaborator

Hi @Richie765, thanks for providing your benchmark results. I have added them as a reference to https://github.com/jkyberneees/nodejs-proxy-benchmarks. I will add other Linux-based benchmarks as soon as possible.

The undici setup we are using in the demos makes use of connection pools as well as HTTP pipelining. The latter is a technique that is most likely not applicable in distributed systems; however, it is still a useful way to exercise the module's performance.
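
For reference, such a setup can be configured roughly like this (a sketch using undici's current promise-based Pool API; the 2020-era API was callback-based, and the origin and tuning values here are placeholders):

const { Pool } = require('undici')

// Placeholder origin and tuning values, not the demo's exact settings.
const pool = new Pool('http://127.0.0.1:3000', {
  connections: 10, // keep-alive sockets shared across requests
  pipelining: 10   // in-flight requests allowed per socket (HTTP pipelining)
})

async function hit () {
  const { statusCode, body } = await pool.request({
    method: 'GET',
    path: '/service/hi'
  })
  body.resume() // drain the body so the socket can be reused
  return statusCode
}

hit().then(console.log).catch(console.error)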

I am closing this issue for now.

Many thanks!

jkyberneees mentioned this issue Dec 8, 2020