
performance regression #26229

Closed
Trott opened this issue Feb 21, 2019 · 12 comments
Assignees
Labels
regression Issues related to regressions.

Comments


Trott commented Feb 21, 2019

Unfortunately, it appears that one of the commits in #21573 (d345b0d) has resulted in a significant performance regression. test/pummel/test-crypto-timing-safe-equal-benchmarks.js now takes about twice as long to run and is timing out in the nightly CI. This may be an edge-case thing that we don't care about, but I don't actually know.

$ time ./node-d345b0d test/pummel/test-crypto-timing-safe-equal-benchmarks.js 

real	0m55.449s
user	0m53.389s
sys	0m8.196s
$ time ./node-8375c706 test/pummel/test-crypto-timing-safe-equal-benchmarks.js 

real	0m28.996s
user	0m36.509s
sys	0m6.947s
$

Refs: #26216

/ping @ryzokuken

Trott added the regression label Feb 21, 2019

jasnell commented Feb 21, 2019

If necessary, we can do a quick revert.


Trott commented Feb 21, 2019

Running the module benchmarks with n=500:

$ node benchmark/compare.js --old ./node-8375c706 --new ./node-d345b0d --set n=500 module > compare.csv
[00:04:16|% 100| 1/1 files | 60/60 runs | 4/4 configs]: Done
$ cat compare.csv | Rscript benchmark/compare.R 
                                                                 confidence improvement accuracy (*)    (**)   (***)
 module/module-loader.js useCache='false' fullPath='false' n=500                 2.30 %       ±7.01%  ±9.36% ±12.22%
 module/module-loader.js useCache='false' fullPath='true' n=500          **     11.13 %       ±6.60%  ±8.79% ±11.47%
 module/module-loader.js useCache='true' fullPath='false' n=500                 10.26 %      ±17.67% ±23.51% ±30.60%
 module/module-loader.js useCache='true' fullPath='true' n=500          ***    -38.15 %      ±10.97% ±14.60% ±19.01%

Be aware that when doing many comparisons the risk of a false-positive
result increases. In this case there are 4 comparisons, you can thus
expect the following amount of false-positive results:
  0.20 false positives, when considering a   5% risk acceptance (*, **, ***),
  0.04 false positives, when considering a   1% risk acceptance (**, ***),
  0.00 false positives, when considering a 0.1% risk acceptance (***)
$ 


jdalton commented Feb 21, 2019

I'm seeing a regression (to a lesser extent) in my own non-Node core use related to vm.compileFunction() in my master branch too. @hashseed Do you have any insight into possible implementation red flags or improvements for vm.compileFunction()?


hashseed commented Feb 21, 2019

TL;DR: v8::ScriptCompiler::CompileFunctionInContext does not use the compilation cache.

V8 uses a compilation cache to speed up parsing/compiling the same source string. This cache is only keyed on the source string, and transparent to the embedder. We cannot use this cache for v8::ScriptCompiler::CompileFunctionInContext because the parameters passed to it affect how we parse/compile.

Note that this compilation cache is not to be confused with the code cache that is stored on disk and needs to be managed by the embedder. v8::ScriptCompiler::CompileFunctionInContext does support the code cache, but the embedder must make sure that the parameters are the same to ensure consistent results.

This particular test benefits a lot from the compilation cache because it compiles the same script over and over again. With a pre-regression build I get this (reducing numTrials from 1e5 to 1e4):

$ time out/Release/node test/pummel/test-crypto-timing-safe-equal-benchmarks.js

real	0m1.702s
user	0m1.985s
sys	0m0.401s

$ time out/Release/node --no-compilation-cache test/pummel/test-crypto-timing-safe-equal-benchmarks.js

real	0m4.488s
user	0m4.742s
sys	0m0.564s

I don't think this test represents real-world conditions, so how about we just fix this test to no longer time out?


Trott commented Feb 21, 2019

I don't think this test represents real-world conditions, so how about we just fix this test to no longer time out?

If we decide that's the approach to take, we'll probably want to adjust the benchmark too? (See #26229 (comment).)


Trott commented Feb 21, 2019

Making the test not time out but still be reliable may be rather challenging. See #8744.

If we decide this is the way to go, perhaps we can split the test file into two tests: One that runs the timing-safe test and one that runs the not-timing-safe Buffer stuff as the smoke test.

@bnoordhuis @not-an-aardvark


Trott commented Feb 21, 2019

Possible fix for the test: #26237


devsnek commented Feb 21, 2019

Just to sum this up: it's only a regression for people who depended on V8 caching the compilation of a source string when they repeatedly cleared the require cache?

If that's correct, this regression seems absolutely fine to me. Obviously if there's some way to fix it we should, but I don't think this is a serious problem.

@hashseed

Yup. That's why I don't think we need to fix anything in the implementation.


Trott commented Feb 21, 2019

If this is a so-called nothingburger, perhaps we should fix the benchmark to not report this as a problem? I'm not sure if that means removing the one variant in the benchmark or if it would mean adjusting the benchmark. See #26229 (comment).


Trott commented Feb 21, 2019

I'm going to close this but feel free to re-open or comment if anyone thinks that's premature. Thanks, everyone.

Trott closed this as completed Feb 21, 2019
Trott added a commit to Trott/io.js that referenced this issue Feb 21, 2019
Resetting `require.cache` to `Object.create(null)` each time rather than
deleting the specific key results in a 10x improvement in running time.

Fixes: nodejs#25984
Refs: nodejs#26229
Trott added a commit to Trott/io.js that referenced this issue Feb 22, 2019
Using `eval()` rather than `require()`'ing a fixture and deleting it
from the cache results in a roughly 10x improvement in running time.

Fixes: nodejs#25984
Refs: nodejs#26229

PR-URL: nodejs#26237
Reviewed-By: Joyee Cheung <[email protected]>
Reviewed-By: Colin Ihrig <[email protected]>
Reviewed-By: Anna Henningsen <[email protected]>
Reviewed-By: Yang Guo <[email protected]>
Reviewed-By: Ruben Bridgewater <[email protected]>
Reviewed-By: Teddy Katz <[email protected]>
Reviewed-By: Richard Lau <[email protected]>
addaleax pushed a commit that referenced this issue Feb 25, 2019
Using `eval()` rather than `require()`'ing a fixture and deleting it
from the cache results in a roughly 10x improvement in running time.

Fixes: #25984
Refs: #26229

PR-URL: #26237
Reviewed-By: Joyee Cheung <[email protected]>
Reviewed-By: Colin Ihrig <[email protected]>
Reviewed-By: Anna Henningsen <[email protected]>
Reviewed-By: Yang Guo <[email protected]>
Reviewed-By: Ruben Bridgewater <[email protected]>
Reviewed-By: Teddy Katz <[email protected]>
Reviewed-By: Richard Lau <[email protected]>
rvagg pushed a commit that referenced this issue Feb 28, 2019
Using `eval()` rather than `require()`'ing a fixture and deleting it
from the cache results in a roughly 10x improvement in running time.

Fixes: #25984
Refs: #26229

PR-URL: #26237
Reviewed-By: Joyee Cheung <[email protected]>
Reviewed-By: Colin Ihrig <[email protected]>
Reviewed-By: Anna Henningsen <[email protected]>
Reviewed-By: Yang Guo <[email protected]>
Reviewed-By: Ruben Bridgewater <[email protected]>
Reviewed-By: Teddy Katz <[email protected]>
Reviewed-By: Richard Lau <[email protected]>

SimenB commented Aug 15, 2020

v8::ScriptCompiler::CompileFunctionInContext does support the code cache, but the embedder must make sure that the parameters are the same to ensure consistent results.

@hashseed I realize this is an old comment, but is this something that we can control from JS? "Embedder" sounds like a lower level than vm's API.

Using Script instead of compileFunction seems to improve performance in Jest by 25-40% (see jestjs/jest#9457 (comment)), but if we can cache compilation that might bring them in line (if I understand your comments correctly).

Does it relate to #24069?


I can of course open up a new issue if that's better 👍
