
Watch mode ~2 times slower than regular test run. #7341

Closed
alloy opened this issue Nov 7, 2018 · 4 comments · Fixed by #8201
alloy commented Nov 7, 2018

🐛 Bug Report

Running all tests in watch mode is roughly twice as slow as running the same tests outside of watch mode.

This is with the same forced number of workers. The CLI docs state that the default is the number of cores the host machine has; in practice, however, a normal run defaults to N-1 workers, presumably to leave headroom for the parent process and other processes. Watch mode defaults to N-2 workers, which is less clear to me: naively, I'd expect the master process to do roughly the same amount of work as in a normal run.
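The defaults described above can be sketched as a tiny helper (an illustration of the observed behavior only; `defaultMaxWorkers` is a hypothetical name, not Jest's actual implementation):

```javascript
// Sketch of the observed default worker counts (hypothetical, not Jest's code):
// a normal run appears to use N-1 workers, watch mode N-2, floored at 1.
function defaultMaxWorkers(numCpus, isWatch) {
  return Math.max(numCpus - (isWatch ? 2 : 1), 1);
}

console.log(defaultMaxWorkers(4, false)); // 3 workers on a 4-core machine
console.log(defaultMaxWorkers(4, true));  // 2 workers in watch mode
```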

There are other reports elsewhere related to running tests in watch mode:

However, trying out older versions did not turn up a regression point for me; the slowdown is present as far back as 21.2.1.

Jest 21.2.1

Note that these runs appear faster than those of the newer versions, but only because of (snapshot) test failures. The difference between the watch and normal runs is still ~2x.

$ yarn jest -w 3
Test Suites: 12 failed, 2 skipped, 152 passed, 164 of 166 total
Tests:       15 failed, 11 skipped, 825 passed, 851 total
Snapshots:   15 failed, 88 passed, 103 total
Time:        32.847s, estimated 55s
$ yarn jest --watch -w 3
Test Suites: 12 failed, 2 skipped, 152 passed, 164 of 166 total
Tests:       15 failed, 11 skipped, 825 passed, 851 total
Snapshots:   15 failed, 88 passed, 103 total
Time:        56.746s

Jest 22.4.4

$ yarn jest -w 3
Test Suites: 8 failed, 2 skipped, 156 passed, 164 of 166 total
Tests:       11 failed, 11 skipped, 829 passed, 851 total
Snapshots:   11 failed, 92 passed, 103 total
Time:        45.082s, estimated 78s
$ yarn jest --watch -w 3
Test Suites: 8 failed, 2 skipped, 156 passed, 164 of 166 total
Tests:       11 failed, 11 skipped, 829 passed, 851 total
Snapshots:   11 failed, 92 passed, 103 total
Time:        80.825s

Jest 23.0.0

$ yarn jest -w 3
Test Suites: 2 skipped, 164 passed, 164 of 166 total
Tests:       11 skipped, 840 passed, 851 total
Snapshots:   103 passed, 103 total
Time:        47.863s, estimated 63s
$ yarn jest --watch -w 3
Test Suites: 2 skipped, 164 passed, 164 of 166 total
Tests:       11 skipped, 840 passed, 851 total
Snapshots:   103 passed, 103 total
Time:        87.321s

Jest 23.1.0

$ yarn jest -w 3
Test Suites: 2 skipped, 164 passed, 164 of 166 total
Tests:       11 skipped, 840 passed, 851 total
Snapshots:   103 passed, 103 total
Time:        48.739s
$ yarn jest --watch -w 3
Test Suites: 2 skipped, 164 passed, 164 of 166 total
Tests:       11 skipped, 840 passed, 851 total
Snapshots:   103 passed, 103 total
Time:        85.081s

Jest 23.2.0

$ yarn jest -w 3
Test Suites: 2 skipped, 164 passed, 164 of 166 total
Tests:       11 skipped, 840 passed, 851 total
Snapshots:   103 passed, 103 total
Time:        49.722s
$ yarn jest --watch -w 3
Test Suites: 2 skipped, 164 passed, 164 of 166 total
Tests:       11 skipped, 840 passed, 851 total
Snapshots:   103 passed, 103 total
Time:        82.496s

Jest 23.6.0

$ yarn jest -w 3
Test Suites: 2 skipped, 164 passed, 164 of 166 total
Tests:       11 skipped, 840 passed, 851 total
Snapshots:   103 passed, 103 total
Time:        45.78s, estimated 63s
$ yarn jest --watch -w 3
Test Suites: 2 skipped, 164 passed, 164 of 166 total
Tests:       11 skipped, 840 passed, 851 total
Snapshots:   103 passed, 103 total
Time:        81.68s

To Reproduce

Steps to reproduce the behavior:

$ git clone https://github.com/artsy/reaction.git
$ cd reaction
$ yarn install

Then make sure to run tests with and without watch mode using the same number of workers.

$ sysctl -n hw.ncpu
4
$ yarn jest -w 3
$ yarn jest --watch -w 3
(then press `a` to run all tests)

Expected behavior

  • I expect the number of workers between normal runs and watch mode to be equal.
  • I expect the CLI doc for --maxWorkers to reflect the actual defaults.
  • I expect a run of all tests in watch mode vs normal runs to be much closer to each other in run time.

Link to repl or repo (highly encouraged)

Not a minimal repo, but a real one that shows real numbers: https://github.com/artsy/reaction

Run npx envinfo --preset jest

  System:
    OS: macOS High Sierra 10.13.6
    CPU: x64 Intel(R) Core(TM) i5-7600K CPU @ 3.80GHz
  Binaries:
    Node: 8.12.0 - ~/.nvm/versions/node/v8.12.0/bin/node
    Yarn: 1.10.1 - ~/.nvm/versions/node/v8.12.0/bin/yarn
    npm: 6.4.1 - ~/.nvm/versions/node/v8.12.0/bin/npm

thymikee commented Jan 8, 2019

@SimenB @rogeliog @rubennorte any ideas what's causing this? Checked just now again and it also affects our master branch.


SimenB commented Mar 17, 2019

Totally missed this.

I expect the number of workers between normal runs and watch mode to be equal.

Agreed, not sure why it's not

I expect the CLI doc for --maxWorkers to reflect the actual defaults.

We can mention it's n - 1. The reason for this is that we want your system to stay responsive: tests often take up 100% of a core, which would grind your machine to a halt if Jest used every core. PR welcome! 🙂

I expect a run of all tests in watch mode vs normal runs to be much closer to each other in run time

Yeah, totally. Or at least minimal overhead. The slowdown seems worse than one fewer core would indicate, doesn't it?
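For a rough sense of scale (my arithmetic, not from the thread): if runtime scaled inversely with worker count, dropping from 3 to 2 workers would predict only a 1.5x slowdown, while the 23.6.0 numbers in the report (45.78s vs 81.68s) show ~1.8x even though both runs were pinned to 3 workers:

```javascript
// Back-of-envelope check: predicted slowdown from one fewer worker vs the
// measured watch-mode slowdown in the 23.6.0 runs (45.78s vs 81.68s).
const predicted = 3 / 2;           // 3 workers -> 2 workers
const observed = 81.68 / 45.78;    // watch time / normal time
console.log(predicted.toFixed(2)); // "1.50"
console.log(observed.toFixed(2));  // "1.78"
```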


We'll be focusing on profiling Jest at the summit; hopefully we can also make some perf improvements, or at least understand why things perform the way they do.

@scotthovestadt scotthovestadt self-assigned this Mar 24, 2019
@scotthovestadt
Contributor

I'm looking into this. Will resolve.

thymikee pushed a commit that referenced this issue Mar 24, 2019
## Summary

Resolves #7341 

This PR dramatically improves watch mode performance, bringing it in line with single-run mode performance. It accomplishes that by:
- No longer initializing a new `ModuleMap` and `Resolver` for every test in watch mode; those objects are now initialized once, when the worker is set up.
- In the main thread, caching the conversion of `ModuleMap` to a JSON-friendly object.
- Allowing watch mode to use the same number of CPUs as single-run mode.
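The second bullet can be illustrated with a small memoization sketch (all names here are illustrative, not Jest's actual internals): serialize a module-map-like object once, and reuse the cached string until the object itself is replaced.

```javascript
// Hypothetical sketch of "cache the JSON-friendly conversion": the expensive
// serialization runs once per object, keyed by object identity.
const serializedCache = new WeakMap();

function serializeOnce(moduleMap) {
  let json = serializedCache.get(moduleMap);
  if (json === undefined) {
    json = JSON.stringify(moduleMap); // expensive conversion, done once
    serializedCache.set(moduleMap, json);
  }
  return json;
}
```

A `WeakMap` keyed on the object means the cache entry is dropped automatically when a changed haste map produces a new object.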

## Benchmarks

I benchmarked against Jest's own test suite, excluding the e2e tests, which don't provide good signal because they individually take a long time (so per-test startup cost is marginal). The numbers show that watch mode previously added an extra ~35% of runtime to the tests, but that overhead has now been reduced to almost nothing.

Watch mode should now just be paying a one-time initial cost for each worker when the haste map changes instead of paying that same cost for _every_ test run.
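A minimal sketch of that worker-side change (hypothetical names, not Jest's actual API): build the expensive per-worker state lazily, and rebuild it only when the haste map version changes.

```javascript
// Hypothetical per-worker cache: pay the setup cost once, and again only
// when the haste map changes, instead of on every test run.
let cachedResolver = null;
let cachedHasteVersion = null;

function getResolver(hasteVersion, buildResolver) {
  if (cachedResolver === null || cachedHasteVersion !== hasteVersion) {
    cachedResolver = buildResolver(); // expensive: first use or haste change
    cachedHasteVersion = hasteVersion;
  }
  return cachedResolver;
}
```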

### branch: master

`yarn jest ./packages`
Run time: 15.091s

`yarn jest ./packages --watch`
Run time: 23.234s

### branch: watch-performance

`yarn jest ./packages`
Run time: 14.973s

`yarn jest ./packages --watch`
Run time: 15.196s


## Test plan

- All tests pass.
- Benchmarked to verify the performance wins.
- Verified that when the haste map is updated, the update is propagated out to all workers.
@github-actions

This issue has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
Please note this issue tracker is not a help forum. We recommend using StackOverflow or our discord channel for questions.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators May 12, 2021