Ability to run tests within a file in a random order #4386
To isolate tests in Jest, you can use
Right, you'll have to use that. If you create your own ES module plugin that allows imports within functions (instead of only at the top level), you can use ES imports there as well. It's a small change to the flavor of Babel that you are using, and it would be limited to tests.
In general, this would be a very useful feature that can root out many issues. I don't think the proposed workaround is realistic for existing, larger code bases, so +1 for this feature. Edit: I haven't tested this myself yet, but dynamic imports should be possible with babel-plugin-dynamic-import-node, as shown here: #2567 (comment)
@cpojer Exposing the Jasmine 2
IIRC the randomizing implementation was ripped out of
@SimenB Do you know if that is something that would be considered to be put back in (i.e. would it be worth me trying to do a PR for)?
@theblang
I wonder if it makes sense as a feature. We have guarantees today about execution order within the files, and I highly doubt we'll break those, so it'd have to be opt-in, yeah.
Maybe there are other arguments I didn't consider, but I think that in the case of order-dependent tests, randomization makes the situation worse. I'd rather notice the inter-dependency when reordering some tests than have them run in an order that happens to pass a few times locally and on CI, and then, once it's in master, randomly break on the 5th run. The proper solution to discovering tests that depend on order would be to run all possible orders, which is obviously not feasible.
As an opt-in feature it would still be hugely helpful! Since, yes, running all possible orders is not feasible, I'd be curious to hear what other strategies folks have, besides randomization, for discovering these bugs in their tests.
@felixc What I'm about to suggest isn't very pertinent to the ticket, since suites are already run in a random order, but one thing we do with our screenshot tests (which utilize Puppeteer and jest-image-snapshot) is close and reopen Puppeteer between each suite run. Then at least you are getting clean client state between suite runs. Before Puppeteer we were using CasperJS without a proper test runner and would use the same instance for all the suites, which could get really annoying when a test change in one suite affected an entirely different suite. That happened a fair amount, since the suite order wasn't random either and client state could linger between suites.

@jeysal The reason that randomization helps is that the screenshot test causing the problem will be discovered much sooner, potentially even during development of the feature branch the problematic test is written in, if your CI tests run on branches. Then the test can hopefully be addressed closer to the writing of the feature it was initially for, with the various state and code involved still fresh in mind. As it stands right now, intermittent failures can accumulate for a long time and then creep up on someone down the road when they make a change that somehow disturbs the order and exposes the problematic test.

I definitely want to highlight the
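A minimal sketch of that per-suite isolation, assuming plain Puppeteer plus jest-image-snapshot; the URL and test body are placeholders:

// Relaunch the browser around each test file so no client state
// (cookies, localStorage, ...) leaks between suites.
const puppeteer = require('puppeteer');
const { toMatchImageSnapshot } = require('jest-image-snapshot');

expect.extend({ toMatchImageSnapshot });

let browser;
let page;

beforeAll(async () => {
  browser = await puppeteer.launch();
  page = await browser.newPage();
});

afterAll(async () => {
  await browser.close();
});

test('landing page looks the same', async () => {
  await page.goto('https://example.com'); // placeholder URL
  expect(await page.screenshot()).toMatchImageSnapshot();
});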
test.each(randomize([
  ['title 1', () => { setup(); return screenshot(); }],
  ['title 2', () => { setup2(); return screenshot(); }],
]))('%s', (_title, func) => func());

Written in freehand on a phone, so excuse syntax/API errors. Would something like that work for you?
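The randomize helper in that sketch isn't something Jest ships; a hypothetical userland version could just be a Fisher-Yates shuffle over the test table, e.g.:

// Hypothetical helper for the sketch above: returns a shuffled copy of the
// test.each table so the rows run in a random order (Fisher-Yates).
function randomize(rows) {
  const shuffled = [...rows];
  for (let i = shuffled.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [shuffled[i], shuffled[j]] = [shuffled[j], shuffled[i]];
  }
  return shuffled;
}

As written this gives no way to replay a failing order; the seed ideas discussed further down would address that.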
+1 for the ability to run tests in a random order. I understand best practices like clearing mocks between tests and such, but I've run into a lot of test pollution, and even tests accidentally passing due to certain data being created (persistent stores like local storage come to mind). Randomizing tests helps engineers not be bitten by test number 3 creating data/state that test 1034 relies upon, with no real way of tracking down what's going on.

I'd be interested in taking a stab at a PR, with the idea that the whole thing would be opt-in and wouldn't break any existing contracts/tests. I'd also like to propose that if a random run were accepted, it also ship with a deterministic seed value, so you could run a bisect over the suite to figure out the minimum number of tests that reproduce the problem.
Likewise. RSpec in Ruby has randomized test order, and it has helped us find a lot of tests that inadvertently impacted others for one reason or another. When I go about refactoring Jest specs I start to come across these sorts of issues, where a test is passing when it shouldn't be due to previous tests mistakenly modifying some global state. When RSpec runs randomized, it prints out the seed, which you can use to reproduce an issue and fix it.
@cpojer @jeysal @SimenB Just wanted to see if there's a possibility of this feature being reconsidered after the amount of interest that has been expressed, and the benefits it could bring to the visual and integration testing domains (for which some nice tools exist that make use of Jest, like jest-image-snapshot from AmEx).
I am still of the opinion that, even as an opt-in, this would do more harm than good across all the users that would enable it, based on the reason I stated above.
@jeysal I don't understand how randomising the run order would make things more fragile - on the contrary, doesn't it force our tests to be more robust, ensuring they don't break when the run order changes? The key issue IMO is reproducibility - i.e., when the tests do fail due to ordering, there needs to be a way to reproduce that order. @alex-hall's suggestion hits the nail on the head there.

My 2c: I would imagine it working like this: a new seed option. The seed would be auto-generated if not provided, and if the tests fail, the seed value used would be part of the output - so the failures could easily be reproduced.
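As an illustration of that workflow (nothing here is an existing Jest option; the environment variable name and the PRNG are assumptions), a userland shuffle could take its randomness from a logged seed:

// Pick up a seed from the environment or generate one, and print it so a
// failing order can be replayed with JEST_RANDOM_SEED=<seed> (hypothetical name).
const seed = Number(process.env.JEST_RANDOM_SEED) || Date.now();
console.log(`random test order seed: ${seed}`);

// Small seeded PRNG (mulberry32); used in place of Math.random in the shuffle
// so the same seed always produces the same order.
function mulberry32(a) {
  return function () {
    let t = (a += 0x6d2b79f5);
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

const random = mulberry32(seed);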
If we were to ship this, it would 100% be with a seed that can be set and that'd be prominently printed if tests fail. I think it's far less likely to cause hard-to-track-down errors than, say,

@alex-hall With no promises on it being merged (although I personally would use such a feature at work 😅), are you still open to taking a stab at this? At least an initial exploratory try at an implementation might reveal whether it slots nicely into Circus or not.
@jeysal You sound like you take this as some experimental idea. It's not: it's been around as practically a standard in many other frameworks, and in most places I've seen it's considered a code smell not to run tests in random order. There's no question about the validity of this feature. To address your specific concern, most order-dependent tests will fail right away or within the first few tries, and the devs will learn not to write them anymore. In the rarer cases, it can root out subtle edge cases and bugs. I was never unhappy to have found them.
@SimenB I'll do some Jest archaeology and take a look this weekend.
Wonderful, thanks! Feel free to open an early PR with hacky WIP code for feedback as you go. 🙂
This would be an incredibly useful feature! I also wouldn't mind if this could be implemented by simply running the tests within a module concurrently, because that would help expose other kinds of problems as well.
Wondering if random order has been worked on? I couldn't find any reference to it in the jest-circus code.
@alex-hall did you find some time to start something (that might be continued by others)? I am currently stuck with a jest-fetch-mock-based test that runs only in one particular order, which drives me crazy.
@SimenB I don't see a PR by alex-hall on this. It was pretty easy making use of

I'm not sure if this repo wants to make
@jhwang98
I absolutely agree. I've added a pseudorandom number generator and a shuffling algorithm to
https://github.com/facebook/jest/releases/tag/v29.2.0 has shipped with support for seeding. While randomization is not included out of the box (for now, at least), this can be used to implement a solid solution in userland (I think).
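A rough userland sketch on Jest 29.2+, assuming the seed set via --seed is exposed to tests through jest.getSeed() as described in those release notes (the shuffle, PRNG, and test table are illustrative):

// jest.getSeed() returns the run's seed, so shuffling with a PRNG seeded from
// it keeps the order reproducible: re-running with the same --seed replays it.
function mulberry32(a) {
  return function () {
    let t = (a += 0x6d2b79f5);
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

const random = mulberry32(jest.getSeed());

function shuffle(rows) {
  const shuffled = [...rows];
  for (let i = shuffled.length - 1; i > 0; i--) {
    const j = Math.floor(random() * (i + 1));
    [shuffled[i], shuffled[j]] = [shuffled[j], shuffled[i]];
  }
  return shuffled;
}

test.each(shuffle([
  ['first case', () => expect(1 + 1).toBe(2)],
  ['second case', () => expect([]).toHaveLength(0)],
]))('%s', (_name, run) => run());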
Thanks @SimenB!
@pke I believe you can add --randomize to the CLI, so e.g. if you have a test script in your package.json you can append the flag to it, or just pass it when running Jest directly.
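For instance (the file contents below are an assumed illustration, not quoted from the thread), the config-file form of the same switch would be:

// jest.config.js - equivalent to passing --randomize on the CLI,
// e.g. "test": "jest --randomize" in package.json or npx jest --randomize.
module.exports = {
  randomize: true,
};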
This issue has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
Do you want to request a feature or report a bug?
Feature
There have been cases where a test passes, but when reordering the test within a file it actually fails. To prevent this false positive, it would be beneficial to randomize the order of tests within a specific file.
Jasmine has enabled this (https://jasmine.github.io/2.8/node.html#section-22); would it be possible to expose a configuration to specify the order, or just randomize it by default?