
Conditional testing for platform-specific tests #380

Closed
emmacasolin opened this issue Jun 9, 2022 · 24 comments · Fixed by #381
Labels: development (Standard development), enhancement (New feature or request)

@emmacasolin (Contributor) commented Jun 9, 2022

Specification

With our new CI/CD, which runs tests across three different platforms (Linux, MacOS, and Windows), we need a way to ensure that tests designed to work only on specific platforms do not run on the others.

Additional context

This will involve looking at the jest CLI options for running tests so that they can be incorporated into the CI/CD jobs that run tests on each platform.

Tasks

  1. Research jest CLI options
  2. Test options locally
  3. Incorporate into CI/CD
emmacasolin added the enhancement (New feature or request) and development (Standard development) labels Jun 9, 2022
emmacasolin self-assigned this Jun 9, 2022
@emmacasolin (Contributor, Author) commented:

There seem to be two ways that we could achieve this:

  1. At the file level, i.e. only running specific test files/directories on certain platforms
  2. At the individual test level within files, i.e. the ability to specify a platform inside describe or even test blocks

Option 1 seems easier to implement, whereas option 2 is more flexible.

@emmacasolin (Contributor, Author) commented:

Potential options we could use:

  • --filter=<file> - we could use either a single filter that checks process.platform and switches cases based on it, or three separate files each exporting a filter for one platform (a rough sketch follows this list)
    • I believe this can only filter test paths rather than individual tests within a single file
    • The filter must return the allowed paths, but far more tests would be allowed than disallowed, so it would be more convenient to specify which tests shouldn't run rather than which should
  • --ignoreProjects <project1> ... <projectN> - if it's possible to give the displayName attribute to just certain files/directories then this may work
    • Like --filter, this most likely would not work at the individual test level
  • --passWithNoTests - this option should allow test files to pass even if none of their tests can run on a given platform, but this theory should be checked
    • This would allow us to filter out individual tests by using process.platform to check the platform inside individual describe blocks or tests
  • --testNamePattern=<regex> - if we can use this to exclude tests rather than include them then this might be exactly what we need
    • Might need to be used with --passWithNoTests
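
For the --filter route, the exported module is (as far as I can tell from the jest docs) handed the full list of test paths and returns an object with a filtered array of { test } entries. A rough sketch, assuming a hypothetical convention where platform-only suites live under tests/linux/, tests/macos/ and tests/windows/ (the file name tests/platformFilter.ts is made up, and it would need to be compiled or loaded through a transform before jest can use it):

import * as path from 'path';

// Hypothetical mapping from process.platform values to platform-only test directories
const platformDirs: Record<string, string> = {
  linux: 'linux',
  darwin: 'macos',
  win32: 'windows',
};

export default function platformFilter(testPaths: Array<string>) {
  const allowedDir = platformDirs[process.platform];
  const filtered = testPaths
    .filter((testPath) => {
      // Keep paths that aren't platform-specific, plus those for the current platform
      const dir = Object.values(platformDirs).find((d) =>
        testPath.includes(`${path.sep}${d}${path.sep}`),
      );
      return dir === undefined || dir === allowedDir;
    })
    .map((test) => ({ test }));
  return { filtered };
}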

@emmacasolin (Contributor, Author) commented:

The problem with using --testNamePattern is that it may accidentally match unintended tests if the regex isn't strict enough. The safer option would be to exclude entire directories using --testPathIgnorePatterns=<regex>|[array], though doing this means that certain test files/directories would be completely excluded from the test output, which might not be what we want. --testNamePattern runs all test files but skips the tests that don't match the supplied regex, and it doesn't require the use of --passWithNoTests.
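
For comparison, the same directory-based exclusion could be computed in the jest config rather than passed on the command line. A minimal sketch, again assuming a hypothetical layout with platform-only suites under tests/linux/, tests/macos/ and tests/windows/ (not our actual layout):

import type { Config } from '@jest/types';

// Hypothetical platform-only test directories, keyed by process.platform values
const platformDirs: Record<string, string> = {
  linux: 'tests/linux',
  darwin: 'tests/macos',
  win32: 'tests/windows',
};

// Ignore every platform directory except the one matching the current platform
const ignored = Object.entries(platformDirs)
  .filter(([platform]) => platform !== process.platform)
  .map(([, dir]) => `<rootDir>/${dir}/`);

const config: Config.InitialOptions = {
  testPathIgnorePatterns: ['/node_modules/', ...ignored],
};

export default config;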

@CMCDragonkai (Member) commented:

I have a simpler suggestion. Just write a conditional test using describeCondition. You write your own describeCondition function:

// `condition` is any boolean evaluated when the test module loads
function describeCondition(name, f) {
  if (condition) {
    describe(name, f);
  }
}

That's what I suggest you try first to see if it works; if it does, it's the fastest way to do this.

@CMCDragonkai (Member) commented:

If it works, then you can proceed with things like:

describeIf(condition, name, () => {
  testIf(condition, name, ...);
});

Where describeIf and testIf are defined in tests/utils.ts.

@CMCDragonkai (Member) commented:

The condition could just be true or false, or it could be a predicate that has to be executed. But I reckon it's the same.

So...

describeIf(process.env['BLAH'] === 'BLAH', 'some test', () => {

});

The thing is, how does jest treat test files with no activated describe/tests...?

@CMCDragonkai (Member) commented:

Maybe the --passWithNoTests option you mentioned would be enough if combined with describeIf and testIf?

@CMCDragonkai (Member) commented:

The issue with --filter and --testNamePattern is that they ultimately rely on either directory paths or test file names. We would have to specially mark certain file directories or test files as being for a different platform.

This means we end up having to change the structure of our test directories, or have to remember which tests are for which platform.

On the other hand, this may make it easier to do test load balancing, because otherwise it's possible for a test scheduler to schedule a test that won't run on a particular platform. But this will be a minor problem.

@CMCDragonkai (Member) commented:

Having it programmatic via describeIf and testIf is less likely to result in accidental executions, since the conditional logic is embedded in the code. Using file paths, by contrast, can be broken by any change to the test directory structure or file names, and we end up needing to configure it in package.json or jest.config.js, which spreads the logic around.

@emmacasolin (Contributor, Author) commented:

The issue with --filter and --testNamePattern is that they ultimately rely on either directory paths or test file names. We would have to specially mark certain file directories or test files as being for a different platform.

This means we end up having to change the structure of our test directories, or have to remember which tests are for which platform.

On the other hand, this may make it easier to do test load balancing, because otherwise it's possible for a test scheduler to schedule a test that won't run on a particular platform. But this will be a minor problem.

Fair enough - I'll look into that.

On the topic of load balancing though, this --shard option looks interesting and might give us more control over how the tests get split: https://jestjs.io/docs/cli#--shard

@CMCDragonkai (Member) commented:

The --shard option is indeed interesting! Note the issue discussion about it: jestjs/jest#11252

It requires a custom test sequencer written for jest, configured via https://jestjs.io/docs/configuration#testsequencer-string.

There's a trade-off here: the ideal load balancer would be a "work-stealing queue". That is, one CI/CD job maintains a queue, and the test CI/CD jobs pull tests from it. This ensures that no runner ends up idle while another is still working through a long tail of tests.

With the sharding system it seems you would have to hope that your shards are roughly equally sized; it's possible that one shard holds 20% of the tests which take 80% of the time, while the remaining 80% of tests take 20% of the time. The sharder/sequencer wouldn't know how long certain tests take, so it's just a guess.

It is however better than nothing.

You should link this issue with MatrixAI/TypeScript-Demo-Lib#58; I had extra details about this there.

I think there's a commercial service which does the queuing: https://knapsackpro.com/integrations/javascript/jest/ci-server. But I wonder if jest supports a live queueing system. The GitLab CI/CD pipeline would have to configure a job that only finishes when all tests are pulled off the queue, or when all test jobs complete.

@CMCDragonkai (Member) commented:

I think for now we will just use the shard option. It should work well enough if we randomise our tests, or maybe if we had some estimated time to completion. I believe jest has some mechanism for this already: https://jestjs.io/blog/2016/03/11/javascript-unit-testing-performance

@CMCDragonkai (Member) commented:

They did it based on file size as a proxy for how long it would take. See: https://jestjs.io/blog/2016/03/11/javascript-unit-testing-performance#optimal-scheduling-of-a-test-run

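For reference, a custom sequencer that applies the same file-size heuristic would be wired in through the testSequencer config option. A minimal sketch, assuming the file can be loaded through our existing TypeScript transform; the class name and where we'd put the file are made up:

import * as fs from 'fs';
import Sequencer from '@jest/test-sequencer';
import type { Test } from '@jest/test-result';

class SizeSequencer extends Sequencer {
  sort(tests: Array<Test>): Array<Test> {
    // Schedule larger test files first, using file size as a rough proxy for runtime
    return [...tests].sort(
      (a, b) => fs.statSync(b.path).size - fs.statSync(a.path).size,
    );
  }
}

export default SizeSequencer;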

@CMCDragonkai (Member) commented:

The --shard feature only exists in jest 28; are we still on jest 27? https://jestjs.io/blog/2022/04/25/jest-28#sharding-of-test-run You may need to update it.

But this problem is a separate issue: MatrixAI/TypeScript-Demo-Lib#58.

We can address that in typescript-demo-lib first then port it to PK.

@emmacasolin (Contributor, Author) commented:

Using --passWithNoTests doesn't seem to work for a file with no describe or a file with no tests, but looking at its description I think that's the expected behaviour.

--passWithNoTests
Allows the test suite to pass when no files are found.

The solution is simply to skip the describe/test if the condition isn't met:

function describeIf(condition, name, f) {
  // Run the suite when the condition holds; otherwise register it as skipped
  if (condition) {
    describe(name, f);
  } else {
    describe.skip(name, f);
  }
}

function testIf(condition, name, f, timeout?) {
  // Same pattern for individual tests
  if (condition) {
    test(name, f, timeout);
  } else {
    test.skip(name, f, timeout);
  }
}

This does what we want it to; however, if the top-level describe is skipped then that file won't show up in the test output (unlike skipped tests, which do show up):

[nix-shell:~/Projects/js-polykey]$ npm run test tests/nat/noNat

> @matrixai/[email protected] test
> jest "tests/nat/noNat"

Determining test suites to run...
GLOBAL SETUP
Global Data Dir: /tmp/polykey-test-global-HukooD
Test Suites: 1 skipped, 0 of 1 total
Tests:       3 skipped, 3 total
Snapshots:   0 total
Time:        8.351 s
Ran all test suites matching /tests\/nat\/noNat/i.
GLOBAL TEARDOWN
Destroying Global Data Dir: /tmp/polykey-test-global-HukooD

@CMCDragonkai (Member) commented:

Isn't that what we want?

Whether it shows up on the console or not is a secondary concern. Most important is that it doesn't error out if a test file doesn't have any active tests.

Interesting that describe.skip doesn't show up though.

@CMCDragonkai (Member) commented Jun 9, 2022

Note that:

Using describe.skip is often a cleaner alternative to temporarily commenting out a chunk of tests. Beware that the describe block will still run. If you have some setup that also should be skipped, do it in a beforeAll or beforeEach block.

Can you check that? I would have thought that the block doesn't run at all.

I was actually thinking of not using describe.skip at all.

@emmacasolin (Contributor, Author) commented:

--passWithNoTests doesn't prevent errors from a file with no describe and no tests. This is an example where the describe doesn't run because the platform condition is not met. The description says it allows a test suite to pass when no files are found, which I think is different from no tests being found.

[nix-shell:~/Projects/js-polykey]$ npm run test tests/nat -- --passWithNoTests

> @matrixai/[email protected] test
> jest "tests/nat" "--passWithNoTests"

Determining test suites to run...
GLOBAL SETUP
Global Data Dir: /tmp/polykey-test-global-4wY8uy
 FAIL  tests/nat/noNAT.test.ts
  ● Test suite failed to run

    Your test suite must contain at least one test.

      at onResult (node_modules/@jest/core/build/TestScheduler.js:175:18)
      at node_modules/@jest/core/build/TestScheduler.js:316:17
      at node_modules/emittery/index.js:260:13
          at Array.map (<anonymous>)
      at Emittery.emit (node_modules/emittery/index.js:258:23)

@emmacasolin (Contributor, Author) commented:

Note that:

Using describe.skip is often a cleaner alternative to temporarily commenting out a chunk of tests. Beware that the describe block will still run. If you have some setup that also should be skipped, do it in a beforeAll or beforeEach block.

Can you check that? I would have thought that the block doesn't run at all.

I was actually thinking of not using describe.skip at all.

I thought that it didn't run at all either, and the tests are skipped if the describe is skipped, but I can experiment.

@emmacasolin (Contributor, Author) commented:

Note that:

Using describe.skip is often a cleaner alternative to temporarily commenting out a chunk of tests. Beware that the describe block will still run. If you have some setup that also should be skipped, do it in a beforeAll or beforeEach block.

Can you check that? I would have thought that the block doesn't run at all.
I was actually thinking of not using describe.skip at all.

I thought that it didn't run at all either, and the tests are skipped if the describe is skipped, but I can experiment.

So it looks like anything directly inside a describe.skip, but not contained within another block, will run; nothing else will. E.g.

describe.skip('describe', () => {
  console.log('in the describe');
  beforeAll(() => {
    console.log('in the beforeAll');
  });
  beforeEach(() => {
    console.log('in the beforeEach');
  });
  afterEach(() => {
    console.log('in the afterEach');
  });
  afterAll(() => {
    console.log('in the afterAll');
  });
  test('test', () => {
    console.log('in the test');
  });
});
Output:

in the describe

@CMCDragonkai (Member) commented Jun 9, 2022 via email

@emmacasolin (Contributor, Author) commented:

As mentioned here (#380 (comment)), if a file has no describe then even if you use --passWithNoTests you still get an error. I think describe.skip is the simplest solution here.

CMCDragonkai linked a pull request (#381) Jun 10, 2022 that will close this issue
@CMCDragonkai (Member) commented:

Ok, we are using describeIf and testIf, so the logic is all encoded in the tests. Along with shelljs we are able to check whether certain binaries/commands exist on the system before running those tests. We also check the platform to see if it is Linux. This can be extended to other kinds of platform-specific tests in the future.
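
For the record, the conditions end up looking something like this in a test file (a sketch only; the specific binaries checked and the import path are illustrative, and shell.which returns null when a command isn't on the PATH):

import * as shell from 'shelljs';
import { describeIf, testIf } from '../utils';

describeIf(
  process.platform === 'linux' && shell.which('ip') != null,
  'NAT traversal',
  () => {
    testIf(shell.which('nsenter') != null, 'connects through a NAT', async () => {
      // platform-specific test body goes here
    });
  },
);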

@CMCDragonkai (Member) commented:

With the merging of #381, this issue is completed. Not sure why this wasn't auto-closed.
