Suggested improvements to Deno.test APIs #4166

Closed
kitsonk opened this issue Feb 27, 2020 · 9 comments
@kitsonk
Contributor

kitsonk commented Feb 27, 2020

I was just looking at all the APIs in Deno and noticed Deno.test and Deno.runTests. I think there are a couple of improvements we could make:

  • RunTestOptions.disableLog could be renamed to RunTestOptions.silent; it would be more consistent with other parts of Deno.
  • runTests() could return some sort of test results, where each test name and a fail flag would be provided, as well as an error field holding a reference to whatever was thrown if fail is true. This would allow people to run silently and still do post-processing on a test run.
  • RunTestOptions.only and RunTestOptions.skip could take a string | RegExp; if given a string, we would do a testName.includes() check. This makes filtering a bit more approachable for simple containment checks, versus having to construct a RegExp every time (see the sketch after this list).
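
A minimal sketch of how that string | RegExp filtering could work (matchesFilter is just an illustrative helper, not an actual Deno API):

function matchesFilter(testName: string, filter: string | RegExp): boolean {
  // Plain strings are treated as substring filters; RegExps are tested as-is.
  return typeof filter === "string"
    ? testName.includes(filter)
    : filter.test(testName);
}

matchesFilter("fetchHeaders", "fetch"); // true - substring check
matchesFilter("fetchHeaders", /^net/); // false - RegExp check
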
@bartlomieju
Member

bartlomieju commented Mar 1, 2020

I've been working with the cli/js/ tests for the past few days; here are some of my thoughts on this subject:

RunTestOptions.disableLog could be renamed to RunTestOptions.silent; it would be more consistent with other parts of Deno.
RunTestOptions.only and RunTestOptions.skip could take a string | RegExp; if given a string, we would do a testName.includes() check. This makes filtering a bit more approachable for simple containment checks, versus having to construct a RegExp every time.

👍 no comment, I agree on both points

runTests() could return some sort of test results, where each test name and a fail flag would be provided, as well as an error field holding a reference to whatever was thrown if fail is true. This would allow people to run silently and still do post-processing on a test run.

Implementing that means changing the signature of runTests; it would probably look like:

// exists
export interface RunTestsOptions {
  exitOnFail?: boolean;
  only?: string | RegExp;
  skip?: string | RegExp;
  silent?: boolean;
}

// exists, not exported
interface TestStats {
  filtered: number;
  ignored: number;
  measured: number;
  passed: number;
  failed: number;
}

// exists in similar form, not exported
interface TestResult {
  name: string;
  fn: TestFunction;
  success: boolean;
  timeElapsed: number;
  error?: Error;
}

// doesn't exist
interface RunTestsResult {
  success: boolean;
  tests: TestResult[];
  stats: TestStats;
}

export function runTests(opts?: RunTestsOptions): Promise<RunTestsResult>;

There is one problem with that approach: calling runTests with the silent flag on will cause the output of all tests to be lost - basically all console.logs will be discarded. It's probably a good idea to capture that output for further processing as well. That can easily be done by adding an output: Array<[string, boolean]> field (message text plus an isError flag) to the TestResult interface.
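
A rough sketch of how that capture could work (captureConsole is illustrative, not actual Deno internals); the tuples match the output field described above:

function captureConsole(): {
  output: Array<[string, boolean]>;
  restore: () => void;
} {
  const output: Array<[string, boolean]> = [];
  const original = { log: console.log, error: console.error };
  // Record output as [text, isError] tuples instead of printing it.
  console.log = (...args: unknown[]): void => {
    output.push([args.map(String).join(" "), false]);
  };
  console.error = (...args: unknown[]): void => {
    output.push([args.map(String).join(" "), true]);
  };
  const restore = (): void => {
    Object.assign(console, original);
  };
  return { output, restore };
}

Running a test between captureConsole() and restore() would leave everything it logged in output, ready to attach to its TestResult.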

Then there's the exitOnFail flag, which should actually be split into exitOnFail and failFast - right now exitOnFail causes the run to stop on the first test failure, and runTests() always exits with status code 1 if there's a failure.

One more thing that was a PITA: there's no way to get access to the registered tests - there is globalThis.__DENO_TEST_REGISTRY, which I added some time ago, but I've changed my mind and don't like how it's exposed. That means there should be some public getter that returns the list of cases registered with Deno.test(), so you can do some pre-processing on them, like wrapping them in additional checks.
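
For illustration only - none of these names exist in Deno - a public getter plus a wrapper could look roughly like this:

interface RegisteredTest {
  name: string;
  fn: () => void | Promise<void>;
}

// Hypothetical stand-in for the proposed getter; today the only access path is
// the non-public globalThis.__DENO_TEST_REGISTRY mentioned above.
declare function getRegisteredTests(): RegisteredTest[];

// Example pre-processing: wrap every registered case so its duration is
// reported even when the test function throws.
function wrapWithTiming(test: RegisteredTest): RegisteredTest {
  return {
    name: test.name,
    async fn(): Promise<void> {
      const start = performance.now();
      try {
        await test.fn();
      } finally {
        console.log(`${test.name}: ${(performance.now() - start).toFixed(1)}ms`);
      }
    },
  };
}

const wrapped = getRegisteredTests().map(wrapWithTiming);
console.log(`${wrapped.length} registered tests wrapped`);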

In light of the recent discussion in #4092, we should set some clear boundaries around what we want to get into Deno's testing and what should be done by third parties.

Edit: Ref #4098

BTW, I got most of the changes proposed in this issue working on a branch.

@kitsonk
Contributor Author

kitsonk commented Mar 1, 2020

export function runTests(opts?: RunTestsOptions): Promise<void>;

Where do you return the test results?

There is one problem with that approach: calling runTests with the silent flag on will cause the output of all tests to be lost - basically all console.logs will be discarded. It's probably a good idea to capture that output for further processing as well.

The display logic of tests should be separate from the acquisition. In my opinion it should be refactored so that the results displayed by runTests() are based on the same structures that are returned. Then it simply becomes a one-liner that optionally displays those results.

In light of the recent discussion in #4092, we should set some clear boundaries around what we want to get into Deno's testing and what should be done by third parties.

By having runTests() return results, we have less pressure to bake in features. 😄

@nayeemrmn
Collaborator

Ref #3865 (comment). I suggested that Deno.runTests() should return results and NOT report them itself via logging. That should be factored out to some other function:

const results = Deno.runTests();
Deno.reportTestResults(results);

Nice and modular. For the example use case of replacing Deno.reportTestResults() with a custom reporter, this is a lot more logical than adding a silent flag.
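
For instance (purely illustrative, assuming results carry at least a name and a success flag), a custom TAP-style reporter dropped in place of Deno.reportTestResults() could be as small as:

interface ReportableResult {
  name: string;
  success: boolean;
}

// Prints results in TAP format instead of the default reporting.
function tapReport(results: ReportableResult[]): void {
  console.log(`1..${results.length}`);
  results.forEach((result, index) => {
    console.log(`${result.success ? "ok" : "not ok"} ${index + 1} - ${result.name}`);
  });
}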

--

On the other hand, my first suggestion was to make Deno.runTests() internal (and try to do all of this with hooks), so deno test can still be used...

@bartlomieju
Member

export function runTests(opts?: RunTestsOptions): Promise<void>;

Where do you return the test results?

My bad, I forgot to change the return type; that should obviously be:

export function runTests(opts?: RunTestsOptions): Promise<RunTestsResult>;

Fixed in original comment.

There is one problem with that approach: calling runTests with the silent flag on will cause the output of all tests to be lost - basically all console.logs will be discarded. It's probably a good idea to capture that output for further processing as well.

The display logic of tests should be separate from the acquisition. In my opinion it should be refactored so that the results displayed by runTests() are based on the same structures that are returned. Then it simply becomes a one-liner that optionally displays those results.

I agree with that. I see another problem though: when you call await Deno.runTests() it will return results only after all tests have finished. So if reporting is lifted to the caller, then a return type that implements AsyncIterableIterator<TestResult> and PromiseLike<RunTestsResult> would be better, so the caller is able to report on each TestResult as it completes. A sketch follows below.
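
A rough sketch of what such a hybrid return value could look like (toRunTestsIterable is an illustrative adapter, not the real implementation; the interfaces are simplified versions of the ones above):

interface TestResult {
  name: string;
  success: boolean;
  error?: Error;
}

interface RunTestsResult {
  success: boolean;
  tests: TestResult[];
}

type RunTestsIterable = AsyncIterableIterator<TestResult> &
  PromiseLike<RunTestsResult>;

// Adapts an async generator of per-test results so a caller can either
// `for await` each result as it finishes or `await` the aggregate summary.
function toRunTestsIterable(gen: AsyncGenerator<TestResult>): RunTestsIterable {
  const tests: TestResult[] = [];

  const iter: AsyncIterableIterator<TestResult> = {
    [Symbol.asyncIterator]() {
      return this;
    },
    async next() {
      const result = await gen.next();
      if (!result.done) tests.push(result.value);
      return result;
    },
  };

  const summarize = async (): Promise<RunTestsResult> => {
    // Drain anything the caller hasn't consumed; next() collects into `tests`.
    let step = await iter.next();
    while (!step.done) step = await iter.next();
    return { success: tests.every((t) => t.success), tests };
  };

  return Object.assign(iter, {
    then<A = RunTestsResult, B = never>(
      onfulfilled?: ((value: RunTestsResult) => A | PromiseLike<A>) | null,
      onrejected?: ((reason: unknown) => B | PromiseLike<B>) | null,
    ): Promise<A | B> {
      return summarize().then(onfulfilled, onrejected);
    },
  });
}

// Exercise it with a fake run (made-up data):
async function* fakeRun(): AsyncGenerator<TestResult> {
  yield { name: "add", success: true };
  yield { name: "fetch", success: false, error: new Error("boom") };
}

const run = toRunTestsIterable(fakeRun());
for await (const result of run) {
  console.log(`${result.success ? "ok" : "FAILED"} ${result.name}`);
}
console.log("overall success:", (await run).success); // false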

In light of the recent discussion in #4092, we should set some clear boundaries around what we want to get into Deno's testing and what should be done by third parties.

By having runTests() return results, we have less pressure to bake in features. 😄

Yeah, I'm trying to refactor the unit tests in cli/js/ using an extended built-in API and so far haven't found a satisfactory solution. Good to have this discussion going.

Ref #3865 (comment). I suggested that Deno.runTests() should return results and NOT report them itself via logging. That should be factored out to some other function:

const results = Deno.runTests();
Deno.reportTestResults(results);

Nice and modular. For the example use case of replacing Deno.reportTestResults() with a custom reporter, this is a lot more logical than adding a silent flag.

--

On the other hand, my first suggestion was to make Deno.runTests() internal (and try to do all of this with hooks), so deno test can still be used...

I think we're going in the right direction - there's an issue for coverage reporting (#106). It's worth considering at this point.

@kitsonk
Contributor Author

kitsonk commented Mar 1, 2020

there's an issue for coverage reporting (#106)

Sounds like something that should be added to rusty_v8... that one has been in the back of my mind for a long time (that and caching bytecode).

@chiefbiiko
Contributor

I was wondering what you think about having something like

interface RunTestsOptions {
  // ...
  timeFormat: "milli" | "micro" | "nano";
}

exposed to the CLI for indicating the desired timing output - 0ms sometimes feels like looking into the dark.

@bartlomieju
Member

I was wondering what you think about having something like

interface RunTestsOptions {
  // ...
  timeFormat: "milli" | "micro" | "nano";
}

exposed to the CLI for indicating the desired timing output - 0ms sometimes feels like looking into the dark.

@chiefbiiko there's no way to guarantee better resolution. You'd have to run with --allow-hrtime for better accuracy, but a) some tests require a fixed set of permissions (say, without --allow-hrtime) and b) you can't force users to run with a certain flag.

The current test runner closely follows the Rust test runner, which doesn't print durations at all...

@bartlomieju
Member

Update: after all of the PRs to update the testing framework, there's a single item left to do from @kitsonk's original list:

  • RunTestOptions.disableLog could be renamed to RunTestOptions.silent; it would be more consistent with other parts of Deno.

Right now disableLog changes globalThis.console to a "silent" console (output is discarded), but the runner still prints the test results to the console. It might be a bit surprising if we rename it to silent and also disable the output from the test runner. Maybe split it into two separate options? One to disable the console, the other to disable the test runner's output completely (something like a NoopReporter)? A possible shape is sketched below.
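
One possible shape for that split (option names here are placeholders, not a decided API):

interface RunTestsOptions {
  // Swap globalThis.console for a no-op console while tests run,
  // discarding output produced by the tests themselves.
  disableConsole?: boolean;
  // Suppress the test runner's own result output entirely
  // (i.e. plug in something like a NoopReporter).
  disableRunnerOutput?: boolean;
}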

@bartlomieju
Member

Last point addressed in #4451
