
Commit

Merge pull request #99 from dorny/dev
Version 1.4.0
dorny committed Apr 19, 2021
2 parents 2c87efa + c74b76e commit 0c4e165
Showing 44 changed files with 28,121 additions and 1,150 deletions.
1 change: 1 addition & 0 deletions .github/workflows/ci.yml
@@ -21,6 +21,7 @@ jobs:
- run: npm test

- name: Upload test results
if: success() || failure()
uses: actions/upload-artifact@v2
with:
name: test-results
6 changes: 6 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,11 @@
# Changelog

## v1.4.0
- [Add support for mocha-json](https://github.com/dorny/test-reporter/pull/90)
- [Use full URL to fix navigation from summary to suite details](https://github.com/dorny/test-reporter/pull/89)
- [New report rendering with code blocks instead of tables](https://github.com/dorny/test-reporter/pull/88)
- [Improve test error messages from flutter](https://github.com/dorny/test-reporter/pull/87)

## v1.3.1
- [Fix: parsing of .NET duration string without milliseconds](https://github.com/dorny/test-reporter/pull/84)
- [Fix: dart-json - remove group name from test case names](https://github.com/dorny/test-reporter/pull/85)
60 changes: 40 additions & 20 deletions README.md
@@ -9,15 +9,15 @@ This [Github Action](https://github.com/features/actions) displays test results
✔️ Provides final `conclusion` and counts of `passed`, `failed` and `skipped` tests as output parameters

**How it looks:**
|![](assets/fluent-validation-report.png)|![](assets/provider-error-summary.png)|![](assets/provider-error-details.png)|![](assets/provider-groups.png)|
|![](assets/fluent-validation-report.png)|![](assets/provider-error-summary.png)|![](assets/provider-error-details.png)|![](assets/mocha-groups.png)|
|:--:|:--:|:--:|:--:|

**Supported languages / frameworks:**
- .NET / [xUnit](https://xunit.net/) / [NUnit](https://nunit.org/) / [MSTest](https://github.com/Microsoft/testfx-docs)
- Dart / [test](https://pub.dev/packages/test)
- Flutter / [test](https://pub.dev/packages/test)
- JavaScript / [JEST](https://jestjs.io/)
- Java / [JUnit](https://junit.org/)
- JavaScript / [JEST](https://jestjs.io/) / [Mocha](https://mochajs.org/)

For more information see [Supported formats](#supported-formats) section.

@@ -54,9 +54,9 @@ jobs:
## Recommended setup for public repositories
Workflows triggered by pull requests from forked repositories are executed with read-only token and therefore can't create check runs.
To workaround this security restriction it's required to use two separate workflows:
1. `CI` runs in the context of PR head branch with read-only token. It executes the tests and uploads test results as build artifact
2. `Test Report` runs in the context of repository main branch with read/write token. It will download test results and create reports
To workaround this security restriction, it's required to use two separate workflows:
1. `CI` runs in the context of the PR head branch with the read-only token. It executes the tests and uploads test results as a build artifact
2. `Test Report` runs in the context of the repository main branch with read/write token. It will download test results and create reports
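
A rough sketch of the second workflow (the `artifact`, `name`, `path`, and `reporter` values below are illustrative assumptions, not the project's required settings):

```yaml
# .github/workflows/test-report.yml (minimal sketch; assumes the CI workflow
# uploads its test results as an artifact named "test-results")
name: 'Test Report'
on:
  workflow_run:
    workflows: ['CI']          # name of the workflow that produced the results
    types: [completed]
jobs:
  report:
    runs-on: ubuntu-latest
    steps:
      - uses: dorny/test-reporter@v1
        with:
          artifact: test-results   # artifact uploaded by the CI workflow
          name: JEST Tests         # display name of the created check run
          path: '*.xml'            # glob matching the downloaded result files
          reporter: jest-junit     # format of the result files
```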

**PR head branch:** *.github/workflows/ci.yml*
```yaml
@@ -116,7 +116,7 @@ jobs:
# Coma separated list of paths to test results
# Supports wildcards via [fast-glob](https://github.com/mrmlnc/fast-glob)
# All matched result files must be of same format
# All matched result files must be of the same format
path: ''
# Format of test results. Supported options:
@@ -125,6 +125,7 @@
# flutter-json
# java-junit
# jest-junit
# mocha-json
reporter: ''
# Limits which test suites are listed:
@@ -142,7 +143,7 @@
# Must be less or equal to 50.
max-annotations: '10'
# Set action as failed if test report contain any failed test
# Set action as failed if test report contains any failed test
fail-on-error: 'true'
# Relative path under $GITHUB_WORKSPACE where the repository was checked out.
@@ -224,8 +225,8 @@ Or with (undocumented) CLI argument:


According to documentation `dart_test.yaml` should be at the root of the package, next to the package's pubspec.
On current `stable` and `beta` channels it doesn't work and you have to put `dart_test.yaml` inside your `test` folder.
On `dev` channel it's already fixed.
On current `stable` and `beta` channels it doesn't work, and you have to put `dart_test.yaml` inside your `test` folder.
On `dev` channel, it's already fixed.
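
For reference, a minimal `dart_test.yaml` enabling the JSON file reporter might look like this (the `file_reporters` option and the output path are assumptions based on the `test` package documentation):

```yaml
# dart_test.yaml (placed inside test/ on current stable/beta channels)
file_reporters:
  json: reports/test-results.json   # file later consumed by the test-reporter action
```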

For more information see:
- [test package](https://pub.dev/packages/test)
@@ -239,17 +240,17 @@ For more information see:
<summary>java-junit (Experimental)</summary>

Support for [JUnit](https://Junit.org/) XML is experimental - should work but it was not extensively tested.
To have code annotations working properly it's required your directory structure matches package name.
This is due to the fact Java stacktraces doesn't contains full path to the source file.
Some heuristic was necessary to figure out mapping between line in stack trace and actual source file.
To have code annotations working properly, it's required your directory structure matches the package name.
This is due to the fact Java stack traces don't contain a full path to the source file.
Some heuristic was necessary to figure out the mapping between the line in the stack trace and an actual source file.
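
For example, a test class declared in package `com.example.project` is expected to sit under a matching directory path (the layout below is hypothetical):

```
src/test/java/com/example/project/CalculatorTests.java   <- package com.example.project
```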
</details>

<details>
<summary>jest-Junit</summary>

[JEST](https://jestjs.io/) testing framework support requires usage of [jest-Junit](https://github.com/jest-community/jest-Junit) reporter.
[JEST](https://jestjs.io/) testing framework support requires the usage of [jest-Junit](https://github.com/jest-community/jest-Junit) reporter.
It will create test results in Junit XML format which can be then processed by this action.
You can use following example configuration in `package.json`:
You can use the following example configuration in `package.json`:
```json
"scripts": {
"test": "jest --ci --reporters=default --reporters=jest-Junit"
@@ -272,19 +273,38 @@ You can use following example configuration in `package.json`:
Configuration of `uniqueOutputName`, `suiteNameTemplate`, `classNameTemplate`, `titleTemplate` is important for proper visualization of test results.
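
As an illustration, those options can be set in a `jest-junit` block of `package.json` roughly as follows (the values shown are assumptions, not the project's required settings):

```json
"jest-junit": {
  "outputDirectory": "reports",
  "outputName": "jest-junit.xml",
  "uniqueOutputName": "false",
  "suiteNameTemplate": "{filepath}",
  "classNameTemplate": "{classname}",
  "titleTemplate": "{title}"
}
```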
</details>

<details>
<summary>mocha-json</summary>

[Mocha](https://mochajs.org/) testing framework support requires:
- Mocha version [v7.2.0](https://github.com/mochajs/mocha/releases/tag/v7.2.0) or higher
- Usage of [json](https://mochajs.org/#json) reporter.

You can use the following example configuration in `package.json`:
```json
"scripts": {
"test": "mocha --reporter json > test-results.json"
}
```
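
The produced file can then be passed to the action roughly like this (the step below is a sketch; the path must match the redirect target above):

```yaml
- uses: dorny/test-reporter@v1
  with:
    name: Mocha Tests            # display name of the created check run
    path: test-results.json      # file written by the mocha command above
    reporter: mocha-json         # reporter format added in this release
```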

Test processing might fail if any of your tests write anything on standard output.
Mocha, unfortunately, doesn't have the option to store `json` output directly to the file, and we have to rely on redirecting its standard output.
There is a work in progress to fix it: [mocha#4607](https://github.com/mochajs/mocha/pull/4607)
</details>

## GitHub limitations

Unfortunately there are some known issues and limitations caused by GitHub API:
Unfortunately, there are some known issues and limitations caused by GitHub API:

- Test report (i.e. Check Run summary) is markdown text. No custom styling or HTML is possible.
- Maximum report size is 65535 bytes. Input parameters `list-suites` and `list-tests` will be automatically adjusted if max size is exceeded.
- Test report can't reference any additional files (e.g. screenshots). You can use `actions/upload-artifact@v2` to upload them and inspect manually.
- Check Runs are created for specific commit SHA. it's not possible to specify under which workflow test report should belong if there are more
workflows running for same SHA. Thanks to this GitHub "feature" it's possible your test report will appear in unexpected place in GitHub UI.
For more information see [#67](https://github.com/dorny/test-reporter/issues/67).
- Test report can't reference any additional files (e.g. screenshots). You can use `actions/upload-artifact@v2` to upload them and inspect them manually.
- Check Runs are created for specific commit SHA. It's not possible to specify under which workflow test report should belong if more
workflows are running for the same SHA. Thanks to this GitHub "feature" it's possible your test report will appear in an unexpected place in GitHub UI.
For more information, see [#67](https://github.com/dorny/test-reporter/issues/67).

## See also
- [paths-filter](https://github.com/dorny/paths-filter) - Conditionally run actions based on files modified by PR, feature branch or pushed commits
- [paths-filter](https://github.com/dorny/paths-filter) - Conditionally run actions based on files modified by PR, feature branch, or pushed commits

## License

50 changes: 23 additions & 27 deletions __tests__/__outputs__/dart-json.md
@@ -1,32 +1,28 @@
![Tests failed](https://img.shields.io/badge/tests-1%20passed%2C%204%20failed%2C%201%20skipped-critical)
## <a id="user-content-r0" href="#r0">fixtures/dart-json.json</a>
**6** tests were completed in **3.760s** with **1** passed, **4** failed and **1** skipped.
## ❌ <a id="user-content-r0" href="#r0">fixtures/dart-json.json</a>
**6** tests were completed in **4s** with **1** passed, **4** failed and **1** skipped.
|Test suite|Passed|Failed|Skipped|Time|
|:---|---:|---:|---:|---:|
|[test/main_test.dart](#r0s0)|1✔️|3❌||74ms|
|[test/second_test.dart](#r0s1)||1❌|1✖️|51ms|
### <a id="user-content-r0s0" href="#r0s0">test/main_test.dart</a> ❌
**4** tests were completed in **74ms** with **1** passed, **3** failed and **0** skipped.

**Test 1**
|Result|Test|Time|
|:---:|:---|---:|
|✔️|Passing test|36ms|

**Test 1 Test 1.1**
|Result|Test|Time|
|:---:|:---|---:|
|❌|Failing test|20ms|
|❌|Exception in target unit|6ms|

**Test 2**
|Result|Test|Time|
|:---:|:---|---:|
|❌|Exception in test|12ms|
### <a id="user-content-r0s1" href="#r0s1">test/second_test.dart</a> ❌
**2** tests were completed in **51ms** with **0** passed, **1** failed and **1** skipped.

|Result|Test|Time|
|:---:|:---|---:|
|❌|Timeout test|37ms|
|✖️|Skipped test|14ms|
### ❌ <a id="user-content-r0s0" href="#r0s0">test/main_test.dart</a>
```
Test 1
✔️ Passing test
Test 1 Test 1.1
❌ Failing test
Expected: <2>
Actual: <1>
❌ Exception in target unit
Exception: Some error
Test 2
❌ Exception in test
Exception: Some error
```
### ❌ <a id="user-content-r0s1" href="#r0s1">test/second_test.dart</a>
```
❌ Timeout test
TimeoutException after 0:00:00.000001: Test timed out after 0 seconds.
✖️ Skipped test
```
31 changes: 17 additions & 14 deletions __tests__/__outputs__/dotnet-trx.md
@@ -1,18 +1,21 @@
![Tests failed](https://img.shields.io/badge/tests-3%20passed%2C%203%20failed%2C%201%20skipped-critical)
## <a id="user-content-r0" href="#r0">fixtures/dotnet-trx.trx</a>
**7** tests were completed in **1.061s** with **3** passed, **3** failed and **1** skipped.
## ❌ <a id="user-content-r0" href="#r0">fixtures/dotnet-trx.trx</a>
**7** tests were completed in **1s** with **3** passed, **3** failed and **1** skipped.
|Test suite|Passed|Failed|Skipped|Time|
|:---|---:|---:|---:|---:|
|[DotnetTests.XUnitTests.CalculatorTests](#r0s0)|3✔️|3❌|1✖️|110ms|
### <a id="user-content-r0s0" href="#r0s0">DotnetTests.XUnitTests.CalculatorTests</a> ❌
**7** tests were completed in **110ms** with **3** passed, **3** failed and **1** skipped.

|Result|Test|Time|
|:---:|:---|---:|
|❌|Exception_In_TargetTest|0ms|
|❌|Exception_In_Test|2ms|
|❌|Failing_Test|3ms|
|✔️|Passing_Test|0ms|
|✔️|Passing_Test_With_Name|0ms|
|✖️|Skipped_Test|1ms|
|✔️|Timeout_Test|102ms|
### ❌ <a id="user-content-r0s0" href="#r0s0">DotnetTests.XUnitTests.CalculatorTests</a>
```
❌ Exception_In_TargetTest
System.DivideByZeroException : Attempted to divide by zero.
❌ Exception_In_Test
System.Exception : Test
❌ Failing_Test
Assert.Equal() Failure
Expected: 3
Actual: 2
✔️ Passing_Test
✔️ Passing_Test_With_Name
✖️ Skipped_Test
✔️ Timeout_Test
```
