CI workflow improvements (mostly Test262) #889
Conversation
Codecov Report
@@ Coverage Diff @@
## master #889 +/- ##
==========================================
- Coverage 59.79% 59.44% -0.36%
==========================================
Files 165 166 +1
Lines 10410 10474 +64
==========================================
+ Hits 6225 6226 +1
- Misses 4185 4248 +63
Continue to review full report at Codecov.
I think this is now ready for review. I would like some feedback on this, and any general feedback about the PR.
This improves several things in the CI workflows:
- More conformant Test262 result generation
- Benchmarks should now show comments for all users
- Added Test262 result comparison comments to Pull Requests
Force-pushed from 9fac5a6 to 118fcf8.
This looks awesome! :D
BTW, shouldn't the test262 comment be posted on this PR?
Good question, but since I have changed the trigger to `pull_request_target`, the workflows now run from the base branch, which doesn't include these changes yet. Something I could try would be to temporarily set the workflow trigger back to `pull_request`. As you can see, we are running neither the benchmarks nor the Test262 suite in this PR.
We could try this; there aren't any PRs ready to be merged anyway (besides #887, but that one still needs review), and even if there were, we could postpone them.
OK, let's merge it and I will change the triggers. Here are the new results, though:
Conformance is a bit higher, and we now properly count tests that we run twice.
@Razican the workflow seems to be failing on `master`.
Test262 conformance changes:
It seems it's not capturing the full comment markdown; I will re-check that.
Also, shouldn't it be overriding the original comment?
Yep, I have identified two bugs: it ignores lines after the first one in the output of the comparison, and it does not override the previous comment. I will take a look at them now.
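For the second bug, overriding the previous comment usually means finding an earlier comment by a marker and editing it instead of posting a new one. Below is a minimal Rust sketch against the GitHub REST API; the marker string, names, and the use of `reqwest` are assumptions, and the actual workflow may rely on an existing GitHub Action instead.

```rust
use reqwest::blocking::Client;
use serde_json::{json, Value};

/// Marker used to recognise our own comment; an assumed convention,
/// not necessarily what the workflow uses.
const MARKER: &str = "<!-- test262-comparison -->";

/// Post the comparison as a PR comment, editing the previous one if it
/// exists so the PR does not accumulate duplicate comments.
fn upsert_comment(repo: &str, pr: u64, token: &str, body: &str) -> reqwest::Result<()> {
    let client = Client::new();
    let body = format!("{MARKER}\n{body}");

    // Look for an earlier comment carrying our marker.
    let comments: Vec<Value> = client
        .get(format!("https://api.github.com/repos/{repo}/issues/{pr}/comments"))
        .bearer_auth(token)
        .header("User-Agent", "test262-comparison")
        .send()?
        .json()?;

    let existing = comments
        .iter()
        .find(|c| c["body"].as_str().map_or(false, |b| b.contains(MARKER)));

    let request = match existing.and_then(|c| c["id"].as_u64()) {
        // Overwrite the previous comment in place.
        Some(id) => client.patch(format!(
            "https://api.github.com/repos/{repo}/issues/comments/{id}"
        )),
        // First run on this PR: create a new comment.
        None => client.post(format!(
            "https://api.github.com/repos/{repo}/issues/{pr}/comments"
        )),
    };

    request
        .bearer_auth(token)
        .header("User-Agent", "test262-comparison")
        .json(&json!({ "body": body }))
        .send()?
        .error_for_status()?;

    Ok(())
}
```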
I think this is ready to be merged. It looks like the comments and so on are working. I don't think we can check if the new triggers work until this is merged, though.
This requires some testing and probably some improvements/fixes, but I wanted to open a PR to get some feedback.
I have moved the PR workflows to run on `pull_request_target` instead of `pull_request`. This should allow the criterion comparisons to be posted as comments even for people outside the org, I think.

I have also added a system to compare two Test262 results. For now, the comparison is very simple: it will just print the counts of passed/ignored/failed/panics in the Test262 results. But it will generate some Markdown, and it should post a comment in the PR. This comment would be re-written on each change (so no new comments would be added).
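The comparison could look something like the following sketch in Rust; the `ResultsSummary` fields, the table layout, and the toy numbers are assumptions for illustration, not the actual tester code or results schema.

```rust
/// Aggregate counts from one Test262 run; the field names are assumed here
/// for illustration and may not match the real results format.
#[derive(Debug, Clone, Copy)]
struct ResultsSummary {
    passed: u64,
    ignored: u64,
    failed: u64,
    panics: u64,
}

/// Build the Markdown body for the PR comment from the base branch results
/// and the PR results, showing each count and its difference.
fn comparison_markdown(base: ResultsSummary, new: ResultsSummary) -> String {
    let row = |name: &str, b: u64, n: u64| {
        format!("| {name} | {b} | {n} | {:+} |\n", n as i64 - b as i64)
    };
    let mut md = String::from("Test262 conformance changes:\n\n");
    md.push_str("| Result | master | PR | Diff |\n");
    md.push_str("| ------ | ------ | -- | ---- |\n");
    md.push_str(&row("Passed", base.passed, new.passed));
    md.push_str(&row("Ignored", base.ignored, new.ignored));
    md.push_str(&row("Failed", base.failed, new.failed));
    md.push_str(&row("Panics", base.panics, new.panics));
    md
}

fn main() {
    // Dummy values just to show the output shape.
    let base = ResultsSummary { passed: 10, ignored: 2, failed: 5, panics: 1 };
    let new = ResultsSummary { passed: 12, ignored: 2, failed: 3, panics: 1 };
    println!("{}", comparison_markdown(base, new));
}
```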
I have also improved a bit how the Test262 testing is done. We now ignore some extra tests that cannot be passed, and we count as two the tests that need to be executed twice (in strict mode and in non-strict mode). I also added an extra parameter to the parser to directly start in strict mode, so that we can test that.
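A rough sketch of the double execution follows; `run_test` and its signature are hypothetical, and the actual tester calls into the engine and the parser's new strict-mode parameter instead of the placeholder body.

```rust
/// Outcome of executing a single Test262 file in one mode.
#[derive(Debug, Clone, Copy, PartialEq)]
enum Outcome {
    Passed,
    Failed,
    Panicked,
}

/// Hypothetical entry point: parse the source (optionally starting the parser
/// directly in strict mode) and execute it. The engine integration is elided
/// so that the sketch stands on its own.
fn run_test(source: &str, strict: bool) -> Outcome {
    let _ = (source, strict);
    Outcome::Passed
}

/// Tests without the `onlyStrict`/`noStrict` metadata flags run in both modes,
/// and each execution is counted as a separate test in the totals.
fn run_both_modes(source: &str) -> [Outcome; 2] {
    [run_test(source, false), run_test(source, true)]
}

fn main() {
    let outcomes = run_both_modes("var x = 1;");
    let passed = outcomes.iter().filter(|o| **o == Outcome::Passed).count();
    println!("{passed} of {} executions passed", outcomes.len());
}
```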
There are some things left in the tester that are probably making some tests fail:

- We don't have the `$262` object yet (some of its methods will be tricky to implement)
- The `print()` global function just panics as unimplemented for now (but we are ignoring `await` tests, so it's not a big deal)

This requires a small change in the results format that I should have included in my last PR, but I forgot, so I might need to do an extra PR to fix the results in the GitHub Pages.
I also took the opportunity to update the test suite and to upgrade all dependencies.