
PR Build bot: show pass rate in summary screen? #6800

Closed
RByers opened this issue Aug 10, 2017 · 5 comments

Comments

@RByers (Contributor) commented Aug 10, 2017

I'm loving the new PR build bot results, thanks!

One minor nit: could we easily put a summary of the test pass rate in the 'status' column of the main results page so I don't have to click into each job to see how many tests passed and failed for each browser?

@bobholt (Contributor) commented Aug 10, 2017

This is going to be a little more complicated, because I'm in the process of adding not just the browser jobs but also the unit-test and lint jobs (otherwise we run into situations where unit tests fail but the dashboard doesn't tell you why).

(To give you an idea: https://pulls-staging.web-platform-tests.org/build/339)

Are you more interested in the browser-only pass rate (which, with four browser jobs, would currently be 0, 25, 50, 75, or 100 percent), or the all-job pass rate including unit tests, lint, etc.? I think it's the former, but I want to capture it here to be sure.
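
To illustrate the distinction, a minimal sketch (the job names and data shapes here are made up, not the app's actual model):

```python
# Hypothetical job statuses for one PR build; illustrative only.
BROWSER_JOBS = {"chrome", "firefox", "edge", "safari"}

jobs = {
    "chrome": "passed",
    "firefox": "passed",
    "edge": "failed",
    "safari": "passed",
    "lint": "passed",
    "unittests": "failed",
}

def pass_rate(statuses):
    """Percentage of jobs whose status is 'passed'."""
    statuses = list(statuses)
    return 100.0 * sum(s == "passed" for s in statuses) / len(statuses)

print(pass_rate(jobs[name] for name in BROWSER_JOBS))  # 75.0 (browser-only)
print(pass_rate(jobs.values()))                        # ~66.7 (all jobs)
```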

@RByers (Contributor, Author) commented Aug 10, 2017

Thanks, I see. What I'm interested in here is not the job pass rate but the WPT test pass rate. Ultimately (as you know) what I want is some visual indication of how many tests/subtests changed from passing to failing (or vice versa) as part of the commit. But I know that's a lot more work.

The main use case I'm thinking of here is a developer adding new tests that they expect to pass completely on some set of browsers, and then being able to easily tell whether that happened. I.e., easily answer the question: "are my new/modified tests behaving as I expect them to behave?" (passing on the browsers known to match the spec, failing on those known not to).
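
To make that concrete, here's a rough sketch of the pass/fail-transition diff I have in mind (the data shapes are made up; real wptreport results have subtests and more statuses):

```python
# Hypothetical per-test results before and after the commit.
before = {"a.html": "PASS", "b.html": "FAIL", "c.html": "PASS"}
after = {"a.html": "PASS", "b.html": "PASS", "c.html": "FAIL"}

regressions = [t for t in after
               if before.get(t) == "PASS" and after[t] == "FAIL"]
fixes = [t for t in after
         if before.get(t) == "FAIL" and after[t] == "PASS"]

print(f"{len(fixes)} newly passing, {len(regressions)} newly failing")
# -> 1 newly passing, 1 newly failing
```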

Maybe something like this?
[mockup image]

Or this?
[mockup image]

/cc @foolip for his thoughts

@foolip (Member) commented Aug 10, 2017

That mockup LGTM, and then I think we'd also want something drawing attention to the fact that there are non-passing tests to look at, right in the GitHub comment.

I know there's a comment size limit to be aware of, but maybe we could just say, for each browser, whether there's a problem? So "All passed" or "Some tests failed", etc.?
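
For illustration, a minimal sketch of what those per-browser lines could look like (browser names and counts are invented):

```python
# Terse per-browser summary lines for the GitHub comment; the short,
# fixed-size output keeps us well under the comment size limit.
results = {"chrome": (120, 0), "firefox": (118, 2)}  # browser -> (passed, failed)

lines = []
for browser, (passed, failed) in results.items():
    status = "All passed" if failed == 0 else f"{failed} tests failed"
    lines.append(f"{browser}: {status}")

print("\n".join(lines))
# chrome: All passed
# firefox: 2 tests failed
```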

@bobholt (Contributor) commented Aug 17, 2017

I'm looking into this. This is a metric we're not currently capturing in the app (we only track stable/unstable), but it could definitely be added.

@foolip (Member) commented Jan 3, 2018

Closing this in favor of #7475, which won't expose all results in comments directly, but is all about making the results clear and actionable.
