
Summarize PR changes for ease of review #6863

Open
foolip opened this issue Oct 8, 2020 · 10 comments
Labels
idle 🐌: Issues and pull requests with no recent activity
infra 🏗️: Infrastructure issues (npm, GitHub Actions, releases) of this project

Comments

@foolip
Collaborator

foolip commented Oct 8, 2020

When reviewing a PR with a lot of changes, I spend most of the time expanding the diff to see which entry is being modified. This is especially annoying when reviewing changes towards the end of the browser list, where multiple clicks are needed to reveal the relevant context.

@vinyldarkscratch and I have discussed a GitHub Actions workflow to produce a summary of changes as a PR comment (or something similar) to speed things up. I've written a proof-of-concept npm run diff in #6862, which I used on a few PRs today and found that it did indeed increase my review velocity by a lot.

Here's a sample of what it currently looks like for #6838:

api.Window.blur
  webview_android support set to 1 (was true)
api.Window.cancelAnimationFrame
  chrome support set to 24 (was true)
  chrome_android support set to 25 (was true)
  samsunginternet_android support set to 1.5 (was true)
  webview_android support set to ≤37 (was true)
api.Window.crypto
  chrome support set to 1 (was 37)
  chrome_android support set to 18 (was 37)
  opera support set to 15 (was 24)
  opera_android support set to 14 (was 24)
  samsunginternet_android support set to 1.0 (was 3.0)
  webview_android support set to 1 (was 37)
api.Window.document
  chrome support set to 1 (was true)
  chrome_android support set to 18 (was true)
  samsunginternet_android support set to 1.0 (was true)
  webview_android support set to 1 (was true)
api.Window.external
  chrome support set to 1 (was true)
  chrome_android support set to 18 (was true)
  samsunginternet_android support set to 1.0 (was true)
  webview_android support set to 1 (was true)
api.Window.find
  webview_android support set to 1 (was true)
api.Window.focus
  webview_android support set to 1 (was true)
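
For those curious about the mechanics, the core idea is roughly the following (a simplified sketch, not the actual code in #6862; it glosses over array-valued support statements and fields other than version_added): flatten each revision's compat tree into entry/browser pairs, then report the pairs whose value changed.

```js
// Sketch: flatten a BCD tree into "entry|browser" -> version_added.
function flattenSupport(data, path = '', out = new Map()) {
  for (const [key, value] of Object.entries(data)) {
    if (key === '__compat') {
      for (const [browser, statement] of Object.entries(value.support || {})) {
        // Array-valued support statements are skipped to keep the sketch short.
        if (!Array.isArray(statement)) {
          out.set(`${path}|${browser}`, statement.version_added);
        }
      }
    } else if (value && typeof value === 'object') {
      flattenSupport(value, path ? `${path}.${key}` : key, out);
    }
  }
  return out;
}

// Compare two revisions of the same tree and print changes grouped by entry.
function printDiff(base, head) {
  const before = flattenSupport(base);
  const after = flattenSupport(head);
  let lastEntry;
  for (const [key, newValue] of after) {
    const oldValue = before.get(key);
    if (oldValue === newValue) continue;
    const [entry, browser] = key.split('|');
    if (entry !== lastEntry) {
      console.log(entry);
      lastEntry = entry;
    }
    console.log(`  ${browser} support set to ${newValue} (was ${oldValue})`);
  }
}

// Hypothetical usage, with the base and head revisions checked out side by side:
// printDiff(require('./base/api/Window.json'), require('./head/api/Window.json'));
```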

I could quickly spot when data for browsers other than the expected ones was being modified, and most importantly I didn't have to spend as much time understanding the diff context.

If others think this might be useful, there are a number of questions before making it real:

  • Where should the summary be shown? Options are modifying the PR description, posting a new comment, inline annotations on the individual files, or as the output of a check.
  • What's a good way to visualize the changes? Options that come to mind are a table with entry/browser/before/after columns, or as a single compat table with the changes somehow highlighted within, perhaps as insertions and deletions.

The choices will inform how this has to be implemented. Thoughts appreciated.

foolip added the infra 🏗️ label Oct 8, 2020
@foolip
Collaborator Author

foolip commented Oct 8, 2020

@ddbeck is there a team I can ping for feedback on an issue like this, with all active reviewers? I suspect not, so my question is: how should I circulate an issue like this?

@foolip
Collaborator Author

foolip commented Oct 9, 2020

@saschanaz @sideshowbarker I see and appreciate your rocket reactions :) Would you happen to have thoughts on what would most speed up your reviews in BCD?

@ddbeck
Collaborator

ddbeck commented Oct 9, 2020

@foolip

is there a team I can ping for feedback on an issue like this, with all active reviewers? I suspect not, so my question is: how should I circulate an issue like this?

No, and I'm not sure how to approach this. I'm going to think on it.

Also, since I'm not 100% sure I'll get to this today and I cannot contain my enthusiasm any longer: I am extremely excited at the prospect of this.

@saschanaz
Contributor

Would you happen to have thoughts on what would most speed up your reviews in BCD?

I'm not really a reviewer here, but inline annotations per file sound even better. I've seen some annotations that use colors, so that would also be great.

@sideshowbarker
Collaborator

Here's a sample of what it currently looks like for #6838:

I could quickly spot when data for other browsers than expected was being modified, and most importantly I didn't have to spend as much time understanding the diff context.

Yup, the sample looks very useful

  • Where should the summary be shown? Options are modifying the PR description, posting a new comment, inline annotations on the individual files, or as the output of a check.

Output of a check would be the least disruptive and least prone to breakage

Modifying the PR description would be the most disruptive (and potentially most annoying) and the most prone to breakage.

Those assessments are from my experience with similar things we tried for WPT.

Inline annotations on the individual files sounds quite complicated.

  • What's a good way to visualize the changes? Options that come to mind are a table with entry/browser/before/after columns, or as a single compat table with the changes somehow highlighted within, perhaps as insertions and deletions.

I don’t have any strong opinions on how that part should be done, nor do any good suggestions come to mind.

@foolip
Collaborator Author

foolip commented Oct 13, 2020

Output of a check would be the least disruptive and least prone to breakage

Yeah, that's what we did for WPT, with a custom GitHub Check that summarizes test results. Example:
https://github.com/web-platform-tests/wpt/pull/26089/checks?check_run_id=1244245864

The downside is it's a web service to maintain (react to webhooks + do stuff + create checks with outcome) and not very discoverable.

Modifying the PR description would be the most disruptive (and potentially most annoying) and the most prone to breakage.

Those assessments are from my experience with similar things we tried for WPT.

With far fewer reviewers in BCD, I wonder if we could sidestep some of these concerns. Modifying the description is very widespread in the form of PR Preview, and posting comments is also common in the form of coverage bots. Both can be annoying, but I feel like I could live with either, especially the former.

Inline annotations on the individual files sounds quite complicated.

Do you mean complicated for reviewers to understand, or complicated to implement? It's not very hard to implement, I think; I've successfully done it in experiments using both the API for writing annotations directly and the stdout-parsing support that's built into GitHub Actions. One challenge would be determining which line to write the annotation for, but a naive approach could be to use the first edit in each file.
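
For example, with the stdout-parsing route, the npm script just prints a workflow command and the Actions runner turns it into an inline annotation (the file, line, and message below are made up for illustration):

```js
// Emitting a GitHub Actions "workflow command" on stdout; when run inside a
// workflow step, the runner renders it as an inline annotation on the PR diff.
// The file/line/message values here are hypothetical.
console.log(
  '::warning file=api/Window.json,line=42::' +
    'chrome support set to 24 (was true)'
);
```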

@sideshowbarker
Collaborator

Output of a check would be the least disruptive and least prone to breakage

Yeah, that's what we did for WPT, with a custom GitHub Check that summarizes test results. Example:
https://github.com/web-platform-tests/wpt/pull/26089/checks?check_run_id=1244245864

The downside is it's a web service to maintain (react to webhooks + do stuff + create checks with outcome) and not very discoverable.

Yeah, I agree about those being downsides in general. I think the discoverability downside would not be a problem for us in practice for our use case here. But the maintenance issue is real.

Modifying the PR description would be the most disruptive (and potentially most annoying) and the most prone to breakage.
Those assessments are from my experience with similar things we tried for WPT.

With far fewer reviewers in BCD, I wonder if we could sidestep some of these concerns.

Yes, agreed.

For WPT, I personally never found either the description-modifying option or the comment mechanism annoying. I just know that others did.

Modifying the description is very widespread in the form of PR Preview, and posting comments is also common in the form of coverage bots. Both can be annoying, but I feel like I could live with either, especially the former.

Yes, same here. I would be fine with either of them. I’d also be fine with the output-of-a-check option. So I guess I don’t feel strongly enough about any of those to call them undesirable.

Inline annotations on the individual files sounds quite complicated.

Do you mean complicated for reviewers to understand

No

or complicated to implement?

Yeah, I just meant I imagined it being complicated to implement

It's not very hard to implement, I think; I've successfully done it in experiments using both the API for writing annotations directly and the stdout-parsing support that's built into GitHub Actions. One challenge would be determining which line to write the annotation for, but a naive approach could be to use the first edit in each file.

OK, knowing that, I have zero objections to that inline-annotations approach — in fact, from a usability point of view, it sounds like it could be the most helpful, by virtue of providing the information at point-of-use. And given that in #6863 (comment) @saschanaz weighed in in favor of inline annotations too, I guess I’d put my vote there also.

@ddbeck
Collaborator

ddbeck commented Oct 15, 2020

Thanks for opening this issue, @foolip. I am extremely excited about the prospect of a summary for reviewing PRs. This would be an outstanding addition to the project.


Where should the summary be shown? Options are modifying the PR description, posting a new comment, inline annotations on the individual files, or as the output of a check.

There's been mention of all of these as options, but I'm not sure how all of them work. If anyone would like to share various annotations they've appreciated from other projects, I'd love to see what that looks like in context, with real-world examples.

But it sounds like there's broad support for a line annotation. I imagine that's a good place to start.


What's a good way to visualize the changes?

As you said, "a table with entry/browser/before/after columns" was my first thought, though I realized there's another perspective that could be useful: by browser, then by feature (e.g., suppose I work at a browser vendor and I want to provide feedback only on changes applicable to the browser I actually work on). But, honestly, just your sample output dressed up with some basic Markdown would be a great start (see the rough sketch below).
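
To make that concrete, here's roughly how the api.Window.crypto rows from the sample above might look as a Markdown table (illustrative only, using the before/after values from the sample):

```markdown
| Entry             | Browser        | Before | After |
| ----------------- | -------------- | ------ | ----- |
| api.Window.crypto | chrome         | 37     | 1     |
| api.Window.crypto | chrome_android | 37     | 18    |
| api.Window.crypto | opera          | 24     | 15    |
```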

Also, having more than one way to represent the diff might be useful (more on this in the implementation notes bit below).


Implementation notes:

  • I believe it's possible to generate a Check run from within GitHub Actions without an independent web service (example), so it might be possible to provide a highly-visible annotation, with additional details or representations tucked away in a Check.

  • Would it be possible to produce some sort of separate intermediate representation of the diff (e.g., a JSON payload) and then generate different outputs from that representation?

    I could see this being very useful for generating release notes (e.g., providing a detailed list of created/modified features) or for requesting reviewers by feature area or browser. The diff could be the basis for a lot of interesting automation. (A sketch of one possible payload shape follows this list.)
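
For instance (a purely hypothetical shape, not a proposal for the final format), each change could be a flat record that a table renderer, a release-notes generator, or a reviewer-assignment script could all consume:

```json
[
  {
    "entry": "api.Window.crypto",
    "browser": "chrome",
    "field": "version_added",
    "before": "37",
    "after": "1"
  },
  {
    "entry": "api.Window.crypto",
    "browser": "opera",
    "field": "version_added",
    "before": "24",
    "after": "15"
  }
]
```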


The meta question:

is there a team I can ping for feedback on an issue like this, with all active reviewers?

No, we don't have that. I know there exists some GitHub feature to do this sort of thing, but I cannot for the life of me figure out what to search for to find it in the docs. If you can remember what this is called, please let me know so I (or someone else) can put it in an issue title. 😆

In the meantime, please @-mention anyone whose attention you want to get. Though, with the exception of @vinyldarkscratch, I think the most active contributors right now are already participating.

@sideshowbarker
Collaborator

The meta question:

is there a team I can ping for feedback on an issue like this, with all active reviewers?

No, we don't have that. I know there exists some GitHub feature to do this sort of thing, but I cannot for the life of me figure out what to search for to find it in the docs. If you can remember what this is called, please let me know so I (or someone else) can put it in an issue title. 😆

I believe GitHub teams can’t be created at the repo level but instead need to be created at the organization level.

So in our case, that means somebody who has owner perms for the https://github.com/mdn/ org would need to go into the UI at https://github.com/orgs/mdn/teams and create whatever teams we needed.
