Summarize PR changes for ease of review #6863
@ddbeck is there a team I can ping for feedback on an issue like this, with all active reviewers? I suspect not, so my question is: how should I circulate an issue like this?
@saschanaz @sideshowbarker I see and appreciate your rocket reactions :) Would you happen to have thoughts on what would most speed up your reviews in BCD?
No, and I'm not sure how to approach this. I'm going to think on it. Also, since I'm not 100% sure I'll get to this today and I cannot contain my enthusiasm any longer: I am extremely excited at the prospect of this.
I'm not really a reviewer here, but inline annotations per file sound even better. I've seen some annotations with colors, so that would also be great.
Yup, the sample looks very useful.
Output of a check would be the least disruptive and least prone to breakage. Modifying the PR description would be the most disruptive (and potentially most annoying) and the most prone to breakage. Those assessments are from my experience with similar things we tried for WPT. Inline annotations on the individual files sound quite complicated.
I don’t have any strong opinions on how that part should be done, nor do any good suggestions come to mind for me.
Yeah, that's what we did for WPT, with a custom GitHub Check that summarizes test results. Example:

The downside is that it's a web service to maintain (react to webhooks + do stuff + create checks with the outcome) and it's not very discoverable.
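For readers unfamiliar with the Checks API route: the service reacts to a webhook, computes its summary, and then POSTs a check run to GitHub's REST API (`POST /repos/{owner}/{repo}/check-runs`). A minimal sketch of the payload such a service would build — the check name, title, and summary text below are hypothetical, not what the WPT service actually sends:

```javascript
// Sketch of a check-run payload for GitHub's Checks API
// (POST /repos/{owner}/{repo}/check-runs).
// The name, title, and summary values are illustrative only.
function buildCheckRun(headSha, summaryMarkdown) {
  return {
    name: "pr-change-summary",  // hypothetical check name
    head_sha: headSha,          // commit the check attaches to
    status: "completed",
    conclusion: "neutral",      // informational only, not pass/fail
    output: {
      title: "Summary of changes",
      summary: summaryMarkdown, // Markdown rendered on the Checks tab
    },
  };
}

const payload = buildCheckRun("0123abcd", "Example summary body");
console.log(payload.name, payload.conclusion);
```

A `neutral` conclusion keeps the check from blocking merges, which fits a purely informational summary.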
With far fewer reviewers in BCD, I wonder if we could sidestep some of these concerns. Modifying the description is very widespread in the form of PR Preview, and posting comments is also common in the form of coverage bots. Both can be annoying, but I feel like I could live with either, especially the former.
Do you mean complicated for reviewers to understand, or complicated to implement? It's not very hard to implement, I think; I've successfully done it in experiments using both the API for writing annotations directly and the stdout-parsing support that's built into GitHub Actions. One challenge would be determining which line to write the annotation for, but a naive approach could be the first edit in each file.
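For the stdout-parsing route mentioned above, the mechanism is GitHub Actions workflow commands: a step prints a specially formatted line (`::notice`, `::warning`, or `::error`) and the runner turns it into an inline annotation on the PR's "Files changed" view. A minimal sketch in Node — the file path and message are hypothetical:

```javascript
// Sketch: emit a GitHub Actions workflow command that becomes an inline
// annotation on the given file and line. Percent signs and newlines in
// the message must be percent-encoded per the workflow-command syntax.
function annotate(level, file, line, message) {
  const esc = (s) =>
    String(s)
      .replace(/%/g, "%25")   // escape % first so the others aren't double-escaped
      .replace(/\r/g, "%0D")
      .replace(/\n/g, "%0A");
  return `::${level} file=${esc(file)},line=${line}::${esc(message)}`;
}

// Hypothetical example: flag a changed compat entry at its first edited line.
console.log(annotate("notice", "api/Document.json", 42, "chrome: version_added changed"));
```

In a workflow, any step that prints such lines to stdout gets the annotations attached automatically, with no extra API calls or tokens needed.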
Yeah, I agree about those being downsides in general. I think the discoverability downside would not be a problem in practice for our use case here. But the maintenance issue is real.
Yes, agreed. For WPT, I personally never found either the description-modifying option or the comment mechanism annoying. I just know that others did.
Yes, same here. I would be fine with either of them. I’d also be fine with the output-of-a-check option. So I guess I really don’t feel strongly enough about any of those being undesirable.
No
Yeah, I just meant I imagined it being complicated to implement.
OK, knowing that, I have zero objections to that inline-annotations approach — in fact, from a usability point of view, it sounds like it could be the most helpful, by virtue of providing the information at point-of-use. And given that in #6863 (comment) @saschanaz weighed in in favor of inline annotations too, I guess I’d put my vote there also.
Thanks for opening this issue, @foolip. I am extremely excited about the prospect of a summary for reviewing PRs. This would be an outstanding addition to the project.
There's been mention of all of these as options, but I'm not sure how all of them work. If anyone would like to share various annotations they've appreciated from other projects, I'd love to see what that looks like in context, with real-world examples. But it sounds like there's broad support for a line annotation. I imagine that's a good place to start.
As you said, "a table with entry/browser/before/after columns" was my first thought, though I realized there's another perspective that could be useful: by browser, then by feature (e.g., suppose I work at a browser vendor and I want to provide feedback only on changes applicable to the browser I actually work on). But, honestly, just your sample output dressed up with some basic Markdown would be a great start. Also, having more than one way to represent the diff might be useful (more on this in the implementation notes bit below).

Implementation notes:
The meta question:
No, we don't have that. I know there exists some GitHub feature to do this sort of thing, but I cannot for the life of me figure out what to search for to find it in the docs. If you can remember what this is called, please let me know so I (or someone else) can put it in an issue title. 😆 In the meantime, please @-mention anyone individually whose attention you want. Though with the exception of vinyldarkscratch, I think we've got the most active contributors right now already participating.
I believe GitHub teams can’t be created at the repo level but instead need to be created at the organization level. So in our case, that means somebody who has owner perms for the https://github.com/mdn/ org would need to go into the UI at https://github.com/orgs/mdn/teams and create whatever teams we needed.
When reviewing a PR with a lot of changes, I spend most of the time expanding the diff to see which entry is being modified. This is especially annoying when reviewing changes towards the end of the browser list, where multiple clicks are needed to reveal the relevant context.
@vinyldarkscratch and I have discussed a GitHub Action workflow to produce a summary of changes as a PR comment (or something) to speed things up. I have written a proof-of-concept, `npm run diff`, in #6862, which I've used on a few PRs today and found that it did indeed increase my review velocity by a lot. Here's a sample of what it currently looks like for #6838:
I could quickly spot when data for browsers other than the expected ones was being modified, and most importantly I didn't have to spend as much time understanding the diff context.
If others think this might be useful, there are a number of questions before making it real:
deletions. The choices will inform how this has to be implemented. Thoughts appreciated.
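Whatever delivery mechanism wins, the summary itself could be rendered as the Markdown table with entry/browser/before/after columns floated in this thread. A minimal sketch — the `changes` input shape is a hypothetical intermediate format, not the actual output of the `npm run diff` proof-of-concept; a real implementation would compute it from the JSON diff between the PR's base and head:

```javascript
// Sketch: render compat-data changes as a GitHub-flavored Markdown table
// with entry/browser/before/after columns. The input shape is assumed,
// not taken from the real tooling.
function summaryTable(changes) {
  const lines = [
    "| Entry | Browser | Before | After |",
    "| --- | --- | --- | --- |",
  ];
  for (const c of changes) {
    lines.push(`| \`${c.entry}\` | ${c.browser} | ${c.before} | ${c.after} |`);
  }
  return lines.join("\n");
}

// Hypothetical example row.
console.log(
  summaryTable([
    { entry: "api.Document.title", browser: "firefox", before: "null", after: '"1"' },
  ])
);
```

The same intermediate format could also feed the by-browser grouping or inline annotations, so the representation question stays independent of the delivery question.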