
[Mono] TE plaintext and json benchmarks regression #61484

Closed
fanyang-mono opened this issue Nov 11, 2021 · 13 comments

fanyang-mono (Member) commented Nov 11, 2021

A recent change has caused a regression in the TE plaintext and json benchmarks running on Mono. The PR that caused this regression is #53450

[Screenshot from 2021-11-11 showing the benchmark regression]

cc: @imhameed

dotnet-issue-labeler (bot) commented:

I couldn't figure out the best area label to add to this issue. If you have write-permissions please help me learn by adding exactly one area label.

dotnet-issue-labeler bot added the untriaged label Nov 11, 2021
fanyang-mono added the tenet-performance-benchmarks label Nov 11, 2021
tannergooding (Member) commented:

@fanyang-mono, could you provide guidance on how to get the disassembly for Mono on these benchmarks, etc?

tannergooding (Member) commented:

For what it's worth, this looks like a Mono bug, something it's not handling correctly.
CoreCLR didn't regress and in fact looks like it has slightly better latency.

[Image: CoreCLR benchmark chart]

Given there were only two very minor Mono file changes here (https://github.com/dotnet/runtime/pull/53450/files#diff-ce66e94ce2cdb8e54003a124a4057ce5d5dc5325ad32c9601a4662f8fa39367a), my guess is that Mono isn't handling some pattern or API which is getting handled by CoreCLR.
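
Not from this thread, but one coarse first check when hunting for that kind of gap is to compare what each runtime reports as hardware-accelerated. Below is a minimal diagnostic sketch (assuming only the standard IsSupported/IsHardwareAccelerated properties in System.Runtime.Intrinsics and System.Numerics; nothing specific to #53450):

```csharp
// Diagnostic sketch: print which intrinsics paths the current runtime
// reports as supported. Running it under both CoreCLR and Mono can hint
// at APIs where Mono falls back to a software implementation.
using System;
using System.Numerics;
using System.Runtime.Intrinsics.Arm;
using System.Runtime.Intrinsics.X86;

class IntrinsicsReport
{
    static void Main()
    {
        Console.WriteLine($"Vector.IsHardwareAccelerated: {Vector.IsHardwareAccelerated}");
        Console.WriteLine($"Sse2.IsSupported:    {Sse2.IsSupported}");
        Console.WriteLine($"Ssse3.IsSupported:   {Ssse3.IsSupported}");
        Console.WriteLine($"Sse41.IsSupported:   {Sse41.IsSupported}");
        Console.WriteLine($"Popcnt.IsSupported:  {Popcnt.IsSupported}");
        Console.WriteLine($"AdvSimd.IsSupported: {AdvSimd.IsSupported}");
    }
}
```

Matching output only rules out missing ISA support; confirming whether a specific pattern is intrinsified would still need a per-method disassembly comparison.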

fanyang-mono (Member, Author) commented:

Could be. I will obtain traces from before and after the regression to see what changed for Mono.

sebastienros (Member) commented Nov 12, 2021

@tannergooding the drop in the charts happens when we switched from net6.0 to net7.0, so it could have come from any commit in roughly 1-2 months of changes on main while we were still benchmarking net6.0. The improvements you are mentioning cover that same 1-2 month period too, not specifically this change. @fanyang-mono narrowed the Mono regression down to two consecutive builds by running the benchmarks manually on previous builds. It's possible that it also impacts the coreclr benchmarks, but other improvements in net7.0 still have a positive impact (ThreadPool improvements, for instance).

@fanyang-mono can you share the runtime versions you used, and maybe do a run on coreclr for the same benchmarks between these two versions?

tannergooding (Member) commented Nov 12, 2021

> It's possible that it also impacts the coreclr benchmarks, but other improvements in net7.0 still have a positive impact (ThreadPool improvements, for instance).

@sebastienros, that's certainly possible, but I also think it's somewhat unlikely. We have the dotnet/performance repo and run daily per-commit benchmarks, with semi-weekly triage to identify regressions and improvements.

Before merging PRs like this we get diffs, and there was nothing significant for RyuJIT in the frameworks, tests, or benchmark libraries. There were likewise several improvements directly correlated to the PR, and no regressions were flagged from what we observed.

naricc (Contributor) commented Nov 15, 2021

It looks like we are also seeing this regression in Microbenchmarks: dotnet/perf-autofiling-issues#2173 (that is a link to the automated issue). I don't think it makes sense for me to create another one, but using the microbenchmarks might be easier for validation of a fix.

fanyang-mono (Member, Author) commented:

> @fanyang-mono can you share the runtime versions you used, and maybe do a run on coreclr for the same benchmarks between these two versions?

Runtime package versions around this regression:
7.0.0-alpha.1.21480.11 -> good
7.0.0-alpha.1.21480.12 -> regression

fanyang-mono (Member, Author) commented:

> It looks like we are also seeing this regression in Microbenchmarks: dotnet/perf-autofiling-issues#2173 (that is a link to the automated issue). I don't think it makes sense for me to create another one, but using the microbenchmarks might be easier for validation of a fix.

Among all the regressions filed for the issue above, these microbenchmarks showed a regression exactly on Sept 30th:

https://pvscmdupload.blob.core.windows.net/autofilereport/autofilereports/11_08_2021/refs/heads/main_x64_ubuntu%2018.04_LLVM=false_MonoAOT=true_MonoInterpreter=false_Regression/System.Text.Json.Tests.Perf_Reader.html

https://pvscmdupload.blob.core.windows.net/autofilereport/autofilereports/11_08_2021/refs/heads/main_x64_ubuntu%2018.04_LLVM=false_MonoAOT=true_MonoInterpreter=false_Regression/System.Memory.Span(Char).html
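
If it helps with validating a fix locally, here is a minimal, self-contained BenchmarkDotNet sketch in the spirit of the Perf_Reader benchmarks (hypothetical class name and payload, not the actual dotnet/performance code) that exercises a Utf8JsonReader read loop:

```csharp
// Hypothetical microbenchmark sketch: times a Utf8JsonReader token loop
// over a small in-memory UTF-8 payload.
using System.Text;
using System.Text.Json;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class JsonReaderBench
{
    private byte[] _utf8Json;

    [GlobalSetup]
    public void Setup() =>
        _utf8Json = Encoding.UTF8.GetBytes("{\"name\":\"value\",\"items\":[1,2,3,4,5]}");

    [Benchmark]
    public int ReadAllTokens()
    {
        var reader = new Utf8JsonReader(_utf8Json);
        int tokens = 0;
        while (reader.Read())
            tokens++;
        return tokens;
    }
}

public static class Program
{
    public static void Main() => BenchmarkRunner.Run<JsonReaderBench>();
}
```

Running it against the good and regressed runtime packages listed above should make the difference visible if the regression is on the Utf8JsonReader path.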

tannergooding (Member) commented Nov 16, 2021

Just noting a related PR went in: #60094

It would be interesting to see if Mono likewise sees a regression or any change here, particularly since these benchmarks are showing 43% and 25% improvements in our perf triage from today: dotnet/perf-autofiling-issues#2339 and dotnet/perf-autofiling-issues#2346

marek-safar added this to the .NET 7.0 milestone Nov 19, 2021
marek-safar removed the untriaged label Nov 19, 2021
jeffhandley modified the milestones: .NET 7.0, 7.0.0 Nov 22, 2021
fanyang-mono (Member, Author) commented:

The fix for this regression is tracked by #64072

tannergooding removed their assignment Jun 12, 2022
SamMonoRT (Member) commented:

@fanyang-mono: assigning to you for tracking.

SamMonoRT modified the milestones: 7.0.0, 8.0.0 Aug 9, 2022
fanyang-mono (Member, Author) commented:

Closing this issue as the planned work has finished.

ghost locked this issue as resolved and limited the conversation to collaborators Aug 14, 2023