Regressions in SciMark2.kernel #77841
Comments
Suspect: #62689 cc @pedrobsaila

I couldn't figure out the best area label to add to this issue. If you have write-permissions please help me learn by adding exactly one area label.
Tagging subscribers to this area: @JulieLeeMSFT, @jakobbotsch

Issue Details

Run Information

Regressions in SciMark2.kernel

Repro
git clone https://github.com/dotnet/performance.git
py .\performance\scripts\benchmarks_ci.py -f net6.0 --filter 'SciMark2.kernel*'

Related Issues · Regressions · Improvements · Payloads · Histogram · Edge Detector Info

SciMark2.kernel.benchmarkLU

Description of detection logic

Docs
Profiling workflow for dotnet/runtime repository
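For readers unfamiliar with the repro, the `--filter 'SciMark2.kernel*'` glob is, as far as I know, passed through to BenchmarkDotNet and matched against fully qualified benchmark names (namespace.class.method). Below is a minimal sketch of my own, not the actual SciMark2 sources in dotnet/performance, showing the shape of a benchmark that the name `SciMark2.kernel.benchmarkLU` implies:

```csharp
using BenchmarkDotNet.Attributes;

namespace SciMark2
{
    // Hypothetical stand-in for the real kernel class; only the naming and
    // attribute layout matter for how the --filter glob matches.
    public class kernel
    {
        private double[] _data;

        [GlobalSetup]
        public void Setup() => _data = new double[1000];

        [Benchmark]
        public double benchmarkLU()
        {
            // Placeholder work instead of the real LU factorization.
            double sum = 0;
            for (int i = 0; i < _data.Length; i++)
                sum += _data[i] + i;
            return sum;
        }
    }
}
```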
My PR generated some regressions on SciMark2:

…

My change is probably the one causing the regression; let me see if I can fix them. Just one question about these comments on SciMark2:

…

Do I really risk something if I debug these tests locally?
We run them on a daily basis, so hopefully not? 🙂 No idea who put that comment there.
I think it's a joke -- best I can tell it was added when the benchmark code was ported from Java to C#, many years ago.
Let me know if you need any help digging into this.

The issues here might be similar to the problems we have with if conversion. For example, by changing something like

…

to

…

we run the risk that if … One way to avoid the potential downside is to not do this optimization when the predicates are in a loop, or when profile data indicates evaluation of …
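To make the trade-off concrete, here is a small sketch of my own (the code snippets from the comment above were not captured; Cheap, Expensive, and Process are made-up names). Rewriting a short-circuit test into an eager one removes a branch but can add work:

```csharp
using System;

// Hypothetical illustration of the if-conversion trade-off described above.
static class IfConversionSketch
{
    static bool Cheap(int i) => (i & 1) == 0;       // inexpensive predicate
    static bool Expensive(int i)                    // stands in for costly work
    {
        double x = i;
        for (int k = 0; k < 1_000; k++) x = Math.Sqrt(x + k);
        return x > 10;
    }
    static void Process(int i) => Console.WriteLine(i);

    static void Run(int n)
    {
        // Short-circuit form: Expensive(i) runs only when Cheap(i) is true.
        for (int i = 0; i < n; i++)
            if (Cheap(i) && Expensive(i))
                Process(i);

        // Eager ("if-converted") form: both predicates run on every iteration.
        // The branch on Cheap(i) goes away, but when Cheap(i) is mostly false
        // the loop now pays for Expensive(i) anyway.
        for (int i = 0; i < n; i++)
        {
            bool both = Cheap(i) & Expensive(i);
            if (both)
                Process(i);
        }
    }
}
```

This is why the comment above suggests being more careful inside loops or consulting profile data before applying the transformation.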
I ran the benchmark on x64/x86 Windows/Linux and I could not reproduce the 100 ms regression: I see only small regressions/improvements (less than 30 ms) depending on the environment. It would help to see assembly diffs, if you have an arm64 machine, just to be sure there's no bad assembly being generated.
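If it helps, one way to collect per-method listings for such a comparison is BenchmarkDotNet's disassembly diagnoser. This is only a sketch, assuming the benchmark can be run under BenchmarkDotNet directly; run it once on the baseline build and once on the compare build and diff the exported output:

```csharp
using BenchmarkDotNet.Attributes;

// Sketch only: DisassemblyDiagnoser asks BenchmarkDotNet to export the
// JIT-generated assembly for each [Benchmark] method. The class and method
// here are placeholders, not the real SciMark2 code.
[DisassemblyDiagnoser]
public class KernelDisasm
{
    private readonly double[] _data = new double[1024];

    [Benchmark]
    public double Sum()
    {
        double s = 0;
        for (int i = 0; i < _data.Length; i++)
            s += _data[i];
        return s;
    }
}
```

Alternatively, setting the DOTNET_JitDisasm environment variable to a method name when running the benchmark dumps the generated code for that method to the console (on older runtimes the variable is COMPlus_JitDisasm).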
Thanks for the hint. I'll see if I can address the regressions based on this. I see something similar in runtime/src/coreclr/jit/optimizer.cpp, lines 9693 to 9700 (at commit c04cf0c).
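I have not reproduced the code from those lines here; purely as an illustration of the kind of profitability check being discussed (all names and the threshold below are made up, and the real JIT logic is C++ and more involved), it amounts to something like:

```csharp
// Hypothetical sketch, not the actual logic from optimizer.cpp.
static class IfConversionHeuristicSketch
{
    public static bool ShouldIfConvert(bool secondOperandInLoop,
                                       double probSecondOperandNeeded,
                                       double secondOperandCost)
    {
        // Be conservative inside loops, where any wasted evaluation is multiplied.
        if (secondOperandInLoop)
            return false;

        // Under short-circuit evaluation the operand is skipped (1 - p) of the
        // time; converting makes that work unconditional. Only convert when the
        // expected wasted work is smaller than the cost of keeping the branch.
        const double branchCost = 1.0; // arbitrary illustrative figure
        double wastedWork = (1.0 - probSecondOperandNeeded) * secondOperandCost;
        return wastedWork <= branchCost;
    }
}
```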
I've been playing for a while with the SciMark2.kernel perf tests, and I don't think the regression is caused by the fact that we might evaluate an expensive … Here are the assembly diffs on Windows x64 (same diffs for Linux x64) for the main methods of SciMark2.kernel:

…

I did not find any suspicious diffs in the assembly. The diffs are not significant enough to produce notable improvements or regressions.
Likely from #77728
Run Information

Regressions in SciMark2.kernel

Test Report

Repro

Related Issues
- Regressions
- Improvements

Payloads
- Baseline
- Compare

Histogram

Edge Detector Info

Collection Data

SciMark2.kernel.benchmarkLU

Description of detection logic

Docs
- Profiling workflow for dotnet/runtime repository
- Benchmarking workflow for dotnet/runtime repository