[Perf] Linux/x64: 8 Regressions on 5/12/2023 7:53:45 AM #17715
Comments
@pavelsavara looks like dotnet/runtime#85730
It seems to me that the test itself needs fixing. Where do I find the code which measures this?
These Blazor scenarios are located here: https://github.com/dotnet/performance/blob/main/eng/performance/blazor_scenarios.proj and the
@LoopedBard3 How could we see more details about this regression? Is there an archive with the build somewhere, or a log? Are these clean builds?
@radekdoulik Since this is wasm, this Blazor scenario set is defined here: https://github.com/dotnet/runtime/blob/main/eng/testing/performance/blazor_perf.proj, although it appears to be the same as the blazor_scenarios.proj file in the performance repo. As for the archive and logs: once you find the runtime git commits you want to compare, the runs are completed in this pipeline: https://dev.azure.com/dnceng/internal/_build?definitionId=702&_a=summary. I can find the two specific pipeline runs if you provide the baseline and compare commits. Within a pipeline run, the job for this test is "Performance linux x64 release wasm JIT blazor_scenarios perftiger NoJS False net8.0". Artifacts can be found in the AzDO artifacts tab or through the Helix API by clicking the "Send to Helix" job step and following the work item links.
Where is the code that chooses which files should be measured?
Do you mean which csproj files are used for the tests? If so, since this was a wasm compilation with Blazor scenarios, the run was specified here: https://github.com/dotnet/runtime/blob/main/eng/pipelines/coreclr/perf-wasm-jobs.yml#L153-L173
No, I mean the code that finds the files to measure.
Ah, I think I understand. The primary code for the size-gathering tool is located at https://github.com/dotnet/performance/blob/main/src/tools/ScenarioMeasurement/SizeOnDisk/SizeOnDisk.cs. The general flow is: the app is published to a "pub" directory; the SizeOnDisk test is started with https://github.com/dotnet/performance/blob/main/src/scenarios/blazorminapp/test.py; test.py then calls https://github.com/dotnet/performance/blob/main/src/scenarios/shared/runner.py (around L783-L790), which uses a wrapper to invoke the SizeOnDisk tool.
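To make the flow above concrete, here is a minimal sketch of what a size-on-disk measurement amounts to: walk the publish ("pub") directory and aggregate file sizes, grouped by extension so regressions can be attributed to a file class (e.g. .wasm vs .dll). This is an illustrative simplification, not the actual SizeOnDisk.cs implementation; the function name and return shape are assumptions.

```python
import os

def size_on_disk(pub_dir):
    """Walk the publish directory and return (total_bytes, per-extension sizes)."""
    sizes = {}
    total = 0
    for root, _dirs, files in os.walk(pub_dir):
        for name in files:
            path = os.path.join(root, name)
            size = os.path.getsize(path)
            # Group by extension so a regression can be traced to a file class.
            ext = os.path.splitext(name)[1] or "(none)"
            sizes[ext] = sizes.get(ext, 0) + size
            total += size
    return total, sizes
```

A regression like the one in this issue would show up as an increase in the total (or in one extension's bucket) between the baseline and compare publishes.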
cc @SamMonoRT
Probably caused by dotnet/runtime#88142 and fixed by dotnet/runtime#89079
@lewing Could you please teach me how to validate whether this is fixed? Many thanks!
I think we're looking into all the size work now, and there isn't anything specific worth tracking here, so we can call this closed. @radekdoulik feel free to close it.
Run Information
Regressions in SOD - Minimum Blazor Template - Publish
Test Report
Repro
General Docs link: https://github.com/dotnet/performance/blob/main/docs/benchmarking-workflow-dotnet-runtime.md
Payloads
Baseline
Compare
Histogram
SOD - Minimum Blazor Template - Publish
Description of detection logic
JIT Disasms
Docs
Profiling workflow for dotnet/runtime repository
Benchmarking workflow for dotnet/runtime repository