
Randomized allocation sampling #104955

Open · wants to merge 3 commits into main
Conversation

@noahfalk (Member) commented Jul 16, 2024

This feature allows profilers to do allocation profiling based on randomized samples. It has better theoretical and empirically observed accuracy than our current allocation profiling approaches while also maintaining low performance overhead. It is designed for use in production profiling scenarios. For more information about usage and implementation, see the included doc docs/design/features/RandomizedAllocationSampling.md.

[Updated 10/30]
This PR supersedes #100356. Currently it includes two commits: the first has the (mostly) rebased changes from July, and the second addresses a few things I missed in the rebase and follows up on issues flagged in July.

Most of the testing happened during July, but I did re-run some of the functional tests as a smoke test to ensure nothing broke in the few recent changes. We'll also get updated CI regression testing, and I'm coordinating with @chrisnas to have him validate that it works in a real profiler.

@dotnet-issue-labeler dotnet-issue-labeler bot added the needs-area-label An area label is needed to ensure this gets routed to the appropriate area owners label Jul 16, 2024
@noahfalk noahfalk self-assigned this Jul 16, 2024
@noahfalk noahfalk added area-VM-coreclr community-contribution Indicates that the PR has been added by a community member area-NativeAOT-coreclr and removed needs-area-label An area label is needed to ensure this gets routed to the appropriate area owners labels Jul 16, 2024
@noahfalk (Member Author) commented Jul 20, 2024

Functional testing found and fixed an off-by-one error in the RNG code but otherwise things looked fine. I also resynced this PR on top of the latest changes in #104849 and #104851. The last commit, now number 10, remains the interesting one.

I also did some performance testing using GCPerfSim as an allocation benchmark. My default configuration was 4 threads, workstation mode, and 500GB of allocations, entirely small objects with no survival; it is intended to put maximum stress on the allocation code paths. GCPerfSim command line: `-tc 4 -tagb 500 -tlgb 0.05 -lohar 0 -lohsr 100000-2000000 -sohsi 0 -lohsi 0 -pohsi 0 -sohpi 0 -lohpi 0 -sohfi 0 -lohfi 0 -pohfi 0 -allocType reference -testKind time`. I also ran a few variations that added modest amounts of survival and LOH allocations, which are a bit more realistic though still extremely allocation heavy.

EDIT: Don't rely on these numbers, they are misleading. See #104955 (comment)

Benchmarks - No Tracing enabled

| Scenario | Baseline time (s) | PR time (s) |
| --- | --- | --- |
| Default | 20.8 | 21.4 |
| Default + sohsi 100 | 40.9 | 41.8 |
| Default + sohsi 100 -lohar 10 | 42.8 | 43.9 |
| Default + sohsi 100 -lohar 10 -lohsi 200 | 43.6 | 44.6 |

Benchmarks - Tracing AllocSampling+GC keywords, verbose level

| Scenario | Baseline time (s) | PR time (s) |
| --- | --- | --- |
| Default | 22 | 23.1 |
| Default + sohsi 100 | 41.6 | 43.0 |
| Default + sohsi 100 -lohar 10 | 44.7 | 44.5 |
| Default + sohsi 100 -lohar 10 -lohsi 200 | 44.2 | 45.0 |

Overall it looks like the PR adds around 0.9 seconds to do 500GB of allocations. On a tight microbenchmark it's noticeable, and as other GC or non-allocation costs increase it becomes relatively less noticeable. I'm investigating to see if it can be improved at all.

@noahfalk (Member Author) commented

Continued perf investigation and testing has cast my previous results into doubt. After much searching for what could have caused the regression, my best explanation is that it actually had nothing to do with the source changes in this PR; instead it is either non-determinism in the build process or some user error on my part. I reached that conclusion by doing the following:

I've had a folder on my machine C:\git\runtime3 that throughout the entire process has been synced here:

commit 42b2b19e883f06af5771b5d85b26af263c62e781 (HEAD)
Author: Matous Kozak <[email protected]>
Date:   Fri Jul 12 09:42:55 2024 +0200

This folder has no changes from any of my PRs in it, and I've been using the build here for all the baseline measurements. Then I executed the following steps:

move artifacts -> artifacts_backup
build.cmd clr+libs -c release
src\tests\build.cmd generatelayoutonly Release
copy artifacts_backup -> artifacts_backup_2

I can consistently reproduce the same magnitude perf regression using the coreclr built in the artifacts directory, but the regression doesn't appear using the build in the backup or backup_2 directory. I've done many runs on each binary switching between them in a semi-randomized ordering trying to ensure that the results for each binary are repeatable and robust relative to background noise on the machine.

Beyond that, I've also got many other builds that include different subsets of the change, but there is no clear relationship between the source and the perf results. During one period I progressively added functionality starting from the baseline without triggering the regression; during another I progressively removed functionality from the final PR and the regression would always occur. Even deleting the entirety of the source changes in that folder and syncing it back to the baseline didn't eliminate the perf overhead. Every build was done in a new folder without an artifacts folder, to remove any opportunity for incremental build problems to play a role.

The only explanations that make sense to me are either (a) non-deterministic builds are giving bi-modal perf results for the same input source code, or (b) I am repeatedly making some other error in my testing methodology.

I'm going to see if I can get another machine to repeat some of the original experiments but at the moment I no longer have any evidence the PR is causing a regression.

@noahfalk (Member Author) commented

@jkotas @MichalStrehovsky - Functional and perf testing both look good now, all outstanding comments on the PRs have been addressed, and CI is green. From my perspective this is ready to be merged, unless any further review is planned?

I could check in #104849, #104851, and then this PR in sequence, but I'm not sure that gives any advantage over checking in this PR alone and closing #104849 and #104851 as no longer needed.

Contributor

Tagging subscribers to this area: @tommcdon
See info in area-owners.md if you want to be subscribed.

This feature allows profilers to do allocation profiling based on randomized samples. It has better theoretical and empirically observed accuracy than our current allocation profiling approaches while also maintaining low performance overhead. It is designed for use in production profiling scenarios. For more information about usage and implementation, see the included doc docs/design/features/RandomizedAllocationSampling.md

Much of this code was originally posted in dotnet#100356 by @chrisnas, but for logistical reasons we superseded it with this new PR.
  - Removed some code that was trying to access private GC APIs (gc_heap::heap_of).
  - Without the private APIs there is currently no way to access the heap number; however, Maoni believed this parameter is not particularly valuable in allocation investigations, so I removed it rather than changing the GC interface to re-implement it.
  - Updated some m_ naming conventions
@noahfalk (Member Author) commented

@brianrob @chrisnas - I wanted to give a heads up that I removed the HeapIndex parameter on the AllocationSampling event. The value wasn't readily accessible from outside the GC, and @Maoni0 felt it probably isn't very useful, so it was easiest to drop it. If you have any concerns, let me know.

@chrisnas - do you want to give this branch a spin with your profiler to test it out? I'm guessing this is the final source or pretty close to it.

@chrisnas (Contributor) commented

> @brianrob @chrisnas - I wanted to give a heads up that I removed the HeapIndex parameter on the AllocationSampling event. The value wasn't readily accessible from outside the GC and @Maoni0 recommended that it probably isn't very useful so it is easiest to drop it. If you have any concerns let me know.

Sounds good to me: I never found a usage for it :^)

> @chrisnas - do you want to give this branch a spin with your profiler to test it out? I'm guessing this is the final source or pretty close to it.

I'll run my tests on Monday - the rest of the week is off for me, sorry.

@noahfalk (Member Author) commented

> I'll run my tests on monday - the rest of the week is off for me sorry

Sounds great! It's not a rush :)

@noahfalk (Member Author) commented Nov 4, 2024

/ba-g remaining issues are deadletters for which an issue isn't appropriate

@noahfalk (Member Author) commented Nov 4, 2024

@jkotas - Thanks for the earlier feedback. I think that has been covered now but let me know if you have any other feedback. Thanks!

@noahfalk (Member Author) commented Nov 7, 2024

/ba-g remaining issues are deadletters for which an issue isn't appropriate

@noahfalk (Member Author) commented Nov 7, 2024

@chrisnas - just let me know whenever you've had a chance to test.
@jkotas - all the feedback so far should be addressed, let me know if there is anything else.

Thanks!

@@ -0,0 +1,21 @@
using System;
Member:
License headers?

@jkotas (Member) commented Nov 7, 2024

I do not have any additional feedback. As I have mentioned in #104955 (review), this should get a review from somebody familiar with tracing.

Comment on lines +19 to +21
void minipal_xoshiro128pp_init(struct minipal_xoshiro128pp* pState, uint32_t seed);

uint32_t minipal_xoshiro128pp_next(struct minipal_xoshiro128pp* pState);
Member:

We are adding doxygen-style docs to minipal APIs:

Suggested change
void minipal_xoshiro128pp_init(struct minipal_xoshiro128pp* pState, uint32_t seed);
uint32_t minipal_xoshiro128pp_next(struct minipal_xoshiro128pp* pState);
/**
* Initialize the Xoshiro128++ random number generator state
*
* @param pState Pointer to the Xoshiro128++ state structure to be initialized.
* @param seed Seed value to initialize the generator. Different seeds produce different random sequences.
*/
void minipal_xoshiro128pp_init(struct minipal_xoshiro128pp* pState, uint32_t seed);
/**
* Generate the next random number in the Xoshiro128++ sequence
*
* @param pState Pointer to the Xoshiro128++ state structure.
* @return The next 32-bit unsigned random number in the sequence.
*/
uint32_t minipal_xoshiro128pp_next(struct minipal_xoshiro128pp* pState);
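For context, xoshiro128++ is the public-domain generator by Blackman and Vigna, and its step function is well documented. The sketch below shows the reference `next` algorithm behind these declarations; the seeding routine is an assumption (a splitmix32-style mixer), since the PR's actual init code may differ:

```c
#include <stdint.h>

struct minipal_xoshiro128pp
{
    uint32_t s[4]; // 128 bits of generator state; must not be all zero
};

static inline uint32_t rotl32(uint32_t x, int k)
{
    return (x << k) | (x >> (32 - k));
}

// Expand a 32-bit seed into four state words. This splitmix32-style mixer is
// a hypothetical choice for the sketch, not necessarily what minipal uses;
// it guarantees a non-degenerate (not all-zero) starting state.
void minipal_xoshiro128pp_init(struct minipal_xoshiro128pp* pState, uint32_t seed)
{
    for (int i = 0; i < 4; i++)
    {
        seed += 0x9E3779B9u;
        uint32_t z = seed;
        z = (z ^ (z >> 16)) * 0x85EBCA6Bu;
        z = (z ^ (z >> 13)) * 0xC2B2AE35u;
        pState->s[i] = z ^ (z >> 16);
    }
}

// Reference xoshiro128++ step: output rotl(s0 + s3, 7) + s0, then advance
// the state with the xorshift/rotate transitions.
uint32_t minipal_xoshiro128pp_next(struct minipal_xoshiro128pp* pState)
{
    uint32_t* s = pState->s;
    uint32_t result = rotl32(s[0] + s[3], 7) + s[0];
    uint32_t t = s[1] << 9;

    s[2] ^= s[0];
    s[3] ^= s[1];
    s[1] ^= s[2];
    s[0] ^= s[3];
    s[2] ^= t;
    s[3] = rotl32(s[3], 11);

    return result;
}
```

Two generators initialized with the same seed produce identical sequences, which is what makes the sampling behavior reproducible for testing.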

@noahfalk (Member Author) commented Nov 7, 2024

> As I have mentioned in #104955 (review), this should get a review from somebody familiar with tracing.

@brianrob - would you mind taking a look if you haven't already?

@chrisnas (Contributor) commented Nov 8, 2024

> @chrisnas - just let me know whenever you've had a chance to test.

I'm back from a conference in Belgium and I'm starting to work on it.
I've validated the numbers with the "manual" tests from src\tests\tracing\eventpipe\randomizedallocationsampling\manual.
I now need to change our profiler to leverage the new event and then compare against the objects actually allocated in a more general scenario.
