perf: Add multipacket async read target buffer caching #285
Conversation
As noted in #245 (comment), we found that it isn't about LOH allocations, but rather that allocations made on other threads weren't being taken into account. BDN 0.15.0 fixed this.
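For context, a minimal sketch of the kind of BenchmarkDotNet setup involved (the benchmark name and body below are illustrative, not the actual benchmark from the report in #245): with `[MemoryDiagnoser]`, BDN versions before 0.15.0 did not count allocations made on threads other than the benchmark thread, which is how async work that completes on the thread pool could under-report allocated bytes.

```csharp
using System.Threading.Tasks;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

// Illustrative benchmark shape only; the real benchmark is the one from the
// report in #245. [MemoryDiagnoser] reports allocated bytes per operation,
// and, as noted above, BDN versions before 0.15.0 did not include allocations
// made on threads other than the benchmark thread.
[MemoryDiagnoser]
public class MultiPacketReadBenchmark
{
    [Benchmark]
    public async Task<int> ReadLargeValueAsync()
    {
        // Placeholder for an async read that completes on a thread-pool thread,
        // which is where the previously under-counted allocations would occur.
        return await Task.Run(() => new byte[64 * 1024].Length);
    }
}

public static class Program
{
    public static void Main() => BenchmarkRunner.Run<MultiPacketReadBenchmark>();
}
```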
@Wraith2 Can you resolve the conflicts and merge the latest from master?
Force-pushed from 98e73aa to 33735e4
Done
@roji BDN prior to that version uses …
@Suchiman thanks. I'm not sure if the discrepancies in this particular case were about the LOH allocations or about allocations in other threads, but it doesn't matter much.
…arts of dotnet#285 to netfx).
closes #245
Async reads that span multiple packets can repeatedly allocate large target arrays and drop them, causing a significant memory and CPU problem. This PR uses the async snapshot already present for all async calls to store the pre-sized target buffer between packet-receipt decode attempts, avoiding re-allocations.
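The general shape of the change, as a minimal sketch (the type and member names below are hypothetical, not the actual SqlClient internals): the snapshot object that already survives across packet arrivals keeps a reference to the pre-sized target buffer, so each decode retry reuses it instead of allocating a fresh array.

```csharp
// Minimal sketch of the buffer-caching idea. Type and member names here are
// hypothetical; the real change lives in the existing async snapshot code.
internal sealed class AsyncReadSnapshot
{
    // Target buffer sized for the full multi-packet value; kept alive between
    // decode attempts so a partial read does not force a re-allocation.
    private byte[] _cachedTargetBuffer;

    public byte[] GetTargetBuffer(int requiredLength)
    {
        // Reuse the buffer from an earlier, incomplete decode attempt if it is
        // large enough; otherwise allocate once at the required size.
        if (_cachedTargetBuffer == null || _cachedTargetBuffer.Length < requiredLength)
        {
            _cachedTargetBuffer = new byte[requiredLength];
        }
        return _cachedTargetBuffer;
    }

    // Drop the reference once the value has been fully materialized.
    public void ClearTargetBuffer() => _cachedTargetBuffer = null;
}
```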
Before:
After:
BDN isn't showing LOH allocations, which is why the memory numbers look odd. Using profiler results, you can see the savings.
Before:
After:
Overall: 3x faster, 400 times less GC, and 160 times less memory used on the benchmark provided in the report by @roji.