feat: memory profiling #524
Conversation
* refactor: move parseEndpoint into utils
* feat: implement exporting pprof
* fix: add missing attributes
* fix: fix message on debug logging
* fix: force logs endpoint to collector
* feat: take OTLP endpoint for a default
* Change files
Codecov Report
```diff
@@            Coverage Diff             @@
##             main     #524      +/-  ##
==========================================
- Coverage   88.94%   87.24%   -1.71%
==========================================
  Files          27       27
  Lines         914      972      +58
  Branches      204      210       +6
==========================================
+ Hits          813      848      +35
- Misses        101      124      +23
```
docs/profiling.md (Outdated)

```markdown
### Memory profiling

Memory profiling is disabled by default, it can be enabled via the `memoryProfilingEnabled` flag.
```
Suggestion: Memory profiling is disabled by default. You can enable it via the `memoryProfilingEnabled` flag.
Thanks! Perhaps you already know, but GitHub lets you add suggestions directly to the comment via a `suggestion` code block, so the author can press a button to bring the change in, for example:

> Memory profiling is disabled by default, it can be enabled via the `memoryProfilingEnabled` flag.

```suggestion
Memory profiling is disabled by default. You can enable it via the `memoryProfilingEnabled` flag.
```
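For context, enabling the flag would presumably look something like this in the SDK setup (a hedged sketch: the `start()` options shape below is an assumption based on the flag name in the docs, not confirmed against this PR):

```javascript
// Hypothetical sketch: the exact `start()`/`profiling` option shape is an
// assumption; see docs/profiling.md in this PR for the real surface.
const { start } = require('@splunk/otel');

start({
  serviceName: 'my-service',
  profiling: {
    memoryProfilingEnabled: true, // disabled by default
  },
});
```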
docs/profiling.md (Outdated)

```markdown
Internally the profiler uses V8's sampling heap profiler, where it periodically queries for new allocation samples from the allocation profile.

The [V8 heap profiler's parameters](https://v8.github.io/api/head/classv8_1_1HeapProfiler.html#a6b9450bbf1f4e1a4909df92d4df4a174) can additionally be tuned by an optional `memoryProfilingOptions` configuration field:
```
Suggestion: You can tune the V8 heap profiler's parameters using the `memoryProfilingOptions` configuration field:
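As a hedged illustration of that field (the option names below are assumptions modeled on V8's sampling heap profiler parameters, i.e. the average sampling interval in bytes and the captured stack depth, and are not confirmed against this PR):

```javascript
// Hypothetical option names, mirroring v8::HeapProfiler's
// StartSamplingHeapProfiler parameters (sample interval, stack depth).
const { start } = require('@splunk/otel');

start({
  profiling: {
    memoryProfilingEnabled: true,
    memoryProfilingOptions: {
      maxSamplingInterval: 1024 * 128, // avg bytes between samples (assumed name)
      maxStackDepth: 256,              // frames captured per sample (assumed name)
    },
  },
});
```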
Mainly curious questions.
```cpp
auto jsResult = Nan::New<v8::Object>();
auto jsSamples = Nan::New<v8::Array>();
auto jsNodeTree = Nan::New<v8::Object>();
```
Are the node IDs unique across the lifetime of the program? I ask to suss out why you use an object instead of an array, where the keys are already integers; that saves a conversion and perhaps some (type) errors (even though `arr[2] === arr['2']`).
Node ID is an incrementing counter. No idea how large it gets or how many gaps will exist. Just used a denser form 🤷‍♂️
```js
const arr = [];
arr[5] = "hello";
arr[10] = "world";
```

makes a "holey" array, which is equivalent in terms of density. I assume calling the same API from the native side behaves the same.
Tested it and it actually turns into the `DICTIONARY_ELEMENTS` type even with a small app, as the node IDs get large, so it's basically the same as using an object 🤔 And for some reason the average read speed was actually faster when using an object (no idea why), while the write speeds stayed the same.
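The sparse shape that triggers that transition can be reproduced from plain JS (inspecting the actual elements kind would need `node --allow-natives-syntax`, which is out of scope here); the node IDs below are made up for illustration:

```javascript
// Made-up node IDs with large gaps, like the profiler's call-graph node IDs.
const nodeIds = [3, 1042, 98304, 1 << 20];

const byArray = [];  // becomes a sparse ("holey") array as the gaps grow
const byObject = {}; // a plain dictionary from the start

for (const id of nodeIds) {
  byArray[id] = { id };
  byObject[id] = { id };
}

// Both forms resolve a node by its ID the same way:
console.log(byArray[98304].id);            // 98304
console.log(byObject[98304].id);           // 98304
console.log(byArray.length);               // 1048577: length tracks largest index + 1
console.log(Object.keys(byObject).length); // 4: only the populated IDs
```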
tldr

* Uses `v8::HeapProfiler` to periodically capture an allocation profile along with the samples.
* Each sample has a `node_id`, which is its node ID in the call graph. To generate a stack trace this node needs to be found from the allocation profile.

Misc

* `prebuild:os` script
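The node lookup described in the tldr can be sketched roughly like this (a hedged sketch: the node shape loosely mirrors V8's `AllocationProfile` call-graph nodes, and the function name and fields are assumptions, not the addon's actual code):

```javascript
// Walk the allocation profile's call-graph tree to recover the call path
// (stack trace) for a sample, given the sample's node ID.
// Assumed node shape: { id, name, children: [...] }.
function findStack(node, nodeId, path = []) {
  const next = [...path, node.name];
  if (node.id === nodeId) return next;
  for (const child of node.children || []) {
    const stack = findStack(child, nodeId, next);
    if (stack) return stack;
  }
  return undefined;
}

// Tiny made-up profile tree:
const root = {
  id: 1, name: '(root)',
  children: [
    { id: 2, name: 'main', children: [{ id: 5, name: 'allocate', children: [] }] },
  ],
};

console.log(findStack(root, 5)); // [ '(root)', 'main', 'allocate' ]
```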