Out of Memory during garbage collection #5
Are there no other servers running at the same time? The current implementation uses a single file watcher instance for however many servers are started. Can you reproduce this issue with just this single project being opened in ST? On Mac the LSP-file-watcher-chokidar process seems to be using around 60 MB on that project, but since the file watching implementations can vary greatly between operating systems, that might not mean much.
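The memory figures discussed in this thread are consistent with per-path bookkeeping. As a rough, hypothetical illustration (this is not the plugin's or chokidar's actual code, and the paths are made up), a watcher that keeps one metadata entry per watched file grows linearly with the number of paths, and a Rust workspace with a populated `target/` directory plus hundreds of dependencies can contain a very large number of files:

```javascript
// Toy sketch, NOT the plugin's implementation: simulate a watcher that
// keeps per-path metadata in a Map. Memory grows linearly with the
// number of tracked paths.
const before = process.memoryUsage().heapUsed;

const registry = new Map();
for (let i = 0; i < 200000; i++) {
  // Hypothetical paths; stands in for whatever bookkeeping a watcher keeps.
  registry.set(`/repo/target/debug/deps/unit_${i}.o`, {
    mtimeMs: Date.now(),
    size: 0,
  });
}

const after = process.memoryUsage().heapUsed;
console.log(
  `${registry.size} tracked paths -> ~${Math.round((after - before) / 1e6)} MB of heap`
);
```

This is only meant to show the scaling behavior; whether a given watcher backend holds comparable state per path depends entirely on the operating system and implementation, as the comment above notes.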
Heya, I haven't gathered solid evidence, but I think there's only one instance -- I only use [...]. More importantly, I think the issue is aggravated by what I was doing, which is a combination of the following: [...]
I switched back to the vendored RA, and the out-of-memory crash from the chokidar plugin still happened, with a slightly shorter stack trace (I removed all the `LSP-file-watcher-chokidar: ERROR: ` prefixes):

```
<--- Last few GCs --->
[6820:0x6804010] 653289 ms: Mark-Compact 7993.0 (8234.1) -> 7981.7 (8238.6) MB, 3425.54 / 0.00 ms (average mu = 0.546, current mu = 0.012) allocation failure; scavenge might not succeed
[6820:0x6804010] 658644 ms: Mark-Compact 7997.6 (8238.6) -> 7986.6 (8243.9) MB, 5327.27 / 0.00 ms (average mu = 0.306, current mu = 0.005) allocation failure; scavenge might not succeed
<--- JS stacktrace --->
FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory
 1: 0xc8d700 node::Abort() [/home/azriel/.nvm/versions/node/v20.2.0/bin/node]
 2: 0xb6b8f3 [/home/azriel/.nvm/versions/node/v20.2.0/bin/node]
 3: 0xeac370 v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, v8::OOMDetails const&) [/home/azriel/.nvm/versions/node/v20.2.0/bin/node]
 4: 0xeac657 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, v8::OOMDetails const&) [/home/azriel/.nvm/versions/node/v20.2.0/bin/node]
 5: 0x10bdcc5 [/home/azriel/.nvm/versions/node/v20.2.0/bin/node]
 6: 0x10d5b48 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [/home/azriel/.nvm/versions/node/v20.2.0/bin/node]
 7: 0x10abc61 v8::internal::HeapAllocator::AllocateRawWithLightRetrySlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [/home/azriel/.nvm/versions/node/v20.2.0/bin/node]
 8: 0x10acdf5 v8::internal::HeapAllocator::AllocateRawWithRetryOrFailSlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [/home/azriel/.nvm/versions/node/v20.2.0/bin/node]
 9: 0x108a366 v8::internal::Factory::NewFillerObject(int, v8::internal::AllocationAlignment, v8::internal::AllocationType, v8::internal::AllocationOrigin) [/home/azriel/.nvm/versions/node/v20.2.0/bin/node]
10: 0x14e5196 v8::internal::Runtime_AllocateInYoungGeneration(int, unsigned long*, v8::internal::Isolate*) [/home/azriel/.nvm/versions/node/v20.2.0/bin/node]
11: 0x7f031bed9ef6
LSP-file-watcher-chokidar: Watcher process ended. Exception: None
```

It's much more stable now when using the vendored RA, so I guess that means: [...]
Sorry, I don't have logs from the [...]
Heya, I'm using this alongside LSP-rust-analyzer, and getting the following crash: [...]
The 8 gigs is what I added to my environment using: [...]
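The exact command was lost in this excerpt. A typical way to give Node an 8 GiB heap, consistent with the ~8192 MB limit visible in the GC trace above, is via the `NODE_OPTIONS` environment variable; this is an assumed reconstruction, not necessarily the command actually used:

```shell
# Assumed reconstruction (the original command is missing from the thread):
# raise V8's old-space limit to 8192 MB for any Node process started in
# this environment, including the watcher child process.
export NODE_OPTIONS="--max-old-space-size=8192"
```

Note that raising the limit only delays the crash if the watcher's memory use genuinely grows with the number of watched files; it does not address the underlying growth.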
I couldn't figure out why so much memory is being used, but the code I work with is relatively large (repo, 55k LOC for the project itself, plus 429 dependencies).
Can you see something in that stack trace that I can't?