Memory Leak on Subscription (Windows) #500
Comments
Hi @nirbenda, do you know approximately how many messages you have waiting to be processed? If you're only getting 1 every 2 seconds, that would mean over the course of 10 minutes you've only processed about 300 messages.
Hi @callmehiphop, thanks for looking into this.
I guess what I'm trying to get at is whether your application is expecting about 30 messages per minute. One possible cause of a memory leak could be the client's flow control mechanisms not functioning properly, but in this case that seems unlikely given the low number of pending messages.
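For context, flow control caps how many unacknowledged messages the client buffers in memory at once. A minimal sketch of how it is typically configured on a subscription; the subscription name and limits here are illustrative, not taken from this thread:

```js
const {PubSub} = require('@google-cloud/pubsub');

const pubsub = new PubSub();

// Cap how many unacked messages (and bytes) the client will hold
// before it stops pulling more from the server.
const subscription = pubsub.subscription('my-subscription', {
  flowControl: {
    maxMessages: 100,
    maxBytes: 10 * 1024 * 1024
  }
});

subscription.on('message', message => message.ack());
```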
After a while I get this heap out of memory message:
@nirbenda so far I haven't been able to reproduce the issue you're seeing. I thought maybe it was specific to Windows, but my PC runs your reproduction code well enough and memory seems to sit around 60MB on my machine. I'm running the following publisher script at the same time to try and emulate getting 1 message every 2 seconds.

```js
const {PubSub} = require('@google-cloud/pubsub');
const pubsub = new PubSub();
const topic = pubsub.topic('my-topic', {
  batching: {maxMilliseconds: 0}
});
const message = Buffer.from('Hello, world!');
setInterval(() => topic.publish(message), 2000);
```

Is there anything I could do differently that would give me the errors you're seeing?
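(For context: `batching: {maxMilliseconds: 0}` disables the publisher's batching delay, so each `publish()` call is sent immediately rather than being held to batch with later messages, which is what produces a steady one-message-every-two-seconds stream.)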
@nirbenda Our dependency list looks the same, although I'm not getting any
I originally got this problem on another project of mine, but I created this completely separate project that includes only the code presented above.
@nirbenda this might not make any difference, but can you try loading
@nirbenda Could you share your
@ajaaym, I tried what you suggested, same results :(.
Unfortunately I'm still unable to reproduce :(. Would you mind testing with older versions of the client? If we can find the last version that does not produce this error, it might help us determine what changes could be causing it.
@callmehiphop Sure, which version do you suggest starting from?
Let's try the following:
@callmehiphop In version 0.24.1 I could not reproduce this issue. I will let it run for a couple of hours just to be sure.
@nirbenda I don't think there's any need to check other versions, I'm going to go through the commits between 0.24.1 and 0.25.0 and see if there's anything obvious that might be causing the issue. Thanks for testing it out! |
@callmehiphop unfortunately, it took longer, but it happened in 0.24.1 as well.
@callmehiphop In version 0.23.0 it doesn't seem to reproduce. I am having another issue after running for a while (ERROR: error: 14 UNAVAILABLE: TCP Read failed), but I think this might already be fixed in newer versions.
@nirbenda thanks for the report! It looks like between 0.23.0 and 0.24.1 we upgraded 2 of our core dependencies. I suspect the issue is coming from one of them, but I'm probably going to need your help determining which one, since I've been largely unsuccessful at reproducing your issue so far 😢 The dependencies in question are:

It looks like neither one required a code change, so we should be able to just roll back the versions in order to test.
I tried reverting those two libraries to previous versions, but the memory leak still occurred after a while. |
@callmehiphop By the way, on version 0.23.0 I get this error EXACTLY after 900 messages (checked twice), which makes it difficult to test a long-running process. Although monitoring the memory on 0.23.0 seems to show no indication of a memory leak.
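As an aside, a simple way to watch memory from inside the process while a test like this runs is to poll `process.memoryUsage()`; a minimal sketch (the interval and formatting are arbitrary):

```js
// Print resident set size and V8 heap usage every 10 seconds.
setInterval(() => {
  const {rss, heapUsed, heapTotal} = process.memoryUsage();
  const mb = n => `${(n / 1024 / 1024).toFixed(1)} MB`;
  console.log(`rss=${mb(rss)} heapUsed=${mb(heapUsed)} heapTotal=${mb(heapTotal)}`);
}, 10 * 1000);
```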
I'm at a bit of a loss really; being unable to reproduce makes it almost impossible to determine the root issue. If I had to speculate, I would think the memory leak would come from 1 of 2 places:
Which would lead me to guess that this might be a grpc error, since we've kind of ruled out the possibility of a flow control bug. @nirbenda if you want to try profiling the memory leak and share the results back, that might help. Otherwise maybe we should try and pull in another pair of eyes. @stephenplusplus do you have any bandwidth to try and reproduce this?
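For anyone who wants to capture such a profile, one common approach is to take periodic heap snapshots and diff them in Chrome DevTools' Memory tab. A minimal sketch using the built-in `v8.writeHeapSnapshot()` (available in newer Node releases; on older Node versions the third-party `heapdump` module fills the same role):

```js
const v8 = require('v8');

// Write a .heapsnapshot file every 5 minutes; load successive files
// into Chrome DevTools (Memory tab) and compare them to see which
// allocations keep growing.
setInterval(() => {
  const file = v8.writeHeapSnapshot();
  console.log(`heap snapshot written to ${file}`);
}, 5 * 60 * 1000);
```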
@JustinBeckwith is there anyone else we could try and pull in on this to attempt a repro? |
There hasn't been any activity here in quite some time, so I'm going to close this for now. If any new info becomes available, I'll be more than happy to re-open and investigate again.
Hello,
I am experiencing what seems to be a memory leak.
At first the Node.js process memory is ~40MB, and after about 5-10 minutes (of continuously receiving messages every two seconds and acking them) it rapidly grows from 40MB to 1.6GB in a matter of seconds; after a few minutes, Node crashes with OOM because it exceeded the maximum allowed memory allocation.
Inspecting the allocations points to the promisify and JSBufferArray allocations.
I might be doing something wrong, but this code was taken from the examples (only the listener timeout was removed).
Help would be much appreciated.
Thanks,
Nir
Environment details
@google-cloud/pubsub version: 0.25.0
Steps to reproduce
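Based on the description above (a listener taken from the standard samples with the timeout removed, receiving and acking a message every two seconds), a minimal sketch of such a listener, with an illustrative subscription name:

```js
const {PubSub} = require('@google-cloud/pubsub');

const pubsub = new PubSub();
const subscription = pubsub.subscription('my-subscription');

// Ack every message as it arrives and keep listening indefinitely
// (the sample's setTimeout that removes the listener has been dropped).
subscription.on('message', message => {
  console.log(`Received message ${message.id}`);
  message.ack();
});

subscription.on('error', err => {
  console.error('Subscription error:', err);
});
```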