Graceful handling of low memory states #2398
Unanswered
NathanSavageUoP asked this question in Q&A
-
Yes, in general you should not store big chunks of data in the jobs, as RAM is much more expensive than disk, so you should offload the heavy elements to a different storage system or a disk-based database. Regarding 3., do you have a call stack where this happens? In theory, if you handle all the errors appropriately, it should not crash the process. There are also some hints here that may be useful for you: https://docs.bullmq.io/guide/going-to-production and specifically this one: https://docs.bullmq.io/guide/going-to-production#unhandled-exceptions-and-rejections
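As a rough illustration of the "unhandled exceptions and rejections" hint above, here is a minimal sketch of process-level safety nets for a Node.js worker process. The handlers here just log; a real worker might also close its queues and exit cleanly. The function name `installSafetyNets` is illustrative, not a BullMQ API.

```javascript
// Minimal sketch of process-level safety nets, along the lines of the
// linked "unhandled exceptions and rejections" guidance. Registering
// these handlers keeps an unexpected error from crashing the process
// mid-job.
function installSafetyNets(log = console.error) {
  const onRejection = (reason) => {
    log('Unhandled rejection:', reason);
  };
  const onException = (err) => {
    log('Uncaught exception:', err);
    // e.g. flush logs, close the Worker, then process.exit(1)
  };
  process.on('unhandledRejection', onRejection);
  process.on('uncaughtException', onException);
  return { onRejection, onException }; // returned so they can be removed later
}
```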
-
Is your feature request related to a problem? Please describe.
I'm using BullMQ to schedule and handle web data scraping for a project. It involves downloading HTML pages as well as binary document types such as DOCX and PDF, running a feature-extraction job against them, shrinking and compressing any images, and finally saving them to a DB. Recently I have been running out of memory, both in the JS heap and in Redis, on jobs with large payloads or return values.
Describe the solution you'd like
There are a few ways I can think of to improve handling of large payloads, and specifically buffers.
Describe alternatives you've considered
I'm going to implement a "pass by reference" type system for the binary data I'm passing between jobs, which should hopefully solve issues 1 and 2. I'm also going to implement a system that pauses particular queues based on the amount of available JS and Redis memory, and unpauses them once the in-flight jobs have completed and memory consumption has gone down again.