'Realloc' could not re-allocate memory #100
Comments
Yeah, if so, we'll need to wait a decade or two :p
I'm not sure exactly where this happens, but I suspect we're running out of memory. If you specify readCounts <- binReadCounts(bins, bamfiles = bam, chunkSize = 10e6), then each BAM file will be read in chunks of 10 million basepairs instead of the whole genome at once. Does that make a difference for you? BTW, what's your sessionInfo()?
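A minimal sketch of that chunked read-in, assuming the bins were created with getBinAnnotations() (the 15 kbp bin size below is only illustrative):

```r
library(QDNAseq)

## Bin annotations; the 15 kbp bin size is illustrative, use whatever you normally do
bins <- getBinAnnotations(binSize = 15)

## Read each BAM file in 10 Mbp chunks rather than loading the whole genome at once
readCounts <- binReadCounts(bins, bamfiles = bam, chunkSize = 10e6)
```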
sessionInfo():
When I try to use chunkSize, I get an error.
Update the future package to fix this. But I'd say update all your packages, because several of them have been updated since you installed them.
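A sketch of one way to do that, assuming the packages came from CRAN/Bioconductor (exact commands depend on how they were installed):

```r
## Update the future package specifically
install.packages("future")

## Update everything else (CRAN and Bioconductor packages)
if (!requireNamespace("BiocManager", quietly = TRUE))
  install.packages("BiocManager")
BiocManager::install(update = TRUE, ask = FALSE)
```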
Yes indeed, it works better with up-to-date libraries ^_^! And dividing it into chunks did the trick; it now works fine. Thank you very much for your help!
Thanks for confirming. So, I'm thinking of making read-in-chunks the new default to avoid these hiccups (#101). To do this, we need to set a sensible default for chunkSize.
Knowing that will help in figuring out a sensible default.
Thanks - this is helpful. Back-of-the-envelope calculation:
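A rough sketch of that kind of arithmetic, assuming ~100 bp reads (an assumption, not stated in the thread) and the 300x coverage reported in this issue:

```r
## Rough, illustrative figures only
coverage    <- 300      # ~300x WGS, as reported in this issue
read_length <- 100      # assumed read length in bp (not stated in the thread)
chunk_bp    <- 10e6     # chunkSize of 10 Mbp

reads_per_chunk <- chunk_bp * coverage / read_length
reads_per_chunk                          # ~3e7 reads per chunk
reads_per_chunk < .Machine$integer.max   # TRUE: well within 32-bit integer range
```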
Hi,
I have some issue using QDNAseq with high coverage WGS (~300x).
Using this command:
readCounts <- binReadCounts(bins, bamfiles=bam)
throws this error:
'Realloc' could not re-allocate memory
I never encountered this error with my lower-coverage WGS (e.g. 50x), but it happens systematically with all the high-coverage samples.
I don't think it's a memory issue. The job has 200 GB of RAM available, and I monitored it: it grows to ~11 GB of RAM used before crashing. I also don't think it tries to allocate 16 EB of RAM :-), so the error message is probably misleading.
I didn't look much into the code, but I suspect a problem linked to the number of reads being greater than what a 32-bit integer can hold. With 50x WGS we have something around 1 billion reads (smaller than the max int value of 2147483647), while with 300x WGS it's more than 6 billion reads. So if 32-bit integers are used for e.g. storing the number of reads, I suppose we would have this kind of trouble.
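A small R sketch of what that overflow looks like, using the rough read counts above:

```r
.Machine$integer.max          # 2147483647, the largest value of R's 32-bit integers
1e9 < .Machine$integer.max    # TRUE: ~1 billion reads (50x) still fits
6e9 > .Machine$integer.max    # TRUE: ~6 billion reads (300x) does not
as.integer(6e9)               # NA, with a coercion warning: not representable
```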
What do you think can cause this issue?