High memory usage by packager (memory leak?) #61
What I can confirm: after an 8h test, memory usage increases by 0.1-0.2% per process (in my case). I also notice that when I stop the input stream (udp://224.0.0.1:50xxx, as in the example below), memory usage drops back to its initial value (0.3-0.4% per process in my case).

My CLI (the same for each stream):

/usr/src/edash_packager/src/out/Release/packager input=udp://224.0.0.1:50232,stream=audio,init_segment=/chunks/37/dash/live-audio-high.mp4,segment_template=/chunks/37/dash/live-audio-high-$Number$.mp4,bandwidth=128000 input=udp://224.0.0.1:50232,stream=video,init_segment=/chunks/37/dash/live-video-high.mp4,segment_template=/chunks/37/dash/live-video-high-$Number$.mp4,bandwidth=1000000 input=udp://224.0.0.1:50233,stream=audio,init_segment=/chunks/37/dash/live-audio-low.mp4,segment_template=/chunks/37/dash/live-audio-low-$Number$.mp4,bandwidth=32000 input=udp://224.0.0.1:50233,stream=video,init_segment=/chunks/37/dash/live-video-low.mp4,segment_template=/chunks/37/dash/live-video-low-$Number$.mp4,bandwidth=128000 --profile=live --fragment_duration 10 --segment_duration 10 --single_segment=false --time_shift_buffer_depth 30 --mpd_output /chunks/37/dash/manifest.mpd
Hi brylek, thanks for reporting this issue. It looks like there may be a resident-memory issue. We will take a look and get back to you once there are any findings.
Hi, after a 20h stress test, memory usage is still increasing. I can give you access to my system if needed; maybe it will help find the cause of the memory leak.

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
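To confirm this kind of slow growth numerically rather than eyeballing top, resident memory can be sampled straight from /proc on Linux. A minimal sketch, assuming a Linux /proc filesystem; the helper names ParseVmRssKb and ReadVmRssKb are hypothetical, not part of the packager:

```cpp
#include <fstream>
#include <iterator>
#include <sstream>
#include <string>

// Extracts the VmRSS value (resident set size, in kB) from the text of
// /proc/<pid>/status. Returns -1 if the field is not found.
long ParseVmRssKb(const std::string& status_text) {
  std::istringstream stream(status_text);
  std::string line;
  while (std::getline(stream, line)) {
    if (line.rfind("VmRSS:", 0) == 0) {  // line starts with "VmRSS:"
      std::istringstream fields(line.substr(6));
      long kb = -1;
      fields >> kb;  // skips leading whitespace, reads the numeric value
      return kb;
    }
  }
  return -1;
}

// Reads the current resident set size of a running process by pid,
// e.g. ReadVmRssKb(100971) for the packager process shown above.
long ReadVmRssKb(int pid) {
  std::ifstream file("/proc/" + std::to_string(pid) + "/status");
  std::string text((std::istreambuf_iterator<char>(file)),
                   std::istreambuf_iterator<char>());
  return ParseVmRssKb(text);
}
```

Calling ReadVmRssKb on the packager's pid once a minute and logging the result would give a leak graph directly in kB instead of top's coarse %MEM column.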
Profiling can be enabled by setting profiling=1 in gyp, e.g.

GYP_DEFINES="profiling=1" gclient runhooks

To turn on heap profiling, use the HEAPPROFILE environment variable to specify a filename for the heap profile dump, e.g.

HEAPPROFILE=/tmp/heapprofile out/Release/packager ...

To turn on CPU profiling, use the CPUPROFILE environment variable to specify a filename for the CPU profile dump, e.g.

CPUPROFILE=/tmp/cpuprofile out/Release/packager ...

Note that profiling may not work for debug builds, so use a release build if possible. See docs/linux_profiling.md for details.

This change will help identify and resolve the problem behind Issue #61.

Change-Id: I6f85a04ed82dd0cb3588e6b38e8ceb68dac6c436
We have tracked down the cause of the problem, which is related to the tracking of thread objects. Here is a tentative fix: 53dfd3e. You can sync to that revision with

gclient sync -r 53dfd3e
Let us know whether it fixes your problem. We will push the fix to master once it is fully tested.
Hi, after a 55h test memory usage increased up to 1%, so based on my graphs that is about 5G per day...

100971 www-data 5 -15 668188 327680 4332 S 3,3 1,0 50:54.50 packager

Patch installed, let me start the test again :) I will update you tomorrow.

simpleBox@ott:/usr/src/edash_packager$ /usr/src/depot_tools/gclient sync -r 53dfd3e
________ running '/usr/bin/python src/packager/tools/clang/scripts/update.py --if-needed' in '/usr/src/edash_packager'
________ running '/usr/bin/python src/gyp_packager.py --depth=src/packager' in '/usr/src/edash_packager'
Initial values after restart: ~5% CPU and 0,3% MEM

39169 www-data 5 -15 541524 92096 4524 S 5,3 0,3 0:02.56 packager
ThreadedIoFile spawns a new thread for every new file. Thread information is stored for tracking purposes by base::tracked_objects, and the tracking object remains even after the thread itself is destroyed. This results in memory usage growing by a couple of bytes for every new segment created in live mode (new segments spawn new threads). Use WorkerPool instead to avoid spawning new threads.

Fixes Issue #61.

Change-Id: Id93283903c3ba8ebf172a0d58e19b082a72c6cf0
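The commit message above describes the general pattern of the fix: reuse a fixed set of worker threads instead of spawning one thread per file. As a rough illustration only (this is not the packager's actual WorkerPool, whose interface I am only assuming by name), a minimal fixed-size pool in C++ looks like:

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// A minimal fixed-size worker pool: threads are created once and reused,
// so per-task bookkeeping does not accumulate the way per-thread tracking
// state did in the thread-per-file design.
class WorkerPool {
 public:
  explicit WorkerPool(size_t num_threads) {
    for (size_t i = 0; i < num_threads; ++i)
      workers_.emplace_back([this] { Run(); });
  }

  // Drains any queued tasks, then joins all workers.
  ~WorkerPool() {
    {
      std::lock_guard<std::mutex> lock(mutex_);
      done_ = true;
    }
    cv_.notify_all();
    for (std::thread& t : workers_) t.join();
  }

  // Queues a task; one of the existing workers will pick it up.
  void Post(std::function<void()> task) {
    {
      std::lock_guard<std::mutex> lock(mutex_);
      tasks_.push(std::move(task));
    }
    cv_.notify_one();
  }

 private:
  void Run() {
    for (;;) {
      std::function<void()> task;
      {
        std::unique_lock<std::mutex> lock(mutex_);
        cv_.wait(lock, [this] { return done_ || !tasks_.empty(); });
        if (done_ && tasks_.empty()) return;  // drained: exit the worker
        task = std::move(tasks_.front());
        tasks_.pop();
      }
      task();  // run outside the lock so other workers can dequeue
    }
  }

  std::mutex mutex_;
  std::condition_variable cv_;
  std::queue<std::function<void()>> tasks_;
  std::vector<std::thread> workers_;
  bool done_ = false;
};
```

In live mode, each new segment write would then become a Post() call onto long-lived threads rather than a fresh std::thread, which is why the per-segment memory growth disappears.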
This issue should now be fixed. Feel free to reopen it if you are still seeing problems.
Hi,
please take a look at my screenshot. I am doing a stress test on my system and every edash packager process uses 2% of installed RAM. I have 25 TV channels on my system (transcoding HD channels to pad/smartphone resolutions, HLS plus DASH packager, 2 profiles each), and the packager alone accounts for 50% RAM usage. Is this normal behavior?