
High memory usage by packager (memory leak?) #61

Closed
brylek opened this issue Dec 27, 2015 · 8 comments
Labels
status: archived Archived and locked; will not be updated

Comments


brylek commented Dec 27, 2015

Hi,

Please take a look at my screenshot. I am running a stress test on my system, and every edash packager process takes 2% of installed RAM. I have 25 TV channels on the system (transcoding HD channels down to tablet/smartphone resolutions, with HLS plus a DASH packager at 2 profiles each). The packager alone accounts for 50% of RAM usage — is this normal behavior?

brylek

(screenshot: 2015-12-27 at 13:29)


brylek commented Dec 28, 2015

All,

I believe this is a memory leak. After a reboot, memory consumption drops to 0.3%, and now the expected processes (ffmpeg transcoding) have the highest memory usage. Could you please check it?
(screenshot: 2015-12-28 at 08:14)

@brylek changed the title from "High memory usage by packager" to "High memory usage by packager (memory leak?)" on Dec 28, 2015

brylek commented Dec 28, 2015

What I can confirm: after an 8h test, memory usage increases (in my case) by 0.1–0.2% per process. I also noticed that when I stop the input stream (e.g. udp://224.0.0.1:50xxx, as in the example below), memory usage returns to its initial value (in my case 0.3–0.4% per process).

My CLI (the same for each stream):

/usr/src/edash_packager/src/out/Release/packager \
  input=udp://224.0.0.1:50232,stream=audio,init_segment=/chunks/37/dash/live-audio-high.mp4,segment_template=/chunks/37/dash/live-audio-high-$Number$.mp4,bandwidth=128000 \
  input=udp://224.0.0.1:50232,stream=video,init_segment=/chunks/37/dash/live-video-high.mp4,segment_template=/chunks/37/dash/live-video-high-$Number$.mp4,bandwidth=1000000 \
  input=udp://224.0.0.1:50233,stream=audio,init_segment=/chunks/37/dash/live-audio-low.mp4,segment_template=/chunks/37/dash/live-audio-low-$Number$.mp4,bandwidth=32000 \
  input=udp://224.0.0.1:50233,stream=video,init_segment=/chunks/37/dash/live-video-low.mp4,segment_template=/chunks/37/dash/live-video-low-$Number$.mp4,bandwidth=128000 \
  --profile=live --fragment_duration 10 --segment_duration 10 \
  --single_segment=false --time_shift_buffer_depth 30 \
  --mpd_output /chunks/37/dash/manifest.mpd


kqyang commented Dec 28, 2015

Hi brylek, thanks for reporting this issue. It looks like there may be a resident memory issue. We will take a look and get back to you once we have any findings.


brylek commented Dec 29, 2015

Hi,

After a 20h stress test, memory usage is still increasing. I can give you access to my system if needed; maybe it will help find the cause of the memory leak.

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
105375 www-data 5 -15 537332 207224 4452 S 5,0 0,6 20:30.63 packager
105867 www-data 5 -15 537336 176540 4372 S 5,0 0,5 21:35.89 packager
106958 www-data 5 -15 537188 181396 4428 S 5,0 0,6 21:08.56 packager
108232 www-data 5 -15 537336 187972 4496 S 5,0 0,6 18:20.83 packager
99734 www-data 5 -15 537112 177344 4484 S 4,6 0,5 19:56.09 packager
99847 www-data 5 -15 537208 179192 4340 S 4,6 0,5 20:28.30 packager
100588 www-data 5 -15 537376 181036 4380 S 4,6 0,6 21:13.64 packager
101540 www-data 5 -15 537240 185276 4288 S 4,6 0,6 20:12.49 packager
101606 www-data 5 -15 536668 180264 4428 S 4,6 0,5 19:22.22 packager
102596 www-data 5 -15 537464 189836 4340 S 4,6 0,6 21:55.37 packager
103835 www-data 5 -15 537068 186344 4444 S 4,6 0,6 20:23.66 packager
104852 www-data 5 -15 537068 177836 4392 S 4,6 0,5 19:56.85 packager
107466 www-data 5 -15 537080 177672 4484 S 4,6 0,5 19:04.71 packager
100453 www-data 5 -15 537068 192172 4452 S 4,3 0,6 20:42.03 packager
102193 www-data 5 -15 537300 202084 4288 S 4,3 0,6 19:58.45 packager
104472 www-data 5 -15 537068 193844 4408 S 4,3 0,6 20:02.34 packager
106038 www-data 5 -15 537352 185560 4444 S 4,3 0,6 21:13.11 packager
106531 www-data 5 -15 537192 189716 4376 S 4,3 0,6 19:19.28 packager
99534 www-data 5 -15 537232 197228 4288 S 4,0 0,6 18:50.14 packager
100971 www-data 5 -15 537116 187400 4332 S 4,0 0,6 19:44.41 packager
108435 www-data 5 -15 537108 211228 4452 S 4,0 0,6 19:01.88 packager
108897 www-data 5 -15 537244 201824 4444 S 4,0 0,6 19:14.18 packager

kqyang added a commit that referenced this issue Dec 30, 2015
Profiling can be enabled by setting profiling=1 in gyp, e.g.
  GYP_DEFINES="profiling=1" gclient runhooks

To turn on heap profiling, use the HEAPPROFILE environment variable
to specify a filename for the heap profile dump, e.g.
  HEAPPROFILE=/tmp/heapprofile out/Release/packager ...

To turn on cpu profiling, use the CPUPROFILE environment variable
to specify a filename for the cpu profile dump, e.g.
  CPUPROFILE=/tmp/cpuprofile out/Release/packager ...

Note that profiling may not work for debug builds, so use release
build if possible.

See docs/linux_profiling.md for details.

This change will help identify and resolve the problem behind Issue #61.

Change-Id: I6f85a04ed82dd0cb3588e6b38e8ceb68dac6c436

kqyang commented Dec 30, 2015

We have tracked down the cause of the problem, which is related to tracking of thread objects.

Here is a tentative fix: 53dfd3e. You can sync to that revision with

gclient sync -r 53dfd3e95f80cc36267ecc69a987c38ff59b7b1b

Let us know if it fixes your problem. We will push the fix to master once it is fully tested.


brylek commented Dec 30, 2015

Hi,

After a 55h test, memory usage had increased up to 1%, so based on my graphs that is about 5 GB per day...

Patch installed; let me start the test again :) I will update you tomorrow.

simpleBox@ott:/usr/src/edash_packager$ /usr/src/depot_tools/gclient sync -r 53dfd3e
Syncing projects: 100% (23/23), done.

________ running '/usr/bin/python src/packager/tools/clang/scripts/update.py --if-needed' in '/usr/src/edash_packager'
Clang already at 241602-3

________ running '/usr/bin/python src/gyp_packager.py --depth=src/packager' in '/usr/src/edash_packager'
Updating projects from gyp files...
simpleBox@ott:/usr/src/edash_packager$ cd src/
simpleBox@ott:/usr/src/edash_packager/src$ /usr/src/depot_tools/ninja -C out/Release
ninja: Entering directory `out/Release'
[52/52] STAMP obj/All.actions_depends.stamp

100971 www-data 5 -15 668188 327680 4332 S 3,3 1,0 50:54.50 packager
108232 www-data 5 -15 668408 332076 4496 S 3,3 1,0 48:16.72 packager
99734 www-data 5 -15 668184 331200 4484 S 3,0 1,0 51:35.90 packager
100588 www-data 5 -15 668448 312736 4380 S 3,0 1,0 55:00.91 packager
101606 www-data 5 -15 667740 310016 4428 S 3,0 0,9 50:40.55 packager
105375 www-data 5 -15 668404 310336 4452 S 2,3 0,9 54:50.90 packager
105867 www-data 5 -15 668408 318180 4372 S 2,3 1,0 56:16.00 packager
106038 www-data 5 -15 668424 340748 4444 S 2,3 1,0 55:09.92 packager
106958 www-data 5 -15 668260 330856 4428 S 2,3 1,0 55:08.08 packager
100453 www-data 5 -15 668140 347608 4452 S 2,0 1,1 52:51.18 packager
102193 www-data 5 -15 668372 340396 4288 S 2,0 1,0 51:42.22 packager


brylek commented Dec 30, 2015

Initial values: ~5% CPU and 0.3% MEM

39169 www-data 5 -15 541524 92096 4524 S 5,3 0,3 0:02.56 packager
39110 www-data 5 -15 537464 87972 4492 S 5,0 0,3 0:02.64 packager
39128 www-data 5 -15 537620 96232 4548 S 5,0 0,3 0:02.70 packager
39106 www-data 5 -15 537188 85204 4608 S 4,6 0,3 0:02.45 packager
39112 www-data 5 -15 537196 89256 4484 S 4,6 0,3 0:02.41 packager
39114 www-data 5 -15 537596 94836 4480 S 4,6 0,3 0:02.63 packager
39140 www-data 5 -15 537192 94360 4516 S 4,6 0,3 0:02.71 packager
39143 www-data 5 -15 537204 88324 4480 S 4,6 0,3 0:02.64 packager
39150 www-data 5 -15 542044 94608 4484 S 4,6 0,3 0:02.96 packager
39153 www-data 5 -15 537608 87260 4564 S 4,6 0,3 0:02.61 packager
39157 www-data 5 -15 541928 94976 4484 S 4,6 0,3 0:02.93 packager
39161 www-data 5 -15 537324 94328 4452 S 4,6 0,3 0:02.43 packager
39172 www-data 5 -15 537364 91360 4408 S 4,6 0,3 0:02.40 packager
39117 www-data 5 -15 537348 95112 4564 S 4,3 0,3 0:02.43 packager

kqyang added a commit that referenced this issue Jan 9, 2016
ThreadedIoFile spawns a new thread for every new file. Thread
information is stored for tracking purposes by base::tracked_objects.
The tracking object remains even after the thread itself is destroyed.
This results in memory usage growing by a couple of bytes for every
new segment created in live mode (new segments spawn new threads).

Use WorkerPool instead to avoid spawning new threads.

Fixes Issue #61.

Change-Id: Id93283903c3ba8ebf172a0d58e19b082a72c6cf0

kqyang commented Jan 13, 2016

This issue should now be fixed. Feel free to reopen it if you are still seeing problems.

@kqyang kqyang closed this as completed Jan 13, 2016
@shaka-bot shaka-bot added the status: archived Archived and locked; will not be updated label Apr 19, 2018
@shaka-project shaka-project locked and limited conversation to collaborators Apr 19, 2018