
Uncontrolled memory usage growth when covering a large number of services #28

Closed
dvhh opened this issue Jul 11, 2019 · 11 comments
Labels: bug (Something isn't working)

Comments

@dvhh

dvhh commented Jul 11, 2019

Covering >20 services (using the Docker container), memory usage increases steadily until the memory limit is reached (~1GB).
[screenshot: container memory usage graph]

@peterbourgon
Contributor

Which memory metric is this?

@peterbourgon
Contributor

Can you give https://github.com/peterbourgon/fastly-exporter/releases/tag/v2.2.1 a shot?

@peterbourgon added the bug label on Jul 11, 2019
@dvhh
Author

dvhh commented Jul 12, 2019

Hello, sorry for the late reply. I will give that version a test and will report if the same issue occurs. Unfortunately, the memory metric is ad-hoc monitoring expressing memory usage as a % of the container limit (and probably missing a few beats).

[screenshot: container memory usage in % of the limit]

Metrics for:

  • go_memstats_heap_alloc_bytes (stable)
  • go_memstats_heap_idle_bytes (increases over the process lifetime)
  • go_memstats_heap_sys_bytes

These metrics were produced before switching to the suggested version (see the sketch below for what each gauge measures).
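For reference, here is a minimal Go sketch of the runtime.MemStats fields behind these go_memstats_* gauges, assuming the standard prometheus/client_golang Go collector naming; it is an illustration, not code from the exporter:

```go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)

	// go_memstats_heap_alloc_bytes: bytes of live, allocated heap objects.
	fmt.Printf("heap_alloc:    %d\n", m.HeapAlloc)
	// go_memstats_heap_idle_bytes: heap spans with no objects in them,
	// still held by the Go runtime and not necessarily returned to the OS.
	fmt.Printf("heap_idle:     %d\n", m.HeapIdle)
	// go_memstats_heap_sys_bytes: total heap memory obtained from the OS.
	fmt.Printf("heap_sys:      %d\n", m.HeapSys)
	// go_memstats_heap_released_bytes: the part of HeapIdle that has
	// already been returned to the OS.
	fmt.Printf("heap_released: %d\n", m.HeapReleased)
}
```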

@dvhh
Author

dvhh commented Jul 12, 2019

Started running the new version, will update with reported metrics in ~24h

@peterbourgon
Contributor

Bear in mind also that the operating system may not reclaim idle memory, so steadily increasing values there don't necessarily imply a memory leak.
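To make that point concrete, here is a small stand-alone Go sketch (not part of fastly-exporter) of the heap memory the runtime retains without returning it to the OS, and of forcing a release with debug.FreeOSMemory; the helper name retainedBytes is invented for the example:

```go
package main

import (
	"fmt"
	"runtime"
	"runtime/debug"
)

// retainedBytes reports heap memory the Go runtime is holding on to
// but has not yet returned to the operating system. Growth here
// inflates the container's RSS without any live-object leak.
func retainedBytes() uint64 {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	return m.HeapIdle - m.HeapReleased
}

func main() {
	fmt.Printf("retained before: %d bytes\n", retainedBytes())

	// Ask the runtime to return as much memory to the OS as possible.
	debug.FreeOSMemory()

	fmt.Printf("retained after:  %d bytes\n", retainedBytes())
}
```

HeapIdle minus HeapReleased is exactly the memory that can make idle-heap growth look like a leak against a container memory limit, even while go_memstats_heap_alloc_bytes stays stable.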

@dvhh
Author

dvhh commented Jul 13, 2019

Current status: [screenshot: container memory usage graph]

@peterbourgon
Contributor

Cool, I’ll call this fixed unless you object.

@dvhh
Author

dvhh commented Jul 13, 2019

No objection; the application seems to be more stable in its memory consumption.
Feel free to close the ticket.

@dvhh
Author

dvhh commented Jul 15, 2019

Just to update on the status, based on the memory limit at which containers are killed (1GB):

  • version 2.2.0: ~30h uptime per container
  • version 2.2.1: ~34h uptime per container

Thanks for the support, and for the clarification about where the issue mainly remains.

@dvhh
Author

dvhh commented Jul 29, 2019

Version 3.0.1: ~5 days uptime and ongoing, memory usage stable. Thanks for your work.

@peterbourgon
Contributor

Great news, thanks for the reports.
