Increased memory usage with latest version #321

Open
s4mur4i opened this issue Sep 4, 2024 · 2 comments
Labels: performance, question (Further information is requested)

Comments

s4mur4i commented Sep 4, 2024

Describe the bug
We upgraded our popeye from version 0.11.1 to 0.21.3.
Previously popeye ran within limits of 100m CPU and 100-200 MB of memory. With the latest version it needs 500m CPU, and with a memory limit under 1 GB it gets OOM-killed; on some clusters it even requires 4.5 GB of memory. Is this expected, or why did popeye suddenly start using so much memory?
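To make those numbers concrete, here they are expressed as kubectl patches, assuming popeye runs as an in-cluster CronJob; the `popeye` namespace and CronJob names are placeholders for illustration only:

```shell
# Limits that were sufficient with popeye v0.11.1 (placeholder namespace/CronJob names).
kubectl -n popeye patch cronjob popeye --type=json -p='[
  {"op":"replace","path":"/spec/jobTemplate/spec/template/spec/containers/0/resources/limits/cpu","value":"100m"},
  {"op":"replace","path":"/spec/jobTemplate/spec/template/spec/containers/0/resources/limits/memory","value":"200Mi"}
]'

# Roughly what v0.21.3 needs on our worst clusters to avoid OOM kills.
kubectl -n popeye patch cronjob popeye --type=json -p='[
  {"op":"replace","path":"/spec/jobTemplate/spec/template/spec/containers/0/resources/limits/cpu","value":"500m"},
  {"op":"replace","path":"/spec/jobTemplate/spec/template/spec/containers/0/resources/limits/memory","value":"4608Mi"}
]'
```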

To Reproduce
Steps to reproduce the behavior:

  1. Upgrade popeye to the latest version (v0.21.3)

Expected behavior
I would be okay with some increase, but a 3-4 GB memory requirement seems too much when the previous version used around 100-200 MB.

Screenshots
Grafana output of one run (screenshot: 2024-09-04 at 11:37)

Versions (please complete the following information):

  • Popeye v0.21.3
  • K8s 1.29.7-eks
derailed (Owner) commented

@s4mur4i Thanks for reporting this!
How big is your cluster? Nodes, pods, etc.
Also, how are you running popeye, i.e. wide open or using filters?

derailed added the question (Further information is requested) and performance labels on Sep 14, 2024
s4mur4i (Author) commented Sep 30, 2024

Hello,
Sorry, I was on holiday and could not respond for some time.
Our clusters usually have around 10-20 nodes, and some might go up to 30 nodes.
As for pods, I would say around 300-600.
We have different products; each product has its own cluster for the dev/prod environments.
We use the following arguments:

```
-A -f /spinach/spinach.yaml --out html -l info --s3-bucket xyz --push-gtwy-url http://pushgateway-service:9091 --cluster xyz --kubeconfig /etc/kubeconfig/config.yaml --force-exit-zero=true
```
We tested separating the pushgateway push and the bucket upload into 2 separate pods, as was done previously, but it didn't lower memory usage.
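For comparison, below is roughly what a narrower, filtered run would look like next to our current wide-open scan; the namespace name is only an illustrative placeholder, and we have not yet measured whether scoping the scan this way changes the memory profile:

```shell
# Current wide-open run: -A scans every namespace in the cluster.
popeye -A -f /spinach/spinach.yaml --out html -l info \
  --s3-bucket xyz --push-gtwy-url http://pushgateway-service:9091 \
  --cluster xyz --kubeconfig /etc/kubeconfig/config.yaml --force-exit-zero=true

# Hypothetical filtered run: scope the scan to a single namespace with -n
# ("product-dev" is a placeholder, not one of our real namespaces).
popeye -n product-dev -f /spinach/spinach.yaml --out html -l info \
  --kubeconfig /etc/kubeconfig/config.yaml --force-exit-zero=true
```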
