
🐛 Addon memory request and limit match. #281

Merged (1 commit) on Nov 17, 2023

Conversation

@jortel (Contributor) commented Nov 16, 2023

Deploying the pod with request = limit should be a good (incremental) step toward preventing OOM kills.
My memory profiling suggests that 4Gi is a good number for our TestApp. I can run analysis on the TestApp with 2Gi/2Gi just fine on my minikube (memory peaks at 1.9Gi), but Pranav reports that he cannot.
We should still follow up by passing -Xmx set to 80% of the limit to the Java provider.
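
For illustration only (this is not the PR diff): a minimal Go sketch, using client-go types, of what setting the addon container's memory request equal to its limit looks like, plus a hypothetical helper for the follow-up of sizing -Xmx at 80% of the limit. The 4Gi value comes from the discussion above; the function names are assumptions.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// addonResources sets the memory request equal to the limit, so the scheduler
// reserves the full amount up front and the pod cannot burst past its request.
func addonResources(mem string) corev1.ResourceRequirements {
	q := resource.MustParse(mem)
	return corev1.ResourceRequirements{
		Requests: corev1.ResourceList{corev1.ResourceMemory: q},
		Limits:   corev1.ResourceList{corev1.ResourceMemory: q},
	}
}

// xmxFromLimit is a hypothetical helper for the follow-up item: size the JVM
// heap for the Java provider at ~80% of the container memory limit.
func xmxFromLimit(limit *resource.Quantity) string {
	heapBytes := limit.Value() * 80 / 100
	return fmt.Sprintf("-Xmx%dm", heapBytes/(1024*1024))
}

func main() {
	r := addonResources("4Gi")
	fmt.Println("request:", r.Requests.Memory(), "limit:", r.Limits.Memory())
	fmt.Println("java flag:", xmxFromLimit(r.Limits.Memory()))
}
```

With a 4Gi limit this sketch yields roughly -Xmx3276m, which matches the "80% of limit" follow-up described above.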

IMHO, 4Gi is much higher than I would have expected for the analyzer. It means that on most personal, single-node clusters (like minikube), we will only be able to analyze a few applications in parallel.

@jmontleon (Member) commented Nov 16, 2023

LGTM

This probably leaves a lot less room for multiple pods to be scheduled at a 2Gi request, then burst to ~4Gi and run a node out of memory.

This also gives the pods the Guaranteed QoS class, so they are less likely to be evicted.
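
For context, Kubernetes assigns the Guaranteed QoS class only when every container's requests equal its limits for both CPU and memory. A hedged client-go sketch for checking which class a running addon pod actually received (the namespace and pod name below are placeholders, not values from this repository):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the local kubeconfig; the namespace and pod name
	// are hypothetical placeholders.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := client.CoreV1().Pods("konveyor-tackle").Get(
		context.TODO(), "task-example-pod", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Guaranteed pods are the last to be evicted under node memory pressure.
	fmt.Println("QoS class:", pod.Status.QOSClass)
}
```

The same check can be done from the command line with `kubectl get pod <name> -o jsonpath='{.status.qosClass}'`.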
