Commit
🐛 Addon memory request and limit match. (#281)
Deploying the pod with request = limit should be a good (incremental) step towards preventing OOM kills. My memory profiling suggests that 4Gi is a good number for our TestApp. On my minikube I can run analysis on the TestApp with 2Gi/2Gi just fine (though memory peaks at 1.9Gi), but Pranav reports that he cannot.

We should still follow up with passing `-Xmx` set to 80% of the limit to the Java provider.

IMHO, 4Gi is much higher than I would have expected for the analyzer. This means that when running on most personal, single-node clusters (like minikube), we will only be able to analyze a few applications in parallel.

Signed-off-by: Jeff Ortel <[email protected]>
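A minimal sketch of what this change amounts to in the pod spec. The container name, image, and env var are illustrative assumptions, not the actual addon manifest; the `-Xmx` value shows the proposed follow-up (≈80% of a 4Gi limit):

```yaml
# Illustrative sketch only — names and image are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: analyzer-addon
spec:
  containers:
    - name: analyzer
      image: example/analyzer:latest
      resources:
        requests:
          memory: "4Gi"   # request == limit: no memory overcommit for this container
        limits:
          memory: "4Gi"
      env:
        # Follow-up idea: cap the JVM heap below the container limit
        # so the Java provider is less likely to be OOM-killed.
        - name: JAVA_OPTS
          value: "-Xmx3276m"   # ~80% of 4Gi
```

Setting the memory request equal to the limit keeps the scheduler from overcommitting the node against this container, which is why it is an incremental step against OOM kills rather than a full fix.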