🐛 Addon memory request and limit match. (#281)
Deploying the pod with request = limit should be a good (incremental)
step towards preventing OOM kills.
My memory profiling suggests that 4Gi is a good number for our TestApp.
I can run analysis on the TestApp with 2Gi/2Gi just fine on my minikube
(though memory peaks at 1.9Gi), but Pranav reports that he cannot.
We should still follow up by passing -Xmx set to 80% of the limit to the
Java provider.
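The follow-up mentioned above could derive the heap flag from the container limit. A minimal sketch of that calculation (the 80% ratio comes from this commit message; the function name and MiB rounding are assumptions, not the actual provider wiring):

```python
def xmx_from_limit(limit_bytes: int, ratio: float = 0.8) -> str:
    """Derive a -Xmx flag as a fraction of the container memory limit.

    ratio=0.8 reflects the "80% of limit" idea from the commit message;
    the helper itself is hypothetical.
    """
    heap_mib = int(limit_bytes * ratio) // (1024 * 1024)
    return f"-Xmx{heap_mib}m"

# With the 4Gi limit set by this commit:
print(xmx_from_limit(4 * 1024**3))  # -Xmx3276m
```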

IMHO, 4Gi is much higher than I would have expected for the analyzer.
It means that on most personal, single-node clusters (like minikube),
we will only be able to analyze a few applications in parallel.
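That parallelism concern amounts to simple division of node memory by the per-pod request; a sketch (the 8Gi node size is an assumed typical minikube allocation, not a figure from this commit):

```python
def max_parallel_analyzers(allocatable_gi: float, request_gi: float = 4.0) -> int:
    """Upper bound on analyzer pods per node, ignoring other workloads.

    request_gi=4.0 matches the request set by this commit; the node
    sizes below are illustrative assumptions.
    """
    return int(allocatable_gi // request_gi)

# An assumed 8Gi single-node minikube fits only two 4Gi analyzer pods:
print(max_parallel_analyzers(8))  # 2
```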

Signed-off-by: Jeff Ortel <[email protected]>
jortel committed Nov 17, 2023
1 parent a0135d8 commit da09dab
Showing 1 changed file with 1 addition and 1 deletion.
roles/tackle/defaults/main.yml (1 addition, 1 deletion)

@@ -163,7 +163,7 @@ analyzer_service_name: "{{ app_name }}-{{ analyzer_name }}-{{ analyzer_component
 analyzer_container_limits_cpu: "1"
 analyzer_container_limits_memory: "4Gi"
 analyzer_container_requests_cpu: "1"
-analyzer_container_requests_memory: "2Gi"
+analyzer_container_requests_memory: "4Gi"

 cache_name: "cache"
 cache_data_volume_size: "100Gi"
