
✨ Adjust resource requirements. #373

Closed

Conversation

jortel (Contributor) commented Aug 1, 2024

The resource requirements for the analyzer and providers can be lowered, which will support better scaling. Also, for the Java provider, the JDK will not use 2Gi.

Scale testing has shown hub memory spiking to 2.3Gi with 3k applications being analyzed as fast as possible; the limit should be bumped accordingly.
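Following the same variable pattern as the diff below, bumping the hub limit might look like the sketch here; the variable name and value are assumptions inferred from the `provider_*` naming in this repository, not part of this change:

```yaml
# Illustrative only: raising the hub memory limit above the observed 2.3Gi
# spike leaves some headroom. The variable name is assumed from the
# provider_*_container_limits_memory naming pattern, not taken from this PR.
hub_container_limits_memory: "3Gi"
```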

NOTE: I have some real concerns about large differences between request and limit. Launching hundreds or thousands of pods without a ResourceQuota risks oversubscribing the nodes; k8s does not seem to protect itself (the nodes) against this.
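As a rough sketch of the kind of guardrail alluded to above (not part of this PR; the name, namespace, and figures are placeholders), a ResourceQuota can cap the aggregate requests and limits a namespace may claim:

```yaml
# Hypothetical example only: caps the total CPU/memory that all pods in the
# namespace may request, so launching hundreds of analysis pods cannot
# oversubscribe the nodes. Values are illustrative, not recommendations.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: analysis-quota        # placeholder name
  namespace: konveyor-tackle  # placeholder namespace
spec:
  hard:
    requests.cpu: "32"
    requests.memory: 64Gi
    limits.cpu: "64"
    limits.memory: 128Gi
```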

Can the builtin provider's requirements be lower?

```diff
@@ -176,7 +176,7 @@ provider_java_component_name: "extension"
 provider_java_container_limits_cpu: "1"
 provider_java_container_limits_memory: "2Gi"
 provider_java_container_requests_cpu: "1"
-provider_java_container_requests_memory: "2Gi"
+provider_java_container_requests_memory: "1Gi"
```
Contributor


This should stay at 2Gi for the request. The JVM will use 70% of max memory, which means we can assume it will use the full amount, IMO. It would be safer to set this limit equal to the request.
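For context, in containerized OpenJDK the share of the container memory limit that the heap may use is typically governed by `-XX:MaxRAMPercentage`. The sketch below (container name and values are illustrative, not from this PR) shows why request == limit == 2Gi pairs naturally with a ~70% heap cap:

```yaml
# Illustrative only: with request == limit == 2Gi and the heap capped at 70%
# of that limit (~1.4Gi), the remaining ~0.6Gi covers metaspace, threads, and
# other native memory, and the scheduler reserves exactly what the pod can use.
containers:
  - name: java-provider            # hypothetical container name
    resources:
      requests:
        memory: "2Gi"
      limits:
        memory: "2Gi"
    env:
      - name: JAVA_TOOL_OPTIONS    # standard hook for passing JVM flags
        value: "-XX:MaxRAMPercentage=70.0"
```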

jortel (Contributor, Author) commented Aug 1, 2024

Looks like requested resources have already been adjusted.

jortel closed this Aug 1, 2024