This repository has been archived by the owner on Aug 28, 2024. It is now read-only.
A number of DUBBD components (including Flux) have CPU/memory limits set equal to their requests, which places them in the Guaranteed QoS class so they are evicted last under node pressure.
While this seems like a good practice to ensure the core platform never "goes down," we could leverage a PriorityClass (https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/) as an alternative, so that CPU/memory requests and limits no longer need to be set equally. This would likely enable us to run on lower-resourced systems and help alleviate situations where we under-resourced CPU in order to avoid large requirements.
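The proposal above could look something like the following sketch: a PriorityClass for core platform components, plus a pod spec that references it while using burstable (unequal) requests and limits. The class name, priority value, and image here are hypothetical, not taken from DUBBD.

```yaml
# Hypothetical sketch: a high-priority class for core platform components.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: dubbd-critical        # hypothetical name
value: 1000000                # higher value = preempted/considered for eviction later
globalDefault: false
preemptionPolicy: PreemptLowerPriority
description: "Core DUBBD platform components; keep scheduled under node pressure."
---
# Pod spec fragment referencing the class; requests no longer need to equal limits.
apiVersion: v1
kind: Pod
metadata:
  name: flux-example          # hypothetical pod for illustration
spec:
  priorityClassName: dubbd-critical
  containers:
    - name: flux
      image: example/flux:latest   # placeholder image
      resources:
        requests:
          cpu: 100m
          memory: 128Mi
        limits:
          cpu: 500m           # burstable: limit above request
          memory: 256Mi
```

One caveat worth noting: kubelet node-pressure eviction ranks pods primarily by usage relative to requests and QoS class, with priority as a secondary factor, so testing under realistic memory pressure would be needed to confirm this behaves as intended.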