
vpa-recommender: Make max allowed recommendation configurable #7147

Open

ialidzhikov opened this issue Aug 8, 2024 · 5 comments
Labels
area/vertical-pod-autoscaler kind/feature Categorizes issue or PR as related to a new feature.

Comments

ialidzhikov (Contributor) commented Aug 8, 2024

Which component are you using?:

vertical-pod-autoscaler

Is your feature request designed to solve a problem? If so describe the problem this feature should solve.:

Currently, it is possible to configure the max allowed recommendation per VPA resource using the field .spec.resourcePolicy.containerPolicies[].maxAllowed. Setting this field is useful to ensure that recommendations do not exceed the largest Node's allocatable resources. Otherwise, an overly high recommendation makes the Pod unschedulable, which means downtime for the corresponding workload.
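For illustration, a minimal VPA manifest using this field (the target and the values below are placeholders, not from this issue):

apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  resourcePolicy:
    containerPolicies:
      - containerName: '*'
        maxAllowed:
          cpu: "4"
          memory: 16Gi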

It seems that GKE's VPA already offers such a feature on top of the open-source VPA:

GKE vertical Pod autoscaling provides the following benefits over the Kubernetes open source autoscaler:

  • Takes maximum node size and resource quotas into account when determining the recommendation target.

We would like to make vpa-recommender aware of the maximum node size and prevent it from providing recommendations that would make the corresponding Pods unschedulable.

Describe the solution you'd like.:

Does it make sense to provide flags in vpa-recommender that cap the max recommendation it can make for a Pod? I see that there are already similar flags for the min allowed CPU and memory:

podMinCPUMillicores = flag.Float64("pod-recommendation-min-cpu-millicores", 25, `Minimum CPU recommendation for a pod`)
podMinMemoryMb      = flag.Float64("pod-recommendation-min-memory-mb", 250, `Minimum memory recommendation for a pod`)
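Hypothetical max counterparts could mirror these; the flag names and defaults below are illustrative only and do not exist in the recommender today:

podMaxCPUMillicores = flag.Float64("pod-recommendation-max-cpu-millicores", 0, `Maximum CPU recommendation for a pod; 0 disables the cap`)
podMaxMemoryMb      = flag.Float64("pod-recommendation-max-memory-mb", 0, `Maximum memory recommendation for a pod; 0 disables the cap`)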

PS: Note, however, that the min allowed flags are per Pod, and the minimums are distributed proportionally across the Pod's containers. See

fraction := 1.0 / float64(len(containerNameToAggregateStateMap))
minResources := model.Resources{
    model.ResourceCPU:    model.ScaleResource(model.CPUAmountFromCores(*podMinCPUMillicores*0.001), fraction),
    model.ResourceMemory: model.ScaleResource(model.MemoryAmountFromBytes(*podMinMemoryMb*1024*1024), fraction),
}

I am not sure the same approach would work well for Pods that have multiple containers under VPA. For example, for a Pod with 2 containers, each container would be capped at 50% of the configured max allowed value, regardless of how different the containers' actual needs are.
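To make the concern concrete, here is a standalone sketch (not recommender code) applying the same proportional split to a hypothetical per-pod maximum:

package main

import "fmt"

func main() {
    podMaxMemoryMb := 4096.0 // hypothetical per-pod maximum
    containers := 2

    // The same proportional split used above for the minimums:
    fraction := 1.0 / float64(containers)
    perContainerMax := podMaxMemoryMb * fraction

    // Prints 2048 MB: each container is capped at 50% of the pod maximum,
    // even if one container needs far more memory than the other.
    fmt.Printf("per-container cap: %.0f MB\n", perContainerMax)
}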

Describe any alternative solutions you've considered.:

Other folks may share the way they are tackling this issue.

One alternative we are trying to avoid is to have a mutating webhook on VPA that sets maxAllowed per VPA resource.

Additional context.:

This topic was also brought up in the community meeting on 2024-06-17.

ialidzhikov added the kind/feature label on Aug 8, 2024
ialidzhikov (Contributor, Author) commented
/area vertical-pod-autoscaler

raywainman (Contributor) commented Aug 12, 2024

Thanks for submitting this!

This makes sense and is nice and simple; as you mention, it's very analogous to the min flags.

One possible alternative that we didn't talk about during the SIG meeting is to use the ContainerResourcePolicy.MaxAllowed field to cap the recommendations. The downside is that every single VPA in the cluster would need this field set.

Regarding this proposal, as we talked about, the downside is that you need to set this flag manually, and the right value could change over the lifetime of your cluster as different-sized nodes are adopted.

If you were to set this "max" value dynamically, you would want to make this adjustment in:

  • the updater to make sure it can only evict a pod if there is enough room in the cluster
  • the admission-controller to trim down and cap recommendations on new or restarted pods to make sure they can fit in what is available in the cluster

However, doing this statically cluster-wide is a great first step and seems totally reasonable to me!

vlerenc commented Sep 25, 2024

That would be very helpful. Usually, the cluster and its VPA installation are configured by the same (human or technical) operators. It would be more difficult to change all VPA resources, especially if they are controlled by third-party (technical) operators that we cannot influence in all aspects. The alternative, if this doesn't become a global VPA option, is to create a mutating webhook, but then everybody would have to do that. If VPA offered this feature out of the box, that would be great. After all, maxAllowed seems to exist predominantly for this use case: to keep the recommender from recommending anything that wouldn't fit.

The definition of the cluster and its worker pools is usually handled by different people than the developers of the workload and their VPA resources - at least that's a pattern we see.

As for dynamically following the larger/largest nodes, I am doubtful that too much intelligence will pay off. In our case it wouldn't, because a) we use worker pool node scale-from-zero, i.e. VPA cannot see the possible available machine sizes at all if it just checks the nodes, and b) it is challenging to understand which nodes are really suitable (taints/tolerations, affinities/anti-affinities, etc.). The second point could be overcome, the first one never. Therefore, a static configuration for VPA's global maxAllowed sounds simple and reasonable - more intelligence may help, but isn't guaranteed to always help.

omerap12 (Member) commented Oct 4, 2024

Here’s how I see the options for supporting this:

  1. Iterate through the cluster nodes and find the maximum allocatable resources per node. The downside is that VPA won't be able to account for "upcoming nodes" (nodes that are expected to join the cluster via the cluster autoscaler). I've started working on this approach here: [WIP] VPA: Takes maximum node size into account (upper-bound) #7345. A rough sketch of this approach follows at the end of this comment.
  2. Iterate through both the current cluster nodes and the "upcoming nodes."
  3. Implement a mutating webhook on VPA that sets a maxAllowed value per VPA resource (as suggested by the issue author).
  4. Use a fixed value, similar to how pod-recommendation-min-cpu-millicores and pod-recommendation-min-memory-mb are handled.

cc @adrianmoisey
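A minimal sketch of option 1, assuming a configured client-go clientset; it omits error wrapping and caching, and ignores taints, affinities, and scale-from-zero node groups that never appear as Node objects:

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
)

// maxAllocatable returns the largest per-node allocatable CPU (millicores)
// and memory (bytes) among the nodes currently in the cluster.
func maxAllocatable(ctx context.Context, clientset kubernetes.Interface) (maxCPUMilli, maxMemBytes int64, err error) {
    nodes, err := clientset.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
    if err != nil {
        return 0, 0, err
    }
    for _, node := range nodes.Items {
        if cpu := node.Status.Allocatable.Cpu().MilliValue(); cpu > maxCPUMilli {
            maxCPUMilli = cpu
        }
        if mem := node.Status.Allocatable.Memory().Value(); mem > maxMemBytes {
            maxMemBytes = mem
        }
    }
    return maxCPUMilli, maxMemBytes, nil
}

func main() {
    config, err := rest.InClusterConfig()
    if err != nil {
        panic(err)
    }
    clientset := kubernetes.NewForConfigOrDie(config)
    cpu, mem, err := maxAllocatable(context.Background(), clientset)
    if err != nil {
        panic(err)
    }
    fmt.Printf("largest node allocatable: %dm CPU, %d bytes memory\n", cpu, mem)
}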

adrianmoisey (Member) commented

Related issue: #6487
