Allow setting 'minimum headroom' for autoscaling #148
Comments
We also got this request from a GKE customer recently. So there are at least two people who want it. :) |
I also want this on GKE :D |
We are working on this #77. |
oooo, awesome! Is it being planned to coincide with 1.8? Or later? |
A nice complement to this feature would be a way to pre-pull images to the headroom nodes, so that a pending pod pays neither the node creation overhead (headroom feature) nor the image pull overhead (pre-pull feature) and can start running right away. We'd need to figure out a way for the cluster admin or user to specify which images should be pre-pulled where. |
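For illustration only (not an existing autoscaler feature): one common way to pre-pull is a DaemonSet that lists the desired images as init containers, so every node - including freshly added headroom nodes - pulls them as soon as it joins. The image name below is a placeholder.

```yaml
# Hypothetical sketch, not something the autoscaler does today: a DaemonSet
# that pre-pulls a set of images onto every node, including headroom nodes.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: image-prepuller
spec:
  selector:
    matchLabels:
      app: image-prepuller
  template:
    metadata:
      labels:
        app: image-prepuller
    spec:
      initContainers:
        # One init container per image to pre-pull; it exits immediately once
        # the kubelet has pulled the image (assumes the image has a shell).
        - name: prepull-my-app
          image: example.com/my-app:latest   # placeholder image
          command: ["/bin/sh", "-c", "exit 0"]
      containers:
        # Tiny long-running container so the DaemonSet pod stays Running.
        - name: pause
          image: k8s.gcr.io/pause:3.1
```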
This. |
@jonastl Which version of CA are you using? Scaling up shouldn't be serial - CA estimates how many nodes are required and adds them in a single request, and it only waits for the request to come back, not for the nodes to actually start. |
Sorry, just realised you mentioned GKE - in that case I mean which cluster version you are using (CA is bundled with the cluster version on GKE). |
@MaciekPytel Version 1.6.7 |
@jonastl In that case it definitely shouldn't be serial. That being said, your comment #77 (review) suggests you're using a very unusual setup, so perhaps there is a bug somewhere that only manifests for your setup. It may be worth creating a new issue for that with some information about your setup (cluster version, cluster size, number of pods, and a description of how they're scheduled). Alternatively we can have a chat on Kubernetes Slack and see if there is something we can figure out quickly. |
@MaciekPytel, it turned out that once we set resource constraints (CPU and memory) high enough to fill a node group member, scaling speed improved considerably, so my remark above about serial scaling can be scratched. The solution was non-obvious to us, but now that we understand the scaler's behavior with those (undocumented) knobs turned, we're happy with the scaling speed. |
Is there an ETA on this? |
Next K8S release (1.9). In 1.8 we were busy improving the performance of the current functionality and this feature makes all the computations much more complex. |
Is someone working on this for 1.9? |
After some thinking, I've come up with a scheme (for GKE) involving two nodepools that'll satisfy our use cases, and have written it up at berkeley-dsep-infra/data8xhub#7. If anyone with more knowledge of the autoscaler can take a look at that and lmk how terrible the idea is, I would highly appreciate it. |
Any movement on this? It would be quite useful for ensuring we don't hit ceiling effects before new nodes are requested! |
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
Any idea how we can help get this moving? :) |
This can be achieved using pod priority and preemption, see [How can I configure overprovisioning with Cluster Autoscaler?](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-can-i-configure-overprovisioning-with-cluster-autoscaler) |
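For reference, the linked FAQ entry boils down to something like the sketch below: a low-priority "placeholder" Deployment reserves capacity, real workloads preempt those pods, the placeholders go Pending, and CA scales up. The priority value, replica count and resource requests here are placeholders, not recommendations.

```yaml
# Sketch of the pattern described in the FAQ entry above. Low-priority
# placeholder pods reserve headroom; real pods preempt them and the
# resulting Pending placeholders trigger a scale-up. Sizes are placeholders.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: overprovisioning
value: -10              # lower than any real workload
globalDefault: false
description: "Priority class for headroom placeholder pods."
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: overprovisioning
spec:
  replicas: 3           # headroom = replicas * per-pod request
  selector:
    matchLabels:
      app: overprovisioning
  template:
    metadata:
      labels:
        app: overprovisioning
    spec:
      priorityClassName: overprovisioning
      containers:
        - name: reserve-resources
          image: k8s.gcr.io/pause:3.1
          resources:
            requests:
              cpu: "1"
              memory: 1Gi
```

With this in place, the 'minimum headroom' is approximated by however much the placeholder pods request in total.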
I want to be able to say 'if the cluster is more than X% full, scale up until it is not'. This is important in very dynamic, spiky clusters - we run a Kubernetes cluster for a university, and a large spike of pods starts up when classes begin. If we waited for them to fail scheduling before adding more nodes, users would get a suboptimal experience (since it might take several minutes for a new node to spin up).
One problem would be defining what 'full' is, in a way that doesn't duplicate what's in the scheduler.
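One possible way to approximate the 'X% full' behaviour with the overprovisioning approach above is to resize the placeholder Deployment in proportion to cluster size, for example with the separate cluster-proportional-autoscaler project. This is only a sketch under that assumption; the ratios below are placeholders.

```yaml
# Hypothetical sketch: cluster-proportional-autoscaler reads a ConfigMap like
# this and resizes a target Deployment (here, the overprovisioning
# placeholders) as the cluster grows, so reserved headroom roughly tracks a
# fraction of cluster capacity. Ratios are placeholders.
apiVersion: v1
kind: ConfigMap
metadata:
  name: overprovisioning-autoscaler
  namespace: default
data:
  linear: |-
    {
      "coresPerReplica": 8,
      "nodesPerReplica": 4,
      "min": 1,
      "max": 50,
      "preventSinglePointFailure": false
    }
```

This still leaves the question above open: the headroom is defined by placeholder pod requests rather than by a true 'percentage full' computed the way the scheduler would.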