
README: add resource recommendation #196

Merged (1 commit) · Aug 1, 2017

Conversation

@brancz (Member) commented Aug 1, 2017

Based on @matthiasr's comment.

@fabxc @andyxning



@k8s-ci-robot added the label cncf-cla: yes (indicates the PR's author has signed the CNCF CLA) on Aug 1, 2017
@matthiasr left a comment

That's more generous than what I read out of the tests; I basically meant "start with 200MB, and start scaling up once you have more than 100 nodes". But there's no big harm in being generous, I guess?

@brancz (Member, Author) commented Aug 1, 2017

Agreed. What I wanted to avoid is people giving their kube-state-metrics a 6MB request and limit. Based on our findings, 200MB will get you very far.

@matthiasr commented

Hmm, but the way you phrased it, I would expect to need 206MiB for a 3-node cluster, so I'd probably allocate even more, and then it does get a bit excessive. What do you think about:

Resource usage changes with the size of the cluster. As a general rule, you should allocate

* 200MiB memory
* 0.1 cores

For clusters of more than 100 nodes, allocate at least

* 2MiB memory per node
* 0.001 cores per node

These numbers are based on [scalability tests](https://github.com/kubernetes/kube-state-metrics/issues/124#issuecomment-318394185) at 30 pods per node.

This makes it easier for people with "small" clusters, and the math works out the same. The minimum CPU request is higher just to make the numbers line up nicely; anyone who really cares about reclaiming those 0.09 cores will want to measure exactly what they need.
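The proposed rule can be sketched as a small calculation (illustrative only; the helper name and output format are mine, not part of kube-state-metrics):

```python
def recommended_resources(nodes: int) -> tuple:
    """Suggested kube-state-metrics requests per the rule above:
    a floor of 200MiB memory and 0.1 cores, then 2MiB and 0.001
    cores per node once the cluster exceeds 100 nodes.
    """
    memory_mib = max(200, 2 * nodes)
    cores = max(0.1, 0.001 * nodes)
    return (f"{memory_mib}Mi", f"{cores:g} cores")

# A small cluster stays at the floor; past 100 nodes the
# per-node terms take over, and the two formulas meet at 100.
print(recommended_resources(3))    # ('200Mi', '0.1 cores')
print(recommended_resources(150))  # ('300Mi', '0.15 cores')
```

Note that at exactly 100 nodes both branches agree (100 × 2MiB = 200MiB, 100 × 0.001 = 0.1 cores), which is why "the math works out the same".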

@brancz (Member, Author) commented Aug 1, 2017

Fair enough, I like your version better 🙂. Adapted.

@matthiasr left a comment

reviewception

@brancz brancz merged commit 4d65779 into master Aug 1, 2017
@brancz brancz deleted the resource-recommendation branch August 1, 2017 16:34
@WIZARD-CXY WIZARD-CXY mentioned this pull request Aug 3, 2017
matthiasr pushed a commit to matthiasr/kube-state-metrics that referenced this pull request Aug 23, 2017
Adjust the deployment to match the recommended resources (kubernetes#196). The baseline resources are set to the basic recommendation for a 100-node cluster.

Pod nanny does not support "the baseline includes the first 100 nodes", so instead the threshold before it needs to adjust is set wide. This over-estimates resource needs for intermediate cluster sizes, but I'd rather be on the safe side.
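The over-estimation described here can be illustrated numerically (a sketch under the assumption that pod nanny scales linearly from zero nodes on top of the 100-node baseline; the helper names are mine, not part of kube-state-metrics or pod nanny):

```python
def readme_memory_mib(nodes: int) -> int:
    # README rule: 200MiB floor, then 2MiB per node beyond 100 nodes
    return max(200, 2 * nodes)

def nanny_memory_mib(nodes: int) -> int:
    # Linear config: baseline already sized for 100 nodes,
    # plus the 2MiB per-node extra counted from the first node
    return 200 + 2 * nodes

for n in (10, 100, 150):
    print(n, readme_memory_mib(n), nanny_memory_mib(n))
# The linear config always lands at or above the README rule,
# erring on the safe side for intermediate cluster sizes.
```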
while1malloc0 pushed a commit to while1malloc0/kube-state-metrics that referenced this pull request Jul 2, 2018

README: add resource recommendation