
Prevent eviction of pods weave-net-* with priorityClassName: system-node-critical #3691

Closed
christian-2 opened this issue Aug 19, 2019 · 7 comments · Fixed by christian-2/weave#1 or #3697

@christian-2
Contributor

What you expected to happen?

Pods weave-net-* should never be evicted, since evicting them impacts operation of the entire Kubernetes cluster. They should be treated as critical pods.

What happened?

A weave-net-* pod was evicted from a Kubernetes cluster node relatively early (compared to other pods on the node) when local resources (ephemeral-storage) became tight.

Anything else we need to know?

IMO weave-net-* pods should be marked as critical by adding the following to the specification of daemonset/weave-net:

daemonSet.spec.template.spec.priorityClassName: system-node-critical

This would give it a (scheduling) priority similar to that of pods coredns-*, etcd-*, kube-apiserver-*, kube-controller-manager-*, kube-proxy-*, and kube-scheduler-*, which Kubernetes already marks as system-cluster-critical/system-node-critical by default.

I am already using this modified specification, and it seems to work fine.
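
For illustration, a minimal sketch of where that field would sit in the manifest (abbreviated; the real weave-net DaemonSet has many more fields, so this is not a drop-in replacement):

# Abbreviated sketch of daemonset/weave-net; only fields relevant to this issue are shown.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: weave-net
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: weave-net
  template:
    metadata:
      labels:
        name: weave-net
    spec:
      priorityClassName: system-node-critical  # proposed addition
      containers:
        - name: weave
          image: weaveworks/weave-kube:2.5.2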

Versions:

Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.2", GitCommit:"f6278300bebbb750328ac16ee6dd3aa7d3549568", GitTreeState:"clean", BuildDate:"2019-08-05T09:23:26Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.2", GitCommit:"f6278300bebbb750328ac16ee6dd3aa7d3549568", GitTreeState:"clean", BuildDate:"2019-08-05T09:15:22Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}

$ weave version

weave script 2.5.2
weave 2.5.2
installed with kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
@murali-reddy
Contributor

@christian-2 thanks for reporting this. Would you mind raising a PR?

@christian-2
Contributor Author

@murali-reddy I would not mind, but I have not yet found where the manifest referred to at https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n') resides on GitHub (if it is included there at all). prog/weave-kube/weave-daemonset.yaml looks close, but it is apparently not the same.

@murali-reddy
Contributor

@christian-2 Yes, the manifests in prog/weave-kube are the right ones to update. There are open PRs #3660 and #3674 to tidy up the current manifests. Once they are merged, I will let you know which manifest to update.

@christian-2
Contributor Author

@murali-reddy So the proposal is that you merge the open PRs #3660 and #3674, and then I prepare a PR for #3691 on top of those changes, right?

@murali-reddy
Contributor

Yes @christian-2

@murali-reddy
Contributor

@christian-2 the related PRs are merged now. As per

https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/

priorityClassName can be used with Kubernetes 1.11 and later. You may have to introduce weave-daemonset-k8s-1.11.yaml on top of weave-daemonset-k8s-1.9.yaml. Please see #3660 for the changes needed.
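
As a quick sanity check (an aside, not part of the manifest change), one can confirm on a given cluster that the server version and the built-in priority class are in place before relying on them:

$ kubectl version --short                         # server must be v1.11 or later
$ kubectl get priorityclass system-node-critical  # built-in class should exist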

@bboreham
Contributor

bboreham commented Sep 3, 2019

Link to the specific feature requested:
https://kubernetes.io/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/

#3195 requested a different solution, but I think the underlying issue there was again disk space, so #3691 is the better approach.

christian-2 added a commit to christian-2/weave that referenced this issue Sep 4, 2019
* fixes weaveworks#3691
* `weave-daemonset-k8s-1.11.yaml` is `weave-daemonset-k8s-1.9.yaml` plus `priorityClassName: system-node-critical`
* notice that a Kubernetes cluster created with `kubeadm` already has its `kube-proxy` pods marked as `system-node-critical` (and several other pods, such as `kube-apiserver`, marked as `system-cluster-critical`)
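
With the new manifest applied, a quick check that the priority class took effect (a sketch; it assumes the pods keep the name=weave-net label used by the upstream manifests):

$ kubectl -n kube-system get pods -l name=weave-net \
    -o custom-columns=POD:.metadata.name,PRIORITY:.spec.priorityClassName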
@bboreham bboreham reopened this Sep 4, 2019