
Respect Node taints with tolerations (if exist) #181

Merged

Conversation

@jw-s (Contributor) commented Sep 12, 2019

If a node has a taint and is under-utilised, the descheduler will evict pods based on QoS, priority, etc., but it doesn't check whether these "suitable" pods tolerate that taint. It evicts them anyway and ends up in an endless loop of "reshuffling". This PR adds that check.

If multiple nodes are under-utilised and all of them have taints, the descheduler only needs to check whether a pod tolerates the taints of at least one of those nodes, since the default scheduler will then be able to schedule it onto that node.
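
For reference, a rough sketch of the check this PR introduces (the actual podToleratesTaints function and the map-of-taints parameter are discussed and refined in the review below; the length short-circuit used in the final version is omitted here for brevity):

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/klog"

	"sigs.k8s.io/descheduler/pkg/utils"
)

// podToleratesTaints reports whether the pod tolerates the taints of at least
// one of the given nodes, i.e. whether the default scheduler would still have
// an under-utilised node to place the pod on after eviction.
func podToleratesTaints(pod *v1.Pod, taintsOfNodes map[string][]v1.Taint) bool {
	for nodeName, taints := range taintsOfNodes {
		if utils.TolerationsTolerateTaintsWithFilter(pod.Spec.Tolerations, taints, nil) {
			return true
		}
		klog.V(5).Infof("pod %q does not tolerate the taints of node %q", pod.Name, nodeName)
	}
	return false
}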

@k8s-ci-robot (Contributor):

Welcome @jw-s!

It looks like this is your first PR to kubernetes-sigs/descheduler 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes-sigs/descheduler has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Sep 12, 2019
@jw-s (Contributor, Author) commented Sep 12, 2019

/assign @aveshagarwal

@jw-s jw-s force-pushed the feature/respect-tolerations branch 6 times, most recently from d90d4bd to a19cd22 Compare September 19, 2019 14:29
	canBeEvicted := true
taintNodeLoop:
	for _, taintsForNode := range taintsOfNodes {
		if len(pod.Spec.Tolerations) < len(taintsForNode) {
Contributor:

Is taintsForNode guaranteed to always be free of duplicate items?

Contributor (Author):

The k8s API rejects taints with the same key and the same effect.

So technically you can have a slice of Taints with the same key but different effect values, which means a pod needs to either tolerate that key with the Exists operator or have two tolerations with the same key but different effect values, i.e. NoSchedule and NoExecute.
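
For illustration, a minimal, self-contained sketch of that case using the core v1 API types (the key and values here are made up):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// The API allows two taints with the same key as long as their effects differ.
	taints := []v1.Taint{
		{Key: "dedicated", Value: "batch", Effect: v1.TaintEffectNoSchedule},
		{Key: "dedicated", Value: "batch", Effect: v1.TaintEffectNoExecute},
	}

	// Option 1: a single toleration with operator Exists and no effect,
	// which matches every effect for that key.
	tolerateAll := []v1.Toleration{
		{Key: "dedicated", Operator: v1.TolerationOpExists},
	}

	// Option 2: one toleration per effect.
	toleratePerEffect := []v1.Toleration{
		{Key: "dedicated", Operator: v1.TolerationOpEqual, Value: "batch", Effect: v1.TaintEffectNoSchedule},
		{Key: "dedicated", Operator: v1.TolerationOpEqual, Value: "batch", Effect: v1.TaintEffectNoExecute},
	}

	for i := range taints {
		// Both options tolerate both taints, so this prints "true true" twice.
		fmt.Println(tolerateAll[0].ToleratesTaint(&taints[i]), anyTolerates(toleratePerEffect, &taints[i]))
	}
}

// anyTolerates checks a single taint against a list of tolerations.
func anyTolerates(tolerations []v1.Toleration, taint *v1.Taint) bool {
	for i := range tolerations {
		if tolerations[i].ToleratesTaint(taint) {
			return true
		}
	}
	return false
}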

pkg/descheduler/strategies/lownodeutilization.go (outdated)
@@ -270,6 +278,35 @@ func evictPods(inputPods []*v1.Pod,
}
}

func podToleratesTaints(pod *v1.Pod, taintsOfNodes [][]v1.Taint) bool {
Contributor:

Is podToleratesTaints supposed to return true if there is at least one node whose taints the pod tolerates? If not, can you clarify further what the function is supposed to be doing?

Contributor (Author):

Exactly what the function does

Contributor:

Thanks, just wanted to be sure. Given that podToleratesTaints got quite a bit shorter and now uses helperUtil.TolerationsTolerateTaintsWithFilter, exhaustive testing of podToleratesTaints is optional now.

I also checked all the unit tests under lownodeutilization_test.go and only individual functions are tested:

  • evictPodsFromTargetNodes
  • sortPodsBasedOnPriority
  • validateThresholds

Go coverage shows the main LowNodeUtilization function is not tested at all, so it's hard to see how your code will interact with it. I would rather refactor/extend the unit tests first so that at least some of the tests run through LowNodeUtilization and we can see how the individual pieces interact with each other from the public API perspective.

@ravisantoshgudimetla what's your PoV?

@@ -233,13 +236,19 @@ func evictPods(inputPods []*v1.Pod,
totalCpu *float64,
totalMem *float64,
podsEvicted *int,
dryRun bool, maxPodsToEvict int) {
dryRun bool, maxPodsToEvict int, taintsOfLowNodes [][]v1.Taint) {
Contributor:

At this point it's hard to say which taints belong to which node. Can you replace the list of lists with a map of lists? Then later in the code podToleratesTaints can print the list of nodes whose taints are not tolerated by the pod, e.g. at V(5) verbosity.
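
For example, a small sketch of building such a map keyed by node name (the function name here is made up for illustration):

import v1 "k8s.io/api/core/v1"

// taintsByNodeName maps each node's name to its taints so that callers can
// later report exactly which node's taints a pod fails to tolerate.
func taintsByNodeName(nodes []*v1.Node) map[string][]v1.Taint {
	taints := make(map[string][]v1.Taint, len(nodes))
	for _, node := range nodes {
		taints[node.Name] = node.Spec.Taints
	}
	return taints
}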


@ingvagabund (Contributor) commented Sep 25, 2019

@jw-s I managed to put together the following to test LowNodeUtilization with node taints:

func newFake(objects ...runtime.Object) *core.Fake {
	scheme := runtime.NewScheme()
	codecs := serializer.NewCodecFactory(scheme)
	fake.AddToScheme(scheme)
	o := core.NewObjectTracker(scheme, codecs.UniversalDecoder())
	for _, obj := range objects {
		if err := o.Add(obj); err != nil {
			panic(err)
		}
	}

	fakePtr := core.Fake{}
	fakePtr.AddReactor("list", "pods", func(action core.Action) (bool, runtime.Object, error) {
		objs, err := o.List(
			schema.GroupVersionResource{Group: "", Version: "v1", Resource: "pods"},
			schema.GroupVersionKind{Group: "", Version: "v1", Kind: "Pod"},
			action.GetNamespace(),
		)
		if err != nil {
			return true, nil, err
		}

		obj := &v1.PodList{
			Items: []v1.Pod{},
		}
		for _, pod := range objs.(*v1.PodList).Items {
			podFieldSet := fields.Set(map[string]string{
				"spec.nodeName": pod.Spec.NodeName,
				"status.phase":  string(pod.Status.Phase),
			})
			match := action.(core.ListAction).GetListRestrictions().Fields.Matches(podFieldSet)
			if !match {
				continue
			}
			obj.Items = append(obj.Items, *pod.DeepCopy())
		}
		return true, obj, nil
	})
	fakePtr.AddReactor("*", "*", core.ObjectReaction(o))
	fakePtr.AddWatchReactor("*", core.DefaultWatchReactor(watch.NewFake(), nil))

	return &fakePtr
}

func TestWithTaints(t *testing.T) {
	strategy := api.DeschedulerStrategy{
		Enabled: true,
		Params: api.StrategyParameters{
			NodeResourceUtilizationThresholds: api.NodeResourceUtilizationThresholds{
				Thresholds: api.ResourceThresholds{
					v1.ResourcePods: 0.2,
				},
				TargetThresholds: api.ResourceThresholds{
					v1.ResourcePods: 0.7,
				},
			},
		},
	}

	n1 := test.BuildTestNode("n1", 1000, 3000, 10)
	n2 := test.BuildTestNode("n2", 1000, 3000, 10)
	n3 := test.BuildTestNode("n3", 1000, 3000, 10)
	n3withTaints := n3.DeepCopy()
	n3withTaints.Spec.Taints = []v1.Taint{
		{
			Key:    "key",
			Value:  "value",
			Effect: v1.TaintEffectNoSchedule,
		},
	}

	tests := []struct {
		nodes             []*v1.Node
		evictionsExpected int
	}{
		{
			nodes:             []*v1.Node{n1, n2, n3},
			evictionsExpected: 1,
		},
		{
			nodes:             []*v1.Node{n1, n2, n3withTaints},
			evictionsExpected: 0,
		},
	}

	for _, item := range tests {
		var objs []runtime.Object
		for _, node := range item.nodes {
			objs = append(objs, node)
		}

		for i := 0; i < 4; i++ {
			// populate the first node
			p := test.BuildTestPod(fmt.Sprintf("p1_%v", i), 200, 0, n1.Name)
			p.ObjectMeta.OwnerReferences = test.GetReplicaSetOwnerRefList()
			objs = append(objs, p)

			// populate the second node
			if i < 3 {
				p = test.BuildTestPod(fmt.Sprintf("p2_%v", i), 200, 0, n2.Name)
				p.ObjectMeta.OwnerReferences = test.GetReplicaSetOwnerRefList()
				objs = append(objs, p)
			}
		}

		fakePtr := newFake(objs...)
		var evictionCounter int
		fakePtr.PrependReactor("post", "pods", func(action core.Action) (bool, runtime.Object, error) {
			act, ok := action.(core.GetActionImpl)
			if !ok || act.Subresource != "eviction" || act.Resource.Resource != "pods" {
				return false, nil, nil
			}
			evictionCounter++
			return true, nil, nil
		})

		fakeClient := &fake.Clientset{Fake: *fakePtr}

		ds := &options.DeschedulerServer{
			Client: fakeClient,
			DeschedulerConfiguration: componentconfig.DeschedulerConfiguration{
				EvictLocalStoragePods: false,
			},
		}

		nodePodCount := InitializeNodePodCount(item.nodes)
		LowNodeUtilization(ds, strategy, "policy/v1", item.nodes, nodePodCount)

		if item.evictionsExpected != evictionCounter {
			t.Errorf("Expected %v evictions, got %v", item.evictionsExpected, evictionCounter)
		}
	}
}

With new imports:

	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/apimachinery/pkg/runtime/schema"
	serializer "k8s.io/apimachinery/pkg/runtime/serializer"
	"k8s.io/apimachinery/pkg/watch"
	"sigs.k8s.io/descheduler/cmd/descheduler/app/options"
	"sigs.k8s.io/descheduler/pkg/apis/componentconfig"

I am pretty sure the code can be optimized further.

@jw-s (Contributor, Author) commented Sep 29, 2019

> @jw-s I managed to put together the following to test LowNodeUtilization with node taints: […]

@ingvagabund I will add this test early next week!

@jw-s jw-s force-pushed the feature/respect-tolerations branch 2 times, most recently from 46dcf7d to 7d462b8 Compare September 30, 2019 19:41
@jw-s (Contributor, Author) commented Oct 11, 2019

@ingvagabund I've made the changes; however, the build is failing due to linting (core.Fake).

@jw-s jw-s force-pushed the feature/respect-tolerations branch from 7d462b8 to 5bf958a Compare October 26, 2019 12:49
@jw-s jw-s force-pushed the feature/respect-tolerations branch from 5bf958a to bfc365e Compare January 16, 2020 14:15
@k8s-ci-robot k8s-ci-robot added needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. and removed size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Jan 16, 2020
@jw-s jw-s force-pushed the feature/respect-tolerations branch from bfc365e to 3cbfde6 Compare January 16, 2020 16:41
@k8s-ci-robot k8s-ci-robot added size/L Denotes a PR that changes 100-499 lines, ignoring generated files. and removed needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. labels Jan 16, 2020
@k8s-ci-robot k8s-ci-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Feb 8, 2020
@jw-s (Contributor, Author) commented Feb 8, 2020

/retest

@k8s-ci-robot (Contributor):

@jw-s: Cannot trigger testing until a trusted user reviews the PR and leaves an /ok-to-test message.

In response to this:

/retest

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@ingvagabund (Contributor):

/ok-to-test

@k8s-ci-robot k8s-ci-robot added the ok-to-test Indicates a non-member PR verified by an org member that is safe to test. label Feb 10, 2020
@@ -232,15 +237,22 @@ func evictPods(inputPods []*v1.Pod,
totalCPU *float64,
totalMem *float64,
podsEvicted *int,
dryRun bool, maxPodsToEvict int) {
dryRun bool, maxPodsToEvict int, taintsOfLowNodes map[string][]v1.Taint) {
Contributor:

Each parameter of evictPods is on its own line. Please do the same for taintsOfLowNodes.

@@ -269,6 +281,21 @@ func evictPods(inputPods []*v1.Pod,
}
}

// podToleratesTaints returns true if a pod tolerates one node's taints
func podToleratesTaints(pod *v1.Pod, taintsOfNodes map[string][]v1.Taint) bool {
canBeEvicted := true
Contributor:

Just a nit, though: this way it's shorter and easier to read, given that it's sufficient to find the first node whose taints the pod tolerates:

	for nodeName, taintsForNode := range taintsOfNodes {
		if len(pod.Spec.Tolerations) >= len(taintsForNode) && helperUtil.TolerationsTolerateTaintsWithFilter(pod.Spec.Tolerations, taintsForNode, nil) {
			return true
		}
		klog.V(5).Infof("pod: %#v doesn't tolerate node %s's taints", pod.Name, nodeName)
	}

	return false

@jw-s jw-s force-pushed the feature/respect-tolerations branch 3 times, most recently from 4f3a837 to cddcd24 Compare February 21, 2020 07:19
@ingvagabund (Contributor):

Since enabling Go modules by default, we can no longer import anything from k8s.io/kubernetes. Instead, we copy what is really required and put it under pkg/utils. Can you do the same for TolerationsTolerateTaintsWithFilter? Also, InitializeNodePodCount has moved under pkg/utils.

diff --git a/pkg/descheduler/strategies/lownodeutilization.go b/pkg/descheduler/strategies/lownodeutilization.go
index 85d3415..69cbeb0 100644
--- a/pkg/descheduler/strategies/lownodeutilization.go
+++ b/pkg/descheduler/strategies/lownodeutilization.go
@@ -24,7 +24,6 @@ import (
 	clientset "k8s.io/client-go/kubernetes"
 	"k8s.io/klog"
 
-	helperUtil "k8s.io/kubernetes/pkg/apis/core/v1/helper"
 	"sigs.k8s.io/descheduler/cmd/descheduler/app/options"
 	"sigs.k8s.io/descheduler/pkg/api"
 	"sigs.k8s.io/descheduler/pkg/descheduler/evictions"
@@ -285,7 +284,7 @@ func evictPods(inputPods []*v1.Pod,
 // podToleratesTaints returns true if a pod tolerates one node's taints
 func podToleratesTaints(pod *v1.Pod, taintsOfNodes map[string][]v1.Taint) bool {
 	for nodeName, taintsForNode := range taintsOfNodes {
-		if len(pod.Spec.Tolerations) >= len(taintsForNode) && helperUtil.TolerationsTolerateTaintsWithFilter(pod.Spec.Tolerations, taintsForNode, nil) {
+		if len(pod.Spec.Tolerations) >= len(taintsForNode) && utils.TolerationsTolerateTaintsWithFilter(pod.Spec.Tolerations, taintsForNode, nil) {
 			return true
 		}
 		klog.V(5).Infof("pod: %#v doesn't tolerate node %s's taints", pod.Name, nodeName)
diff --git a/pkg/descheduler/strategies/lownodeutilization_test.go b/pkg/descheduler/strategies/lownodeutilization_test.go
index 9b15961..bab1e4a 100644
--- a/pkg/descheduler/strategies/lownodeutilization_test.go
+++ b/pkg/descheduler/strategies/lownodeutilization_test.go
@@ -499,7 +499,7 @@ func TestWithTaints(t *testing.T) {
 			},
 		}
 
-		nodePodCount := InitializeNodePodCount(item.nodes)
+		nodePodCount := utils.InitializeNodePodCount(item.nodes)
 		LowNodeUtilization(ds, strategy, "policy/v1", item.nodes, nodePodCount)
 
 		if item.evictionsExpected != evictionCounter {
diff --git a/pkg/utils/pod.go b/pkg/utils/pod.go
index 4eafdca..4f33e60 100644
--- a/pkg/utils/pod.go
+++ b/pkg/utils/pod.go
@@ -2,6 +2,7 @@ package utils
 
 import (
 	"fmt"
+
 	v1 "k8s.io/api/core/v1"
 	"k8s.io/apimachinery/pkg/api/resource"
 	utilfeature "k8s.io/apiserver/pkg/util/feature"
@@ -179,3 +180,49 @@ func maxResourceList(list, new v1.ResourceList) {
 		}
 	}
 }
+
+// TolerationsTolerateTaint checks if taint is tolerated by any of the tolerations.
+func TolerationsTolerateTaint(tolerations []v1.Toleration, taint *v1.Taint) bool {
+	for i := range tolerations {
+		if tolerations[i].ToleratesTaint(taint) {
+			return true
+		}
+	}
+	return false
+}
+
+type taintsFilterFunc func(*v1.Taint) bool
+
+// TolerationsTolerateTaintsWithFilter checks if given tolerations tolerates
+// all the taints that apply to the filter in given taint list.
+// DEPRECATED: Please use FindMatchingUntoleratedTaint instead.
+func TolerationsTolerateTaintsWithFilter(tolerations []v1.Toleration, taints []v1.Taint, applyFilter taintsFilterFunc) bool {
+	_, isUntolerated := FindMatchingUntoleratedTaint(taints, tolerations, applyFilter)
+	return !isUntolerated
+}
+
+// FindMatchingUntoleratedTaint checks if the given tolerations tolerates
+// all the filtered taints, and returns the first taint without a toleration
+func FindMatchingUntoleratedTaint(taints []v1.Taint, tolerations []v1.Toleration, inclusionFilter taintsFilterFunc) (v1.Taint, bool) {
+	filteredTaints := getFilteredTaints(taints, inclusionFilter)
+	for _, taint := range filteredTaints {
+		if !TolerationsTolerateTaint(tolerations, &taint) {
+			return taint, true
+		}
+	}
+	return v1.Taint{}, false
+}
+
+func getFilteredTaints(taints []v1.Taint, inclusionFilter taintsFilterFunc) []v1.Taint {
+	if inclusionFilter == nil {
+		return taints
+	}
+	filteredTaints := []v1.Taint{}
+	for _, taint := range taints {
+		if !inclusionFilter(&taint) {
+			continue
+		}
+		filteredTaints = append(filteredTaints, taint)
+	}
+	return filteredTaints
+}
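
For clarity, here is a small, self-contained sketch of how the copied helper could be called once it lives under pkg/utils as in the diff above (the pod and taints are made up):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"

	"sigs.k8s.io/descheduler/pkg/utils"
)

func main() {
	pod := &v1.Pod{
		Spec: v1.PodSpec{
			Tolerations: []v1.Toleration{
				{Key: "dedicated", Operator: v1.TolerationOpExists},
			},
		},
	}

	taintsOfNodes := map[string][]v1.Taint{
		"n1": {{Key: "dedicated", Value: "batch", Effect: v1.TaintEffectNoSchedule}},
		"n2": {{Key: "gpu", Value: "true", Effect: v1.TaintEffectNoSchedule}},
	}

	for nodeName, taints := range taintsOfNodes {
		// A nil filter means every taint on the node is considered.
		tolerated := utils.TolerationsTolerateTaintsWithFilter(pod.Spec.Tolerations, taints, nil)
		fmt.Printf("node %s tolerated: %v\n", nodeName, tolerated)
	}
}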

@ingvagabund (Contributor):

@jw-s sorry this PR review is taking so long.

@ingvagabund (Contributor) left a comment:

This will also make the tests easier to read and show the individual cases:

diff --git a/pkg/descheduler/strategies/lownodeutilization_test.go b/pkg/descheduler/strategies/lownodeutilization_test.go
index 9b15961..9da1512 100644
--- a/pkg/descheduler/strategies/lownodeutilization_test.go
+++ b/pkg/descheduler/strategies/lownodeutilization_test.go
@@ -393,7 +393,7 @@ func TestWithTaints(t *testing.T) {
 		},
 	}
 
-	n1 := test.BuildTestNode("n1", 1000, 3000, 10)
+	n1 := test.BuildTestNode("n1", 2000, 3000, 10)
 	n2 := test.BuildTestNode("n2", 1000, 3000, 10)
 	n3 := test.BuildTestNode("n3", 1000, 3000, 10)
 	n3withTaints := n3.DeepCopy()
@@ -414,11 +414,13 @@ func TestWithTaints(t *testing.T) {
 	}
 
 	tests := []struct {
+		name              string
 		nodes             []*v1.Node
 		pods              []*v1.Pod
 		evictionsExpected int
 	}{
 		{
+			name:  "No taints",
 			nodes: []*v1.Node{n1, n2, n3},
 			pods: []*v1.Pod{
 				//Node 1 pods
@@ -436,6 +438,7 @@ func TestWithTaints(t *testing.T) {
 			evictionsExpected: 1,
 		},
 		{
+			name:  "Taint that is not tolerated",
 			nodes: []*v1.Node{n1, n3withTaints},
 			pods: []*v1.Pod{
 				//Node 1 pods
@@ -453,6 +456,7 @@ func TestWithTaints(t *testing.T) {
 			evictionsExpected: 0,
 		},
 		{
+			name:  "Taint that is tolerated",
 			nodes: []*v1.Node{n1, n3withTaints},
 			pods: []*v1.Pod{
 				//Node 1 pods
@@ -472,38 +476,40 @@ func TestWithTaints(t *testing.T) {
 	}
 
 	for _, item := range tests {
-		var objs []runtime.Object
-		for _, node := range item.nodes {
-			objs = append(objs, node)
-		}
-
-		for _, pod := range item.pods {
-			pod.ObjectMeta.OwnerReferences = test.GetReplicaSetOwnerRefList()
-			objs = append(objs, pod)
-		}
+		t.Run(item.name, func(t *testing.T) {
+			var objs []runtime.Object
+			for _, node := range item.nodes {
+				objs = append(objs, node)
+			}
 
-		fakePtr := newFake(objs...)
-		var evictionCounter int
-		fakePtr.PrependReactor("create", "pods", func(action core.Action) (bool, runtime.Object, error) {
-			if action.GetSubresource() != "eviction" || action.GetResource().Resource != "pods" {
-				return false, nil, nil
+			for _, pod := range item.pods {
+				pod.ObjectMeta.OwnerReferences = test.GetReplicaSetOwnerRefList()
+				objs = append(objs, pod)
 			}
-			evictionCounter++
-			return true, nil, nil
-		})
 
-		ds := &options.DeschedulerServer{
-			Client: &fake.Clientset{Fake: *fakePtr},
-			DeschedulerConfiguration: componentconfig.DeschedulerConfiguration{
-				EvictLocalStoragePods: false,
-			},
-		}
+			fakePtr := newFake(objs...)
+			var evictionCounter int
+			fakePtr.PrependReactor("create", "pods", func(action core.Action) (bool, runtime.Object, error) {
+				if action.GetSubresource() != "eviction" || action.GetResource().Resource != "pods" {
+					return false, nil, nil
+				}
+				evictionCounter++
+				return true, nil, nil
+			})
 
-		nodePodCount := InitializeNodePodCount(item.nodes)
-		LowNodeUtilization(ds, strategy, "policy/v1", item.nodes, nodePodCount)
+			ds := &options.DeschedulerServer{
+				Client: &fake.Clientset{Fake: *fakePtr},
+				DeschedulerConfiguration: componentconfig.DeschedulerConfiguration{
+					EvictLocalStoragePods: false,
+				},
+			}
 
-		if item.evictionsExpected != evictionCounter {
-			t.Errorf("Expected %v evictions, got %v", item.evictionsExpected, evictionCounter)
-		}
+			nodePodCount := utils.InitializeNodePodCount(item.nodes)
+			LowNodeUtilization(ds, strategy, "policy/v1", item.nodes, nodePodCount)
+
+			if item.evictionsExpected != evictionCounter {
+				t.Errorf("Expected %v evictions, got %v", item.evictionsExpected, evictionCounter)
+			}
+		})
 	}
 }

@ingvagabund (Contributor):

@aveshagarwal @damemi this is a good candidate for merging

@jw-s jw-s force-pushed the feature/respect-tolerations branch from cddcd24 to 4757132 Compare March 2, 2020 13:43
@jw-s (Contributor, Author) commented Mar 2, 2020

@jw-s sorry this PR review is taking so long.

No worries! I've made the changes you suggested.

@aveshagarwal (Contributor):

@aveshagarwal @damemi this is a good candidate for merging

Will review it today.

@ingvagabund (Contributor):

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Mar 3, 2020
@ingvagabund (Contributor):

@aveshagarwal ping

@aveshagarwal (Contributor):

/approve

@k8s-ci-robot (Contributor):

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: aveshagarwal, jw-s

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Mar 4, 2020
@k8s-ci-robot k8s-ci-robot merged commit 112684b into kubernetes-sigs:master Mar 4, 2020
@seanmalloy seanmalloy mentioned this pull request May 20, 2020
ingvagabund added a commit to ingvagabund/descheduler that referenced this pull request Jun 4, 2020
Based on https://github.com/kubernetes/community/blob/master/community-membership.md#requirements-1:

The following apply to the part of codebase for which one would be a reviewer in an OWNERS file (for repos using the bot).

> member for at least 3 months

For a couple of years now

> Primary reviewer for at least 5 PRs to the codebase

kubernetes-sigs#285
kubernetes-sigs#275
kubernetes-sigs#267
kubernetes-sigs#254
kubernetes-sigs#181

> Reviewed or merged at least 20 substantial PRs to the codebase

https://github.com/kubernetes-sigs/descheduler/pulls?q=is%3Apr+is%3Aclosed+assignee%3Aingvagabund

> Knowledgeable about the codebase

yes

> Sponsored by a subproject approver
> With no objections from other approvers
> Done through PR to update the OWNERS file

this PR

> May either self-nominate, be nominated by an approver in this subproject, or be nominated by a robot

self-nominating
briend pushed a commit to briend/descheduler that referenced this pull request Feb 11, 2022
…rations

Respect Node taints with tolerations (if exist)
briend pushed a commit to briend/descheduler that referenced this pull request Feb 11, 2022