Runtime Even Pod Spreading #154

Closed · wants to merge 3 commits
1,325 changes: 0 additions & 1,325 deletions pkg/api/types.generated.go

This file was deleted.

32 changes: 32 additions & 0 deletions pkg/api/types.go
@@ -48,6 +48,9 @@ type DeschedulerStrategy struct {
type StrategyParameters struct {
NodeResourceUtilizationThresholds NodeResourceUtilizationThresholds
NodeAffinityType []string
// TopologySpreadConstraints describes how a group of pods should be spread across topology
// domains. The descheduler will use these constraints to decide which pods to evict.
NamespacedTopologySpreadConstraints []NamespacedTopologySpreadConstraint

Review comment: Isn't this name unnecessarily long? It feels a bit like adding part of its documentation to the name.

}
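
For illustration, a minimal sketch of how a policy author might populate StrategyParameters with the new field. This is not code from this PR: it assumes the import paths sigs.k8s.io/descheduler/pkg/api and k8s.io/apimachinery/pkg/apis/meta/v1, and the namespace, topology key, and label selector values are hypothetical.

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	api "sigs.k8s.io/descheduler/pkg/api"
)

func main() {
	// Hypothetical values: spread pods labeled app=web evenly across
	// zones in the "default" namespace, tolerating a skew of at most 1.
	params := api.StrategyParameters{
		NamespacedTopologySpreadConstraints: []api.NamespacedTopologySpreadConstraint{{
			Namespace: "default",
			TopologySpreadConstraints: []api.TopologySpreadConstraint{{
				MaxSkew:     1,
				TopologyKey: "topology.kubernetes.io/zone",
				LabelSelector: &metav1.LabelSelector{
					MatchLabels: map[string]string{"app": "web"},
				},
			}},
		}},
	}
	fmt.Printf("%+v\n", params)
}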

type Percentage float64
@@ -58,3 +61,32 @@ type NodeResourceUtilizationThresholds struct {
TargetThresholds ResourceThresholds
NumberOfNodes int
}

type NamespacedTopologySpreadConstraint struct {
Namespace string
TopologySpreadConstraints []TopologySpreadConstraint
}

type TopologySpreadConstraint struct {
// MaxSkew describes the degree to which pods may be unevenly distributed.
// It's the maximum permitted difference between the number of matching pods in
// any two topology domains of a given topology type.
// For example, in a 3-zone cluster, currently pods with the same labelSelector
// are "spread" such that zone1 and zone2 each have one pod, but not zone3.
// - if MaxSkew is 1, incoming pod can only be scheduled to zone3 to become 1/1/1;
// scheduling it onto zone1(zone2) would make the ActualSkew(2) violate MaxSkew(1)
// - if MaxSkew is 2, incoming pod can be scheduled to any zone.
// It's a required value. Default value is 1 and 0 is not allowed.
MaxSkew int32
Contributor: @Huang-Wei - I believe this is in line with what you're proposing...

Reply: Yes. @krmayankk, if the upstream API is available, I think you can simply vendor that?

Author: Currently the descheduler reads TopologySpreadConstraints as defined in the DeschedulerPolicy. Later these constraints will come directly from the Pod itself, so there is nothing to vendor; only the source of the TopologySpreadConstraint will change. @Huang-Wei

// TopologyKey is the key of node labels. Nodes that have a label with this key
// and identical values are considered to be in the same topology.
// We consider each <key, value> as a "bucket", and try to put balanced number
// of pods into each bucket.
// It's a required field.
TopologyKey string
// LabelSelector is used to find matching pods.
// Pods that match this label selector are counted to determine the number of pods
// in their corresponding topology domain.
// +optional
LabelSelector *metav1.LabelSelector
}
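
To make the MaxSkew arithmetic concrete, here is a minimal, self-contained Go sketch (not code from this PR) that computes the actual skew across topology domains as the maximum per-domain count minus the minimum, then compares it against MaxSkew. How matching pods are counted per domain, and how eviction candidates are then chosen, are assumptions left out of the sketch.

package main

import "fmt"

// domainSkew returns the actual skew for a set of topology domains:
// the difference between the most- and least-populated domain.
// counts maps a topology value (e.g. a zone name) to the number of
// matching pods observed there.
func domainSkew(counts map[string]int) int {
	first := true
	min, max := 0, 0
	for _, n := range counts {
		if first {
			min, max = n, n
			first = false
			continue
		}
		if n < min {
			min = n
		}
		if n > max {
			max = n
		}
	}
	return max - min
}

func main() {
	// The 3-zone example from the MaxSkew doc comment: zone1 and zone2
	// each hold one matching pod, zone3 holds none.
	counts := map[string]int{"zone1": 1, "zone2": 1, "zone3": 0}

	maxSkew := 1
	if skew := domainSkew(counts); skew > maxSkew {
		// A real strategy would pick eviction candidates from the
		// most-populated domains here; this sketch only reports.
		fmt.Printf("skew %d exceeds MaxSkew %d: rebalance needed\n", skew, maxSkew)
	} else {
		fmt.Printf("skew %d within MaxSkew %d\n", skew, maxSkew)
	}
}

With counts of 1/1/0 the skew is 1, so MaxSkew=1 is still satisfied; placing one more pod in zone1 or zone2 would raise the skew to 2 and violate it, which matches the worked example in the MaxSkew comment above.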