support for common labels that don't get set on selectors #1009
Not possible with the code as is :( One could enrich the configuration of the current label applier to accept more rules, or write a new label transformer plugin and use that instead of the commonLabels field.
If it is not possible to have common labels not apply to the selectors, then you should consider removing support for the common labels feature. It's a trap feature which is almost certain to cause users trouble. In its current state it encourages you to NEVER change your labels, which is very anti-CI.
Either that, or change the name of commonLabels to something that reflects that it also rewrites selectors.
This behaviour should - at the very least - be clearly documented, because it's going to leave you with orphaned resources in case you update your commonLabels.
Well, it will either cause non-spawning pods, OR, if your version of Kubernetes is new enough, it will simply fail, as selectors have become immutable in newer versions.
I DO fear that commonUnpropagatedLabels could become the RealMysqlEscapeString of this project, and I think that's the bigger risk. This sort of "the REAL thing you want is the thing with the different name, because we didn't want to break backwards compatibility" is a pretty gross problem in a lot of projects, and I think it can be a big long-term concern.
Because this issue causes failures, marking this as a bug. /sig cli
@Benjamintf1 @embik thanks for the great commentary. The upside to propagating labels to selectors is convenience and accuracy: one doesn't have to change labels, one can just add more, leaving the old labels in place, and proceed with some label deprecation scheme across the cluster as time passes. I'm in favor of changing this, but would like more evidence that automatic propagation to selectors is a poor choice for default behavior.
There is also a possible use case of having the same thing deployed twice using different labels, but you'd need more than that to make it functional (including deploying to a different namespace; you might also have some trouble with cluster-scoped resources, so basically you'd have to fuzz the name or location of everything to get it to work). I'm not aware of a solid way to deprecate selectors at all at the moment. One could do something with --prune these days (I think; it looks like a new feature), but even then you'd want that as a common label that doesn't apply to the selectors. Part of the reason I think propagating to selectors is a bad choice is that it isn't clear. I think there IS a use case for both commonLabels and commonSelectors (or you could call this "common distinguishers", or some other derived term), but I don't think these options should be combined in the same configuration. I think there's more clarity in having them as separate paths, because they are entirely different use cases and goals.
I might be wrong here, but won't exactly that result in orphaned pods? Adding additional labels still changes the selectors. I'll throw together a short demonstration of that if I find the time.
The tl;dr of https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#label-selector-updates is: don't update label selectors. How about a new kustomization term for labels that aren't propagated to selectors? This breaks existing behaviors, so we'd have to increment the version in the kustomization file (which is a pain, since that adds text to all kustomization files that is currently optional, there being only one version). Any other ideas?
I came across this issue while trying to add a revision-based label to all pods in a Kustomization and then use it as a part of confirming that all Pods of that version were ready later. In this case, I feel like I do want this label to be propagated down into workload templates, but don't want it to be propagated to selectors.
This desired additional dimension of customizability, and the above commentary about breaking current functionality, make me wonder if it might be worth considering introducing a new commonLabelOptions field alongside commonLabels:

commonLabels:
  existingLabel: something
  # generated by CI tooling via `kustomize edit add label`
  revision: decaf123
commonLabelOptions:
  defaults:
    propagateToSpecTemplate: true
    propagateToSelectors: false
  labels:
    existingLabel:
      propagateToSpecTemplate: true
      propagateToSelectors: true

I do agree that propagation to selectors should probably be off by default, but this approach might support deferring a change of the apiVersion for a while, if that's desired.
If the tack is to go down the path of customizing off the same "root" option, why not have:

commonLabels:
  existingLabel:
    value: something
    propagateToSpecTemplate: true
    propagateToSelectors: false

It keeps the locality of the options a bit better, especially as you scale up the number of labels you want. (You can also keep some degree of backwards compatibility by treating a single string value as the old behavior and eventually deprecating it, while newly added labels use the new form.)
As I explained in thread #157, the commonLabels are meant to "tag" those resources that YOU create with your kustomize setup. As a mind-aid, consider the case where you want to register stored procedures with a database, via a manifest of kind: storedProcedure. If this manifest was under the control of kustomize and you had defined commonLabels, then this SQL statement itself is YOUR resource, and you might expect that when it gets registered with the database, it would get tagged with your commonLabels. A sensible default would therefore be to not tamper with selectors.
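A hypothetical reconstruction of the kind of manifest alluded to above (the apiVersion, fields, and names are illustrative, not a real built-in type):

apiVersion: example.com/v1
kind: storedProcedure
metadata:
  name: my-proc
  labels: {}              # commonLabels would be stamped here
spec:
  sql: CREATE PROCEDURE my_proc() ...

Nothing in such a resource is a selector, so a transformer that only touched metadata/labels would behave exactly as this comment expects.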
With this approach it is not obvious how to change the defaults.
You can presumably leave off the "defaulting" options if you have them separate. You'd presumably learn it the same way, via documentation.
If I understood correctly, the reason the selectors are modified is that changing a label in a resource will change its identity in case it is used by a selector. Implementing a new approach will have the same problem, because you might change a label expected by a selector that didn't change. The selector is updated to add the new labels, when in fact it should only change the existing ones that were changed. IMHO this shouldn't be a breaking change; it's more like a bug fix, because nobody was expecting this dodgy behaviour.
A possible solution here may be to introduce the concept of common selectors, which would generate selector entries from the given labels, separately from commonLabels. However, this doesn't address the problem that selectors, by design, are usually meant to be more directed. As such, this should probably be extended to allow the generation of more specific selectors in specific objects, or to allow excluding the generation from specified objects.
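A hypothetical sketch of what such a field might look like (the name commonSelectors and its exact semantics are guesses; the original inline example did not survive formatting):

commonSelectors:
  app: my-app   # written into selector fields and the matching template labels

commonLabels would then be free to touch only metadata, leaving resource identity alone.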
Looks like it's already available.
Hello @darwin67, you're right, but this can be quite a problem if you have tens of microservices. The personalized kustomize configuration file (the one your link points to) would have to be present in ALL the folders. Something like @taha-au suggested is probably a good idea. Or perhaps the possibility to specify the paths (metadata/labels, spec/selector/, ...) directly in the kustomization.yaml file.
I'm not sure what you mean by that. We have kind of similar situations, but we structure things so that we only put the common configurations in each base; we always just reference the base from the overlays, and the overlays can go a little crazy with multiple versions of the same base. So if I have 10 bases and I want 3 versions of each app, the maximum number of common configurations I need is only 10. I get the idea that people want to set everything in one place, but I thought the whole idea of this tool is composition; centralizing the configurations in one place doesn't outright defeat the purpose, but I feel like it's moving away from that idea.
I guess it depends on how your services are set up. In my case we have a setup that looks something like this, with a repo/monorepo:
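(A hypothetical layout along these lines; the original tree was lost in formatting:)

repo/
  micro-service-1/
    base/
      kustomization.yaml
    overlays/
      staging/
      production/
  micro-service-2/
    base/
    overlays/
  ...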
My point in my previous message was that if the whole company (or certain teams in charge of multiple microservices) use the same "custom" kustomize configuration, we would have to copy that kustomize configuration into each micro-service-#/base folder. I totally agree on the tool being about composition, but within certain limits, I guess. Maybe our architecture/use of overlays is not optimal; I'm just relating how we use it. I think the only way to do this currently is to publish a base that has this custom configuration and have all the microservices use the same base. However, only GitHub is supported for remote bases.
Maybe having ...
Another reason why labels should not be applied to selectors by default: consider defining a NetworkPolicy that allows ingress to the application's web pods.
Suddenly some labels from commonLabels get applied to the podSelector, making it possible for any pod to communicate with e.g. a database that happens to carry that common label (such as a client id or project name, in our case). I would flag this as a potential security issue.
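A minimal sketch of how this plays out (the names and labels are illustrative). Given commonLabels of project: acme, kustomize's default configuration also rewrites NetworkPolicy selector fields, so an input like:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-ingress
spec:
  podSelector:
    matchLabels:
      app: web          # which pods the policy protects
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend   # which peers may connect
comes out with project: acme added to both matchLabels blocks. That silently changes which pods the policy selects and which peers it admits, a rewrite of security-relevant fields that is easy to miss in review.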
commonLabels not updating selectors would be really annoying. Wouldn't adding a labels option like the following do the trick?

labels:
- target:
    group: apps
    version: v1
    kind: Deployment
    name: test
  labels:
    version: v0.1.0

Or follow whatever the new syntax for multi-target patches will be.
Hi all, I face a similar issue when I try to update the "version" label in the CI/CD pipeline:

cd base
kustomize edit add label version:xyz123
kustomize build ../overlays/production | kubectl apply -f -

And I receive an error. Has anyone found a viable workaround to update the version label during deployment?
Bringing this back to life as it's still being requested and we may have someone to work on it. Summary: the commonLabels field applies labels to a fixed, default set of fields, including selectors. The convenience here is that one doesn't have to specify the fields that should get labels. The price paid is lack of control, e.g. sometimes getting unwanted labels in selector fields. The default configuration can be overridden by declaring a custom transformer configuration, as sketched below.
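A hedged sketch of that override mechanism (file names are illustrative). A kustomization can point at a custom transformer configuration:

commonLabels:
  app: my-app
configurations:
- my-labels-config.yaml

where my-labels-config.yaml lists the field specs the label transformer should touch:

commonLabels:
- path: metadata/labels
  create: true
- path: spec/template/metadata/labels
  create: true
  kind: Deployment

One caveat raised elsewhere in this thread: custom configurations have historically merged additively with the built-in defaults, so this is better at adding label targets than at removing the default selector paths.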
We should not change the schema of the existing commonLabels field. Some ideas from above:
Suggestions:
The simplest thing might be to just make it a list of label transformer plugin configs, e.g. something like the raw plugin config sketched below.
Implicit here would be that only the fields listed in each config get the labels. It has to be simpler than using the raw label transformer plugin configuration, though.
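For context, the raw builtin plugin configuration being referred to looks roughly like this (a sketch; the label values are illustrative):

apiVersion: builtin
kind: LabelTransformer
metadata:
  name: notImportantHere
labels:
  app: my-app
fieldSpecs:
- path: metadata/labels
  create: true

Wired in through the transformers: field of a kustomization, it applies labels only to the paths listed, so selectors stay untouched unless a fieldSpec names them.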
@Shell32-Natsu wdyt?
As I've stated in my dupe issue, I think the best solution for this would be to have two separate properties in your Kustomization file, one that will add labels just as labels, and one that will add them as labels and selector labels:

commonLabels:
  app.kubernetes.io/owner: ATeam
selectorLabels:
  app.kubernetes.io/name: my-app

Although I do understand that this would introduce breaking changes in current deployments and would require manual interaction with resources, I still think this would be the clearest for users of Kustomize.
@monopole LGTM, will do it when I have time.
@rcomanne in retrospect, we have a flaw in the API and we're stuck with it. We can add fields, but changing the meaning of existing fields and imposing migration work on users isn't practical.
We can, as noted above, do a v2 of the kustomization file. But let's gather up some other changes first, not just the labels change. We also want to deprecate the ...
Common labels sets a ton of labels by default [1] that are additive and can't be disabled [2]. This PR switches to using the labels field [3] to disable labelling selectors and only label metadata and template/metadata fields. This field is fairly new and isn't documented in the kustomize docs yet, but seems to be the recommended solution moving forward [4] (also, docs appear to be incoming). [1] https://github.com/kubernetes-sigs/kustomize/blob/f61b075d3bd670b7bcd5d58ce13e88a6f25977f2/api/konfig/builtinpluginconsts/commonlabels.go [2] kubernetes-sigs/kustomize#817 [3] https://github.com/kubernetes-sigs/kustomize/blob/10026758d314920e8fa3c9c526525d8577d39617/api/types/labels.go [4] kubernetes-sigs/kustomize#1009
I just tested it, and it works since this PR got merged: https://github.com/kubernetes-sigs/kustomize/pull/3743/files

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
labels:
- includeSelectors: true
  pairs:
    app.kubernetes.io/name: myapp
- includeSelectors: false
  pairs:
    app.kubernetes.io/version: v0.1.0
Is there a viable solution to apply a "common" label to all Kubernetes resources, including templates, such as Deployment labels and the pod template of Deployments, while excluding this label from the Deployment's selectors? I found the following solution, and I believe it is valuable for other people confronted with the same question:

apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Kustomization
commonLabels:
  my-label: my-value
labels:
- includeSelectors: false
  includeTemplates: true
  pairs:
    my-changing-label: my-changing-value

Documentation reference: Labels - Official documentation - Kustomize
This doesn't work for me in v5.1.0 if I try to use the new labels field.
I've tried everything listed in this thread, unsuccessfully, for a specific use case; any suggestions on how to get this working? I'll start with a brief code sample and then describe the use case.

Code sample

Imagine having this in your kustomization.yaml:

commonLabels:
  app.kubernetes.io/name: "hello-app"
# ...
# This works but I'd like to use `patches` instead.
patchesJson6902:
- path: "patch-deployment-worker-common-labels.yaml"
  target:
    kind: "Deployment"
    name: "worker-app"

Then you have patch-deployment-worker-common-labels.yaml:

---
- op: "replace"
  path: "/metadata/labels/app.kubernetes.io~1name"
  value: "hello-worker-app"
- op: "replace"
  path: "/spec/selector/matchLabels/app.kubernetes.io~1name"
  value: "hello-worker-app"
- op: "replace"
  path: "/spec/template/metadata/labels/app.kubernetes.io~1name"
  value: "hello-worker-app"

Use case

You have the usual K8s configs to deploy a web app (deployment, service, ingress, configmap, job, cronjob, etc.) and you want the common label applied to all of them. You have both a web and a worker deployment, and it's critically important that the worker deployment doesn't have the same label as the web deployment, because your load balancer will attach a readiness gate to any deployment that matches a service's selector with the same name, so the worker would always fail that check since it's not a web app. To get around that, you patch the common label and selector of the worker. Using patches instead of patchesJson6902 doesn't work here. This is stopping me from being able to use inline Kustomize patches with Argo CD for preview environments, since it only supports patches; see the sketch below for the form I'd expect to work.
Certain labels, such as version, shouldn't be propagated to selectors, but are common to the entire app.