
support for common labels that don't get set on selectors #1009

Closed
pwittrock opened this issue Apr 23, 2019 · 56 comments · Fixed by #3743
Assignees
Labels
kind/bug Categorizes issue or PR as related to a bug. sig/cli Categorizes an issue or PR as relevant to SIG CLI.

Comments

@pwittrock
Contributor

certain labels such as version shouldn't be propagated to selectors, but are common to the entire app.

@monopole
Contributor

Not possible with code as is :(

Could enrich the configuration of the current label applier to accept more rules, or write a new label transformer plugin and use that instead of the commonLabels field.
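For context, a sketch of the default propagation behavior under discussion; the label key and value here are hypothetical:

```yaml
# kustomization.yaml (illustrative sketch, not from this thread)
commonLabels:
  version: v2

# After `kustomize build`, a Deployment in this directory roughly gains:
#
#   metadata.labels.version: v2                  # usually wanted
#   spec.template.metadata.labels.version: v2    # usually wanted
#   spec.selector.matchLabels.version: v2        # the propagation this issue objects to
```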

@Benjamintf1
Contributor

Benjamintf1 commented Apr 29, 2019

If it is not possible to have common labels not apply to the selectors, then you should consider removing support for the common labels feature. It's a trap feature which is almost certain to cause users trouble. In its current state it encourages you to NEVER change your labels and is very anti-CI.

@Benjamintf1
Contributor

Benjamintf1 commented Apr 29, 2019

Either that, or change the name of commonLabels to commonLabelsAndSelectors, because I don't think anyone who's worked on Kubernetes for a while would expect it to be added to selectors with that name. Adding to selectors is bad, especially because you're expected to have continuity (there's a reason these are separate fields rather than one and the same).

@embik
Member

embik commented May 22, 2019

This behaviour should, at the very least, be clearly documented, because it's going to leave you with orphaned resources if you update your commonLabels; e.g. pods spawned by a previous iteration of a DaemonSet will continue running after the label matching has been altered by updating commonLabels. (edit: it is documented that commonLabels will apply to selectors, but maybe we want a warning there, because most people won't see the consequences.)

In my humble opinion, commonLabels should not propagate to templates or selectors at all. I guess changing its behaviour is far too late and would break kustomize for a lot of people, but what about introducing a new commonUnpropagatedLabels (please choose a different name, I'm just terrible at naming things) that will ignore templates and selectors?

@Benjamintf1
Contributor

Well, it will either cause non-spawning pods, or, if your version of Kubernetes is new enough, it will simply fail, as selectors have become immutable in newer versions.

@Benjamintf1
Contributor

I DO have a fear that commonUnpropagatedLabels could become RealMysqlEscapeString, and I think that's the bigger risk. This sort of "the REAL thing you want is the thing with the different name, because we didn't want to break backwards compatibility" is a pretty gross problem in a lot of projects, and I think it can be a big long-term concern.

@seans3
Contributor

seans3 commented May 22, 2019

Because this issue causes failures, marking this as a bug.

/sig cli
/kind bug

@k8s-ci-robot k8s-ci-robot added sig/cli Categorizes an issue or PR as relevant to SIG CLI. kind/bug Categorizes issue or PR as related to a bug. labels May 22, 2019
@monopole
Contributor

monopole commented May 23, 2019

@Benjamintf1 @embik thanks for the great commentary.

The upside to propagating labels to selectors is convenience and accuracy. In directory foo, the directive commonLabel foo will associate a Service in that directory with all pods created in that directory, with no typo mistakes. Same in directory bar, etc. Some encapsulating overlay kustomization can combine foo and bar by mentioning them as bases, and add a wider-scoped label (which also propagates to selectors).

One doesn't have to change labels - one can just add more, leaving the old labels in place, and proceed with some label deprecation scheme across the cluster as time passes.

In favor of:

  • an improved field name (commonLabelsAndSelectors would be clearer).
  • introducing some mechanism to suppress propagation to selectors

but would like more evidence that automatic propagation to selectors is a poor choice for default behavior.

@Benjamintf1
Contributor

There is also a possible use case of having the same thing deployed twice using different labels, but you'd need more than that to make it functional (including deploying to a different namespace; but you might also have some trouble with cluster-scoped resources, so basically you'd have to fuzz the name or location of everything to get it to work).

I'm not aware of a solid way to deprecate selectors at all at the moment. The best one could do is something with --prune these days (I think; it looks like a new feature), but even then, you'd want that as a common label that doesn't apply to the selectors.

Part of the reason I think propagating to selectors is a bad choice is that it isn't clear.
In my personal opinion, you should use an absolutely minimal set of labels for your selectors, and have additional labels for querying and understanding (arguably some of them could be annotations for performance reasons, but I haven't seen that be a problem in my experience), because of the absolute need for continuity between earlier and later applied files.
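The minimal-selector convention described above might look like this on a Deployment; all names and values here are hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app        # minimal, stable identity label
    team: payments     # extra labels for querying/understanding only
    version: v2
spec:
  selector:
    matchLabels:
      app: my-app      # selector stays minimal and never changes
  template:
    metadata:
      labels:
        app: my-app
        team: payments # safe to change between deploys
        version: v2
```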

I think there IS a use case for both commonLabels and commonSelectors (or you could call this "common distinguishers", or some other derived term), but I don't think these options should be combined in the same configuration. I think there's more clarity in having them as separate paths because they are entirely different use cases and goals.

@embik
Member

embik commented May 24, 2019

@monopole

One doesn't have to change labels - one can just add more, leaving the old labels in place, and proceed with some label deprecation scheme across the cluster as time passes.

I might be wrong here, but won't exactly that result in orphaned pods? Because adding additional commonLabels will append them to your selector, which means pods spawned by the previous iteration of your deployment/daemonset won't be selected by your controller object anymore (the existing pods miss the added label and thus are not selected).

I'll throw together a short demonstration of that if I find the time.

@monopole
Contributor

the tl;dr of https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#label-selector-updates
is: make a plan (the 'label deprecation scheme' I alluded to above), and understand that changing workload selectors really means creating new workloads and deprecating old ones, and having to delete them on your own. kustomize doesn't create this problem.

How about a new kustomization term, commonSelectedLabels, that does what commonLabels does now, and changing the behavior of commonLabels to behave as commonAnnotations does, i.e. just add KV pairs to metadata and don't mess with selectors?

This breaks existing behaviors, so we'd have to increment the version in the kustomization file (which is a pain, since that adds text to all kustomization files that is currently optional since there's only one version).

Any other ideas?

@jnewland
Contributor

I came across this issue while trying to add a revision-based label to all pods in a Kustomization and then use it as part of confirming, later, that all Pods of that version were ready. In this case, I feel like I do want this label to be propagated down into workload templates, but I don't want it to be propagated to selectors.

Any other ideas?

This desired additional dimension of customizability and the above commentary about breaking current functionality make me wonder if it might be worth considering introducing a new commonLabelOptions term that could be used as follows:

commonLabels:
  existingLabel: something
  # generated by CI tooling via `kustomize edit add label`
  revision: decaf123

commonLabelOptions:
  defaults:
    propagateToSpecTemplate: true
    propagateToSelectors: false
  labels:
    existingLabel:
      propagateToSpecTemplate: true
      propagateToSelectors: true

I do agree that propagation to selectors should probably be off by default, but this approach might support deferring changing the apiVersion for a while if that's desired.

@Benjamintf1
Contributor

Benjamintf1 commented May 30, 2019

If the tack is to go down the path of being customized off the same "root" option, why not have

commonLabels:
  existingLabel:
    value: something
    propagateToSpecTemplate: true
    propagateToSelectors: false

It keeps the locality of the options a bit better, especially if you scale the number of labels you want.

(You can also keep some degree of backwards compatibility with this by treating a single string as the old way and eventually deprecating it, but having added labels be added in the new way)

@sascha-coenen

As I explained in thread #157, propagating labels to selectors is actually wrong thinking, a "trap" as Benjamin put it.

The commonLabels are meant to "tag" those resources that YOU create with your kustomize setup.
The selectors constitute an ID that points to a resource that cannot be assumed to be under your auspices. Making this assumption is completely arbitrary.

As a mind-aid, consider the case that you want to register stored procedures with a database.

kind: storedProcedure
version: ...
spec:
  SELECT * FROM some_table

if this manifest was under the control of kustomize and you had defined commonLabels, then this SQL statement itself is YOUR resource, and you might expect that when it gets registered with a database, it would get tagged with your commonLabels.
However, those commonLabels should never modify the name of the database table "some_table".
How strange it would be if someone proposed that this should happen.
To rename commonLabels into commonLabelsAndSelectors is beside the point, which is that you would never want to rewrite selectors, because doing so would falsify an identity.
Saying "yeah, but some_table might also have been created by me" is a possible use case, but there is no reason why this use case should get any preference over the opposite case, which is that the resource has NOT been created by you or is not part of your set of kustomization setups.

A sensible default would therefore be to not tamper with selectors.
The case for adding labels to selectors would be to thereby give them another identity.
One can allow for this special case, but then one should make it very explicit and not confuse it with labels, which are meant to tag YOUR resources and are not a templating facility for modifying the identity of foreign keys.

@voidengineer

If the tack is to go down the path of being customized off the same "root" option, why not have

commonLabels:
  existingLabel:
    value: something
    propagateToSpecTemplate: true
    propagateToSelectors: false

It keeps the locality of the options a bit better, especially if you scale the number of labels you want.

(You can also keep some degree of backwards compatibility with this by treating a single string as the old way and eventually deprecating it, but having added labels be added in the new way)

With this approach it is not obvious how to change the defaults.

@Benjamintf1
Contributor

With this approach it is not obvious how to change the defaults.

You can presumably leave off the "defaulting" options if you have them separate. You'd presumably learn it the same way, via documentation.

@guibirow

guibirow commented Jun 5, 2019

If I understood correctly, the reason the selectors are modified is that changing a label on a resource will change its identity in case it is used by a selector.

Implementing a new approach will have the same problem, because you might change a label expected by a selector that didn't change.

The selector is updated to add the new labels, when in fact it should only change the existing ones that were changed.

IMHO this shouldn't be a breaking change; it's more like a bug fix, because nobody was expecting this dodgy behaviour.

@taha-au

taha-au commented Jun 20, 2019

A possible solution here may be to introduce the concept of commonSelectors that, if present, would be populated into the selectors instead of the commonLabels. For example:

commonLabels:
   app: foo
   environment: dev
commonSelectors:
    app: foo
    release: bar

Would generate selectors of app: foo and release: bar but not environment: dev, while the labels would be app: foo and environment: dev. This would allow for more flexibility.

If commonSelectors is absent, then commonLabels could continue to work as it does currently. This would allow the expected behaviour for current users to remain the same, while giving people an opportunity to do other things if needed.

However, this doesn't address the problem that selectors, by design, are usually meant to be more directed. As such, this should probably be extended to allow the generation of more specific selectors in specific objects, or to allow the exclusion of the generation from specified objects.

@darwin67

darwin67 commented Jun 20, 2019

@quentinleclerc

Hello,
I've been facing the exact same problem of putting the version in commonLabels.

@darwin67 you're right, but this can be quite a problem if you have tens of microservices. The personalized configuration file for kustomize (the one your link is pointing to) would have to be present in ALL the folders.

Something like @taha-au suggested is probably a good idea. Or the possibility to specify the paths (metadata/labels, spec/selector, ...) directly in the kustomization.yaml file.

---
commonLabels:
  paths: 
    - kind: Deployment
      paths: [ "metadata/labels", "spec/selector" ]
    - kind: Service
      paths: ["metadata/labels"]
  labels: 
    app: foo
    environment: dev
    version: 1.0.0 

@darwin67

I'm not sure what you mean by ALL though.

We have a kind of similar situation, but we structure it in a way that we only put the common configurations in each base, so we always just reference the base from the overlays, and the overlays can go a little crazy with multiple versions of the same base. So let's say I have 10 bases and I want 3 versions of each app: the maximum number of common configurations I need is only 10.

Maybe the configuration setting can be improved, but as an everyday user I'm actually kind of happy with the way it is right now. The reason is that we use Istio, and having the version info propagated actually helps with dynamic traffic routing, like mirroring and A/B testing.

I get the idea that people want to set everything in one place, but I thought the whole idea of this tool is composition; centralizing the configurations in one place may not outright defeat the purpose, but I feel like it's moving away from that idea.
Just my 2 cents.

@quentinleclerc

I guess it depends on how your services are set up. In my case we have a setup that looks something like this, with a repo/monorepo:

micro-service-1/
-----------------base/
-----------------develop/
-----------------staging/
        ...
micro-service-X/
-----------------base/
-----------------develop/
        ...

My point in my previous message was that if the whole company (or certain teams in charge of multiple microservices) used the same "custom" kustomize configuration, we would have to "copy" the kustomize configuration into each micro-service-#/base folder. I totally agree on the tool being about composition, but within a certain limit, I guess. Maybe our architecture/use of overlays is not optimal; I'm just relating how we use it.

I think the only way to do this currently is to publish a base that has this custom configuration and have all the microservices use the same base. However, only GitHub is supported for remote bases.

@darwin67

darwin67 commented Jul 6, 2019

Maybe having git supported for remote bases could help?
Like how you can reference a remote terraform module with a git reference, such as a tag or even a SHA.

@paveq

paveq commented Jul 8, 2019

Another reason why labels should not be applied to selectors by default: defining a NetworkPolicy that allows ingress to an application's web pods:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: app-access
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/component: web
      app.kubernetes.io/name: web-app
  policyTypes:
  - Ingress
...

Suddenly some labels from commonLabels get applied to the podSelector, making it possible for any pod to communicate with e.g. a database that happens to have that common label (such as a client id or project name in our case).

I would flag this as a potential security issue.
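Mechanically, with a hypothetical commonLabels entry such as project: acme, the built policy would roughly become:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: app-access
  labels:
    project: acme                      # added by commonLabels, as expected
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/component: web
      app.kubernetes.io/name: web-app
      project: acme                    # also added here, changing which pods the policy matches
  policyTypes:
  - Ingress
```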

@benjamin-bergia

benjamin-bergia commented Jul 8, 2019

commonLabels not updating selectors would be really annoying. Wouldn't adding a labels transformer where you could specify a target (maybe reusing the syntax of jsonPatch targets) solve the issue?
Like in

labels:
  - target:
      group: apps
      version: v1
      kind: Deployment
      name: test
    labels:
      version: v0.1.0

Or follow whatever the new syntax for multi target patches will be.

@L-U-C-K-Y

Hi all

I face a similar issue when I try to update the "version" label in the CI/CD pipeline.
I execute

cd base
kustomize edit add label version:xyz123
kustomize build ../overlays/production | kubectl apply -f -

And receive the following error:

MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable

Has anyone found a viable workaround to update the version label during deployment?

@monopole
Contributor

monopole commented Mar 17, 2021

Bringing this back to life as it's still being requested and we may have someone to work on it.

Summary:

The commonLabels directive tells the LabelTransformer to add name:value pairs to fields specified in its default config.

The convenience here is that one doesn't have to specify the fields that should get labels. The price paid is lack of control, e.g. sometimes getting unwanted labels in selector fields.

The default configuration can be overridden by declaring a transformers: field in the kustomization file referring to the LabelTransformer.

  • In this example, the LabelTransformer is told to label only:
    • Service: spec/selector
    • Deployment: spec/selector/matchLabels
    • Deployment: spec/template/metadata/labels
  • This example shows a LabelTransformer that only adds pager: 867-5301 to the metadata/labels field of a Deployment, and nothing else.
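The override described in the second bullet might be sketched like this; the file name and metadata name are assumptions, and the builtin LabelTransformer config format may differ between kustomize versions:

```yaml
# kustomization.yaml (fragment)
transformers:
- labeler.yaml
---
# labeler.yaml
apiVersion: builtin
kind: LabelTransformer
metadata:
  name: addPagerLabel
labels:
  pager: "867-5301"
fieldSpecs:
- kind: Deployment            # only Deployments...
  path: metadata/labels       # ...and only their metadata/labels field
  create: true                # create the field if it is missing
```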

We should not change the schema of the commonLabels field. It would create too much confusion. Technically we could do so if we incremented the version number in the kustomization struct/file, but IMO, it's not worth it.

Suggestions, drawing on some ideas from above:

  • just go with a new field, called labels.
  • the new field and the old commonLabels field should not be allowed to coexist in one kustomization file (to avoid ordering confusion)
  • The kustomize fix command should read in a kustomization file with commonLabels and write out the equivalent using the new field, because the new field would be a superset of capability.
  • The new field would allow for a more sophisticated configuration that's still simpler than using the LabelTransformer directly as in the examples above (similar to how the patches field has lots of flexibility, but is still simpler than using the PatchTransformer directly).
  • The new field should still configure the existing LabelTransformer (as opposed to requiring a new transformer with a new configuration format).
  • By default (i.e. with minimal notation) the new field should add labels to metadata, but not to selectors.

The simplest thing might be to just make it a list of label transformer plugin configs, e.g.

labels:
  -  pairs:
       bird: eagle
     fields:
     - path: spec/selector/matchLabels
       kind: Whatever
  -  pairs:
       fruit: apple
       truck: f150
     fields:
     - kind: Deployment
     - kind: Service

Implicit here would be that metadata/labels should always get the label, so that particular path doesn't need to be specified here.

It has to be simpler than using the raw LabelTransformer config, because that's the whole point of these abbreviations.

@monopole monopole reopened this Mar 17, 2021
@monopole
Contributor

@Shell32-Natsu wdyt?

@rcomanne

As I've stated in my dupe issue, I think the best solution for this would be to have two separate properties in your Kustomization file: one that adds labels just as labels, and one that adds them as both labels and selector labels.
I think the cleanest solution would be this:

commonLabels:
  app.kubernetes.io/owner: ATeam

selectorLabels:
  app.kubernetes.io/name: my-app

Although I do understand that this would introduce breaking changes in current deployments and would require manual interaction with resources, I still think this would be the clearest option for users of Kustomize.

@Shell32-Natsu
Contributor

@monopole LGTM, will do it when I have time.

@monopole
Contributor

@rcomanne in retrospect, we have a flaw in the API and we're stuck with it. We can add fields, but changing the meaning of fields and imposing migration work on users isn't practical.

@monopole
Contributor

We can, as noted above, do a v2 of the kustomization file. But let's gather up some other changes first, not just the labels change. We also want to deprecate the vars field.

wlynch added a commit to wlynch/results that referenced this issue Jan 6, 2022
Common labels sets a ton of labels by default [1] that are additive and
can't be disabled [2]. This PR switches to using the labels field
[3] to disable labelling selectors and only label metadata and
template/metadata fields.

This field is fairly new and isn't documented in the kustomize docs
yet, but it seems to be the recommended solution moving forward [4] (docs
also appear to be incoming).

[1] https://github.com/kubernetes-sigs/kustomize/blob/f61b075d3bd670b7bcd5d58ce13e88a6f25977f2/api/konfig/builtinpluginconsts/commonlabels.go
[2] kubernetes-sigs/kustomize#817
[3] https://github.com/kubernetes-sigs/kustomize/blob/10026758d314920e8fa3c9c526525d8577d39617/api/types/labels.go
[4] kubernetes-sigs/kustomize#1009
tekton-robot pushed a commit to tektoncd/results that referenced this issue Jan 6, 2022
@bluebrown

I just tested it and it works, since this PR got merged: https://github.com/kubernetes-sigs/kustomize/pull/3743/files

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
labels:
  - includeSelectors: true
    pairs:
      app.kubernetes.io/name: myapp
  - includeSelectors: false
    pairs:
      app.kubernetes.io/version: v0.1.0
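Assuming a Deployment among the resources, the build output from that kustomization would then roughly gain the following (a sketch of the effect, not verbatim output):

```yaml
metadata:
  labels:
    app.kubernetes.io/name: myapp
    app.kubernetes.io/version: v0.1.0   # label only, kept out of selectors
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: myapp     # only the includeSelectors: true entry
  template:
    metadata:
      labels:
        app.kubernetes.io/name: myapp
```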

@yassinebelmamoun

Is there a viable solution for applying a "common" label to all Kubernetes resources, including templates (such as Deployment labels and the pod template of Deployments), while excluding the application of this label to the Deployment's selectors?

I found the following solution, and I believe it is valuable for other people who are confronted with the same question:

apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Kustomization

commonLabels:
  my-label: my-value

labels:
  - includeSelectors: false
    includeTemplates: true
    pairs:
      my-changing-label: my-changing-value

Documentation reference: Labels - Official documentation - Kustomize

@jdmulloy

Ah, since the commonLabels directive adds labels to spec.selector.matchLabels, I have to add another patch to remove it.

- op: remove
  path: /spec/selector/matchLabels/<mylabel>
- op: remove
  path: /spec/template/metadata/labels/<mylabel>

And in kustomization.yaml

patchesJson6902:
# this is hacky because of https://github.com/kubernetes-sigs/kustomize/issues/1009
- target:
    group: apps
    version: v1
    kind: Deployment
    name: git-apply
  path: patches/rm-deploy-common-label-in-selector.yaml

Honestly, I don't like the behavior of applying common labels to spec.selector at all; it makes adoption harder (managing an existing deployment via kustomize), since those fields are immutable, which is very annoying.

This doesn't work for me in v5.1.0 if I try to use the new patches field, because patchesJson6902 is deprecated. The patch works for other labels that aren't added with commonLabels, so I'm guessing the new patches are applied before commonLabels.

@nickjj

nickjj commented Sep 10, 2024

I've tried everything listed in this thread without success for a specific use case; any suggestions on how to get this working?

I'll start with a brief code sample and then describe the use case.

Code sample

Imagine having this in your kustomization.yaml:

commonLabels:
  app.kubernetes.io/name: "hello-app"

# ...

# This works but I'd like to use `patches` instead.
patchesJson6902:
- path: "patch-deployment-worker-common-labels.yaml"
  target:
    kind: "Deployment"
    name: "worker-app"

Then you have patch-deployment-worker-common-labels.yaml:

---
- op: "replace"
  path: "/metadata/labels/app.kubernetes.io~1name"
  value: "hello-worker-app"
- op: "replace"
  path: "/spec/selector/matchLabels/app.kubernetes.io~1name"
  value: "hello-worker-app"
- op: "replace"
  path: "/spec/template/metadata/labels/app.kubernetes.io~1name"
  value: "hello-worker-app"

Use case

You have the usual K8s configs to deploy a web app (deployment, service, ingress, configmap, job, cronjob, etc.) and you want hello-app to be applied as a label and selector to everything, so you use commonLabels. You also have 8 different applications using the same "web" base, and each of those 8 apps' kustomization.yaml files uses a different commonLabels value, such as hello-app or world-app.

You have both a web and a worker deployment, and it's critically important that the worker deployment doesn't have the same label as the web deployment, because your load balancer will attach a readiness gate to a deployment that matches a service's selector with the same name, so the worker will always fail that check since it's not a web app.

To get around that, you patch the common label and selector of the worker. Using patchesJson6902 works fine, but trying to change that to patches fails with "missing value".

This is stopping me from being able to use inline Kustomize patches with Argo CD for preview environments since it only supports patches. Any thoughts on an alternative strategy?
