Support composition in kustomize #1251
@apyrgio Please have a look at #1253. There is still a big issue to address (variable pointing at name) but the behavior seems to be coming along:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: p1-my-deployment
spec:
template:
spec:
containers:
- image: my-image
name: my-deployment
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: p2-my-deployment
spec:
template:
spec:
containers:
- image: my-image
name: my-deployment |
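For reference, a layout that could produce the two prefixed Deployments above might look like this (a sketch with hypothetical paths; `bases:` as used in kustomize 2.x):

```yaml
# base/kustomization.yaml
resources:
- deployment.yaml
---
# p1/kustomization.yaml
namePrefix: p1-
bases:
- ../base
---
# p2/kustomization.yaml
namePrefix: p2-
bases:
- ../base
---
# aggregate/kustomization.yaml -- builds both prefixed copies of the base
bases:
- ../p1
- ../p2
```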
@jbrette I saw your pull request #1253 yesterday and tried it locally, but it didn't fix my issue with the setup that you can see here. The interesting thing is that if you comment out line 6 (
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
env: dev
name: my-backend
spec:
replicas: 2
selector:
matchLabels:
env: dev
template:
metadata:
labels:
env: dev
spec:
containers:
- image: my-backend-image
name: my-backend It's really when you add another resource that things go wrong. |
Hi @jbrette, thanks for taking the time to look into this issue. I've checked out your PR and built kustomize from it. With your changes, our diamond example builds correctly. However, when we add patches to the mix, I'm afraid it fails like before. You can see this in our updated diamond example, where the two overlays patch the same base resource:
The interesting thing here is that if the same two patches were defined in a single overlay, kustomize would operate as expected. This makes me believe that the error we see here is a side-effect of how kustomize builds YAMLs when the load paths diverge, and not the intended behavior. Is this the case, or am I missing something? Also, I think that the diamond example in your repo refers to a different use case (see here). We explicitly don't use |
Kustomize expects different overlays of one common base to be for different apps. Then, in the aggregated overlay, you have multiple apps. Adding a nameprefix, namesuffix or namespace in overlays makes them different apps. @apyrgio, if you want to keep a single app, you can apply patches in the same kustomization, rather than using multiple overlays. |
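A minimal sketch of that suggestion, with both patches applied in a single kustomization (hypothetical file names):

```yaml
# overlay/kustomization.yaml -- both patches live in one overlay,
# so they apply sequentially to the same base resource
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../base
patchesStrategicMerge:
- patch-replicas.yaml   # e.g. bump the replica count
- patch-image.yaml      # e.g. swap in a different image
```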
Thanks for the clarifications @Liujingfang1. So, if I understand correctly, multi-base overlays are used to duplicate the resources of a base, either via name prefixes/suffixes or namespaces, to support this use case:
I can see the merits of this use case, but I find it surprising that it can't co-exist with the use cases presented in this issue. If it helps, I think that the surprising part is that people use kustomize with the expectation that, given a K8s resource ID, they can apply one or more patches to it. They can do so within an overlay, and they'd expect to do so from different overlays as well. In the examples that I and @AdmiralObvious have supplied, this can work, e.g., by patching the same resource sequentially, in the order that the patches were defined. And yes, one can define patches in a single place and refer to them in various kustomizations, using
If there are no serious semantic issues, I'd urge you to reconsider this limitation. I believe it will be helpful for a lot of kustomize users, without breaking existing use cases. If there are though, can you please explain for posterity what they are? |
@AdmiralObvious @apyrgio Thanks for the additional examples you provided today. I realigned the examples with what you provided:
Please double check. The second set of examples shows that composition can work to some extent, but that the structure of the folders and kustomization.yaml is crucial. |
It would be super-useful to support mixins / composition. As an example, we currently have a structure set up like this:
Considering the architecture, I would prefer to move some of the components to a higher level, and mix-and-match different components (e.g. use a different kind of MySQL setup, or not use the persistence layer each time). But the deployment requires a variable reference to the MySQL hostname, and references don't work unless MySQL is directly under "app-base". With only the inheritance approach, despite it looking modular, it's still very difficult to make any change that should not be applied to every instance, unless the entire tree is split. The persistence layer also contains patches that need to be applied on "app-base", since they apply to sibling components. |
@jbrette yes, your modifications on my example make it work, even on kustomize 2.1.0. Thank you! Indeed, getting the right folder/kustomization.yaml structure is crucial, and tricky in my opinion. I tried several permutations myself before coming here for help. One thing I can see that might be less desirable with that method is how long the |
@paveq Even though I am still working on getting the diamond "import" of files to work, you don't strictly need it to achieve composition and mixins: have a look at the mixin example. Each component comes configured with default ports. Those values are updated according to prod and dev. |
@paveq, @apyrgio Love the ascii art above! @jbrette @ioandr Also, appreciate the examples. It would be immensely helpful if someone wrote a test-only PR showing the inputs, the current output, and some commentary about what the desired output should be. We can then merge it, and when the bug / feature request is fixed, one can edit the test to prove it. See #1279, which updates a test entered in #978. As a start, someone could copy/paste this: pkg/target/diamonds_test.go The inputs are in YAML, as is the output. The case covered is
with different examples of patch addressing as you go up the stack. So - kustomize supports One could add another layer on top of the above example combining But for that to work, the kustomizations at Anyway, please let's express more of these desires as tests. |
@monopole One thing that would help composition would be for an overlay Kustomization to be able to define a prefix or suffix for the base it is using, and to be able to request the same base multiple times. For example, in my case I'm using Redis two times, but currently I have to keep them in two separate bases, as I need a different name for each. As an example, the syntax could be:
With this syntax, it might be possible to pass some other parameters (e.g. pass variables to the base?) to the base layer in the future as well. I'll attempt to build an example of the input I would like to see working, and the result I would like to see. |
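Something along these lines, perhaps (purely hypothetical syntax, not supported by kustomize, sketching the idea of per-base parameters):

```yaml
# overlay/kustomization.yaml (HYPOTHETICAL syntax -- illustrates the
# request to reuse one base twice with different name prefixes)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- path: ../redis-base
  namePrefix: cache-
- path: ../redis-base
  namePrefix: sessions-
```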
@jbrette: Nice job adapting @paveq's use case into a kustomization example. I checked out the files and have some minor questions:
Also, one final observation, but @paveq can also weigh in on this. I think his last issue was that moving logic higher up on the hierarchy (in this case, making the persistence layer opt-in) cannot be combined easily with other overlays, when both patch the same resources. Your example side-steps this issue because it does things linearly, but slightly more complex examples would hit this limitation. So, if I understand correctly, your example partially solves @paveq's use case, right? In order to be fully solved, we need some form of composition support higher than the base. |
@monopole Added a go version of the mixin PR Commit @apyrgio @paveq @AdmiralObvious @ian-howell
|
As per @monopole's suggestion, we created a PR that shows the current kustomize limitation, with an example that mimics the diamond test that @monopole quoted. We tried to keep it as simple as possible, but we hope that it adequately describes the functionality that the people interested in this issue want to see in kustomize. |
@apyrgio that's great. let's get that in as an example to point to. can you comment on why you don't want to do this as stacked overlays? |
i think you can use one base, and two overlays that change the name, e.g. |
@monopole You mean why we can't convert the example from this:
to this?
There are two reasons. First, you can think of the example as a simplified version of:
The base layer contains the vanilla app, the middle layer contains some modifications you can apply to it, and the upper layer contains the deployments of the app, each with a different choice of modifications. This example cannot be built with kustomize, if the The second reason is semantics. If you've isolated some common knobs of an app into components/overlays, how can you enforce a dependency order on them? Why should the |
@paveq @monopole @apyrgio @AdmiralObvious @ioandr Please have a look at the new version of the PR. I think we got it. Your new test is now returning the proper values: check here |
Hi @jbrette, thanks for your focus on this, we really appreciate your efforts. We believe this is getting really close to the desired result, however we would like to point out certain things. We recently contributed another PR (#1290) that makes the diamond composition test case slightly harder to solve. With this change, unfortunately, your PR fails to produce the expected output. Here, the reason is that converting a patched resource into a patch itself can re-introduce a changed field. We’d go one step further though, and argue that any type of merging that happens after the patches have been applied is inherently ambiguous, unless you recreate the patches somehow. A simpler approach may be to traverse the overlays depth-first and make patches that target the same GVKN apply to the same resource, thereby side-stepping any conflicts. @monopole We would love to hear your opinion on this, and specifically whether this approach is in the right direction. |
OK, now that we've got a load of tests, would it be reasonable to retitle this issue as: Support patch reuse in kustomize? Two ways to do so: Run
|
Forgot to mention that the tests referred to in the previous comment are in They have
Thanks for these specific examples. |
Thank you very much for the prompt reply and the abundance of tests. We tried to dig through the suggested solution and we have some comments below. We don't have much experience with transformers, so please correct us if we've understood something wrong. As we've mentioned in a previous exchange with @Liujingfang1, we don't see composition as patch reuse. We need to reuse kustomizations as well, because they allow us to do the following in a single component:
From what we gather, specifying individual transformers is an interesting way to side-step the load restrictions limitation but, unfortunately, they do not cover the above points. For instance, a developer can't define a transformer that will create a As for the title, we're definitely not fixated on it. It's just that we wanted an umbrella term for the act of putting everything kustomize supports into a box and using it from various places. If patch reuse was our sole concern, we'd probably be ok with Does our end goal make more sense now? If so, feel free to change the title to whatever seems more fit. Also, we'd be more than happy to update the test scenario with an overlay that uses a |
This test has many examples of generated configmaps working with deployments. It's a core feature. Though we'd say ConfigMaps are generated by generators, not transformers, and I'm not sure if you made a typo or meant to use the word transformer. There are also many examples of kustomization file reuse, as that's the whole point of allowing kustomizations to be specified in the
Yes! Please do; that's ideal. A passing test that exhibits undesirable behavior is the best way to point out the need for (and get) the support, or alternatively, to have someone point out an alternative approach that doesn't require a new feature. Composition is a good idea, which is why kustomize supports it. But the devil is in the details, and Bob's view of how composition is achieved might not match Alice's. You might want to chime in on #1292. It's related, and would be another way to solve this that doesn't work yet. I'd say @jbrette was working in this direction, but it needs a wider scope and some careful thought about backward compatibility. |
Ah, I see now the source of the confusion. I thought you proposed custom transformers as an alternative to the components we are describing in this issue, which is why I pointed out that transformers don't do all the neat things that overlays do. But you suggested them as a solution to the patch reuse problem specifically, which flew over my head.
We have opened another PR with a new test to further clarify our needs and preserve existing test cases intact. Feel free to comment and review.
The proposed That's excellent news! This functionality would be a great addition to kustomize. We'll further comment on #1292 for any implementation details. Also, I'd propose keeping this issue open for a while, so kustomize users in need of components/mixins can comment on whether the approach described in #1292 covers their case. |
Would it also be possible to solve this by changing the way that bases are loaded? My naive understanding was that bases worked the same way as an "include" such that:
Was equivalent to:
This doesn't work right now and fails with:
It seems to me that if bases were loaded in order and could reference resources loaded before them, then that would be enough to support composition. I have a more complete example of what I was trying to do here https://github.com/steinybot/bug-reports/tree/kustomize/patch-service. |
Right, instructing kustomize to load the resources in order is the gist of @pgpx's PR. For example, in your case, you should be able to convert the |
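For example, a combined kustomization could list the dependency before the overlay that patches it (a sketch with hypothetical paths, assuming the ordered-loading behavior proposed in that PR):

```yaml
# combined/kustomization.yaml -- under ordered loading, ../service is
# already in scope when ../patching-overlay is processed, so the
# overlay's patch can find its target (hypothetical layout)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../service           # defines the Service being patched
- ../patching-overlay  # patches the Service defined above
```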
@apyrgio Is there a reason not to make this new load order for It feels a bit off to me that the kind dictates the load order within the thing that is loading it. This will limit the ways in which things can be composed in that the component has to be aware of the fact that it may be composed. The concern of loading is backwards. Will I be able to change everything to |
Ah, I see what you mean. Ok, I'll try to take your points one by one and comment on them with my understanding and the authoritative responses we have for now:
So, you ask why the resources in this scenario:
are handled in parallel (so It seems that Kustomize has this supported use case, whereby a single base ends up being copied multiple times, with a slightly different The intention is to deploy multiple variants of the same application with a single command. Personally, I have reservations on whether this is a good pattern, or something that is wildly used. In any case, this is further reinforced by @Liujingfang1's comment:
So, in a nutshell, Kustomize must keep the default load order to maintain backwards compatibility.
This is a valid concern. In fact, @monopole had proposed roughly what you're saying, that the aggregated overlay should specify how the resources should be merged (see #1292 (comment)). I liked this approach but, unfortunately, it never took off. Still, this is something that we could revisit. Still, if you think about it, the problematic overlay in the example I mentioned above:
attempts to patch a resource it has never defined. So, it does know that it's a component of sorts and expects that it will be composed. We could even generalize this by saying that any overlay that transforms something it has not defined itself is aware that it's a component and must be composed. Now, there are components that don't transform resources but create/generate them instead. In this case, you're right, they can be used both as components and as base overlays, so specifying a single kind is limiting. I don't have an answer to that, I'm afraid. My gut feeling though is that this is a situation that will rarely, if ever, be encountered in actual cases.
That's a very interesting observation and I think it's something @pgpx should have in mind before their PR is merged. If Sorry for the long comment but, as you see, there's a bit of a backstory to the points you've raised. Looking back, this is an issue that is over 9 months old, has a steady influx of comments and upvotes, yet no engagement from the core team for the past four months. That's a pity because, while it's understandable that there are other top-level priorities, this issue is the second most upvoted and the second most commented ever in this repo. It would help the Kustomize users that stumble onto these limitations, and then on this issue, if there was either an active discussion on how to improve the situation, or a mention of a roadmap item, or a design document, or something actionable in general. Else, the feeling they get is that "this seems like a non-issue for the Kustomize team, I guess I'll have to copy-paste stuff or use another tool". |
@apyrgio Good answer - thanks! Regarding using The basic idea is that a And I agree that any kind of feedback from the Kustomize team would be helpful, even if just to say that they are very busy / have real-world issues. |
@apyrgio catching up on this issue, will try to respond within a week. |
@pwittrock that's exciting, thanks for taking a look at this issue :-). BTW, I took the liberty of putting a discussion topic for this feature on sig-cli's agenda, so that we can answer questions more easily. I'm not sure if sig-cli is the proper place for community meetings regarding Kustomize so, if there's another one, please point me to it and I'll move it there. |
@apyrgio Thanks. That's the right thing to do. |
@apyrgio and @pwittrock thanks for covering this in the sig-cli call - I'm sorry that I missed it. If you are going to create a new documentation-centric PR, would you be able to link to it here? I'd like to help if needed. Also, if the Kustomize internals are currently being refactored, then it might be worth looking at the pull request implementation, because the required changes are minimal (though that would also mean that they could be made later too). |
@pgpx: Great that you are in sync as well. For the others: I haven't gotten around to this yet, but I plan to summarize what was discussed in the call. As for the doc-centric PR, yes, once I have something ready, I'll send it here as well. |
As I mentioned before, there was a sig-cli meeting where this issue was discussed. You can find the recording here (relevant bits after the 29:00 mark). The tl;dw is this:
So, given the above, the actionable items are these:
|
I've sent a PR (#2438) with a user story, where it shows a situation that cannot be solved by overlays in a proper manner (i.e., without copy-pasting stuff), but can be solved with components. Hopefully it's simple enough so that newcomers can understand its value. If you feel there's an aspect of this issue that is not covered by this example, or anything else that can be improved, please chime in on #2438. |
Update for those interested in this feature who have not followed PR #2438: the main user story regarding composition has been ACKed by @pwittrock and @monopole. You can read more about what it entails in #2438 (rendered example here), and more formally in a KEP that will be sent shortly, and will be linked here as well. Any comments on the example or the KEP will help a lot. |
This enhancement proposal introduces components, a new kind of Kustomization that allows users to define reusable kustomizations. Main discussion: kubernetes-sigs/kustomize#1251 User story that spawned this KEP: kubernetes-sigs/kustomize#2438
We (me, @ioandr and @pgpx) have created a KEP for this issue: kubernetes/enhancements#1803 (rendered document here). In this KEP, we try to explain roughly the following:
For those interested in this feature, please chime in on the KEP. |
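For a rough idea of what the KEP proposes, a component and an overlay that consumes it might look like this (a sketch based on the KEP's proposed syntax; details may differ from what eventually ships):

```yaml
# components/monitoring/kustomization.yaml -- a reusable component
# that only carries a patch, without duplicating the base
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
patchesStrategicMerge:
- add-monitoring-sidecar.yaml
---
# overlays/prod/kustomization.yaml -- an overlay that loads the base
# once and then applies the component on top of it
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
components:
- ../../components/monitoring
```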
This remains an issue for non-namespaced objects such as CustomResourceDefinitions (CRDs) referenced from multiple deployments. It means that if
... works fine, but
will fail. To work around this, it's necessary to modify both Components do help, but for simple cases like CRDs, this is something Kustomize itself could greatly help the user out with. Especially since the CRDs may be embedded in upstream sources where the user cannot necessarily factor them out into Kustomize components. For example, The simplest option here would seem to be to ignore duplicate resources where the key matches and the contents are the same. Sample error:
|
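The "ignore duplicate resources where the key matches and the contents are the same" idea could be sketched as follows (illustrative Python, not kustomize code; function names are made up for this example):

```python
# Illustrative sketch: merge several bases, dropping duplicates that
# are identical (e.g. the same CRD pulled in by two bases) and
# refusing only genuinely conflicting definitions of one resource.
def resource_key(res):
    """Identify a resource by its apiVersion/kind/namespace/name."""
    meta = res.get("metadata", {})
    return (res.get("apiVersion"), res.get("kind"),
            meta.get("namespace"), meta.get("name"))

def merge_bases(*bases):
    merged = []
    seen = {}
    for base in bases:
        for res in base:
            key = resource_key(res)
            if key in seen:
                if seen[key] != res:
                    # Same key but different contents: a real conflict.
                    raise ValueError(f"conflicting definitions for {key}")
                continue  # identical duplicate: silently skip it
            seen[key] = res
            merged.append(res)
    return merged
```

With this behavior, two bases that each embed the same CRD would compose cleanly, while two differing definitions of one resource would still be rejected.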
Here's a demo. TL;DR: You can't write a single Kustomization that can be used standalone or further composed because of this behaviour. Either your standalone base cannot be applied due to missing CRDs, with errors like:
or if you include the CRDs in each base your combined deployment cannot be generated due to kustomize errors like
It shows that it's not possible to define a single kustomization such that it can stand alone for deployment to a cluster and can be composed into a deployment containing another kustomization that depends on the same CRD. You have to instead split out the CRDs, and build another layer that composes "kustomizations without CRDs" and "just the CRDs" in various combinatorial ways depending on what you wish to deploy. This is painful for maintenance, but also breaks otherwise-logical compositions. Crucially, generating and applying each piece of the Kustomization individually produces a sensible and correct result, even though Kustomize refuses to generate a kustomization that uses only those two as bases. And you can even concatenate the kustomize output of both parts, then
Kustomize should really be de-duplicating identical objects here. Components can help here, but only if you control all the definitions. Common real-world deployments like
The demo defines the following bases:
and composes them in a few ways:
What convinces me that Kustomize's behaviour is wrong here is that you can sequentially deploy
In fact, you can even concatenate the generated configurations and deploy them (the
kustomize is fine with you generating and applying the CRD then both bases that use it:
but not composing two bases that are complete including their dependencies:
However, the pieces that kustomize will let you compose successfully are invalid when deployed standalone:
so you can't write a single Kustomization that can be used standalone or further composed. Generate the demo with the following script:
mkdir -p crd
cat > ./crd/kustomization.yaml <<'__END__'
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- "crd.yaml"
__END__
mkdir -p crd
cat > ./crd/crd.yaml <<'__END__'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: democrds.demo.example.com
spec:
group: demo.example.com
versions:
- name: v1
served: true
storage: true
schema:
openAPIV3Schema:
type: object
properties:
spec:
type: object
properties:
field:
type: string
scope: Namespaced
names:
singular: democrd
plural: democrds
kind: DemoCrd
__END__
mkdir -p a
cat > ./a/a_crd.yaml <<'__END__'
apiVersion: "demo.example.com/v1"
kind: DemoCrd
metadata:
name: democrd-a
spec:
field: "a"
__END__
mkdir -p a
cat > ./a/kustomization.yaml <<'__END__'
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: a
resources:
- namespace.yaml
- a_crd.yaml
__END__
mkdir -p a
cat > ./a/namespace.yaml <<'__END__'
apiVersion: v1
kind: Namespace
metadata:
name: a
__END__
mkdir -p b
cat > ./b/b_crd.yaml <<'__END__'
apiVersion: "demo.example.com/v1"
kind: DemoCrd
metadata:
name: democrd-b
spec:
field: "b"
__END__
mkdir -p b
cat > ./b/kustomization.yaml <<'__END__'
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: b
resources:
- namespace.yaml
- b_crd.yaml
__END__
mkdir -p b
cat > ./b/namespace.yaml <<'__END__'
apiVersion: v1
kind: Namespace
metadata:
name: b
__END__
mkdir -p crd_a_b
cat > ./crd_a_b/kustomization.yaml <<'__END__'
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../crd
- ../a
- ../b
__END__
mkdir -p crd_a
cat > ./crd_a/kustomization.yaml <<'__END__'
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../crd
- ../a
__END__
mkdir -p crd_b
cat > ./crd_b/kustomization.yaml <<'__END__'
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../crd
- ../b
__END__
mkdir -p crd_a_crd_b
cat > ./crd_a_crd_b/kustomization.yaml <<'__END__'
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../crd_a
- ../crd_b
__END__ |
Yup, it's totally unusable for anything other than one deployment. |
@pkit Exactly - and |
We (@arrikto) have been using kustomize lately in order to deploy our software on Kubernetes, using a single base and multiple overlays for each deployment site. This is the typical inheritance scenario that kustomize supports.
As the number of deployments grows, however, so do the things that are common between them. For example, consider the following deployments:
This a composition scenario, where common configuration logic is put into components for reusability. If one wants to achieve it with kustomize, they would probably create intermediate overlays for each component, and multi-base overlays for each deployment, e.g.:
Unfortunately, this does not work, since kustomize ends up duplicating the YAMLs of the base. We have an example topology here that showcases this error:
(NOTE: You need kustomize v2.1 to run the examples)
If we change our components to not point to the base:
This works for creating new resources, but fails for patches, since patches must refer to resources that are defined in one of their bases. We have an example topology here that showcases this error:
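For illustration, such a component might look like this (hypothetical names); the patch fails because it targets a resource that the component's own kustomization never loads:

```yaml
# components/monitoring/kustomization.yaml (hypothetical layout)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
# There is no resources/bases entry for the base app here, so this
# patch refers to a Deployment this kustomization has never seen --
# kustomize rejects it instead of deferring to the consuming overlay.
patchesStrategicMerge:
- add-monitoring-sidecar.yaml
```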
It seems that currently there's no better way than duplicating code across overlays, which is not maintainable in the long run. Still, we strongly believe that some form of composition support would make kustomize much more usable. This is echoed in other issues too (#759, #942), as well as high-profile projects like Kubeflow, which needed to create a tool on top of kustomize to support composition.
Are there any plans to support composition in kustomize and, if not, does the core team have a better suggestion to tackle this configuration reusability issue?
/cc @monopole @Liujingfang1