namespace not supported by helmCharts #3815
Comments
The helm program does many, many things, including (so I'm told) arbitrary execution of shell commands via its --post-renderer option. The basic chart inflation kustomize performs is intentionally rudimentary, to prove out the concept of "last mile" chart customization. Maintaining kustomize long term as a helm runner, with full knowledge of helm's flags and arguments as helm evolves, would be toilsome, and would imply that kustomize inherits whatever security risks helm presents. So anything that looks like what kustomize does should be done by kustomize, rather than delegated to helm, e.g. setting the namespace (#3838). The safest, most performant, and completely unlimited way to use kustomize + helm to generate config is:

1. Inflate the chart in a controlled environment and capture the resulting YAML.
2. Commit that YAML to version control and customize it with kustomize.
3. From time to time, capture the latest chart, and repeat steps 1 and 2.

It's dangerous to set up a production stack that relies on kustomize to freshly download a helm chart with each build.
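The "inflate once, customize from then on" flow described above can be sketched roughly as follows; the chart name, repo URL, version, and paths here are illustrative placeholders, not taken from this thread:

```shell
# Step 1: inflate the chart once, in a controlled environment.
helm repo add jetstack https://charts.jetstack.io        # illustrative repo
helm pull jetstack/cert-manager --version v1.5.3 --untar --untardir charts
helm template cert-manager charts/cert-manager \
  --namespace cert-manager > base/cert-manager.yaml

# Step 2: commit the rendered YAML and customize it with kustomize overlays.
git add base/cert-manager.yaml
git commit -m "capture inflated cert-manager chart"
kustomize build overlays/production

# Step 3: when a new chart version ships, bump --version and repeat 1 and 2.
```

Because the rendered YAML is committed, every later customization is plain kustomize on plain config, with no Helm invocation at build time.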
I think the
Totally agree. Ideally you don't do that, but it comes down to scheduling, prioritization, and resources to implement. For me, submitting the PR took fewer resources to get kustomize in the door than setting up all the rest before I could even start with kustomize and our many other use cases that don't use Helm, where kustomize would help me.
That's interesting, and would avoid the easy contradictions one can imagine by not doing it that way.
This is an issue with Helm's --post-renderer option in particular and the scary part was that someone running kustomize didn't have to consciously opt-in to run Helm. Given that we now have to explicitly elect to run Helm when we run kustomize, the user is consciously choosing to run Helm and accept the risks associated with it.
This is a partially workable solution to me. The main place I could see it causing issues is with charts like Jenkins that intentionally create resources in other namespaces (in this case, creating a role in the agent namespace to allow Jenkins to spin up agent pods there).
Perhaps if you're pulling the Helm chart from an uncontrolled source. We pull specific versions of Helm charts into a private Helm repo and enforce tag immutability from there. We deploy multiple instances of a given application with different configurations, and storing the fully templated code for each deployment adds a ton of noise and places for user error.
This is a situation where one could use a transformer to change the configuration of another transformer before that generator/transformer is applied. https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/#transformed-transformers
Reopening to track the broader concept of having only one place to specify a namespace change.
I have tried defining the namespace twice. That works with https://github.com/grafana/helm-charts/tree/main/charts/grafana
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
/remove-lifecycle stale
/lifecycle stale
If I'm not wrong, this wouldn't work with many multi-namespace charts, as it would try to use the main namespace for every resource. It doesn't work for me with the fission chart [0], ending up with the error [1]. It would be great to make [0]
/lifecycle stale
bump
/lifecycle rotten
bump
/close
@k8s-triage-robot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Please reopen, just to again underline an already-made point. At least I would.
/remove-lifecycle rotten
@wibed: You can't reopen an issue/PR unless you authored it or you are a collaborator. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Could you or one of the contributors reopen this issue?
Bumping again, as this breaks our flow of specifying
The community position on some of these flaws is baffling. I cannot inline-edit multi-line string values in some of my YAML objects after flattening a Helm chart, so I turn to the helmChart generator on an exception basis. I cannot use the helmChart generator effectively, because it does not honor the namespace. What are my remaining options?
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: XXXX
helmCharts:
  # ...
```

... sets the namespace correctly, if the chart makes use of the {{ .Release.Namespace }} variable.
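For what it's worth, newer kustomize releases also let each helmCharts entry carry its own namespace field, which is handed to Helm at template time, so {{ .Release.Namespace }} renders with that value instead of "default". A minimal sketch, where the chart name, repo, and version are illustrative assumptions:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
helmCharts:
  - name: cert-manager              # illustrative chart name
    repo: https://charts.jetstack.io
    version: v1.5.3                 # illustrative version
    releaseName: cert-manager
    namespace: cert-manager         # forwarded to `helm template --namespace`
```

Unlike the top-level namespace: transformer, this affects what Helm itself renders, not just the structured metadata.namespace fields.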
@wibed My kustomize file looks like this:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: the-bug
helmGlobals:
  chartHome: ../../_charts
helmCharts:
  - name: namespace
    releaseName: namespace
    valuesFile: values.yaml
```

My Helm chart looks like this:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: namespace-quota
  namespace: {{ .Release.Namespace }}
  annotations:
    release-namespace: {{ .Release.Namespace }}
spec: {{ .Values.resourceQuota | toYaml | nindent 2 }}
```

And building this looks like this:

```console
$ kustomize build . --enable-helm
apiVersion: v1
kind: Namespace
metadata:
  labels:
    app.kubernetes.io/instance: namespace
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: default # <- {{ .Release.Namespace }} is still "default"
    helm.sh/chart: namespace-1.0.0
  name: the-bug
---
apiVersion: v1
kind: ResourceQuota
metadata:
  annotations:
    release-namespace: default # <- {{ .Release.Namespace }} is still "default"
  name: namespace-quota
  namespace: the-bug # <- {{ .Release.Namespace }} correctly set
spec:
  hard:
    pods: 100
```

I suspect that kustomize just replaces the namespace values with the correct values. This is an issue for us since we dynamically generate namespaces and therefore cannot set
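The suspicion above can be illustrated with a toy stand-in for kustomize's last-mile namespace transformer (pure shell, no Helm or kustomize involved; the file path and names are made up): rewriting only the structured metadata.namespace line leaves the namespace that Helm already baked into annotation values untouched.

```shell
# Write a pretend Helm-rendered manifest (namespace rendered as "default").
cat <<'EOF' > /tmp/rendered.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: namespace-quota
  namespace: default
  annotations:
    release-namespace: default
EOF

# Toy "last-mile" namespace transformer: it touches only the structured
# metadata.namespace field, not namespace strings embedded in values.
sed -i 's/^  namespace: default$/  namespace: the-bug/' /tmp/rendered.yaml

grep -n 'namespace' /tmp/rendered.yaml
# The structured field now reads "the-bug"; the annotation value is
# still "default", mirroring the kustomize build output above.
```

This is why only metadata.namespace comes out right in the build output: the transformer runs after Helm, on structured fields, so any {{ .Release.Namespace }} already expanded into plain string values keeps whatever namespace Helm rendered with.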
I'm having exactly the same issue.
I've retested with kustomize v5.3.0, since the version included in kubectl is quite old, but the issue persists.
I confirm that my workaround is more than flawed: setting an overall namespace overrides the internally set namespaces, disregarding the variable.
I created a new issue dedicated to this. You are welcome to comment and upvote #5566 :) |
The new helmCharts feature (added in #3784) has no way to pass in a namespace. This is problematic with many common charts where the namespace is used in e.g. arguments or annotations.
e.g. try rendering the cert-manager Helm chart.

Actual output: it results in annotations that contain the namespace, e.g.

```yaml
cert-manager.io/inject-ca-from-secret: default/cert-manager-webhook-ca
```

It also has the argument:

```yaml
- --dynamic-serving-dns-names=cert-manager-webhook,cert-manager-webhook.default,cert-manager-webhook.default.svc
```
Kustomize version/Platform

```
{Version:4.1.2 GitCommit:$Format:%H$ BuildDate:2021-04-19T00:04:53Z GoOs:linux GoArch:amd64}
```
CC @monopole